
Big Data TCO Lessons From Virtualization Technology Sprawl

By Rick Delgado · Jun. 22, 15

The complexity of big data makes it a difficult concept for many to grasp, and using it effectively is one of the biggest challenges businesses face today. Big data clearly offers organizations a number of advantages, but applying it across the entire enterprise is a formidable obstacle for even the most technologically savvy companies. One department might build its own solutions through big data analytics while another arrives at answers of its own, yet the lack of real coordination and collaboration between them remains a significant problem. Businesses aren't without help here, however, because they've faced similar problems before. Many companies have dealt with virtualization technology sprawl, and the lessons learned from addressing that problem can prove exceptionally valuable when managing the total cost of ownership (TCO) of big data.

To understand both the problem and the solution, we first need to look back at the rapid growth of virtualization technology, specifically server virtualization. As businesses adopted virtualization, consolidated mainframe-style environments splintered into many separate systems. The more popular virtualization became, the more projects were taken on and the further the technologies diverged. Larger companies eventually hired specialists to work within their own areas of expertise. The result of these siloed teams was virtualization technology sprawl, an inefficient pattern that ultimately led to even higher operational costs. For all the benefits virtualization offered, many were outweighed by the increased demands and greater management complexity that sprawl created.

Businesses were quick to come up with solutions. The most common was to adopt a converged infrastructure. This strategy directly addressed the higher operational costs of technology sprawl by breaking through the silos: multiple technologies were combined into single stacks for compute, storage, and networking. Management of virtualization became much easier because operational complexity was significantly reduced; in other words, the management footprint was kept at a reasonable size. A rough cost comparison of the two models appears below.
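To make the cost argument concrete, here is a minimal, hypothetical TCO sketch in Python. The per-stack figures (hardware, administration, and tooling costs) and the stack counts are illustrative assumptions, not vendor numbers; the point is only that consolidating many independently managed stacks into a few shared ones shrinks the fixed management overhead that sprawl multiplies.

```python
# Hypothetical TCO comparison: siloed stacks vs. a converged infrastructure.
# All dollar figures are illustrative assumptions, not real vendor pricing.

def stack_tco(num_stacks, hardware_per_stack, admin_per_stack, tooling_per_stack):
    """Yearly TCO for a set of independently managed stacks."""
    return num_stacks * (hardware_per_stack + admin_per_stack + tooling_per_stack)

# Sprawl scenario: each of 12 departments runs its own compute/storage/network stack.
sprawl = stack_tco(num_stacks=12,
                   hardware_per_stack=40_000,
                   admin_per_stack=60_000,    # dedicated specialist time per stack
                   tooling_per_stack=15_000)  # separate monitoring/backup licenses

# Converged scenario: the same workloads share 3 larger, centrally managed stacks.
converged = stack_tco(num_stacks=3,
                      hardware_per_stack=120_000,  # bigger shared hardware per stack
                      admin_per_stack=80_000,
                      tooling_per_stack=25_000)

print(f"Sprawl TCO:    ${sprawl:,}/year")
print(f"Converged TCO: ${converged:,}/year")
print(f"Savings:       ${sprawl - converged:,}/year "
      f"({(sprawl - converged) / sprawl:.0%})")
```

With these assumed numbers the converged setup costs roughly half as much per year; the exact ratio matters less than the shape of the model, where every extra independent stack carries its own administration and tooling overhead.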

The same principle applies to big data management across an entire organization. When it comes to managing big data and Hadoop security, it's easy to get caught up in the immensity of it all. Because big data is so versatile and fits so many use cases, it ends up being used by any number of divisions within a company. This creates silos and a general desire to hold onto data sets. In other words, big data ends up in a sprawl of its own, becoming that much more unwieldy and complicated, which is a serious problem for a technology that's already complex to begin with.

The lesson every company should take away from the solution to virtualization technology sprawl is to break down the barriers to big data management. It comes down to ready access to all the necessary data, no matter what role an employee has within the company. Businesses shouldn't have to agonize over the cost of storing and processing data when the insights gained from big data analytics are so valuable. Most importantly, it's about keeping big data from getting too big, to the point where it becomes unmanageable and merely adds to a company's overall operating costs. Big data does introduce more complexity, but businesses that have learned to store and process it efficiently, whether through big data platforms or cloud-based services, are in a better position than companies still dealing with technology sprawl. The lessons learned from previous problems can indeed help solve the problems many experience today, as the sketch below of a single shared data set serving several departments illustrates.
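As one way to picture "shared access instead of departmental copies," here is a minimal PySpark sketch. The storage path, column names, and department filters are hypothetical; the idea is that each team derives its view from the same governed data set rather than maintaining its own extract.

```python
# Minimal sketch: several departments consume one shared, governed data set
# instead of each keeping a private copy. Path and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("shared-customer-data").getOrCreate()

# Single authoritative copy of the data (e.g., on HDFS or cloud object storage).
customers = spark.read.parquet("/data/shared/customers")

# Each department filters and projects the shared data for its own needs,
# so there is no per-team extract to store, secure, and keep in sync.
marketing_view = (
    customers
    .filter(F.col("opted_in"))
    .select("customer_id", "region", "segment")
)

finance_view = (
    customers
    .filter(F.col("account_status") == "active")
    .select("customer_id", "lifetime_value")
)

marketing_view.show(5)
finance_view.show(5)
```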

Big data · Virtualization

Opinions expressed by DZone contributors are their own.
