In the world of Anything-as-a-Service (I will leave the acronym to your imagination), Network-as-a-Service is not a new term. In fact, it even has its own Wikipedia page, which will tell you it has been in use for many years, well before the current crop of service-related terms in IT became popular.
Using all the data sources at your disposal to increase visibility into the state, health, and contextual performance of your network is key to just that.
To scale larger, technology has time and again become more atomic and adopted more distributed architectures. The 1990s were about storage’s transition, the 2000s about compute’s, and the 2010s are going to be about networking’s. When this transition is done, we will have a larger number of smaller links, different control mechanisms to use them, and a lower per-link cost.
The real point of the market-sizing exercise was to predict that SDN was going to be huge. Since that time, every major vendor has built SDN into its roadmap. Yes, deployments are lagging as people figure out just what to do with the technology, but there is no doubt that the competitive landscape has been forever changed.
The real reason the Theory of Constraints remains a statistical game in networking has nothing to do with technology. The reality is that once anyone grows accustomed to solving a problem in a particular way, they use that method to solve it over and over again.
Most of today’s 10GbE switches ship with SFP+ cages and a handful of QSFP cages. Now comes the hard part: what do I plug into these cages? There are lots of choices, each with its own pros and cons.
While most customers loved the concepts of a scale-out network that leveraged dynamic photonic connections instead of hard-wired paths, most of them also told us that they “didn’t really know (or want to know)” about the applications at all.
Integration goes well beyond OpenStack and DevOps. While those are extremely useful, there are also gains to be had by looking at nontraditional integration targets.
But if cloud has been around for so long, why will it take so long for it to become the dominant source of IT spend?
The question is how to take devices explicitly designed to operate discretely and make them behave as a single cohesive unit.
Plasticity in this context means that a seemingly rigid structure, like the law, can be deformed to meet a new need. Examples range from the mundane, such as structuring contracts to limit the downside of risky deals, to the industry-redefining methods of companies like Uber that challenge conventional practices and laws.
As we head through earnings season, we get the opportunity to learn a bit about the state of the networking industry. Each data point gives us a bit more information about how the competitive landscape is unfolding, and ultimately how this will shape the market. So what do we know so far?
Networking is a visual field. In fact, we can generalize and say all of IT is. Come to think of it, is there a technical field that isn’t? In this post, I’ll cover a handful of visualization tools that have recently helped me think through and communicate difficult Network Engineering stuff.
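As a taste of how little code it takes to get a useful picture, here is a minimal sketch that draws a small leaf-spine topology. The topology, node names, and the choice of networkx and matplotlib are my own illustration and assumptions, not the specific tools covered in the post.

```python
# Illustrative sketch only: draw a small, hypothetical leaf-spine topology.
# networkx + matplotlib are assumed here; they are not the tools from the post.
import itertools

import matplotlib.pyplot as plt
import networkx as nx

spines = [f"spine{i}" for i in range(1, 3)]
leaves = [f"leaf{i}" for i in range(1, 5)]

g = nx.Graph()
# Full mesh: every leaf connects to every spine.
g.add_edges_from(itertools.product(leaves, spines))

# Place spines on the top row and leaves on the bottom row.
pos = {n: (i, 1) for i, n in enumerate(spines)}
pos.update({n: (i * 0.5, 0) for i, n in enumerate(leaves)})

nx.draw(g, pos, with_labels=True, node_size=2200, node_color="lightsteelblue")
plt.show()
```

Even a toy diagram like this often communicates an oversubscription or failure-domain argument faster than a paragraph of prose.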
We’ve spent some cycles talking about user experience and workflow in previous articles. So in this post, we’re going to explore how these things relate to one another in the context of networking.
Virtualization, in conjunction with ADCs, can make a big difference for businesses, whether large or small.
Traditionally, just getting a network feature to work and interoperate was the goal, but now we must consider how that feature fits into the common workflows network engineers perform.
In network engineering, the “T” is very long: Configuring a network to achieve business goals requires considerable skill and knowledge. In this post, we’ll explore this in a little more detail.
This week I read a perfectly reasonable article by Tom Hollingsworth, which then deteriorated in the comments section into a “you don’t know how a switch works” exchange.
As SDN gains traction within the private sector, we are also seeing federal agencies adopt it as they identify the need for network infrastructure changes.
Just when you got used to Ethernet speeds being a nice decimal-based system where we simply add zeros every few years, someone threw in 40GbE a few years ago. And that’s OK; powers of two we can deal with. But 25?
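The odd-looking numbers fall out of lane arithmetic: 40GbE aggregates four 10G serial lanes, and the newer 100GbE generation aggregates four 25G lanes, which makes a single 25G lane an attractive per-server speed. Here is a small, simplified sketch of that math (it ignores encoding overhead and the older 10×10G flavor of 100GbE).

```python
# Simplified illustration: common Ethernet speeds as (lane count, lane rate).
# Ignores encoding overhead and earlier lane configurations such as 10x10G.
lane_configs = {
    "10GbE":  (1, 10),   # one 10G lane
    "40GbE":  (4, 10),   # four 10G lanes
    "25GbE":  (1, 25),   # one 25G lane
    "100GbE": (4, 25),   # four 25G lanes
}

for name, (lanes, lane_gbps) in lane_configs.items():
    print(f"{name}: {lanes} lane(s) x {lane_gbps}G = {lanes * lane_gbps}G")
```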
Whenever a new networking platform is evaluated, one of the early sales calls includes a packet walkthrough. But why?
Much has been published about the Open Compute Project. Initiated by Facebook, it has become an industry effort focused on standardizing many of the parts and components in the datacenter. It’s not a huge innovative leap, but it is a significant convenience.
Most people in the IT space understand the trend toward bare-metal everything (servers, switches, etc.). The movement to commodity hardware drives down costs most in infrastructure whose price is disproportionately determined by the cost of hardware.
Why can’t companies adapt to change? The ideal sector in which to see disruptive innovation at work is the technology sector. Many billions are spent bringing products to market that fail.
As connectivity becomes more and more a commodity service and requirements become more demanding, how that connectivity is provided has to evolve.