Sunday, February 9, 2014

Blueprint: SDDC – Moving Beyond the Early Adopter

by Steve Riley, Technical Director, CTO Office, Riverbed

The term “early adopter” can carry something of a stigma. We tend to think of early adopters as technophile geeks who overpay for some new piece of consumer technology that is only partially functional. A classic example is the earliest automobiles: their buyers were scoffed at because the cars were initially less reliable than the horses that were the standard form of transportation.

What many people failed to realize, however, was that automotive technology would rapidly improve until those who insisted on continuing to use horses and wagons were seen as the odd ones out. This has happened with business technology as well over the last several decades, with some organizations (such as many doctors’ offices) reluctant to adopt computerized systems until they were forced to in order to remain competitive. The challenge for any company is to know whether a new technology will be beneficial long-term, or if it's merely a flash in the pan.

Under the Hood in the Business

One area of technology that is relevant to business performance – but often goes unnoticed by the end user – is that of the corporate data center and its underlying network architecture. While the user is concerned with access to applications and services, similar to the way we evaluate a car's appearance and stereo system, what is “under the hood” will make or break the experience in the long run.

Key to business success in the current economic climate is agility: the ability to quickly adapt to changing business circumstances. To that end, virtualization and cloud computing have emerged to accelerate deployment of the services an organization needs, when they’re needed. But the need for agility goes beyond virtualized software applications and servers. Businesses are quickly growing beyond the network “box,” or the limitations of the physical network infrastructure.

What Is Software-Defined Networking?

To overcome the rigidity of the traditional data center network, software-defined networking (SDN) has emerged as a popular solution. Traditional network equipment bundles the decision-making logic (the “control plane”) and the data routing mechanism (the “forwarding plane”) into a single box. In SDN, these functions are separated. Boxes still move data, but the decisions are made by software running on general-purpose computers. SDN provides the fundamentals for effective network virtualization.
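The control/forwarding split described above can be made concrete with a toy model. This is an illustrative sketch only, assuming hypothetical `Controller` and `Switch` classes; it does not reflect any real SDN product's API, but it shows the division of labor: the controller decides, the switch merely matches and forwards.

```python
# Toy model of SDN's separation of concerns (hypothetical names, not a real API):
# the control plane computes decisions centrally; the forwarding plane only
# matches packets against flow entries that were pushed down to it.

class Switch:
    """Forwarding plane: matches traffic against installed flow entries."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}          # destination address -> output port

    def install_flow(self, dst, port):
        self.flow_table[dst] = port   # rule pushed down by the controller

    def forward(self, dst):
        # No local decision logic: either a rule exists, or the packet
        # would be sent to the controller for a decision.
        return self.flow_table.get(dst)

class Controller:
    """Control plane: holds the topology and decides where traffic goes."""
    def __init__(self):
        self.switches = {}

    def register(self, switch):
        self.switches[switch.name] = switch

    def program_route(self, switch_name, dst, port):
        # The decision is made centrally, then pushed to the hardware.
        self.switches[switch_name].install_flow(dst, port)

controller = Controller()
s1 = Switch("s1")
controller.register(s1)
controller.program_route("s1", "10.0.0.5", port=3)
print(s1.forward("10.0.0.5"))  # the port the controller chose: 3
```

Because all decision logic lives in the controller, changing network behavior means changing software in one place rather than reconfiguring every box.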

Administrators are already familiar with the benefits of server virtualization, which has streamlined workload management in organizations of all sizes. By deploying right-sized application-specific logical servers over a farm of inexpensive general-purpose physical server hardware, resource utilization is increased and provisioning can be accomplished much more quickly.

As server virtualization became more commonplace, desktops soon followed. Rather than provision each machine and piece of software individually, IT soon discovered the advantages of centralizing these processes and delivering them either through local servers, over the WAN, or even over the Internet.

SDN relies on well-defined application programming interfaces (APIs), which allow an organization to develop specialized software that extends functionality beyond what is available out of the box. Load balancing, for example, no longer requires an expensive specialized appliance in an SDN environment, but can be handled with software and provisioned in a “service chain” along with other networking services such as firewalls. These services run on commodity hardware that is sized (and can be resized) as appropriate. The underlying physical network is simplified, and redundant tools can be eliminated because resources can be moved around as needed. Adjustments to the network can be made in real time through software applications, rather than having to frequently replace or reconfigure physical devices in the data center. And SDN delivers the same benefits as other virtualization initiatives, such as the ability to house logically separate entities on a single device, even if they have conflicting requirements that would ordinarily cause compatibility issues.
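The service-chain idea can be sketched in a few lines. This is a hedged illustration, assuming hypothetical `firewall` and `load_balancer` functions; real service chains operate on live traffic, but the composition principle is the same: each software service handles the packet and passes it to the next.

```python
# Hypothetical sketch of a "service chain": software network services
# (a firewall, then a load balancer) composed in sequence on commodity
# hardware. None of these names correspond to a real SDN product.

import itertools

def firewall(packet, blocked=frozenset({"203.0.113.9"})):
    # Drop packets from blocked sources; pass everything else along.
    return None if packet["src"] in blocked else packet

def make_load_balancer(backends):
    rr = itertools.cycle(backends)      # simple round-robin policy
    def load_balancer(packet):
        packet["dst"] = next(rr)        # rewrite destination to a backend
        return packet
    return load_balancer

def run_chain(packet, services):
    for service in services:
        packet = service(packet)
        if packet is None:              # a service dropped the packet
            return None
    return packet

chain = [firewall, make_load_balancer(["10.0.0.1", "10.0.0.2"])]
print(run_chain({"src": "198.51.100.7", "dst": "vip"}, chain))  # forwarded
print(run_chain({"src": "203.0.113.9", "dst": "vip"}, chain))   # dropped
```

Adding or resizing a service in this model is a code change, not a hardware purchase, which is the operational point the paragraph above makes.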

Virtualizing the Network

To varying degrees, network virtualization isn’t new. Virtual LANs (VLANs) create logical local network segments across distinct physical network segments. Virtual switches manage the traffic between virtual machines, on either the same or separate physical hosts. But neither of these techniques can be considered full network virtualization.
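The VLAN mechanism mentioned above rests on a simple trick: a 4-byte 802.1Q tag inserted into the Ethernet frame lets one physical link carry many logical segments. A minimal sketch of that tagging, in Python, assuming raw frames as byte strings:

```python
# Sketch of 802.1Q VLAN tagging, the mechanism behind "partial" network
# virtualization: a 4-byte tag inserted after the two MAC addresses
# assigns each frame to a logical LAN segment.

import struct

TPID = 0x8100  # 802.1Q Tag Protocol Identifier

def add_vlan_tag(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an 802.1Q tag after the two 6-byte MAC addresses."""
    tci = (priority << 13) | (vlan_id & 0x0FFF)   # 3-bit priority + 12-bit VLAN ID
    tag = struct.pack("!HH", TPID, tci)
    return frame[:12] + tag + frame[12:]

def vlan_id_of(frame: bytes) -> int:
    """Read the VLAN ID back out of a tagged frame, or -1 if untagged."""
    (tpid,) = struct.unpack("!H", frame[12:14])
    if tpid != TPID:
        return -1
    (tci,) = struct.unpack("!H", frame[14:16])
    return tci & 0x0FFF

frame = bytes(12) + b"\x08\x00" + b"payload"   # dst MAC, src MAC, EtherType, data
tagged = add_vlan_tag(frame, vlan_id=100)
print(vlan_id_of(tagged))  # 100
```

The 12-bit VLAN ID caps a network at 4,094 usable segments, one reason VLANs alone fall short of the full, decoupled virtualization that SDN enables.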

Administrators are beginning to consider whether it would be beneficial to bring full virtualization to the network and, if so, how. For years this has been considered a legitimate possibility, but there have been concerns. Managing state changes, access control lists, and counters in logical networks with thousands of virtual nodes can be a real challenge. It turns out that SDN is very good at solving these particular challenges, and with SDN it becomes possible to build fully virtualized networks completely decoupled from the underlying hardware.

The End Result: Software-Defined Data Centers

Data centers have enjoyed the benefits of compute and storage virtualization for many years. SDN brings effective virtualization to the network. The logical culmination of all these, then, is the software-defined data center (SDDC).

The SDDC is characterized by broad programmability across all elements: compute, storage, and networking. Consumable services are decoupled from hardware and implemented as abstractions that, for all practical purposes, behave just like their old-fashioned physical counterparts. But they’re free from old-fashioned physical constraints: they can be relocated as necessary, scaled according to demand, and billed according to usage. Applications will require no fundamental reconfiguration to keep processes running normally.

The software-defined data center delivers benefits in several important areas:
  • Today’s applications impose increasingly complex infrastructure requirements that can be a challenge to meet while ensuring proper quality of service. The delicate balance of meeting each requirement without harming another process is improved by the level of abstraction the SDDC makes possible.
  • Because resources are provisioned on demand, developers are free to focus on the business functionality of applications without undue concern about whether the network can respond—the network in an SDDC automatically reacts to changing application requirements.
  • Combining a more consolidated and centralized control framework on top of commodity hardware means there are fewer specialized physical components that can break down and inhibit operations. In addition, centralized control brings improved visibility, which makes it more difficult for attackers to hide and conduct malicious actions.
  • With a reduced need for specialized network equipment, organizations employing an SDDC will likely see reduced capital and operational expenditures. With IT budgets frequently first on the chopping block in businesses, the SDDC is an ideal way to ensure continued operations at a lower cost.
A fully software-defined data center will be a game-changer for those organizations that successfully execute the vision. But it will require careful planning, and it may still be several steps in the future for many companies. Even without being early adopters, businesses today can look ahead and begin to prepare, for example by conducting test implementations of SDN and deepening their experience with virtualization.

Just as the automobile quickly redefined travel, the SDDC is likely to define the corporate network in the years to come. Organizations should look past the growing pains of the technology and plan how and when to make the transition, so that they don’t find themselves eating the dust of the competition.

About the author

Steve Riley is Technical Director in the Office of the CTO at Riverbed Technology. His specialties include cloud computing, information security, compliance, privacy, and policy. Steve has spoken at hundreds of events around the world. Before joining Riverbed, he was the cloud security evangelist at Amazon Web Services and a security consultant and advisor at Microsoft. Steve enjoys sharing his opinions about the intersection of technology and culture.

About Riverbed

Riverbed delivers application performance for the globally connected enterprise. With Riverbed, enterprises can successfully and intelligently implement strategic initiatives such as virtualization, consolidation, cloud computing, and disaster recovery without fear of compromising performance. By giving enterprises the platform they need to understand, optimize, and consolidate their IT, Riverbed helps enterprises build a fast, fluid, and dynamic IT architecture that aligns with the business needs of the organization. Additional information about Riverbed (NASDAQ: RVBD) is available at


NTT Teams with ALU and Fujitsu on Server Study

NTT has launched a collaborative study with Alcatel-Lucent Japan and Fujitsu to develop server architecture for core systems of telecom networks.

NTT said it is interested in servers based on general-purpose hardware that enable faster development of applications. In this server architecture, all network functions would be realized on network-wide virtualized hardware.

NTT Laboratories will contribute core technologies. Alcatel-Lucent Japan is developing virtualization and orchestration technologies for network server systems, and Fujitsu has extensive experience in systematization of distributed computing and maintaining large-scale server systems.

The project is now underway.

Pluribus Networks Unveils its Virtualized Data Center Architecture

Pluribus Networks, a start-up based in Palo Alto, California, unveiled its "Freedom" architecture for integrating compute, network, storage and bare-metal hypervisor OS technologies.

Pluribus said the Freedom platform brings full bare-metal control and visibility into the network through a powerful, Unix-style API to deliver true inNetwork Application Programmability, inNetwork virtualization, inNetwork analytics, and inNetwork automation.

The solution is based on a distributed network operating system with hypervisor bare-metal virtualization of computing resources (CPU, memory, and storage) and of the merchant silicon switch chip. This is matched to a powerful server platform combined with a high-density 10/40 GbE merchant silicon switch and network processor. The company said its technology partners include Intel and Broadcom.

In the Freedom architecture, the network switch becomes a true extension of the server. Merchant silicon chips are fully integrated into the operating system, controlled and virtualized like a NIC, and used as an offload/hardware acceleration engine for application flows and network functions. The network switch is managed by a server-class control plane through multi 10Gbps high-speed connections, unleashing a new class of services and functions to run directly “inside” the network; examples include the ability to run scalable monitoring and analytics for “physical” and “virtual” (tunneled) flows, free of taps and external monitoring gear.

Key components of the Pluribus solution include:

1) Netvisor 2.0, the industry’s first and only bare-metal, distributed network hypervisor operating system with full integration of merchant silicon switch chips into the server hypervisor

2) The Freedom Server-Switch product line, the industry’s most programmable network services platform based on off-the-shelf, open components to truly program, virtualize and automate the network exactly like a server

3) Pluribus Networks Freedom Care, 24x7x365 support with escalation engineers in the U.S., India, and China

4) Freedom Development Kit (FDK), which allows developers to experience true inNetwork™ application programmability (with Unix-style tools such as C and Java) to support scalable and dynamic deployment of network-aware mission critical applications

The company said its architecture simplifies the infrastructure by eliminating:

  • Separate monitoring network
  • Separate SAN
  • Separate overlay-underlay
  • Separate external controllers
  • L4-L7 appliance sprawl
  • Separate servers for services and orchestration (PXE, DHCP, DNS, OpenStack controllers, Argus, Wireshark, and more)

Pluribus expects to enter general availability in a few weeks. Oracle and CloudFlare are reference trial sites.

  • Pluribus is headed by Kumar Srikantan (CEO), who was previously VP/GM of Hardware Engineering for the Enterprise Networking Business at Cisco, where he was responsible for hardware engineering execution across Cisco’s Enterprise Networking portfolio.
  • Investors in Pluribus include New Enterprise Associates, Menlo Ventures, Mohr Davidow and China Broadband Capital.