Sunday, May 11, 2014

Video: Microsoft's Brad Booth on Building Hyper-Scale Data Centers

In this video, Brad Booth, Principal Engineer at Microsoft, talks about radically changing the way data centers are built. The old way of doing things simply did not achieve the scale that Microsoft needed.  The basic idea is to build a "castle on a cloud" - essentially a giant warehouse that can accommodate all the technology needed.

Another prominent feature of Microsoft's data center strategy is the concept of "crop rotation" - key technology is swapped out about every 36 months and replaced by updated systems.  Here he previews some of the new technology in the next crop rotation cycle.

Filmed at Ethernet Technology Summit

See 3-minute video:

Saturday, May 10, 2014

Fujitsu Expands its SDN Portfolio with Multi-layer WAN Solutions

Fujitsu introduced the second round of products in its Intelligent Networking and Computing Architecture, which uses SDN to optimize network-wide flows across the entire ICT platform.  The first round of SDN solutions from Fujitsu included network virtualization products for data centers. This round focuses on multi-layer, wide-area implementations.

Fujitsu said the new products, which became available for ordering beginning May 9 in Japan, enable virtual networks with centralized management across WANs. The solutions provide visualization of communication routes for each service and provide control and management functions, including for existing IP-communications equipment. The new set includes:

  • Virtuora NC software -- centrally manages network-configuration information and integrates virtual networks. It uses Fujitsu's proprietary routing-design engine to determine the best route based on multiple types of information, including number of hops and guaranteed bandwidth. It also provides service chaining functionality that selects and connects to functions (such as firewalls and load balancers) available on a wide-area network. It enables visualization of communications-routing information for each service in an easy-to-read display, simplifying virtual network operations that usually require a high level of specialized knowledge and contributing to lower operating expenses.
  • Virtuora SN-V software -- carries out virtual-network control and user-data transport processes based on the information set up by Virtuora NC. The software, which runs on standard x86 servers, uses Fujitsu's proprietary execution-control technology to achieve high-speed data-communications processing. It is controlled by Virtuora NC, and it can autonomously perform routing-trouble monitoring and instant switchover when faults are detected. Virtuora SN-V uses standard OpenFlow 1.3(4) protocol as its control interface.
  • Proactnes II QM software -- provides for network-quality management. The latest release analyzes communications traffic, then instantly detects any deterioration of quality and accurately identifies problem areas. It reports information on degraded quality to Virtuora NC, and supports the NVGRE protocol and the Q-in-Q protocol.
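The route selection described for Virtuora NC -- picking a best path from multiple inputs such as hop count and guaranteed bandwidth -- can be sketched as a constrained shortest-path search. The sketch below is illustrative only, not Fujitsu's proprietary engine; the topology, bandwidth figures, and function names are all hypothetical.

```python
from heapq import heappush, heappop

def best_route(links, src, dst, min_bw):
    """Fewest-hop path using only links that meet the bandwidth guarantee.

    links: dict mapping node -> list of (neighbor, available_bw_gbps)
    """
    # Prune links that cannot satisfy the guaranteed-bandwidth constraint.
    usable = {n: [(m, bw) for m, bw in nbrs if bw >= min_bw]
              for n, nbrs in links.items()}
    # Dijkstra's algorithm with hop count as the cost metric.
    queue, seen = [(0, src, [src])], set()
    while queue:
        hops, node, path = heappop(queue)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, _bw in usable.get(node, []):
            if nxt not in seen:
                heappush(queue, (hops + 1, nxt, path + [nxt]))
    return None  # no route satisfies the constraint

# Hypothetical topology: available bandwidth in Gbps on each link.
net = {"A": [("B", 10), ("C", 1)],
       "B": [("D", 10)],
       "C": [("D", 10)],
       "D": []}
print(best_route(net, "A", "D", min_bw=5))  # ['A', 'B', 'D']
```

With a 5 Gbps guarantee, the A-C link (1 Gbps) is pruned and the route goes via B; a production engine would of course weigh many more inputs than these two.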

NEC Demos SDN with Microsoft's Hyper-V Network Virtualization

NEC Corporation of America (NEC) will be demonstrating a beta version of a new solution that enables Microsoft’s Hyper-V Network Virtualization (HNV) to co-exist with its own ProgrammableFlow SDN networking suite.

Last fall, NEC announced a solution that integrates network and compute orchestration for Windows Server 2012 R2 Hyper-V customers who use the NEC ProgrammableFlow networking suite along with Hyper-V and System Center Virtual Machine Manager (SCVMM). This solution is now generally available in North America. Once both solutions are GA, customers will have, for the first time, a single point of network policy administration that integrates both tunnel-based network virtualization (ProgrammableFlow and HNV) and fabric-based network virtualization (ProgrammableFlow and SCVMM).

“Having two different options for network virtualization significantly improves flexibility for customers who are virtualizing their networks,” said Don Clark, director of business development, NEC Corporation of America. “Best of all, customers also gain the ability for end-to-end configurability and visibility throughout their networks. With each new SDN development, NEC continues to improve our customers’ ability to reduce the manual processes and complexity related to conventional networks, improve agility of network services, and increase performance and throughput.”

NEC will be showcasing this demonstration during Microsoft’s TechEd North America 2014 conference May 12-15 in Houston. General availability of the integration of Hyper-V Network Virtualization and the ProgrammableFlow Networking Suite is expected in the second half of 2014.

Friday, May 9, 2014

Tata Tests 400G Long-Haul Subsea Network with Huawei

Tata Communications has conducted a 400G field trial on a subsea network over 6000 km.

The test, which was conducted in partnership with Huawei and Huawei Marine, demonstrated an optical transmission of 400G signals, an industry-first for a submarine cable system of this length.

Huawei and Huawei Marine’s technical solution adopted the modulation format of Dual Carrier Polarisation Division Multiplexing Quadrature Phase Shift Keying (DC-PDM-QPSK), an innovative Faster-Than-Nyquist (FTN) compensation and recovery algorithm, proprietary clock recovery technology and Soft Decision Forward Error Correction (SD-FEC) technology to address the problems of high-speed signal distortion and unstable clocks. The use of such advanced technology underpins Huawei and Huawei Marine’s significant commitment to investing in research and development to meet the needs of their customers.
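As background on the modulation involved, QPSK encodes two bits per symbol as one of four carrier phases. The toy mapping below illustrates only that basic idea; Huawei's actual system layers dual-carrier polarization multiplexing, FTN compensation and SD-FEC on top of it, none of which is shown here.

```python
import cmath
import math

# Gray-coded QPSK constellation: each 2-bit pair maps to one of four
# phases on the unit circle (45, 135, 225, 315 degrees).
CONSTELLATION = {
    (0, 0): cmath.exp(1j * math.pi / 4),
    (0, 1): cmath.exp(1j * 3 * math.pi / 4),
    (1, 1): cmath.exp(1j * 5 * math.pi / 4),
    (1, 0): cmath.exp(1j * 7 * math.pi / 4),
}

def qpsk_modulate(bits):
    """Map an even-length bit sequence to complex QPSK symbols."""
    pairs = zip(bits[::2], bits[1::2])
    return [CONSTELLATION[p] for p in pairs]

# Four bits become two symbols, at +45 and +225 degrees.
symbols = qpsk_modulate([0, 0, 1, 1])
```

Gray coding means adjacent phases differ by one bit, which limits the damage a single phase-decision error can do; the FEC layers in the trial exist to clean up whatever errors remain after 6,000 km.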

"The 400G technology of Huawei & Huawei Marine demonstrates that our existing subsea network asset is capable of supporting future, next-generation transmission technology as shown in the 400G trial. Tata Communications constantly assesses new technology to expand our capabilities and to enhance our ability to support traffic growth demand from our customers," said Mr. Hon Kit Lam, Vice President, International Transmission and IP business, Tata Communications.

Promisec Offers Endpoint Security Built on Microsoft Azure

Promisec introduced a suite of cloud-based solutions built on Microsoft Azure and designed to provide endpoint security for small-to-medium enterprise organizations.

The Promisec Integrity enables IT organizations to ensure compliance, defend against cyber threats and validate endpoint integrity across all deployed security and IT endpoint technologies. The full suite of cloud-based offerings includes a “freemium” service that checks for specific advanced malware, as well as a more comprehensive security solution with patch management validation, endpoint protection validation services and unauthorized application visibility.

Promisec said its Software-as-a-Service offering is able to detect some of the most sophisticated malware and cyber threats and provide actionable insight for remediating issues and limiting business impact. Built for environments of 5,000+ total endpoints, the solution ensures that all leading anti-virus agents are up to date and running, verifies that leading patch management solutions are deployed and operational, and informs definitively when untrusted, blacklisted software is running within a company’s environment.

“Promisec’s cloud-based Integrity offering was easy to deploy and saved me time by pulling together a view of all my endpoints which immediately gave us a comprehensive view of each end user system,” said Nick Sciarra, IT Manager of Corporate Synergies Group.  “Without Promisec Integrity, I would have to leverage multiple applications and inspection tools to create the same crisp understanding that Promisec can provide.”

Huawei Wins CEM Contract in Azerbaijan

Bakcell, the leading mobile operator in Azerbaijan, selected Huawei to build up a new service quality evaluation indicator system.  The deployment features the HUAWEI SmartCare system featuring a Key Quality Indicator (KQI) to provide network quality, service quality monitoring capabilities and VIP Care. It will also cover enterprise user Service Level Agreement (SLA) assurance, roaming analysis, complaint handling assistance, and other auxiliary functions for GSM/UMTS/LTE networks. Huawei will also provide professional services.  Financial terms were not disclosed.

SiteMinis Picks IBM's SoftLayer for Cloud Services

SiteMinis, a mobile Web technology and services provider, has adopted IBM Cloud services, including SoftLayer infrastructure, to enable the dynamic mobile applications and websites that it offers.  IBM said its SoftLayer solution allows SiteMinis to deliver a comprehensive infrastructure platform at a lower cost, with exceptional ease of use and reliability.  Financial terms were not disclosed.

Thursday, May 8, 2014

Cisco Introduces Virtualized and Programmable Elastic Access Portfolio

Cisco introduced a new "Elastic Access" portfolio of software and hardware products designed to bring virtualization, programmability, economical scale and architectural convergence to the access segment of the network.

The new products, which are key elements of Cisco's recently introduced Evolved Programmable Network (EPN), leverage software-defined networking (SDN) to deliver new levels of service agility, including "bandwidth on the fly" and Cisco ESP orchestration.

Cisco said its goal is to extend autonomic access operations and management, along with zero-touch nV provisioning, to the furthest end points. Additionally, highly secure auto discovery, auto-configuration, uninterrupted management and configuration repair of network elements are enabled, and automatic provisioning and management for "bandwidth-on-the-fly" extensibility from the core to the most remote access point and back becomes possible. As a result of Cisco Elastic Access solutions working in conjunction with the Cisco ESP, orchestration, automation and simplification are extended to the last mile, creating the opportunity to reduce provisioning steps by up to 56 percent.

The new Cisco Elastic Access portfolio includes:

Cisco ME 4600 Series Multiservice Optical Access Platform: scales aggregation services and enables operators to offer both end-user and wholesale services using Gigabit Passive Optical Network (GPON) technology.

Cisco ASR 902 and ASR 920: these are converged time-division multiplexing (TDM) and Ethernet aggregation platforms that offer reduced footprint, cost, and feature compatibility with the ASR 903.  They support autonomic networking for IP devices and complement the ASR 901 cell site router.

Cisco ME 1200 Ethernet Access Device: a fully featured service delivery demarcation device that is ready for Metro Ethernet Forum (MEF) 2.0 services for today's mobile and cloud applications.

Virtualized Elastic Access Management: virtualized management-based controllers on the Cisco Unified Computing System (Cisco UCS) that allow cloud-based management to scale to thousands of access devices.

"With this announcement, Cisco is leading the way to deliver the benefits of virtualization, management and software-defined networking-based advances to the last mile," said Liz Centoni, Cisco vice president and general manager of the service provider access group. "These new Cisco Elastic Access products demonstrate our commitment to delivering the most comprehensive and programmatic approach to software-defined networking and network function virtualization in the telecommunications and networking industries. With our elastic core, edge and access products, the Cisco Evolved Programmable Network is the most programmable, end-to-end solution on the market."

  • Cisco's Evolved Programmable Network (EPN) is the foundation of its Open Network Environment (ONE) architecture. The EPN is the infrastructure layer consisting of physical and virtual devices working together to form an end-to-end unified fabric for a programmable network. The EPN is designed to converge edge, core, and data center functions using Cisco's portfolio of technologies. 

Nokia: Three Metrics of Disruptive Innovation

by David Letterman, Nokia

‘Disruptive innovation’ has been a favorite discussion topic for years. I am sure every industry, every company and every innovation team, has had rounds and rounds of discussion about what disruptive innovation means for them.

Rather than attempting another end-all, be-all definition for innovation, let’s focus on the passion it evokes and the permissions it enables. Disruptive innovation, as an internal charter, allows expansion beyond previous boundaries; it gives permission to go after new markets, new customers and new business models.  If left unencumbered, it can guide the company to proactively find and validate big problems for which external partners, new products and new markets can be created.  Disruptive innovation can be a source of otherwise unattainable revenue growth and market share.

Innovation is about converting ideas into something of value, making something better AND hopefully something that our customers are willing to pay for. For the purpose of putting the framework into two buckets, let’s distinguish between incremental and disruptive innovation.

Most innovation in established companies is developed by corporate innovation engines, whose job is to continually improve their products and services.  This continuous innovation delivers incremental advances in technology, process and business model.  Specialized R&D teams can add value to these innovation engines by solving problems differently or having a specific charter to go after larger levels of improvement.  Although the risks are higher, breakthrough innovation occurs when these teams achieve significantly better functionality or cost savings.  This combination of corporate and specialized incremental innovation is absolutely necessary for companies to keep up with or get ahead of the competition – something most successful companies are very good at.

Disruptive innovation, on the other hand, is much more difficult for the corporate machinery. Here, new product categories are created, new markets are addressed and new value chains are established.

There is no known baseline to refer to.

Disruption implies that someone is losing – being disrupted. So clearly you won’t find a product roadmap for it in the company catalog. And it’s not even necessarily solving the problems of the current customer base. This is an area where, with the right passion, permissions and charter, a specialized innovation team can take a lead role and create significant growth for the company.

Here is my take on three characteristics of teams chartered to do disruptive innovation -

  • A strong outside-in perspective is crucial, for not only identifying the problem and validating the opportunity, but also for finding and creating a solution, and perhaps even taking it to market. Collaboration is everything when it comes to disruption.
  • Risk quotient - Arguably, all innovation contains some element of risk.  But in this case of proactively seeking disruption, we must allow for an even higher degree of risk. For most innovation teams, ‘Fail fast’, ‘Fail often’ and ‘Fail safe’ are the mantras.  But in the case of disruptive innovation, when we are seeking new markets, perhaps based on new technologies, our probability of success is untested. And to the incumbents, this new solution is unacceptable, often something they have never considered or simply cannot deliver.  If you are solving a really important problem, it justifies embracing the risk, revalidating the opportunity and digging deeper to create a solution.  Redefine risk in the context of meaningful disruption – ‘Fail proud’ and keep on solving.  Remember SpaceX?
  • How disruptive is disruptive - For a new entrant to eventually become disruptive it needs to be significantly better in functionality, performance and efficiency - or much cheaper - than the alternatives.  Although the benefits may initially only be noticed by early adopters, for the solution to disrupt a category it must be made available to, and eventually accepted by, the masses.

A simple example that addresses these three characteristics is how the Personal Navigation Device (PND) market was disrupted by the smartphone.

In the early and mid 2000s, Garmin and TomTom had a lock on the personal navigation market. When Nokia and the other phone manufacturers began delivering GPS via phones, they were coming to the market via a totally new channel, embedding the functionality in a device that the consumer would carry with them at all times.

The incumbents may have acted unfazed.  But in reality, they couldn’t respond to the threat.  The functionality may have been inferior to what they were selling, but the cost was perceived as free.  To the incumbents, this was totally unacceptable, and the business model was “uncopiable.” What started as a feature in just select high-end phones would soon be adopted as standard functionality in every smartphone, and expected by end users by default. In just two years, there were five times as many people carrying GPS-enabled phones in their pockets as there were PNDs being sold.

Silicon Valley Open Innovation Challenge

There are many other characteristics you might consider to be the most important measurements for disruptive innovation.  For me, these three are as good as any.  It comes down to the simple questions of “Why does it matter?”  “What problem does this empower us to solve that was otherwise unmet?” and “How can we provide significantly positive impact for the company and for the people the innovation will serve?”

Nokia’s Technology Exploration and Disruption (TED) team is chartered to look at exactly these questions. In its search for the next disruption, it has launched the Silicon Valley Open Innovation Challenge.

This competition is an open call to Silicon Valley innovators to collaboratively discover and solve big problems with us, and to do so in ways that are significantly better, faster or cheaper than we could have done alone. We see Telco Cloud and colossal data analytics as the two major transformational areas for the wireless industry, opening up possibilities for disruption – and those are the focus themes for the Open Innovation Challenge. We’re willing to take the risk because we know the rewards of innovation are worth it.

Click here to submit your ideas and be part of something truly disruptive. Apply now! The deadline is May 19, 2014.

David Letterman works in the Networks business of Nokia within its Innovation Center in the heart of Silicon Valley. Looking after Ecosystem Development Strategy for the Technology Exploration and Disruption global team, David is exploring how to create exponential value by pushing the boundaries of internal innovation. An important initiative is Nokia’s Silicon Valley Open Innovation Challenge, calling on the concentrated problem-solving intellect of the Valley, to solve two of the biggest transformations for Telco: Colossal data analytics and Telco Cloud. Prior to his current position, David worked for a top tier Product Design and Innovation Consultancy, and held various business development and marketing management roles during a previous 10-year tenure with Nokia.

Nokia invests in technologies important in a world where billions of devices are connected. We are focused on three businesses: network infrastructure software, hardware and services, which we offer through Networks; location intelligence, which we provide through HERE; and advanced technology development and licensing, which we pursue through Technologies. Each of these businesses is a leader in its respective field. Through Networks, Nokia is the world’s specialist in mobile broadband. From the first ever call on GSM, to the first call on LTE, we operate at the forefront of each generation of mobile technology. Our global experts invent the new capabilities our customers need in their networks. We provide the world’s most efficient mobile networks, the intelligence to maximize the value of those networks, and the services to make it all work seamlessly.

HP's Helion Portfolio Pulls Together OpenStack Cloud Resources

HP introduced its "Helion" portfolio that brings together all of its resources in hardware, software, and services for private, public, and hybrid cloud solutions. The architectural vision for Helion is premised on OpenStack. HP also announced plans to invest more than $1 billion to support and deliver new open source cloud products and platforms in the new Helion portfolio in the years ahead.

“Customer challenges today extend beyond cloud. They include how to manage, control and scale applications in a hybrid environment that spans multiple technology approaches,” said Martin Fink, executive vice president and chief technology officer, HP. “HP Helion provides the solutions and expertise customers need to select the right deployment model for their needs and obtain the greatest return for their investment.”

As part of Helion, HP is introducing several new cloud products and services, including:

  • HP Helion OpenStack Community edition — a commercial product line of OpenStack that is delivered, tested and supported by HP. Available today, the community edition is a free version ideal for proofs of concept, pilots and basic production workloads. An enhanced commercial edition that addresses the needs of global enterprises and service providers will be released in the coming months.
  • HP Helion Development Platform — a Platform as a Service (PaaS) based on Cloud Foundry, offering IT departments and developers an open platform to build, deploy and manage applications quickly and easily. HP plans to release a preview version later this year.
  • HP’s OpenStack Technology Indemnification Program — protects qualified customers using HP Helion OpenStack code from third-party patent, copyright and trade-secret infringement claims directed to OpenStack code alone or in combination with Linux code.

  • HP Helion OpenStack Professional Services — a new practice made up of HP’s experienced team of consultants, engineers and cloud technologists to assist customers with cloud planning, implementation and operational needs.

HP Helion OpenStack-based cloud services will be made available globally via HP’s partner network of more than 110 service providers worldwide and in HP data centers.

HP noted that it currently operates more than 80 data centers in 27 countries. HP plans to provide OpenStack-based public cloud services in 20 data centers worldwide over the next 18 months.

NTT DOCOMO to Conduct 5G Experimental Trials

NTT DOCOMO announced plans to conduct experimental trials of emerging 5G technologies with leading vendors, including Alcatel-Lucent, Ericsson, Fujitsu, NEC, Nokia and Samsung.

The research program will look at the potential of 5G mobile technologies to exploit frequency bands above 6GHz and realize very high system capacity per unit area, and new radio technologies to support diverse types of applications including machine-to-machine (M2M) services. DOCOMO also expects to collaborate with other companies in its effort to test a wide range of 5G mobile technologies.

DOCOMO is looking for 5G systems to be ready for commercial deployment in 2020. The new system is expected to enable ultra-high-speed data transmissions at more than 10 Gbps.

"5G studies are starting to gain real momentum as we point toward 2020," said Seizo Onoe, Executive Vice President and Chief Technical Officer at DOCOMO. "I am delighted that we will collaborate on 5G experimental trials with multiple global vendors from this early stage."

DOCOMO will begin indoor trials at the DOCOMO R&D Center in Yokosuka, Kanagawa Prefecture this year, to be followed by outdoor field trials planned for next year. Key findings and achievements will be shared with research institutes and at international conferences to contribute to 5G standardization, which is expected to start from 2016. Key findings also will be utilized for research aimed at incubating future advanced technologies.

AWS Adds CloudFront CDN to Free Usage Tier

Amazon Web Services added its CloudFront content delivery network to its list of AWS Free Usage Tier benefits.

AWS Free Usage Tier is the company's introductory program for customers to launch new applications, test existing applications in the Cloud, or simply gain hands-on experience with AWS.

The free tier for Amazon CloudFront includes up to 50 GB data transfer and 2,000,000 requests per month aggregated across all AWS edge locations.
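A quick back-of-the-envelope check shows whether a workload fits inside those monthly limits. The limits below are the ones stated in the announcement; the sample workload is hypothetical.

```python
# CloudFront free-tier limits per the announcement (per month,
# aggregated across all AWS edge locations).
FREE_TIER_GB = 50
FREE_TIER_REQUESTS = 2_000_000

def within_free_tier(gb_transferred, requests):
    """True if a month's aggregated usage stays inside both limits."""
    return gb_transferred <= FREE_TIER_GB and requests <= FREE_TIER_REQUESTS

# Hypothetical month: 120,000 requests averaging ~300 KB each.
requests = 120_000
gb = requests * 300 / 1024 / 1024   # roughly 34 GB transferred
print(within_free_tier(gb, requests))  # True
```

Note that both limits must hold: a site serving many tiny objects can exhaust the request allowance long before the data-transfer allowance.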

BT Passes Two Thirds of UK Premises with Fibre Broadband

BT's open fibre network now passes more than 19 million homes and businesses, representing about two thirds of UK premises.

BT originally said it would cover 19m premises with fibre by the end of 2015. It passed that total in March this year, around 21 months earlier than planned, with the vast majority of that footprint being enabled by BT under its commercial plan. The remainder has been enabled in partnership with the public sector.

BT noted that its fibre rollout has been among the fastest in the world. The UK now has the widest fibre availability of the EU 'big 5' countries as well as the highest take-up of the technology and the most competitive marketplace.  UK fibre availability currently stands at 73 per cent of premises when all networks are taken into account. This widespread availability compares with just 20-25 per cent for France.

Gavin Patterson, Chief Executive, BT Group said: “Fibre broadband is the future and BT has invested billions of pounds to ensure as many people as possible can benefit. The early achievement of this milestone marks the culmination of several years of hard work by our engineers and planners. They have pulled out all the stops to bring fibre to a vast expanse of the country over a very short period and I would like to thank them for their efforts and commitment.

“Great progress has been made but we aren't stopping here. We need to ensure as many people as possible have access to fibre and that is why our engineering teams are working hard to extend the digital superhighway into rural areas.

"The UK broadband market is intensely competitive and consumers are enjoying fantastic value for money. Broadband speeds have increased dramatically over the last decade whereas prices have tumbled. Customers are the winners."

CyrusOne Expands Data Center in Phoenix

CyrusOne is breaking ground on a second data center at its Phoenix Campus in Chandler, Arizona. The new building will have 60,000 square feet of white floor space at full build, with up to 12 megawatts of power to serve customers in the Western region of the United States. The expansion adds to the more than 77,500 square feet of space already commissioned.

Taiwan's Accton Tech Joins Open Compute Project

Taiwan-based Accton Technology has joined the Open Compute Project and announced its plans to open source a design for a 10GbE top-of-rack switch and adapter to allow standard 19” rack switches to function in an Open Rack.

Accton said its Edge-Core AS5712-54X Top-of-Rack Switch will offer forty-eight 10GbE SFP+ and six 40GbE QSFP ports in a 1U form factor. The switch is based on Broadcom’s StrataXGS Trident II Ethernet Switch silicon, and has a CPU daughter module with an Intel Atom C2538 processor.  The adapter is a mechanical sled that enables any standard 19” rack-mountable 1U network switch to slide into an Open Rack, with a means to secure the adapter to the Open Rack and provide cabling to the 12VDC bus bar.  Accton plans to contribute complete design files for the hardware, including schematics, Gerber files, and mechanical design when the designs are accepted by OCP.
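The port counts above determine the switch's aggregate front-panel capacity, which is simple to verify:

```python
# Edge-Core AS5712-54X port configuration, per the announcement:
# 48 x 10GbE SFP+ plus 6 x 40GbE QSFP in a 1U form factor.
ports = {10: 48, 40: 6}  # port speed in Gbps -> port count

aggregate_gbps = sum(speed * count for speed, count in ports.items())
print(aggregate_gbps)  # 720
```

That 720 Gbps of front-panel capacity (480 Gbps of 10GbE plus 240 Gbps of 40GbE) is well within what the Trident II switching silicon can forward at line rate.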

Wednesday, May 7, 2014

Video interview with Cisco: What is OpFlex?

In this video, Tom Edsall, CTO of the Insieme business unit at Cisco, introduces OpFlex, a new policy protocol designed to support physical and virtual switching infrastructure.  OpFlex provides an abstraction that Cisco believes is better suited than OpenFlow for scaling out policy.

Cisco has published an OpFlex draft in the IETF and will be releasing an OVS implementation in the public domain as well as a controller in the OpenDaylight consortium.  The company hopes that open APIs will enable more equipment to be brought in under the OpFlex policy umbrella.  OpFlex fits in with Cisco's Application Centric Infrastructure vision.  Edsall describes differences between the imperative control plane model of "traditional SDN" and the declarative control plane of its ACI model, saying that some things should be centralized while others are best distributed.

See 3-minute video:

In April, Cisco introduced OpFlex - a new networking protocol designed to open up its vision of Application Centric Infrastructure (ACI) in the data center for automated applications and interoperability with other software-defined networking (SDN) elements.

OpFlex is a southbound protocol that is co-authored by Citrix, IBM, Microsoft, and Sungard Availability Services. It provides a mechanism that enables a network controller to transfer abstract policy to a set of “smart” devices capable of directly rendering rich network policy on the device.  OpFlex will enable leading hypervisors, switches and network services (Layer 4-7) to self-configure, driven by application policy.

Cisco is submitting OpFlex to the IETF for standardization. It is also an open source contribution that Cisco is making to OpenDaylight in partnership with IBM, Plexxi and Midokura.  Other companies that are supporting OpFlex include Microsoft, Red Hat, F5, Citrix, Canonical, and Embrane.  Hypervisor and software vendors will support OpFlex-enabled virtual switches and extend the Cisco ACI policy framework in their virtual environments. Network services vendors like Avi Networks, Citrix, Embrane, and F5 Networks will be shipping an OpFlex agent with their appliances.

In addition, Cisco is working with OpenDaylight to create a 100 percent open source, ACI-compatible policy model and OpFlex reference architecture.

Compared to the current SDN model, Cisco said its Application Centric Infrastructure avoids the scalability/resiliency challenge of having a single SDN controller managing the state of the network. Its ACI approach is to distribute complexity to the edges, which can continue operating even when disconnected from a central policy manager.  It also would not require application developers to describe their requirements with low-level constructs.
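The contrast between the two control-plane models can be sketched in miniature: an imperative controller computes and pushes concrete forwarding state to every device, while a declarative controller distributes abstract intent that each "smart" device renders into local configuration itself. Everything below (class names, policy fields) is an illustrative sketch, not the OpFlex wire format or data model.

```python
# Declarative model in miniature: the controller ships intent, and each
# device translates it into concrete rules locally.  The policy schema
# here is invented purely for illustration.
POLICY = {"app-tier": {"allow_from": ["web-tier"], "ports": [443]}}

class SmartDevice:
    def __init__(self, name):
        self.name = name
        self.rules = []

    def render(self, policy):
        """Device-local rendering of abstract policy into concrete rules."""
        for group, spec in policy.items():
            for src in spec["allow_from"]:
                for port in spec["ports"]:
                    self.rules.append((src, group, port))

devices = [SmartDevice("leaf-1"), SmartDevice("leaf-2")]
for dev in devices:
    dev.render(POLICY)   # controller distributes policy, not flow entries

print(devices[0].rules)  # [('web-tier', 'app-tier', 443)]
```

Because each device holds the policy, it can keep enforcing (and re-rendering) it even if the controller becomes unreachable, which is the resiliency argument Cisco makes against the single-controller imperative model.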

Cisco is planning to support the OpFlex Protocol on the following Cisco products:
  • Cisco Application Centric Infrastructure, Nexus 9000 Series
  • Cisco Nexus 1000V
  • Cisco ASR 9000 Series
  • Cisco Nexus 7000 Series
  • Cisco ASA
  • Cisco SourceFire

Web Companies Call on FCC to Defend Open Internet

A coalition of leading Web companies published an open letter to the FCC asking commissioners to defend the principles of the Open Internet.  The letter comes in response to published reports that FCC Chairman Tom Wheeler is circulating new rules concerning Net Neutrality.  Signatories of the Open Internet letter include:

Level 3
Vonage Holdings Corp.
Yahoo! Inc.

Wind River Delivers Accelerated vSwitch Optimized for NFV

Wind River announced a new performance benchmark for its accelerated virtual switch (vSwitch) integrated within Wind River Carrier Grade Communications Server, which is designed for network functions virtualization (NFV).

Wind River said its accelerated vSwitch can deliver 12 million packets per second to guest virtual machines (VMs) using only two processor cores on an industry-standard server platform, in a real-world use case involving bidirectional traffic. This performance represents 20 times that of the standard Open vSwitch (OVS) software used in typical enterprise data centers. Providing unlimited scalability when instantiated on multiple cores, this industry-leading performance is achieved using up to 33% fewer CPU resources than other commercial solutions, with no requirement for specific hardware acceleration.
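To put the headline numbers in perspective, the per-core and baseline figures follow directly from the claims; the arithmetic below is ours, not Wind River's.

```python
# Headline claim: 12 million packets per second to guest VMs on two
# cores, at 20x the throughput of standard Open vSwitch.
total_pps = 12_000_000
cores = 2
ovs_speedup = 20

pps_per_core = total_pps // cores          # per-core throughput
ovs_baseline = total_pps // ovs_speedup    # implied standard-OVS rate

print(pps_per_core)   # 6000000
print(ovs_baseline)   # 600000
```

In other words, the claim implies 6 Mpps per core versus roughly 0.6 Mpps for standard OVS on the same setup, which is the kind of gap that motivates accelerated data paths for NFV workloads.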

“As a key element in our Carrier Grade Communications Server, our accelerated vSwitch was designed from the ground-up to incorporate the Carrier Grade features that are critically important for telecom networks that must deliver six 9s reliability,” said Mike Langlois, general manager of the communications business for Wind River. "For example, the accelerated vSwitch provides fast convergence during live migration of VMs, while minimizing the impact of dirty page updates. To allow for optimum resource allocation, it provides deterministic processing performance without the jitter of over 10% exhibited by the standard Open vSwitch. Finally, protocols such as LAG, VLAN tagging, and VXLAN provide the security features that are essential for telecom networks.”

Dell Charts a Cloud-agnostic and Open Approach

Dell reaffirmed its commitment to open, standards-based architectures on which to build public, private and hybrid clouds.

Dell highlighted an enterprise-ready OpenStack-based private cloud solution that it has co-engineered with Red Hat. As part of the expanded relationship, Dell was the first company to OEM Red Hat Enterprise Linux OpenStack Platform. In addition, Dell announced support for Docker containers in Red Hat Enterprise Linux.

In addition, Dell has partnerships with public cloud infrastructure providers like Google, Microsoft and CenturyLink, among others.

“At Dell, our cloud solutions are based on open architectures with no proprietary lock in. Customers get choice, flexibility and the maximum benefit from their investment,” said Michael Dell, Chairman and CEO of Dell. “Our partnerships with companies like Red Hat further demonstrate that Dell is the only truly open cloud vendor that’s helping customers design, build and manage across public, private and hybrid clouds.”

“Open source is the backbone of the cloud, and the cloud is inherently hybrid,” said Jim Whitehurst, president and CEO, Red Hat. “Our continued collaboration with Dell is about bringing the open hybrid cloud to the enterprise. Industry feedback on our collaboration has been outstanding, and I’m excited to continue working together to bring the value and power of OpenStack – and now OpenShift – to even more enterprises around the world.”

Open Network Install Environment Lab Opens

The University of Texas at San Antonio (UTSA) opened the first ONIE certification lab.

ONIE is an industry standard network boot loader for installing software on network switches.

“With each new platform and chipset, there is a significant amount of development work that is involved to ensure compatibility. ONIE certification and compliance leverages best practices to validate this process in the most expedient way possible,” said Carlos Cardenas, Associate Director, Cloud and Big Data Lab, UTSA. “We are pleased to launch the certification lab as the demand for standardization and reliability across the entire data center ecosystem – from servers to switches and now networking – becomes standard protocol.”