Showing posts with label OpenFlow. Show all posts

Wednesday, October 16, 2013

Layer123 SDN & OpenFlow World Congress: Platforms for Innovation

"The basic value proposition of SDN remains the same -- either you make money or save money," said Dan Pitt, Executive Director, Open Networking Forum speaking at the opening of the Layer123 SDN & OpenFlow World Congress in Bad Homberg, Germany.


What has changed in the year since the last SDN World Congress?

Attendance at the event has nearly doubled to about 700, and the companies presenting have increased in number and diversity as real-world use cases become the topic of discussion.

From the perspective of the ONF, Dan Pitt said OpenFlow has established its value as the open, vendor-neutral standard made for SDN.

With new SDN use cases showing CAPEX and OPEX savings, Pitt said there is no doubt that SDN will have a big market impact on both data centers and carrier networks.  The first signs of SDN hitting the mainstream press can now be seen in advertising from NTT Communications touting its software-defined capabilities.

Some of the ONF achievements for the past year:
  • Numerous releases to OpenFlow
  • Updates to OF-Config
  • Launched a new Mobile and Wireless working group
  • Launched a new Northbound Interface (NBI) working group
  • Established three sanctioned conformance testing labs: Indiana University, Beijing Internet Institute, and the University of New Hampshire
  • First OpenFlow-conformant product certification (NEC)
  • Formed a Chipmakers Advisory Board
  • Launched an open-source OpenFlow driver competition
  • Added 30 new companies, bringing total ONF membership to 113
  • Goldman Sachs joined the ONF Board
  • Established liaisons with other industry groups


Thursday, August 29, 2013

ONF Picks Beijing Internet Institute as First International Certified Testing Lab

The Open Networking Foundation (ONF) has selected Beijing Internet Institute (BII) as its first international certified conformance testing lab in Asia.

BII’s testing lab in China evaluates vendors’ networking products for interoperability, conformance, function, and performance. The company will expand its evaluation services by partnering with ONF to test SDN and OpenFlow conformance. The lab will also offer organizations the opportunity to receive OpenFlow certification of their products.


In addition, BII became a member of ONF to foster deployment and commercialization of SDN and the OpenFlow protocol amongst China’s service provider, data center, and enterprise markets.

“In support of ONF’s initiatives, we believe that the next generation of networking in China is SDN and the OpenFlow protocol,” said Liu Dong, director of BII. “Our goal is to help our service provider and enterprise users realize the benefits of this technology and demonstrate true conformance of SDN. We are honored to be named the second ONF-approved conformance testing center, as well as the first in Asia.”

“Conformance testing of commercial products is significant because it validates that the products are using the OpenFlow protocol and increases customer confidence in and acceptance of SDN,” said Dan Pitt, executive director of the Open Networking Foundation. “Having BII as the first ONF certified lab in Asia represents another significant milestone for ONF as it supports our efforts to accelerate the global adoption of open SDN. We welcome BII to our organization and look forward to increasing the awareness of the benefits of SDN on a global scale.”

https://www.opennetworking.org/openflow-conformance-testing-program
http://www.opennetworking.org

Wednesday, August 7, 2013

Ciena Collaborates with Research Nets on Software-defined Packet/Optical

Ciena is collaborating with CANARIE, Internet2 and StarLight to build a software-defined wide-area network that leverages OpenFlow across both the packet and transport layers.  The network features an open architecture carrier-scale controller and intrinsic multi-layer operation.

The network initially connects Ciena's corporate headquarters in Hanover, Maryland, USA with Ciena's largest R&D center in Ottawa, Ontario, Canada. International connectivity is achieved with Internet2 through the StarLight International/National Communications Exchange in Chicago and CANARIE, Canada's national optical-fiber-based advanced R&E network.

Ciena solutions included in the testbed:

  • OpenFlow v1.3-enabled 4Tb/s core switches, featuring 400G packet blades;
  • Transport – Layer 0 and Layer 1 OTN – network elements from Ciena’s industry-leading 6500 and 5400 converged packet-optical product families, configurable under extended OpenFlow protocol control;
  • A prototype open, modular and modifiable control software system that leverages open source components and is suitable for large-scale and geographically-distributed network control;
  • Multi-layer provisioning and control, driven by an abstracted northbound API;
  • Real-time analytics software designed to enable multi-layer resource optimization and dynamic network service pricing for revenue optimization.

"Going above and beyond a simple testbed, this live, fully functional network will drive continued innovation and demonstrate how a truly OPn network architecture can unleash the full power of SDN in the WAN. By building the industry’s first fully-featured, fully-open and fully-operational, end-to-end and multi-layer SDN-powered WAN, we can offer a real-world experience for customers and researchers to trial, refine and prove SDN concepts and technologies in both the network and the back office – without having to build a unique infrastructure for every use case," stated Steve Alexander, senior vice president and chief technology officer at Ciena.

http://www.ciena.com


Wednesday, June 19, 2013

ONF PlugFest Tests OpenFlow 1.3

The Open Networking Foundation (ONF) completed its third semi-annual PlugFest designed to drive interoperability, deployment, and commercialization of SDN and the OpenFlow protocol.

The event, which was hosted earlier this month at the Indiana Center for Network Translational Research and Education (InCNTRE), was attended by nearly 50 network engineers from 20 member companies with the common goal of ensuring that new SDN protocols work across all of their products.


Testing focused on OpenFlow versions 1.0, 1.2, and 1.3 in commercial and test controllers and hardware and virtual switches. The event also allowed member companies to work in a neutral environment to test implementations of OpenFlow-based SDN that would be commercially applied to service provider, data center, and enterprise markets.

The ONF said the addition of OpenFlow 1.3 allowed for innovative test cases spanning match actions and functions for IPv6 and MPLS.
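The IPv6 and MPLS matching exercised at the PlugFest is expressed in OpenFlow 1.3 through the OXM (OpenFlow Extensible Match) TLV format, which assigns each match field a numeric code. A rough sketch of the encoding (field numbers taken from the OpenFlow 1.3 specification; the encoder itself is illustrative, not a real controller API):

```python
import struct

OFPXMC_OPENFLOW_BASIC = 0x8000   # standard OXM class (OpenFlow 1.3 spec)
OFPXMT_OFB_IPV6_SRC = 26         # match field: IPv6 source address
OFPXMT_OFB_MPLS_LABEL = 34       # match field: MPLS label

def oxm_tlv(oxm_field, value, has_mask=False):
    """Encode one OXM match TLV: a 32-bit header followed by the value.

    Header layout: class (16 bits) | field (7 bits) | has-mask (1 bit) |
    payload length (8 bits).
    """
    header = (OFPXMC_OPENFLOW_BASIC << 16) | (oxm_field << 9) \
             | (int(has_mask) << 8) | len(value)
    return struct.pack(">I", header) + value

# Match MPLS label 16 (the label occupies the low 20 bits of a 4-byte value)
tlv = oxm_tlv(OFPXMT_OFB_MPLS_LABEL, struct.pack(">I", 16))
print(tlv.hex())  # 8000440400000010
```

A 1.0-only device has no way to express these TLVs, which is why 1.3 interop needed its own test cases.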

"This PlugFest showed that ONF has successfully encouraged implementation of OpenFlow 1.3 within its membership," said Michael Haugh, senior manager market development at Ixia and chair of the ONF Testing & Interoperability Working Group. “Event participants were able to accomplish an equivalent of two months of quality assurance (QA) testing in a about week, complete unique testing that could take multiple months in a QA lab.  The breadth of participation also allowed them to resolve features and interoperability issues with different vendors.”

“ONF PlugFests are critical to fostering deployments and enhancing implementations of OpenFlow-based SDN,” said Dan Pitt, executive director of the Open Networking Foundation. “Through interoperability testing, ONF helps product developers get their implementations working properly and assures their customers of a supply of interoperable products. I was particularly happy to see the broad turnout for OpenFlow 1.3.”

“Hosting the ONF PlugFest is one example of how InCNTRE supports the SDN industry,” said Steve Wallace, InCNTRE executive director. “By providing 50 network engineers from nearly 20 companies with access to four simultaneously operating testbeds in our state-of-the-art lab, we help them make sure that the new SDN protocols work across all their products – to essentially translate the theoretical aspects of SDN to real-world, industry situations.”

InCNTRE is the first ONF-certified lab for conformance testing. The next ONF PlugFest will be held at the InCNTRE lab in November 2013.

http://www.opennetworking.org
http://incntre.iu.edu/

Wednesday, May 8, 2013

Internet2 Offers Cash Prizes for SDN Research Apps

Internet2, in conjunction with Juniper Networks, Ciena, and Brocade, is offering cash awards for winning open source, SDN-enabled, data-movement end-user applications that benefit the research and education (R&E) community and are implemented on the nation's first open, national-scale SDN platform -- the Internet2 Network.

The new Internet2 Network is the first open, national-scale 100G network that employs SDN and OpenFlow standards. It includes equipment from Juniper Networks, Ciena, and Brocade.

“The new Internet2 Network can advance global research collaboration in previously unimaginable ways," said Rob Vietzke, vice president of Network Services, Internet2. “Leveraging our new 100G, SDN-enabled network, we're on the hunt for the most innovative applications that help to accelerate and transform discovery in big data sciences."

More info is posted here:

http://www.internet2.edu/network/innovative-application-awards.html


Tuesday, May 7, 2013

Cavium Intros SDN-capable Server Adapters


Cavium introduced a new LiquidIO Server Adapter family designed to support software defined networking (SDN) by cloud service providers, enterprises and private data centers worldwide.

Cavium said its LiquidIO server adapters offer the performance required for software-based switching and tunneling. The adapters offload SDN and network processing from the main server CPU and are fully programmable to handle expected updates to virtualization protocols.

Cavium's new family of adapters includes the LiquidIO Server Adapter 210Sv with two 10 Gigabit Ethernet ports and the 110Sv with one 10 Gigabit Ethernet port.  This product family is offered in a standard server-compliant half-height PCI Express form factor and is supported by a feature-rich Software Development Kit (SDK). This SDK allows customers and partners to develop high-performance SDN applications with packet processing, tunneling, QoS, security and metering.  The adapter family provides out-of-the-box 10 Gbps OVS, OpenFlow, VXLAN and NVGRE tunneling protocol performance.
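For context on the tunneling work these adapters offload: a VXLAN encapsulation prepends an 8-byte header carrying a 24-bit VXLAN Network Identifier (VNI), per RFC 7348. A minimal sketch of that header (illustrative only; the LiquidIO SDK's actual API is not shown here):

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # the "I" flag: VNI field is valid (RFC 7348)

def vxlan_header(vni):
    """Build the 8-byte VXLAN header that precedes the inner Ethernet frame.

    Layout: flags (1 byte), reserved (3 bytes), VNI (24 bits), reserved (1 byte).
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit identifier")
    return struct.pack(">II", VXLAN_FLAG_VNI_VALID << 24, vni << 8)

print(vxlan_header(42).hex())  # 0800000000002a00
```

The header itself is trivial; the adapter's value is doing this encapsulation, plus the outer UDP/IP headers, at line rate instead of in the host CPU.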

In addition, Cavium said its LiquidIO adapter family has been tested against several commonly used Controllers including Floodlight and NOX.  Customers also have the added advantage of being able to run L4-L7 networking & security services on this platform.  Several partner companies including PLUMGrid and One Convergence are leveraging this family to offer high performance network virtualization solutions to their customers.

“The requirements of Software Defined Networks align very well with Cavium’s strong core competencies of programmability, networking and security.  The LiquidIO Server Adapter Family eliminates the performance penalties associated with software only based network virtualization. It delivers high performance with very low power dissipation along with rich software development and application toolkits to enable cloud scale SDN deployments,” said Rajneesh Gaur, General Manager of Cavium’s Accelerator & Adapter Group.

http://www.cavium.com

Monday, May 6, 2013

Juniper Announces its JunosV Contrail SDN Controller

Juniper Networks introduced its JunosV Contrail Controller for software-defined networks (SDN).

The JunosV Contrail, which is currently in trials with service provider and enterprise customers, virtualizes the network to enable seamless automation and orchestration among private and public cloud environments, elastic management of IP-based network and security services, and a "Big Data for Infrastructure" offering for enhanced analytics, diagnostics and reporting. It leverages intellectual property from Contrail Systems, which Juniper acquired earlier this year.

The SDN controller will be the first in a family of JunosV Contrail products, which are scheduled to be available for purchase in the second half of 2013 under the Juniper Software Advantage licensing program.

"Customers are looking for agility in their networks. With JunosV Contrail, Juniper will deliver a network infrastructure that meets customers' immediate and long term needs. The response from our trial customers has been overwhelmingly positive and from a roadmap perspective, we are delivering on our SDN strategy ahead of schedule. From our systems to a growing portfolio of new SDN software, and a cloud-oriented software licensing model that grows with our customers' needs, Juniper provides a simpler and lower risk option to begin the transition to an SDN future," stated Bob Muglia, executive vice president, Software Solutions Division, Juniper Networks.

http://juniper.mwnewsroom.com/press-releases/juniper-networks-introduces-controller-technology-nyse-jnpr-1013854


In January 2013, Juniper Networks outlined a four-step roadmap to software-defined networking with the goal of improving automation and agility in data centers and across service provider networks.
A key part of Juniper's SDN strategy involves the concept of "Service Chaining" whereby an SDN controller is used to virtually insert services into the flow of network traffic.  The company sees SDN extending all the way across all domains of the network: Core, Edge, Access & Aggregation, Data Center, WAN, Campus & Branch.  Juniper's SDN roadmap initially targets two of these areas -- the Service Provider Edge and the Data Center.

Juniper is enabling the SDN virtualization with existing protocols, including BGP, thereby enabling the existing routing and switching infrastructure to participate in the SDN transformation. Juniper will adopt the OpenStack model as its primary orchestration system and will work with others including VMware and IBM. Juniper is introducing a new software licensing and maintenance model that allows the transfer of software licenses between Juniper devices and industry-standard x86 servers.

Juniper's Four Step Roadmap
  • Step 1: Centralize network management, analytics and configuration functionality to provide a single master that configures all networking devices.
  • Step 2: Extract networking and security services from the underlying hardware by creating service virtual machines (VMs). This enables network and security services to independently scale using industry-standard x86 hardware based on the needs of the solution.
  • Step 3: Introduce a centralized controller that enables multiple network and security services to connect in series across devices within the network using "SDN Service Chaining" – using software to virtually insert services into the flow of network traffic. The SDN Service Chaining will be introduced in 2014 utilizing the SDN controller technology acquired from Contrail Systems, together with the evolution of the JunosV App Engine.
  • Step 4: Optimize the usage of network and security hardware to deliver high performance.  Specifically, Juniper's MX Series and SRX Series products will evolve to support software-based Service Chaining architecture.
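The service chaining in Step 3 can be pictured as steering traffic through an ordered list of software services. A toy sketch of the idea (the service functions and packet fields here are hypothetical illustrations, not Juniper's implementation):

```python
# Toy model of SDN service chaining: the "controller" holds an ordered
# chain of service functions and steers each packet through them in turn.

def firewall(packet):
    if packet.get("port") == 23:          # drop telnet traffic
        return None
    return packet

def nat(packet):
    packet["src"] = "203.0.113.1"         # rewrite the source address
    return packet

def apply_chain(packet, chain):
    """Run a packet through each service in order; None means dropped."""
    for service in chain:
        packet = service(packet)
        if packet is None:
            return None
    return packet

chain = [firewall, nat]                   # services "virtually inserted" in the path
print(apply_chain({"src": "10.0.0.5", "port": 80}, chain))  # {'src': '203.0.113.1', 'port': 80}
print(apply_chain({"src": "10.0.0.5", "port": 23}, chain))  # dropped -> None
```

The point of doing this with a controller rather than cabling is that reordering the chain is a software change, not a physical one.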


In December 2012, Juniper Networks agreed to acquire Contrail Systems, a start-up developing software-defined networking (SDN) solutions, for approximately $176 million in cash and stock.


Contrail Systems, which was based in Santa Clara, California, was founded in early 2012 and was still in stealth mode; no products had been announced at the time of the acquisition.  Juniper said the acquisition brings an SDN architectural approach that augments its portfolio of products and services.

Contrail Systems was headed by Ankur Singla (CEO), who previously served as Chief Technology Officer and VP of Engineering at Aruba Networks.  The Contrail team included Dr. Kireeti Kompella (CTO), who was formerly CTO and Chief Architect, JunOS at Juniper; Pedro Marques, previously a developer of control applications for the Cluster Management Team at Google and before that a distinguished engineer at Cisco and Juniper; Harshad Nakil, previously an Aruba Fellow and also a distinguished engineer at Juniper and Cisco; and others.  Juniper was a strategic investor in Contrail. Khosla Ventures was also an investor.

Wednesday, May 1, 2013

Brocade Leverages Virtualization for "On-Demand Data Center" Strategy


Brocade outlined an "On-Demand Data Center" Strategy for evolving networks toward a highly virtualized, open and flexible infrastructure.

Brocade's strategy will leverage its VCS Fabric technology as its foundation.  The goal is to enable data center customers to provision compute, network, storage and services faster and easier than ever before as a step on the path toward mass customer adoption of Software-Defined Networking (SDN).

This week Brocade announced a series of networking hardware and software products including the Brocade Vyatta vRouter, the Brocade Virtual ADX Application Delivery Switch, the Brocade MLXe 4×40 GbE Core Router module, the Brocade NetIron CES/CER Carrier Ethernet Switch/Router modules and a Brocade NetIron OS update.

Building on its acquisition of Vyatta last year, the Brocade Vyatta 5400 vRouter family is a software networking solution for highly virtualized data centers. The latest software release (6.6) adds support for Multicast routing and Dynamic Multipoint VPN (DMVPN).  The platform- and hypervisor-agnostic Brocade Vyatta vRouter is already deployed in environments ranging from virtual private data centers to public clouds, such as Amazon Web Services (AWS), and supports all major hypervisors, including VMware, Microsoft, Citrix and Red Hat.

The Brocade Virtual ADX is a virtual application delivery platform that increases the speed of application resource deployment and differentiated services for dynamic cloud environments. It enables rapid application delivery service provisioning via the SOAP/XML API, enabling integration with third-party or homegrown orchestration and automation tools.  Brocade said this is especially useful to validate, test and replicate production or QA environments on demand.  Brocade also enhanced its cloud provisioning capability with an update to Brocade Application Resource Broker and continued work on the OpenStack plugin for load balancing as a service.

"Strengthening the Brocade software networking portfolio, the Brocade Virtual ADX combined with the Brocade Vyatta vRouter and Brocade Application Resource Broker delivers an end-to-end software networking solution that increases data center agility and reduces network complexity," said Ken Cheng, vice president of the Routing, Application Delivery and Software Networking Group at Brocade. "Brocade's ability to unite the physical and virtual networking elements provides our customers with heightened agility not only in their deployment options, but also when it comes to implementing emerging technologies that simplify business processes."

The new four-port 40 GbE Brocade MLXe module for the flagship Brocade MLXe Core Router integrates with the VCS Fabric technology for an end-to-end multitenant 40 GbE solution in the data center. For smaller data centers that are integrated into Carrier Ethernet networks, Brocade has introduced the new versions of the compact Brocade NetIron CER routers, featuring up to four ports of 10 GbE.

The NetIron software updates deliver new routing and SDN capabilities, including support for OpenFlow Hybrid Port Mode technology. This enables customers to optimize specific data flows using OpenFlow without disrupting the existing production traffic. Additionally, new software features support multitenant data center environments to improve cloud service delivery and enforce tighter service level agreements.

http://www.brocade.com

Sunday, April 21, 2013

Reporter Notes from ONS 2013: NTT Com, Internet 2, Google

By James E. Carroll, Editor
NTT Communications was among the first Service Providers to see the potential of OpenFlow and SDN for transforming its operations, said Yukio Ito, Senior VP Service Infrastructure, NTT Communications.  Some notes from his keynote at the Open Networking Summit 2013 in Santa Clara, California:

  • NTT Com's reasons for pursuing SDN include shorter time to market, service differentiation, and reduced CAPEX/OPEX.
  • NTT Communications has a Global Cloud Vision encompassing many of its enterprise services and all self-managed under an integrated cloud portal.
  • The company launched its SDN-enabled Enterprise Service in 2012, including the self-provisioning portal website.
  • OpenFlow is being used for inter-data center back-ups between NTT Communications' Global Data Centers.  The service allows bandwidth to be boosted on demand using the OpenFlow controller.
  • The results of using OpenFlow/SDN for the Enterprise Cloud service have been good, including better service automation, a topology-free design, and overcoming the 4K VLAN limitation that the network would otherwise face.  There have been some issues: the OpenFlow v1.0 specification did not meet requirements for redundancy, current silicon has meant a "flow table shortage," and the network has generally been less programmable than NTT Communications expects.  The company is working to overcome these issues.
  • NTT Com is working on an SDN architecture that will provide a common framework for northbound and southbound interfaces.  The company will use vendors and/or open source if they meet its criteria.
  • NTT Com is very interested in extending SDN to the optical transport layer. The ONF's Optical Transport WG is expected to accelerate this discussion.
  • One additional challenge is that the interconnection between a data center network and an MPLS-VPN is not currently automated.  The company is developing a "Big Boss" SDN controller to address this challenge.
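The "4K VLAN limitation" NTT Com mentions comes from the 12-bit VLAN ID field in the 802.1Q tag; wider overlay identifiers, such as VXLAN's 24-bit VNI, are one common way around it (whether NTT Com uses VXLAN specifically is not stated above):

```python
# An 802.1Q VLAN tag carries a 12-bit VLAN ID, so a flat Layer 2 domain
# tops out at 4096 segment IDs (minus a few reserved values). Overlay
# encapsulations widen the ID field, e.g. VXLAN's 24-bit VNI.
vlan_ids = 2 ** 12
vxlan_vnis = 2 ** 24
print(vlan_ids)    # 4096
print(vxlan_vnis)  # 16777216
```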

INTERNET2

The experimentation with Software Defined Networking underway in Internet2 in many ways parallels the birth of the commercial Internet, said Dave Lambert, President and CEO of Internet2.  Many U.S. companies, in fact, have their roots in academia, such as Cisco (Stanford), Sun (Berkeley and Stanford), Google (Stanford), Arbor (U. of Michigan), Akamai (MIT), etc.

Some notes from his presentation:

  • It's time for a change. Most of the network paradigm was created 35 to 40 years ago, when Ethernet and IP emerged despite strong technical objections.
  • Will we fight re-centralization of an open control plane and hybridization to a potentially post-IP, SDN-based packet environment? This is like the packet-circuit debate with IBM's SNA group back in the day.
  • Getting bandwidth limitations out of the way for the academic community is a key objective.
  • Bandwidth and openness are imagination enablers.  An open networking stack is risky but is among the most exciting things.
  • Data intensive science in genomics and physics really do demand flexibility to handle massive data flows.
  • 29 major universities are committed to the Innovation Platform Program.  This entails (1) 100 GigE connectivity to and across their campus, (2) support for and access to Internet2's Layer 2 OpenFlow-based service, and (3) investment in developing applications that run across this network.
  • The U.S. academic community is admittedly on the cutting edge.

SDN @ GOOGLE

Google's software-defined WAN, which forms the basis of its internal network between data centers, is real, works, and has met the company's expectations in terms of scalability and reliability, said Amin Vahdat, Distinguished Engineer at Google.  Some notes from his presentation:

  • It's been a year since Google announced that its internal backbone had been migrated to SDN.
  • Growth in bandwidth continues unabated.   Google's internal backbone actually carries more traffic than its public-facing network.
  • Planning, building and provisioning bandwidth at Google scale had been a major headache, hence the interest in SDN. Over-provisioning costs were also a major driver to adopt SDN.  Slow convergence time in the event of an outage was another factor.
  • Google wanted to go with logically centralized network control instead of the decentralized paradigm of the Internet.  This centralized approach leads to a network that is more deterministic, more efficient and more fault tolerant, according to Vahdat.
  • B4 is the name of Google's software-defined WAN.  Vahdat describes it as a warehouse-scale-computer (WSC) network.  It links data centers around the world (a map shows 12 nodes across Asia, North America and Europe).  So far this network is successful, so the next step may be to run some Internet user-facing traffic across this same backbone.
  • The B4 network runs OpenFlow.  Google built its own network hardware using merchant silicon; there are hundreds of ports of non-blocking 10 GbE.
  • Google uses Quagga for BGP and IS-IS.  A hybrid SDN architecture is used to bridge sites that are fully under SDN control and legacy sites.  This means that SDN can be deployed incrementally -- you don't have to deploy it everywhere on Day 1.
  • Traffic engineering is the first application on the SDN WAN. This was implemented about a year ago.  It takes into account current network demand and application priority.
  • Google has been adding capabilities pretty quickly through frequent software releases. 
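A traffic engineering application of the kind Vahdat describes has to divide link capacity among applications by demand and priority. A toy sketch of one such policy (weighted sharing with demand caps; the numbers and the algorithm are hypothetical illustrations, not Google's actual implementation):

```python
def allocate(capacity, demands):
    """Split link capacity across apps in proportion to priority weight,
    never giving an app more than it asked for; leftover capacity is
    re-shared among the still-unsatisfied apps.

    demands: {app: (demand, weight)}
    """
    alloc = {app: 0.0 for app in demands}
    active = dict(demands)
    remaining = float(capacity)
    while active and remaining > 1e-9:
        total_w = sum(w for _, w in active.values())
        share = {app: remaining * w / total_w for app, (_, w) in active.items()}
        satisfied = [app for app, (d, _) in active.items()
                     if share[app] >= d - alloc[app]]
        if not satisfied:
            # no app can be fully satisfied: hand out the weighted shares
            for app in active:
                alloc[app] += share[app]
            remaining = 0.0
        else:
            # cap satisfied apps at their demand and re-share the leftover
            for app in satisfied:
                d, _ = active.pop(app)
                remaining -= d - alloc[app]
                alloc[app] = d
    return alloc

# A 10 Gb/s link: bulk copies want 8 (weight 1), user-facing wants 4 (weight 3).
# User-facing traffic is fully satisfied; copies soak up what is left.
print(allocate(10, {"copy": (8, 1), "user": (4, 3)}))
```

The appeal of centralized control here is that the controller sees all demands at once, so it can make this trade-off globally instead of per hop.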



http://www.opennetsummit.org



Thursday, April 18, 2013

Video: Where is SDN today? (part 2)



Where does SDN stand today in Service Provider networks?

00:10 - Verizon, Prodip Sen, Director, Network Architecture
01:45 - NTT Communications, Nayan Naik, Director, Product Strategy, Data Center Services
03:17 - Telstra, Frank Ruhl, CTO
03:45 - Ericsson, Jan Haglund, VP, Product Area IP & Broadband
04:30 - Cyan, Steve West, CTO
05:29 - Overture, Michael Aquino, President and CEO
06:17 - Metaswitch, Andrew Randall, SVP Corporate Development
07:45 - Netronome, Daniel Proch, Director of Product Management

Netronome Raises $19 Million for Flow Processors


Netronome raised $19 million in Series E funding for its flow processors designed for cyber-security, software-defined networking and mobile networking applications.

Netronome said it will use the funds to expand its software and customer engineering organizations. The company cited new wins based on its recently disclosed next-generation flow processor line, the NFP-6xxx, being built using Intel’s state-of-the-art 22nm 3D tri-gate technology.

The funding came from Sourcefire, Intel Capital and existing investors DFJ Esprit and the Raptor Group.

“The customer demand following the disclosure of our newest sixth generation 200 Gbps flow processors has been larger than even we anticipated,” said Howard Bubb, CEO at Netronome. “We welcome the investment by Sourcefire and strategic engagement by Intel as they represent valuable relationships to Netronome at both ends of our business.”

http://www.netronome.com

Monday, April 15, 2013

An Update from the Open Networking Foundation


Click here to view video:  http://youtu.be/uO9zYdNFGXE

In this interview, Dan Pitt, Executive Director of the Open Networking Foundation, provides a progress report, including:

1:00 - Technical Objectives for ONF in the coming year
2:33 - Adapting OpenFlow for OA&M
3:07 - A new working group for Optical Transport
3:41 - Addressing OpenFlow security
4:10 - Commercial deployments
5:22 - Strengthening the hardware supply chain
6:18 - The ONF Research Associates program
7:18 - Defining a Framework for Services
7:59 - ETSI's Network Functions Virtualization Effort
8:38 - Working with the Open Daylight Project

Thursday, April 11, 2013

Huawei Develops Protocol-Oblivious Forwarding Control Plane


Huawei introduced a software-defined networking (SDN) forwarding plane technology named Protocol-Oblivious Forwarding (POF).

Huawei said the goal of its Protocol-Oblivious Forwarding is to evolve the Open Networking Foundation's OpenFlow protocol towards a more flexible programming model where forwarding devices are no longer limited by pre-defined packet protocols or forwarding rules.  Data plane hardware is not limited by hard-wired protocol implementations.

With POF, packet forwarding processes are defined by software in a controller which can program forwarding devices via fine-grained forwarding instructions (including data offsets and lengths). This software-based programming is flexible and the actual packet processing and forwarding is performed by the program in forwarding devices (i.e. packets do not flow through the controller).
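The core POF idea -- rules that name packet fields only by offset and length rather than by protocol -- can be sketched in a few lines (a toy illustration of the concept, not Huawei's implementation):

```python
# Toy sketch of protocol-oblivious matching: rules identify fields purely
# by (offset, length), so the forwarding device needs no knowledge of
# Ethernet, IP, or any other protocol.

def extract(packet, offset, length):
    """Pull a field out of a raw packet by offset/length alone."""
    return packet[offset:offset + length]

def match(packet, rules):
    """Each rule is (offset, length, expected_bytes, action)."""
    for offset, length, expected, action in rules:
        if extract(packet, offset, length) == expected:
            return action
    return "drop"

# The EtherType happens to live at bytes 12-13 of an Ethernet frame, but
# the rule below only knows "2 bytes at offset 12", not "EtherType".
frame = bytes(12) + b"\x08\x00" + bytes(20)
rules = [(12, 2, b"\x08\x00", "forward:port2")]
print(match(frame, rules))  # forward:port2
```

A controller that programs such rules can support a new protocol by issuing new offset/length instructions, with no change to the forwarding device.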


Huawei has developed POF prototypes based on the NE5000E core router platform and has tested the forwarding of multiple services. The company said its testing has shown that, with POF, the forwarding devices no longer need to directly support specific protocols and those requirements around forwarding performance are successfully met in various scenarios.

"Our hope is to help accelerate the pace of innovation for open SDN and future-proof evolution of networks. Carriers and users of networks in particular can benefit from more flexible switches and can reduce the total cost of ownership by focusing on building simpler, fit-for-purpose networks where only required forwarding behaviors need to be programmed in each switch. Such forwarding plane evolution technologies help remove protocol dependency in forwarding devices and can ultimately enable support of any existing/customized packet-based protocols via generic instructions. "We believe that openness and software-based programmability of forwarding devices can help increase the adoption of OpenFlow, particularly in the carrier space where we see a huge potential for simplification." said Dr. Justin Joubine Dustzadeh, VP of Technology Strategy & CTO of Networks at Huawei Technologies.

"With the POF technology, user-defined fields can be added to packets to implement advanced network functions. Forwarding devices will be able to more flexibly support layer 4-7 services and enable network functions virtualization (NFV) through programming of the POF engine."

http://www.huawei.com/en/about-huawei/newsroom/press-release/hw-258922-sdn.htm

Tuesday, April 9, 2013

Arista Integrates with OpenStack and OpenFlow


Arista Networks is integrating its EOS (Extensible Operating System) natively with OpenStack as well as delivering enhanced OpenFlow extensions via direct flow-based configuration interfaces and data-plane programmability.

The idea is to enhance cloud networking workflows for rapid cloud provisioning. The OpenFlow extensions in Arista EOS give customers the ability to implement both IP and OpenFlow in a heterogeneous solution. Standard OpenFlow support in EOS can be orchestrated through an SDN controller, and value-added extensions to OpenFlow are now possible via direct, data-plane manipulation of flow tables in the Arista switches. This brings open, application-driven and programmatic control of network path selection at wire speed.
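The flow tables being manipulated here follow the usual OpenFlow lookup semantics: the highest-priority entry whose match fields all agree with the packet wins, with a table-miss fallback. A minimal sketch of that lookup (illustrative only, not Arista's EOS API):

```python
# Minimal sketch of OpenFlow-style flow-table lookup: entries pair a match
# (field -> required value; an absent field is a wildcard) with actions,
# and the highest-priority matching entry wins.

def lookup(table, packet):
    for priority, match, actions in sorted(table, key=lambda e: e[0], reverse=True):
        if all(packet.get(field) == value for field, value in match.items()):
            return actions
    return ["drop"]  # nothing matched and no table-miss entry

table = [
    (100, {"ip_dst": "10.0.0.7"}, ["output:3"]),
    (10,  {},                     ["output:controller"]),  # catch-all entry
]
print(lookup(table, {"ip_dst": "10.0.0.7"}))  # ['output:3']
print(lookup(table, {"ip_dst": "10.0.0.9"}))  # ['output:controller']
```

"Direct flow" manipulation amounts to writing entries like these into the switch without going through an external controller.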

Arista's key SDN introductions include:

  • Arista EOS application programmatic interfaces (eAPIs) for integration with leading orchestration and provisioning tools and customer applications.
  • Arista contributed code to the OpenStack Quantum project that enables unified physical and virtual network device configuration.
  • A new modular hardware driver architecture in the Quantum OVS plugin, and an open source version of Arista's driver.
  • New EOS native OpenStack provisioning capability that connects the Quantum OVS plugin to EOS via eAPI.
  • OpenFlow 1.0 support in Arista EOS for external controllers.
  • Enhanced Data Plane Programmability via direct flow-based OpenFlow extensions.
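Arista's eAPI is a JSON-RPC interface that accepts CLI commands over HTTP/HTTPS via the documented "runCmds" method. As a hedged sketch (the switch address, credentials, and command are placeholders), the request body an orchestration tool would POST looks like this:

```python
import json

# Hedged sketch of an Arista eAPI request body.  eAPI is JSON-RPC 2.0 over
# HTTP/HTTPS; "runCmds" is the documented entry point that executes a list
# of CLI commands and returns structured JSON results.

def eapi_request(cmds, req_id="1"):
    """Build the JSON-RPC 2.0 body that eAPI expects for a list of CLI commands."""
    return {
        "jsonrpc": "2.0",
        "method": "runCmds",
        "params": {"version": 1, "cmds": cmds, "format": "json"},
        "id": req_id,
    }

body = eapi_request(["show version"])
print(json.dumps(body, indent=2))
# POST this to https://<switch>/command-api with HTTP basic auth; the reply's
# "result" array holds one JSON object per command.
```

This request/response model is what lets provisioning tools and the OpenStack integration above drive EOS programmatically instead of screen-scraping a CLI.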


"Extending Arista EOS for connection to cloud orchestration platforms provides programmability for building agile, self-service cloud architectures. This has been core to Arista EOS development from its inception," stated Tom Black, vice president, SDN Engineering for Arista. "These software innovations demonstrate Arista's increasing relevance and agility in addressing SDN for public cloud operators and private clouds."

"Arista continues to lead the way in data center network innovations. This is the first real API integration of a broad-based data center network platform, and seeing it connect with OpenStack and solve real customer provisioning issues is exactly what this industry has needed to scale cloud computing," said Paul Rad, vice president of Rackspace.

Arista EOS now supports OpenFlow, Arista Direct Flow mode, eAPIs and OpenStack.

http://www.aristanetworks.com

Monday, April 8, 2013

OpenStack Grizzly Builds Compute, Storage, Networking and Security Capabilities


Last week, the OpenStack community released Grizzly -- the seventh release of its open source software for building public, private, and hybrid clouds.

Grizzly has nearly 230 new features to support production operations at scale and greater integration with enterprise technologies, including broad Software-Defined Networking support. These include:

  • OpenStack Compute – Compute delivers improved production operations at greater scale, with "Cells" to manage distributed clusters and the "NoDB" host architecture to reduce reliance on a central database. Improvements in virtualization management deliver new features and greater support for multiple hypervisors, including ESX, KVM, Xen and Hyper-V. Additional functionality was added for bare metal provisioning, shared storage protocols and online networking features such as the ability to hot add/remove network devices.
  • OpenStack Object Storage – Cloud operators can now take advantage of quotas to automatically control the growth of their object storage environments. Additionally, the ability to perform bulk operations makes it easier to deploy and manage large clusters and provides an improved experience for end users. Cross-origin resource sharing (CORS) enables browser connections directly to the back-end storage environment, improving the performance and scalability of web-integrated object storage clusters.
  • OpenStack Block Storage – The second full release of OpenStack Block Storage delivers a full storage service for managing heterogeneous storage environments from a centralized access point. A new intelligent scheduler allows cloud end users to allocate storage based on the workload. There are also new drivers for a diverse selection of backend storage devices, including Ceph/RBD, Coraid, EMC, Hewlett-Packard, Huawei, IBM, NetApp, Red Hat/Gluster, SolidFire and Zadara.
  • OpenStack Networking – The Grizzly network-as-a-service platform adds support for Big Switch, Hyper-V, PlumGrid, Brocade and Midonet to complement the existing support for Open vSwitch, Cisco UCS/Nexus, Linux Bridge, Nicira, Ryu OpenFlow, and NEC OpenFlow. OpenStack Networking achieves greater scale and higher availability by distributing L3/L4 and dynamic host configuration protocol (DHCP) services across multiple servers. A new load-balancing-as-a-service (LBaaS) framework and API lays the groundwork for further innovation from the broad base of networking companies already integrating with OpenStack.
  • OpenStack Dashboard – OpenStack Dashboard brings an improved user experience, greater multilingual support, and exposes new features across OpenStack clouds, like Networking and LBaaS. The Grizzly Dashboard is also backwards compatible with the Folsom release, allowing users to take advantage of additional features in their Folsom cloud prior to a full upgrade to the latest version.
  • OpenStack Identity – A new token format based on standard PKI functionality provides major performance improvements and allows offline token authentication by clients without requiring additional Identity service calls. OpenStack Identity also delivers more organized management of multi-tenant environments with support for groups, impersonation, role-based access controls (RBAC), and greater capability to delegate administrative tasks.
  • OpenStack Image Service – There were major advancements in image sharing between cloud end users, and the creation of a set of common properties on images to provide more discoverable images and better performance when retrieving images.
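The networking plugins listed above all sit behind the same Quantum v2.0 REST API. As a hedged sketch (names and CIDR are illustrative only), the JSON bodies a tenant posts to create a network and a subnet look like this:

```python
import json

# Hedged sketch of Quantum (OpenStack Networking) v2.0 REST payloads for
# creating a tenant network and a subnet.  The network name, placeholder
# NET_ID, and CIDR below are illustrative, not from the release notes.

def create_network_body(name, admin_state_up=True):
    return {"network": {"name": name, "admin_state_up": admin_state_up}}

def create_subnet_body(network_id, cidr, ip_version=4):
    return {"subnet": {"network_id": network_id,
                       "cidr": cidr,
                       "ip_version": ip_version}}

net = create_network_body("tenant-net-1")
sub = create_subnet_body("NET_ID", "10.0.0.0/24")
print(json.dumps(net))
print(json.dumps(sub))
# POST these to /v2.0/networks and /v2.0/subnets on the Quantum endpoint,
# passing a Keystone token in the X-Auth-Token header; the chosen plugin
# (Open vSwitch, Big Switch, NEC OpenFlow, etc.) realizes them on the wire.
```

Because the API is plugin-agnostic, the same calls work whether the backend is Open vSwitch or one of the newly supported vendors in Grizzly.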

"The Grizzly release is a clear indication of the maturity of the OpenStack software development process, as contributors continue to produce a stable, scalable and feature-rich platform for building public, private and hybrid clouds," said Jonathan Bryce, executive director of the OpenStack Foundation. "The community delivered another packed release on schedule, attracting contributions from some of the brightest technologists across virtualization, storage, networking, security, and systems engineering. They are not only solving the complex problems of cloud, but driving the entire technology industry forward."

"With OpenStack, we have been able to launch a stable infrastructure service to support our agile development teams," said Reinhardt Quelle, operations architect, Cisco WebEx. "Instead of waiting weeks for deployments, the devops teams who have adopted the platform are deploying multiple times a day, and the pace of product innovation that enables will be critical to our success. OpenStack's modularity and extensibility has enabled us to adapt the service to our specific problems."

http://www.openstack.org
http://www.openstack.org/software/grizzly/

Wednesday, April 3, 2013

Interview: Nuage on Automating Data Centers for Cloud Services and MPLS VPNs


An edited interview between Jim Carroll, Editor of Converge! Network Digest, and Manish Gulyani, VP of Product Marketing, Alcatel-Lucent / Nuage Networks.

Converge! Digest:  How do you describe the Nuage Networks' solution?

Manish Gulyani: The Nuage Networks Virtualized Services platform is a software-only solution for fully automating and virtualizing data center networks. That’s our main value proposition.  As you know, today’s data center networks are very fragile, they use old technology, and they are very cumbersome to operate.  When we looked at cloud services, we found that storage and compute resources had been virtualized quite nicely, but the network really wasn’t there.  We saw a great opportunity to apply the lessons that we have learned in wide area networking along with SDN.  The idea is that if you want to sell cloud services, you need to support thousands of tenants.  And you want each tenant to think that they own their piece of the pie.  It has to feel like the experience of a private network, with full control, full security, full performance of a private network but with the cost advantages of a cloud solution, which is a shared infrastructure.  That’s what we’re bringing to the table with the Nuage solution.

Converge! Digest: So is the Nuage solution aimed specifically at those who want to sell cloud services?

Manish Gulyani: It is designed for anybody who runs a large enough data center that needs automation. For instance, the University of Pittsburgh Medical Center, which is one of our trial customers, does not sell cloud services but they have enough internal users and external tenants that want full control over a particular cloud resource.  If you can’t give them full control and automation, then the cloud resource is of no use.  You have to be able to turn up the cloud service as fast as the user turns up a VM, otherwise the cloud service doesn't work.  Whether it is a large enterprise, a web-scale company or a cloud service provider, all can benefit from the Nuage solution.

Converge! Digest: What are the strategic differentiators versus other SDN controllers out there?

Manish Gulyani: Some initial SDN solutions have come out in the last two years for data centers.  They took the approach of virtualizing primarily at Layer 2, which was a good first step beyond the VLAN architectures. But in our view, this isn't sufficient to go beyond the basic applications.  If you are limited to just Layer 2, you are not able to get the application design done the right way.  For example, if you want to do a three tier application, you need to use routing, load balancing, firewalls – and all those elements in a real architecture are very hard to coordinate in current SDN solutions.  So first, Nuage needs to overcome this obstacle. We give you full Layer 2 to Layer 4 virtualization as a base requirement.  Once we’ve done that, the next issue is how do you make it scale?  You can’t restrict cloud service to one data center.

If you have ambitions of being a cloud services provider and you run multiple data centers, you want the power to freely move server workloads between data centers.  If you cannot connect the data centers in a seamless fashion, then you haven’t satisfied the demand. So our solution scales to multiple data centers and provides seamless connectivity.  The third obstacle we overcome is this:  now that the cloud services are running, how can people on a corporate VPN get access to these resources?  How can they securely connect to a resource that has just been turned up in a data center?

We provide the full, seamless connectivity to a VPN service.  We extend from Layer 2 to Layer 4, we make it seamless across data centers, and then we extend it across the wide area network by seamlessly integrating with MPLS VPNs. So those are our virtual connectivity layers.

We also automate it and make it easy to use.  A lot of our energy has gone into the policy layer, which lets the user define a service without knowing any network-speak.  It’s just IT-speak and no network-speak.  It might seem strange for a networking company to say that its customers do not need to learn about VLANs or subnets or IP addresses – just zones and domains and an application connectivity language.  When a workload shifts from one data center to another, all of the IP addresses and subnetting have to change, but real users can’t figure this out because it is too hard to do. If this function can just happen in the background, they’re good with that.  The final thing we said is that it has to be totally touchless.
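As a purely illustrative aside, the zones-and-domains abstraction Gulyani describes might be sketched as follows (our hypothetical policy structure, not Nuage's actual language): the user names zones and permitted application flows, and the platform derives VLANs, subnets, and IP addresses behind the scenes.

```python
# Hypothetical IT-level policy: zones and allowed application flows only.
# No VLANs, subnets, or IP addresses appear anywhere in the user's input.
policy = {
    "domain": "three-tier-app",
    "zones": ["web", "app", "db"],
    "allow": [
        ("web", "app", "tcp/8080"),   # web tier may reach the app tier
        ("app", "db", "tcp/3306"),    # app tier may reach the database
    ],
}

def is_allowed(policy, src_zone, dst_zone, service):
    """Check a flow against the declared zone-to-zone rules."""
    return (src_zone, dst_zone, service) in {tuple(r) for r in policy["allow"]}

print(is_allowed(policy, "web", "app", "tcp/8080"))  # True
print(is_allowed(policy, "web", "db", "tcp/3306"))   # False
```

When a workload moves between data centers, only the platform's derived addressing changes; the policy above stays valid, which is the "touchless" property the interview emphasizes.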

The reason people are excited about the cloud is that it is quick. In fact, IT departments worry that users sign up for public cloud services because the internal IT guys can’t deliver quickly enough.  If you need 10 new servers or VMs of capacity, why wait 3-4 weeks for your IT department to purchase and install the equipment, when you can log onto Amazon Web Services today and activate this capacity immediately with a credit card?  The Nuage policy-driven architecture basically says “turn up the VM, look up the policy, set up the connection” – nobody actually touches the network.  That’s our innovation.

Converge! Digest:  Since it is a software suite, what type of hardware do you run on?

Manish Gulyani:  Nuage runs on virtual machines.  It runs on general purpose compute.  Our Services Directory is a virtual machine on any compute platform. Our Services Controller runs on a VM. And our virtual routing and switching Open vSwitch implementation is essentially an augmentation of what runs today on a hypervisor.  You can’t go into a cloud world and propose new hardware because it is a virtualized environment.  We have no constraints on what type of compute platform you use.  The whole idea is to apply web-scale technologies.  We also offer horizontal scaling, where many instances run in parallel and can be federated.

Converge! Digest:  Alcatel-Lucent is especially known for IP MPLS, and yet Nuage is largely a data center play.  What technologies does Nuage inherit from Alcatel-Lucent that give it an edge over other SDN start-ups?

Manish Gulyani:  At Alcatel-Lucent, we learned a lot about building very large networks with IP MPLS.  That is a baseline technology deployed globally to offer multi-tenancy with VPNs on shared WAN infrastructure.  Why not use similar techniques inside the data center to provide the massive scale and virtualization needed for cloud services?  We took our Service Router operating system, which is the software running on all our IP platforms, took the elements that we needed, and then virtualized them.  This enables them to run in virtual machines instead of dedicated hardware. This gives us the techniques and protocols for providing virtualization. Then we applied more SDN capabilities, such as a simplified forwarding plane that’s controlled by OpenFlow, which lives in the server and enables us to quickly configure the forwarding tables. Because of the way that we use IP protocols in wide area networks, we can support federation of our controllers.  That’s how we link data centers together.  They talk standard IP protocols -- BGP -- to create the topology of the service, and in the same way they extend to MPLS VPNs.  As I said, the key requirement for enterprises is to connect to data center cloud services using the MPLS VPNs they are familiar with today.  This same SDN controller can now easily talk to the WAN edge router running MPLS VPNs.  We seamlessly stitch the data center virtualization all the way to the MPLS VPN in the wide area network and provide end-to-end connectivity.

Converge! Digest:  Two of the four trial customers for Nuage announced so far are Service Providers (SFR and TELUS), presumably Alcatel-Lucent MPLS customers as well, and of course many operators are trying to get into cloud services.  So, is that a design approach of Nuage?  Build off of the MPLS deployments of Alcatel-Lucent?

Manish Gulyani:  It doesn't have to be.  At Nuage, we don’t need Alcatel-Lucent to be the incumbent supplier to sell this solution.  But of course it helps if they already know us and already trust us in running highly-scalable networks. So when we talk about scalability of data centers, we have a lot of credibility built in. Both SFR and TELUS have the ambition to offer cloud services.  I think they recognize that they must move to virtualization in the data center network and that the connectivity must be extended all the way to the enterprise.  Nuage can deliver a solution unlike anything from anybody else today.  Existing SDN approaches only deliver virtualization in some subset of the data center; they can’t cross the boundary.  Carriers want to have multiple cloud data centers, but they cannot connect their resources easily to MPLS VPNs today. We give them that solution.

Converge! Digest:  In cloud services, it’s becoming clear that a few players are running away with the market.  You might say Amazon Web Services, followed by Microsoft Azure, Rackspace, Equinix and maybe soon Google, are capturing all the momentum.  One thing these guys have in common is a desire to be carrier neutral, so they are not tied to a particular MPLS service or footprint. Will Nuage appeal to these cloud guys too?

Manish Gulyani:  It does.  In fact, we are talking to some of these guys. As I said, Nuage is not designed for telecom operators.  It is designed for people who want to sell cloud services and who run very large data centers.  Carriers with multiple data centers, like Equinix, will need the automation.  Until you virtualize and automate the data center, forget about selling cloud services.  Step 1 is creating the automation inside the data center.  Connecting to MPLS VPNs is step 2.  Amazon has been among the first ones, but they had to develop all of this themselves.  There was no solution on the market. They built that step 1 automation themselves. We now know that Amazon found it quite cumbersome to get secure connectivity between clouds. They are also experiencing how hard it is to connect a corporate VPN into the Amazon cloud. It can be tedious.  If others are going to offer services like Amazon, and they don’t have the size and wherewithal to figure it out themselves, then Nuage will get them there.

Converge! Digest:  On this question of data center interconnect (DCI), Alcatel-Lucent also has expertise at the optical transport layer, especially with your photonic switch. Will Nuage extend this SDN vision to the optical transport layer?

Manish Gulyani: We sell a lot of data center interconnect both at the optical layer and the MPLS layer, such as DWDM hitting the data center and also MPLS in an edge router.  We sell a lot of 100G on our optical transport systems because they really are the capacity needed for DCI. So that’s the physical connectivity.  The logical connectivity is what you need to move one virtual machine in one data center to another.  Even though the secure, physical connectivity exists between these data centers, the logical connectivity just is not there today. Nuage gives you that overlay on top of the physical infrastructure to deliver a per-tenant slice with the policy you want.

Converge! Digest:  How big is Nuage as a company in terms of number of employees?

Manish Gulyani:  We haven’t talked publicly about the size of the company or head count.

Converge! Digest:  About this term “spin-in” that is being used to describe Nuage… what does it mean to call Nuage a spin-in of Alcatel-Lucent?  How is the company organized?

Manish Gulyani:  Spin-in means that we are an internal start-up inside of Alcatel-Lucent.  There is a very good reason Alcatel-Lucent structured this as an internal start-up instead of an external start-up.  Nuage leverages so much existing Alcatel-Lucent intellectual property, there was no way it could let this out of the company for others to have.  We would essentially have had to put out our Service Routing operating system, then value and control the intellectual property and associate equity investments with it.  This would have been too complicated.  Others have tried to spin out a new start-up with third-party investors, only to find that they must acquire it back because they did not want their intellectual property to fall into the hands of others. Still, Nuage has full freedom to develop its solution and the right atmosphere to pull in the right talent.  We need a good mix of networking people and IT people.  We've been able to bring in guys who did Web 2.0 scaled-out IT solutions.

Converge! Digest: So Nuage is not a separate legal entity that can offer stock options to attract talent?

Manish Gulyani: No, Nuage is a fully funded internal start-up that is not a separate legal entity.

The start-up identity separate from Alcatel-Lucent also enables us to sell into the new cloud market, which is a different space from what Alcatel-Lucent has traditionally pursued. So, we can go after a different market and attract new talent, but still leverage the existing intellectual property that is essential to really get a good solution to market. This structure gives us freedom in multiple dimensions.

Tuesday, March 26, 2013

Big Switch Offers Open Source OpenFlow Software


Big Switch Networks took the wraps off Switch Light, an open source, thin switching software platform that can be deployed as a virtual switch on server hypervisors and on merchant silicon-based physical switching platforms.  The software is based on the Indigo Project, a sub-project within Project Floodlight, an SDN developer community whose code base already runs to some 200,000 lines.

Big Switch said the aim of its Switch Light is to accelerate the adoption of OpenFlow-based networking by empowering commodity "white box" hardware.  Initially, Switch Light will be available to run on a range of merchant silicon-based physical switches (Switch Light for Broadcom) and virtual switches (Switch Light for Linux), and will be ported to other data plane devices in the future.
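In a deployment like this, an OpenFlow controller programs the thin switches' flow tables. As a hedged sketch (DPID and port numbers are placeholders, and exact field names vary between controller versions), this is the kind of flow entry the Floodlight controller's Static Flow Pusher REST interface accepts:

```python
import json

# Hedged sketch of a flow entry for the Static Flow Pusher REST interface of
# the Floodlight controller (Project Floodlight, mentioned above).  The DPID,
# ports, and priority are placeholders; field names vary across versions.

def static_flow(switch_dpid, name, in_port, out_port, priority="32768"):
    """Build a named flow entry forwarding traffic from in_port to out_port."""
    return {
        "switch": switch_dpid,       # datapath ID of the OpenFlow switch
        "name": name,                # unique name for later update/delete
        "priority": priority,
        "ingress-port": str(in_port),
        "active": "true",
        "actions": "output=%s" % out_port,
    }

entry = static_flow("00:00:00:00:00:00:00:01", "flow-1-to-2", 1, 2)
print(json.dumps(entry))
# POST this JSON to the controller's static flow pusher endpoint (port 8080
# by default) to install the entry on the named switch.
```

The same controller-side call works whether the flow lands on a hardware switch running Switch Light for Broadcom or a virtual switch running Switch Light for Linux, which is the "unite physical and virtual" point in the quote below.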

"In making our open-source thin switching platform available to the market, we aim to accelerate the development of OpenFlow-based switches, both through ODM and OEM partners, thereby catalyzing the deployment of OpenFlow networks,” said Guido Appenzeller, CEO of Big Switch Networks. “Customers are demanding choice in Open SDN hardware and want to unite their physical and virtual platforms. Switch Light is an important step down that path."

Big Switch cited a number of industry partners for Switch Light, including Broadcom as a leading merchant silicon partner, and Accton and Quanta for merchant silicon-based switches. Extreme Networks, a key Big Switch data center switching partner, will enhance its hybrid switch offerings by supporting Switch Light.

http://www.bigswitch.com


Extreme Networks Looks Toward Lightweight SDN Switch


Extreme Networks announced a commitment to offering an open source, OpenFlow thin switching platform in line with Big Switch Networks' Switch Light initiative.

Later this year, Extreme Networks plans to introduce its first such switch, the Slalom -- an optimized SDN switch supporting lightweight software and network services based on the OpenFlow protocol.  The company said this new Slalom switch will provide an evolutionary progression of its Open Fabric portfolio, complementing its ExtremeXOS based SDN-capable stackable and chassis-based switches.

"Extreme Networks Open Fabric is designed to offer customers an open and broad portfolio of next generation data center networking solutions that support emerging SDN solutions in hardware and software," said Oscar Rodriguez, president and CEO for Extreme Networks.  "Providing customers with the widest amount of choice and performance for their networks is what reduces their costs and helps them scale."

http://www.extremenetworks.com


  • In February, Extreme Networks began shipments of OpenFlow with the release of ExtremeXOS 15.3 and SDN applications from Big Switch Networks.  These applications include Big Tap, providing traffic monitoring and dynamic network visibility with flow filtering, and Big Virtual Switch (BVS), an application for virtualized data center networks which provisions the physical network into multiple logical networks across the stack, from Layer 2 to 7. 


Thursday, February 14, 2013

Extreme Adds SDN in Latest OS Release

Extreme Networks released the latest version of its modular Operating System (ExtremeXOS v15.3) with new features and support for SDN technology, including OpenFlow and support for SDN applications from partners such as Big Switch Networks and NEC.

The company is also shipping its OpenStack Quantum plugin, a downloadable software module that provides a rich API for ExtremeXOS, enabling orchestration and management of multi-tenant networks that deliver security, load balancing and data center interconnect infrastructure as network services.

The OS supports Big Switch Networks' Big Tap, which provides traffic monitoring and dynamic network visibility with flow filtering, and Big Virtual Switch (BVS), an application for virtualized data center networks which provisions the physical network into multiple logical networks across the stack, from Layer 2 to 7. Additionally, ExtremeXOS 15.3 delivers support for AVB (Audio Video Bridging), Identity Management enhancements, XNV Dynamic VLANs and GRE Tunneling enhancements.

http://investor.extremenetworks.com/releasedetail.cfm?ReleaseID=740278


Wednesday, February 13, 2013

Netronome Develops OpenFlow Reference Design based on its Flow Processors

Netronome will demonstrate the first software-defined networking (SDN) reference design that uses flow processors and open dataplane software controlled by the OpenFlow protocol. The company is showing the reference design at this week's Open Networking User Group (ONUG) meeting in Boston.

Netronome’s NFP-6xxx is powered by 96 packet processing cores and 120 multi-threaded flow processing cores operating at up to 1.2 GHz. It delivers 200 Gbps of packet processing with deep packet inspection, network and security processing, and I/O virtualization for over 100 million simultaneous flows. It is also specifically designed for tight coupling with x86 processors.

"In an SDN world, applications and services require flow intelligence at a very granular level," said Jarrod Siket, senior vice president of marketing at Netronome. "This requirement is forcing devices to support all OpenFlow match fields in SDNs. Programmable flow processors are well positioned to service these applications by supporting the evolving OpenFlow standards."

http://www.netronome.com