Thursday, June 18, 2015

#ONS2015 - A Look Inside Google's Data Center Network

Networking is at an inflection point in driving next-gen computing architecture, said Amin Vahdat, Senior Fellow and Technical Lead for Networking at Google, in a keynote address at the Open Networking Summit in Santa Clara, California. The ability to build great computers, he argued, will largely be determined by the network.

In constructing its "Jupiter" fifth-generation data centers, Google is essentially building the bandwidth equivalent of the Internet under one roof.

Some key takeaways from the presentation:
  • Google will open source its gRPC load-balancing and application flow-control code
  • Google's B4 software-defined WAN links its global data centers and is bigger than its public-facing network
  • Andromeda Network Virtualization continues to advance as a means to slice the physical network into isolated, high-performance components
  • Google is deploying its "Jupiter" fifth-generation data center architecture.  Traditional designs and data center switches simply cannot keep up and require individual management, so Google decided to build its own gear.
  • Three principles in Google's data center network are: Clos Topologies, Merchant Silicon, and Centralized Control. Everything is designed for scale-out.
  • Load balancing is essential to ensure that resources are available and to manage cost.
  • Looking forward, a data center network may have 50,000 servers, each with 64 CPU cores, access to PBs of fast Flash storage, and equipped with 100G NICs.  This implies the need for a 5 Pb/s network core switch -- more than the Internet today!
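The 5 Pb/s figure follows directly from the server numbers quoted in the keynote; a quick back-of-the-envelope check in Python:

```python
# Sanity check of the keynote's 5 Pb/s figure: 50,000 servers,
# each with a 100G NIC, all transmitting at full rate.
servers = 50_000
nic_gbps = 100                       # per-server NIC speed

total_gbps = servers * nic_gbps      # aggregate host bandwidth
total_pbps = total_gbps / 1_000_000  # 1 Pb/s = 1,000,000 Gb/s
print(total_pbps)                    # 5.0 -> a 5 Pb/s core, as claimed
```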

The #ONS2015 keynote can be seen here:
https://youtu.be/FaAZAII2x0w



#ONS2015 - Microsoft Azure Puts SDN at Center of its Hyperscale Cloud

To handle its hyperscale growth, Microsoft Azure must integrate the latest compute and storage technologies into a truly software-defined infrastructure, said Mark Russinovich, Chief Technology Officer of Microsoft Azure in a keynote presentation at the Open Networking Summit in Santa Clara, California.

The talk covered how Microsoft is building its hyperscale SDN, including its own scalable controllers and hardware-accelerated hosts.


Microsoft is making a massive bet on Azure.  It is the company's own infrastructure as well as the basis for many of its products going forward, including Office 365, Xbox and Skype.

Some highlights:
  • Microsoft Azure's customer-facing offerings include App Services, Data Services and Infrastructure Services
  • Over 500 new features were added to Azure in the past year, including better VMs, virtual networks and storage.
  • Microsoft is opening new data centers all over the world
  • Azure is running millions of compute instances
  • There are now more than 20 ExpressRoute locations for direct connect to Azure.  
  • Azure connects with 1,600 peered networks through 85 IXPs
  • One out of 5 VMs running on Azure is a Linux VM
  • A key principle for Microsoft's Hyperscale SDN is to push as much of the logic processing as possible down to the servers (hosts)
  • Hyperscale controllers must be able to handle 500K+ servers (hosts) in a region
  • The controller must be able to scale down to smaller data centers as well
  • Microsoft Azure Service Fabric is a platform for micro-service-based applications
  • Microsoft has released a developer SDK for its Service Fabric
  • Azure is using a Virtual Filtering Platform (VFP) to act as a virtual switch inside Hyper-V VMSwitch.  This provides core SDN functionality for Azure networking services. It uses programmable rule/flow tables to perform per-packet actions. This will also be extended to Windows Server 2016 for private clouds.
  • Azure will implement RDMA for very high performance memory transport between servers. It will be enabled at 40GbE for Azure Storage.  All the logic is in the server.
  • Server interface speeds are increasing: 10G to 40G to 50G and eventually to 100G
  • Microsoft is deploying FPGA-based Azure SmartNICs in its servers to offload SDN functions from the CPU. The SmartNICs can also perform crypto, QoS and storage acceleration.
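Microsoft's VFP internals are not public, but the programmable rule/flow-table idea described above can be sketched in a few lines of Python. Field names and actions here are purely illustrative, not Microsoft's actual schema:

```python
# Toy match-action table in the spirit of VFP's programmable rule/flow
# tables: each rule matches a subset of packet fields and names an action.

rules = [
    {"match": {"dst_port": 22},        "action": "drop"},             # ACL-style rule
    {"match": {"dst_ip": "10.0.0.5"},  "action": "encap_to_host_b"},  # VNet encapsulation
    {"match": {},                      "action": "allow"},            # empty match = wildcard default
]

def classify(packet):
    """Return the action of the first rule whose match fields all agree."""
    for rule in rules:
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            return rule["action"]
    return "drop"

print(classify({"dst_ip": "10.0.0.5", "dst_port": 443}))  # encap_to_host_b
print(classify({"dst_ip": "10.0.0.9", "dst_port": 22}))   # drop
```

The real VFP applies several such tables in sequence (ACLs, NAT, virtual-network encapsulation) per packet; this sketch shows only a single first-match table.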

The #ONS2015 keynote can be seen here:
https://youtu.be/RffHFIhg5Sc



#ONS2015: AT&T Envisions its Future as a Software Company

Over the next few years, AT&T plans to virtualize and control more than 75% of its network functions via its new Domain 2.0 infrastructure.  The first 5% will be complete by the end of this year, laying the foundation for an accelerated rollout in 2016.

In a keynote at the Open Networking Summit 2015 in Santa Clara entitled "AT&T's Case for a Software-Centric Network", John Donovan provided an update on the company's Domain 2.0 campaign, saying this strategic undertaking is really about changing all aspects of how AT&T does business.

Donovan, who is responsible for almost all aspects of AT&T's IT and network infrastructure, said AT&T is deeply committed to open source software, including contributing back to open source communities. The goal is to "software-accelerate" AT&T's network.  In the process, AT&T itself becomes a software company.

Here are some key takeaways from the presentation:


  • Since 2007, AT&T has seen a 100,000% increase in mobile data traffic
  • Video represents the majority of traffic on the mobile network
  • Ethernet ports have grown 1,300% since 2010
  • AT&T's network vision is rooted in SDN and NFV
  • The first phase is about Virtualizing Functions.
  • AT&T's Network On-Demand service is its first SDN application to reach customers. It went from concept to trials in six months.
  • The second phase is about Disaggregation.
  • The initial target of disaggregation is the GPON Optical Line Terminals (OLTs) deployed in central offices to support its GigaPower residential broadband service.  AT&T will virtualize the physical equipment using less expensive hardware.  The company will release an open specification for these boxes.
  • AT&T will contribute its YANG custom design tool to the open source community.
  • AT&T is leading a Central Office Re-architected as Data Center (CORD) project.


http://www.att.com
http://opennetsummit.org/

The ONS2015 keynote can be seen here:
https://youtu.be/7gEvIHCps1Q


Blueprint: Open Standards Do Not Have to Be Open Source

by Frank Yue, Senior Technical Marketing Manager, F5 Networks

Network Functions Virtualization (NFV) is driving much of the innovation and work within the service provider community. The concept of bringing the benefits of cloud-like technologies is driving service providers to radically alter how they architect and manage the services they deliver through their networks.

Different components from different vendors based on different technologies are required to create an NFV architecture. There are COTS servers, hypervisor management technologies, SDN and traditional networking solutions, management and orchestration products, and many distinct virtual network functions (VNFs). All of these components need to communicate with each other in a defined and consistent manner for the NFV ecosystem to succeed.


Source: Network Functions Virtualization (NFV); Architectural Framework

While ETSI has defined the labels for the interfaces between the various components of the NFV architecture, there are currently no agreed-upon standards. And although there are several open source projects to develop standards for these NFV interfaces, most have not matured to the point where they are ready for use in a carrier-grade network.

Are Pre-standards Solutions Premature?

In the meantime, various multi-vendor alliances are developing their own pre-standards solutions. Some are proprietary and others are derivations of the work done by open source groups. Currently, almost all of the proof of concept (POC) trials today are using these pre-standard variations. Each multi-vendor alliance is working in conjunction with the service providers to develop interface models and specifications that everyone within each POC will be comfortable with.

It is possible and even likely that some of these pre-standards will become de facto standards based on their popularity and utility. There is nothing wrong with standards that are developed by the vendor or service provider community as long as they meet two criteria: 1) The standard must work in a multi-vendor environment, since the NFV architecture model depends on multiple vendors delivering different components of the solution. 2) The standard needs to be published and open so that a new vendor can easily build its component to be compatible with the architecture.

Looking at the first of these points, the nature of the NFV architecture is to be an interactive and interdependent ecosystem of components and functions. It is unlikely that all of the pieces of the NFV ecosystem will be produced and delivered by a single vendor. In a mature NFV environment, many vendors will be involved. One multi-vendor NFV alliance currently has over 50 members. Another alliance has designed an NFV POC requiring the involvement of nine distinct vendors.

This multi-vendor landscape drives the need for the second point, for the standard to be published and open. No matter what interface model is developed by each vendor and alliance, it still needs to be published in an open form, allowing other vendors to create models to integrate their solutions into the NFV architecture. It is likely that in the mature NFV ecosystem, some components will be delivered by vendors that are not part of the majority alliance that delivered the NFV solution.

No two service provider networks are alike, and there are close to an infinite number of combinations of manufacturers and technologies that can be incorporated into each service provider’s NFV model.  Service providers will require all of the components in the network to interact in a relatively seamless fashion. This can only be accomplished if the interface pre-standards are open and available to the technology community at large.

Proprietary, but Open?

A proprietary, but open standard is one that has been developed without community consensus. While the standard has been developed by a vendor or alliance of vendors, the model is published to allow anybody interested in developing solutions to incorporate the standard without the need for licensing, partnership, or agreement in general.

Proprietary, but open standards can be developed by a single entity or a small community working together towards a common goal. This gives these proprietary standards some advantages. 1) They can be created quickly since universal consortium acceptance may not be required. 2) They can be adapted and adjusted quickly to meet the changing and evolving nature of NFV architectures.

While open source projects and products have the benefit of being available to everyone, there are some tradeoffs for the design of technologies by open committee. Open source projects are always in flux as multiple perspectives and methodologies are competing for a universal consensus. This is especially true when working with standards developing organizations (SDOs). Because of this, standards often take years, instead of months, to develop.

In the meantime, the current NFV alliances can develop interface models that are successful in the limited environment of the alliance ecosystem. This rapid development also allows for the tuning of these interfaces as NFV architectures develop and mature. These proprietary, but open, models can be used as a template within the SDOs to develop a standard that has the benefit of being tested and proven in real-world scenarios.

No Model is Perfect

Ultimately, the standards that are developed will probably be a mixture of open source solutions with customized enhancements and open proprietary standards developed by these alliances. It is likely that individual vendors and alliances will enhance the final standards, adding their unique value to improve functionality and differentiate their solution.

In an ideal world, standards are fixed in nature and in time, but networks are evolving and technologies like NFV continue to evolve and mature. In this world of dynamic architectures, it is essential to have standards that are dynamic and proprietary, but open. This type of standard offers a solution that can deliver functions today and adapt to the models of tomorrow.

About the Author

Frank Yue is the Senior Technical Marketing Manager for the Service Provider business at F5 Networks. In this role, Yue is responsible for evangelizing F5’s technologies and products before they come to market. Prior to joining F5, Yue was a sales engineer at BreakingPoint Systems, selling application-aware traffic and security simulation solutions for the service provider market. Yue also worked at Cloudshield Technologies supporting customized DPI solutions, and at Foundry Networks as a global overlay for the ServerIron application delivery controller and traffic management product line. Yue has a degree in Biology from the University of Pennsylvania.

About F5

F5 (NASDAQ: FFIV) provides solutions for an application world. F5 helps organizations seamlessly scale cloud, data center, telecommunications, and software defined networking (SDN) deployments to successfully deliver applications and services to anyone, anywhere, at any time. F5 solutions broaden the reach of IT through an open, extensible framework and a rich partner ecosystem of leading technology and orchestration vendors. This approach lets customers pursue the infrastructure model that best fits their needs over time. The world’s largest businesses, service providers, government entities, and consumer brands rely on F5 to stay ahead of cloud, security, and mobility trends. For more information, go to f5.com.


Got an idea for a Blueprint column?  We welcome your ideas on next gen network architecture.
See our guidelines.

Cisco's David Ward on Open Source Development

Open networking can only evolve with the support of a community of developers, says David Ward, CTO of Engineering and Chief Architect of Cisco. But you can't just launch a developer community; you have to build it. Open source communities have now emerged.

Cisco is working on many open networking fronts, including OpenDaylight, OPNFV, ONOS and OpenStack.  In this video, Ward also highlights NETCONF and YANG, two standards seen as key to infrastructure programmability.

http://open.convergedigest.com/2015/04/ciscos-david-ward-on-evolution-of-open.html

Nuage's Houman Modarres on the Value of Open

The move toward open networks is unstoppable, says Houman Modarres, VP of Marketing at Nuage Networks. The attraction of open networks is undeniable: who would want a stiff, inflexible, vertically-integrated solution when the Internet has already shown that creativity coming from different parts of the user community and application ecosystem is the right answer?

The crux is this:  with freedom of choice comes complexity.  Nuage, a business unit of Alcatel-Lucent, is working to address this challenge by supporting a variety of deployment models.

http://open.convergedigest.com/2015/04/nuages-houman-modarres-on-value-of-open.html

Intel's John Healy on the End Goal of Open

The end goal of open networking is to have a scalable infrastructure that is also lower cost to manage, says John Healy, General Manager of the SDN Division at Intel.

Open means more open in terms of standards and vendors. It also means being capable of working with the open source community.

http://open.convergedigest.com/2015/04/intels-john-healy-on-evolution-of-open.html

Novatel Wireless to Acquire DigiCore for M2M and Telematics

Novatel Wireless agreed to acquire DigiCore, a provider of advanced machine-to-machine (M2M) communication and telematics solutions based in South Africa, for approximately US$87 million.

The companies have been working together to commercialize a comprehensive end-to-end global service Software-as-a-Service (SaaS) platform.

This combines Novatel Wireless' hardware with DigiCore's Ctrack, a global telematics SaaS offering for the fleet management, user-based insurance, and asset tracking and monitoring markets.

"This combination is the result of a long-standing partnership between the two companies," said Alex Mashinsky, CEO of Novatel Wireless. "As a result of this relationship, we've already gone through an arduous process of integrating Novatel Wireless hardware into DigiCore's SaaS platform to create the industry's most complete IoT stack. These efforts are now bearing fruit as our successful joint venture has validated that the market demands a true end-to-end solution comprised of a comprehensive hardware portfolio, platform and cloud services, and integration and support."

Novatel Wireless said the merger will give it a foundation for developing and marketing comprehensive solutions for the commercial telematics industry. The collective vision is to simplify the delivery of telematics solutions from device deployment to big data collection, analytics, and reporting.

http://investor.novatelwireless.com/releasedetail.cfm?ReleaseID=918626
http://www.ctrack.com/

Dell to Resell NEC's SDN ProgrammableFlow Controller

NEC and Dell announced a new distribution agreement that will allow Dell to resell the NEC ProgrammableFlow Controller Software as one of the software options sold with Dell’s networking hardware. Dell’s S4810, S4820, S5000 and S6000 series of switches running Dell OS9 have been verified compatible with the NEC ProgrammableFlow Controller version 6, with additional validation of Dell switches planned.

“Unlocking the network from the tight coupling of hardware and software opens more customer choice to achieve better service levels at lower costs,” said Arpit Joshipura, vice president, Dell Enterprise Networking & NFV. “We are excited to work with NEC to address the market demand for automation and open standards.”

http://www.necam.com/sdn
http://www.dell.com

Wind River Opens App Store for VxWorks RTOS

Wind River is opening an app store for its VxWorks real-time operating system (RTOS). Wind River Marketplace helps customers find and evaluate best-of-breed add-on solutions from the Wind River partner ecosystem in order to enhance their VxWorks deployment.

The apps in the store are tested and validated by Wind River for seamless interoperability with VxWorks.  Target categories range from safety, security, and storage to connectivity, graphics, and development tools.

“With the synergy and validation already achieved for Wind River Marketplace products, customers get almost immediate access to best-in-class embedded technologies to easily enhance the operating environment and accelerate market deployment,” said Dinyar Dastoor, vice president and general manager of operating system platforms at Wind River.

http://tinyurl.com/WRMarketplace

Wednesday, June 17, 2015

VeloCloud Brings its SD-WAN to Equinix Data Centers

VeloCloud Networks announced a reference architecture that brings its Software-Defined Wide Area Networking (SD-WAN) to Equinix’s Performance Hubs, which are extensions of an enterprise corporate IT network co-located inside Equinix International Business Exchange (IBX) data centers. The service is currently available in Equinix IBX data centers located in Chicago, Dallas, San Francisco, Seattle and Washington, D.C., metro areas.

VeloCloud’s solution enables Equinix customers to leverage SD-WAN as a cost-effective option for connecting to their Performance Hubs with the same performance, reliability and control as if these services were within the customers’ private network.

“VeloCloud’s development of a reference architecture for connecting to enterprise Performance Hubs inside Equinix represents a key step forward for next-generation enterprise networks,” said Sanjay Uppal, CEO and co-founder of VeloCloud. “Equinix is the interconnection and data center leader, and this new architecture can help enterprises achieve unprecedented SaaS application performance levels.”

http://www.velocloud.com



  • VeloCloud Networks, which is based in Mountain View, California, offers a subscription-based, virtualized WAN service for enterprises that aggregates multiple access lines (cable modem, DSL, LTE) into a single secure connection that is defined and controlled in the cloud. The VeloCloud service uses an Intel-based customer premise device at a branch office to communicate with a VeloCloud gateway in the cloud. The service analyzes network performance and application traffic to determine the best path and dynamically steer traffic to corporate data center or cloud services.
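VeloCloud's actual steering algorithm is proprietary, but path selection of the kind described above reduces to scoring each access line on measured metrics and picking the best. A minimal sketch, with invented metrics and weights:

```python
# Illustrative SD-WAN path steering: choose the best of several access
# lines using measured latency and loss. The links, numbers, and scoring
# weights are invented for illustration, not VeloCloud's algorithm.

links = {
    "cable": {"latency_ms": 25, "loss_pct": 0.1},
    "dsl":   {"latency_ms": 40, "loss_pct": 0.5},
    "lte":   {"latency_ms": 60, "loss_pct": 1.0},
}

def score(metrics, loss_weight=50):
    # Lower is better: latency plus a heavy penalty per percent of loss.
    return metrics["latency_ms"] + loss_weight * metrics["loss_pct"]

def best_path(links):
    return min(links, key=lambda name: score(links[name]))

print(best_path(links))  # cable -- lowest combined latency/loss score
```

A real controller would recompute this continuously per application class, steering latency-sensitive traffic and bulk transfers onto different links.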

FCC Seeks Record $100 Million Fine from AT&T

The FCC is proposing a $100 million fine on AT&T Mobility for misleading consumers about its unlimited data plan.

The FCC said AT&T Mobility willfully and repeatedly violated its 2010 Open Internet Transparency Rule by: (1) using the misleading and inaccurate term “unlimited” to label a data plan that was in fact subject to prolonged speed reductions after a customer used a set amount of data; and (2) failing to disclose the express speed reductions that it applied to “unlimited” data plan customers once they hit a specified data threshold.

The FCC said it was unconvinced that AT&T's disclosures about its unlimited data policies provided consumers with sufficient information to make informed choices about their broadband service.

https://www.fcc.gov/document/att-mobility-faces-100m-fine-misleading-consumers-0


  • Two of the five FCC Commissioners issued statements opposing the fine.

ALU Debuts G.fast Home CPE

Alcatel-Lucent (Euronext Paris and NYSE: ALU) has today launched a new device for residential and business use that will allow operators to accelerate the deployment of ultra-broadband access in order to meet ever-growing demand for faster data speeds in homes and the workplace.

The Alcatel-Lucent G.fast residential gateway is a plug-and-play device that aims to eliminate the need for operators’ engineers to enter a customer’s premises for installation.  The G.fast residential gateway also delivers concurrent dual-band WiFi to provide even greater signal strength and 1 Gbps data speeds, eliminating the need to run fiber into the home.

Highlights of Alcatel-Lucent’s G.fast residential gateway (7368 ISAM CPE):

  • Integrated reverse power will allow operators to reduce costs and complexity of large scale G.fast deployments.
  • Dual-band Wi-Fi (802.11ac/n on 5GHz and 802.11b/g/n on 2.4GHz) provides over 1 Gbps of bandwidth, offering customers the super-fast speeds they demand across all their connected devices.
  • Higher transmitting power of up to 1000 mW will deliver the same quality of service inside any room or outdoor area.
  • SuperSpeed USB3.0 support for home network access to shared content such as music, videos and photos.
  • The 7368 ISAM CPE can be deployed with VDSL2 networking today and upgraded later without having to switch out the device.
  • Fully integrated, plug-and-play device reduces time and cost of installation.
  • Compatible with Alcatel-Lucent G.fast DPUs, DSL line cards and G.fast standard-compliant DPUs.
  • Leveraging the same advanced home technology as the 7368 ISAM ONT G-240W-B residential gateway ONT, delivering the same experience to both PON and G.fast subscribers.


https://www.alcatel-lucent.com/press/2015/alcatel-lucent-introduces-gfast-home-networking-device-accelerate-deployment-ultra-broadband#sthash.RzlIJLdT.dpuf

Ciena Joins SARNET Research Initiative

Ciena announced its participation in SARNET, a four-year collaboration agreement between the University of Amsterdam (UvA), the Netherlands Organisation for Applied Scientific Research (TNO), and an airline to explore methods for autonomous Internet security. The focus is on exploring how software defined networking (SDN) can help defend against cyber-attacks and program networks to provide enhanced cyber-terror detection and defense. SARNET utilizes a unique, multi-purpose, high-capacity research network and allows researchers to trial advanced network detection and defensive functionalities that automatically reconfigure around anomalies to help create and control agile, resilient and high-performing architectures.

“SARNET exemplifies the significant transformation that future networks need to make in order to reconfigure and self-provision instantly. This collaboration is exploring just one of the use-cases made possible by SDN. Autonomous Internet security would alleviate a lot of pressure on today’s networks, freeing up resources and opening the industry up to a raft of other innovations.”
- Cees deLaat, University of Amsterdam, Principal Investigator, SARNET

http://www.ciena.com
http://hims.uva.nl/news-and-events/news/content4/2014/08/nwo-grant-for-internet-security.html

Breqwatr Delivers Hyper-Converged Cloud Appliance

Breqwatr, a start-up based in Toronto, introduced the second generation of its hyper-converged cloud appliance for scale-out infrastructure.

Breqwatr delivers a curated OpenStack deployment in a fully integrated, highly available and hyper-converged appliance. The Breqwatr Cloud Appliance includes core IaaS components for virtual machine management, as well as chargeback and resource based quota models, with support for external enterprise storage.

Key features include:

  • Linear Expansion: Online and automated expansion of compute and storage resources, allowing for seamless expansion of the cloud platform
  • Consumerized User Interface: Breqwatr’s simplified self-service UI provides users with on-demand access to the compute and storage resources required to do their job
  • Enterprise Resilience: Control-plane high-availability and the introduction of instance-level HA delivers the resiliency that enterprises demand
  • Hardware Management: Breqwatr’s hardware management interface makes managing and monitoring the physical components of the cloud a simple browser-based experience

“At Breqwatr, we’re dedicated to empowering the enterprise to take full advantage of private clouds,” said John Kadianos, founder and CEO at Breqwatr. “With the Breqwatr Cloud Appliance, we are removing the guesswork and helping enterprises realize the quickest time to value available for private cloud today. We’re committed to making OpenStack more accessible and therefore have kept user experience at the forefront of our design principles. With this product, we’re bringing the value of public cloud computing into the comfort and security of one’s own datacenter.”

https://breqwatr.com

Tuesday, June 16, 2015

Launching Open.ConvergeDigest.com -- An Open Networking Showcase

by James E. Carroll

We're pleased to announce the launch of our mini-website on Open Networking.

This sub-domain of the Converge! Digest site showcases top ideas and technology demonstrations related to SDN, NFV, OpenStack, OpenDaylight, OpenFlow, ONOS, OPNFV, Open vSwitch, Open Compute and other open source efforts aimed at making networking more agile, programmable, scalable and lower cost.

Our series kicks off with short videos from top insiders sharing their views on how Open Networking initiatives are changing the course of the industry:

Got an idea for this series?  Please contact Jim Carroll 

Internet2 Deploys ONOS to Provision Virtual Nets

Internet2 has deployed the open source Open Network Operating System (ONOS) on its nationwide research and education (R&E) network.

Five higher education institutions — Duke University, Florida International University, the Indiana GigaPoP, MAX and the University of Maryland – College Park, and the University of Utah — are connected to a virtual slice of the Internet2 nation-wide network that is piloting this next-generation advanced network technology.

Internet2 said it is using the capabilities of its SDN substrate to provision virtual networks based on FlowSpace Firewall. An ONOS cluster is deployed in a virtual network slice on the Internet2 network, controlling 38 OpenFlow-enabled Brocade and Juniper switches. The SDN-IP Peering application deployed atop ONOS peers with other, traditional networks. An SDN-based network like Internet2 provides benefits such as network programmability, lower TCO and removal of vendor lock-in. In this particular case, the centralized control plane leads to significant improvements in network operation efficiency for the Internet2 network.
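At a high level, SDN-IP's job is to translate routes learned from conventional BGP peers into forwarding state on the OpenFlow switches. A simplified model of that mapping (the data structures here are illustrative, not ONOS's actual API):

```python
# Simplified view of SDN-IP: map BGP-learned routes to match-action
# flow entries that forward traffic toward the announcing peer's
# attachment port. An illustrative model, not ONOS's real API.

bgp_routes = [
    {"prefix": "192.0.2.0/24",    "next_hop_port": 3},
    {"prefix": "198.51.100.0/24", "next_hop_port": 7},
]

def routes_to_flows(routes):
    """One IPv4-destination match per learned prefix, output to the peer port."""
    return [
        {"match": {"ipv4_dst": r["prefix"]},
         "action": {"output": r["next_hop_port"]}}
        for r in routes
    ]

for flow in routes_to_flows(bgp_routes):
    print(flow)
```

In production ONOS expresses this as intents rather than raw flow rules, so the cluster can re-route around failures while preserving the BGP-learned reachability.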

“We worked closely together in a lab environment to prepare ONOS for production deployment on the Internet2 Network, providing many valuable insights on production deployment of SDN-controlled virtual networks in a multi-tenant environment,” said Luke Fowler, director of software and systems for the GlobalNOC.

“A primary feature of the Internet2 Network is its ability to serve as a ‘playground’ for piloting new advanced networking capabilities in a real-world environment with demanding users and advanced applications capabilities,” said Rob Vietzke of Internet2. “The ONOS and SDN-IP peering deployment is another demonstration of how Internet2 and the academic community continue to be a large scale platform in which pre-market innovations can be prototyped at scale.”

"ON.Lab, the ONOS Project and Internet2 have a very synergistic collaboration. At ON.Lab we develop interesting open source SDN platforms and Internet2 is a keen early adopter bringing new capabilities to its customers,” said Bill Snow, vice president of Engineering for ON.Lab. “With the deployment of ONOS on Internet2’s nationwide network, we get to validate and demonstrate ONOS’s scalability, performance and high availability in a production setting and learn from this experience to make ONOS better.”

http://www.internet2.edu/news/detail/8664/

Corsa Introduces SDN Metering and QoS for Big Data

Corsa Technology, a start-up based in Ottawa, unveiled new SDN metering and QoS (Quality of Service) capability for its line of performance SDN hardware. Bandwidth reservation is seen as especially interesting for organizations running Big Data workloads.

The traffic engineering function, which is based on OpenFlow 1.3, allows network architects to better manage bandwidth across their network with dynamic, policy-aware metering and QoS. Metering and queuing allows networks to create bandwidth profiles by putting limits and guarantees on traffic with particular classes.

Corsa said the advantage with SDN is that bandwidth limits are no longer fixed as part of a static topology and rigid hardware platform. Policy-aware provisioning can be dynamically pushed down to the flexible Corsa SDN hardware to make ongoing adjustments to meters and queue assignments. The network can then make immediate, informed queuing and discard decisions under congestion. Real-time performance monitoring automatically returns meter statistics, which are checked against policies such as SLAs. For network operators, including service providers and ISPs, SDN metering and queuing allows new self-serve features to be offered such as “bandwidth reservation,” where users can dynamically schedule and reserve bandwidth via separate classes of service and meters.
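OpenFlow 1.3 meters are essentially token buckets attached to flow entries: traffic within the configured rate passes, while excess traffic triggers a band action such as drop. A minimal sketch of that mechanism, simplified to a single drop band:

```python
# Token-bucket meter of the kind OpenFlow 1.3 meter bands describe:
# conforming packets pass; packets arriving with the bucket empty hit
# the (drop) band. Simplified single-band model for illustration.

class Meter:
    def __init__(self, rate_kbps, burst_kbits):
        self.rate = rate_kbps       # refill rate, kbits per second
        self.burst = burst_kbits    # bucket depth
        self.tokens = burst_kbits   # bucket starts full

    def refill(self, elapsed_s):
        self.tokens = min(self.burst, self.tokens + self.rate * elapsed_s)

    def admit(self, packet_kbits):
        """True if the packet conforms; False if the drop band applies."""
        if self.tokens >= packet_kbits:
            self.tokens -= packet_kbits
            return True
        return False

m = Meter(rate_kbps=1000, burst_kbits=100)  # 1 Mbps rate, 100 kbit burst
print(m.admit(80))  # True  -- within the burst allowance
print(m.admit(80))  # False -- bucket exhausted, drop band applies
m.refill(0.1)       # 100 ms later: 1000 * 0.1 = 100 kbits refilled
print(m.admit(80))  # True again
```

Hardware implementations add multiple bands per meter (e.g. DSCP remark before drop), which is what lets a single flow carry both a guarantee and a hard cap.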

“Dynamic, policy-driven networks is what SDN is about and SDN hardware solutions must be able to respond to these policy changes. SDN metering and QoS functionality is one important area for policy-driven networking requiring a very capable hardware solution,” said Yatish Kumar, Corsa Technology CTO. “Corsa’s line of performance SDN hardware has deep packet buffers, multi-table datapaths, and can support over a million active flows with flow modifications updated at >50,000 flow mods/sec. Together, these attributes make Corsa SDN metering and QoS a powerful tool for creating granular bandwidth profiles.”

http://www.corsa.com/corsa-technology-announces-sdn-metering-and-qos/
