Monday, June 22, 2015

Docker Adds Multi-Host, Software-Defined Networking to Containers

Docker is adding software-defined networking (SDN) capabilities to strengthen the portability of multi-container distributed applications across multi-host IP networks. Along with networking, Docker is adopting a dynamic plugin architecture that opens the platform to direct extension by technology partners and developers; for example, Docker's native SDN can be swapped for a third-party product. Initial plugin capabilities cover networking and storage volumes, with networking plugins available from Cisco, Microsoft, Midokura, Nuage Networks, Project Calico, VMware and Weave, and storage volume plugins from ClusterHQ.

“By bringing SDN directly to the application itself and into the hands of the developers, Docker is driving multi-container application portability throughout the application development lifecycle,” said Solomon Hykes, CTO and chief architect of Docker. “Individual developers, through a single command, can establish the topology of the network to connect discrete Dockerized services into a distributed application. And then through a set of commands be able to inspect, audit and change topology ‘on the fly.’”

Docker's SDN capabilities, which were developed through its recent SocketPlane acquisition, are being extended through the company's three orchestration tools (Docker Machine, Docker Compose and Docker Swarm). The SDN functionality is tied into DNS (domain name system) and VXLAN (virtual extensible LAN). DNS ensures that Dockerized services will be able to communicate without modification. VXLAN enables the creation of portable, distributed networks that allow an application’s microservices to reside on any member of a Swarm, a native Docker cluster.
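
That workflow, creating a network once and then letting the services attached to it find each other by name, can be sketched in a few calls. The snippet below is a rough illustration using the Docker SDK for Python as a stand-in for the CLI commands described above; the image, network and container names are placeholders, and the overlay driver assumes a Swarm-enabled engine.

```python
# Sketch: create a multi-host-capable overlay network and attach two services.
# Assumes the Docker SDK for Python ("docker" package) and an engine with
# Swarm/overlay networking enabled; names and images are illustrative.
import docker

client = docker.from_env()

# One call establishes the network topology for the application.
appnet = client.networks.create("appnet", driver="overlay")

# Containers on the same network resolve each other by name via built-in DNS,
# so the web service can reach the database simply as the host "db".
client.containers.run("redis:latest", name="db", network="appnet", detach=True)
client.containers.run("nginx:latest", name="web", network="appnet", detach=True)

# Inspect the topology "on the fly".
appnet.reload()
print([c.name for c in appnet.containers])
```

With Swarm scheduling the containers, the same name-based wiring holds even when "db" and "web" land on different hosts, which is the portability point Docker is making.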

Docker Compose defines the containers that comprise the distributed application and how they are connected together. Through integration with Docker Swarm, the multi-container application can be immediately networked across multiple hosts and can communicate seamlessly across a cluster of machines with a single command. Docker Swarm now has working integration with Mesos scheduling.

Docker is also announcing a collaboration with Amazon Web Services (AWS) around the Amazon EC2 Container Service (ECS) to optimize the scheduling of Dockerized applications for Amazon Elastic Compute Cloud (Amazon EC2), providing a native cluster management experience for Docker users. Amazon ECS integration with Docker Compose and Docker Swarm will make it easier for customers to launch tasks on Amazon ECS using the same APIs they use in their local dev environments.

Docker said its SDN functionality provides a new level of consistency in terms of how applications are networked through their full lifecycle. A development team can initially define the topology of its distributed application, while the networking team can, at a later stage, apply the sophisticated networking policy necessary to run an application with the highest level of availability and security in production. Even with these sophisticated policies in place, an operations team will have the freedom of choice – without reconfiguring the Dockerized application – to move the application from their private data center to any cloud.

http://www.docker.com

Docker Contributes its Runtime to Linux Foundation's New Open Container Project

The major developers of software container technology, including Docker and CoreOS, agreed to collaborate on a new Open Container Project (OCP) under the auspices of the Linux Foundation.

Docker agreed to donate the code for its software container format and its runtime, as well as the associated specifications.

Additional founders of the OCP include Amazon Web Services, Apcera, Cisco, EMC, Fujitsu Limited, Goldman Sachs, Google, HP, Huawei, IBM, Intel, Joyent, the Linux Foundation, Mesosphere, Microsoft, Pivotal, Rancher Labs, Red Hat and VMware.

Docker said that over the past two years, its image format and container runtime have emerged as the de facto standard, with support across every major Linux distribution, Microsoft Windows, every major public cloud provider, all leading virtualization platforms and most major CPU architectures, including x86, ARM, IBM z Systems and Power Systems.

Containers based on Docker’s image format have been downloaded more than 500 million times in the past year alone and there are now more than 40,000 public projects based on the Docker format.

“Containers are revolutionizing the computing industry and delivering on the dream of application portability,” said Jim Zemlin, executive director of the Linux Foundation. “With the Open Container Project, Docker is ensuring that fragmentation won’t destroy the promise of containers. Users, vendors and technologists of all kinds will now be able to collaborate and innovate with the assurance that neutral open governance provides. We applaud Docker and the other founding members for having the will and foresight to get this done.”

http://www.opencontainers.org/

Infinera Adds 100GbE to Cloud Xpress, along with NETCONF/YANG, MACsec

Infinera announced the addition of 100 gigabit Ethernet (GbE) client services to its Cloud Xpress family of metro Cloud platforms.

The Cloud Xpress family, which Infinera announced last September and began shipping in December, is designed specifically to address the needs of Cloud service providers, Internet content providers, Internet Exchange service providers, large enterprises and other large-scale datacenter operators.  It leverages the company's unique oPIC-500 metro-optimized photonic integrated circuit to deliver DWDM datacenter interconnect services up to 500 Gbps in a compact two rack unit chassis.

The new Cloud Xpress with 100 GbE extends the hyper-scale density, simplified operations and low power of the existing Cloud Xpress family that operators can use to easily deploy and scale their networks. With the addition of the new platform, the Cloud Xpress family now supports 10 GbE, 40 GbE and 100 GbE client-side interfaces to match customer specific requirements.

Along with the introduction of the new Cloud Xpress with 100 GbE, Infinera announced enhancements across the Cloud Xpress family: MACsec encryption for improved security, NETCONF and YANG support for Software Defined Networking (SDN) and ease of use, and LLDP discovery for datacenter automation. NETCONF and YANG support enables smooth integration into datacenter automation and management systems; LLDP awareness allows auto-discovery of adjacent switches and routers across the WAN for efficient connectivity validation and troubleshooting; and MACsec provides end-to-end encryption to keep data secure.
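
For a sense of what NETCONF-based integration looks like from the management side, the sketch below opens a NETCONF session and pulls the advertised capabilities and running configuration using the generic ncclient Python library. The address and credentials are placeholders, and the Cloud Xpress YANG data models themselves are not shown; this is a generic NETCONF client, not Infinera-specific code.

```python
# Minimal NETCONF client sketch using the generic ncclient library.
# Host, port and credentials are placeholders; a real integration would
# request data filtered by the device's published YANG models.
from ncclient import manager

with manager.connect(host="192.0.2.10",   # device management address (placeholder)
                     port=830,            # standard NETCONF-over-SSH port
                     username="admin",
                     password="admin",
                     hostkey_verify=False) as m:
    # The capabilities exchange lists the YANG models the device supports.
    for capability in m.server_capabilities:
        print(capability)

    # Retrieve the running configuration as XML.
    reply = m.get_config(source="running")
    print(reply.xml)
```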

“The Cloud Xpress platforms are purpose-built for the datacenter interconnect market,” said Stu Elby, senior vice president, Cloud network strategy and technology, Infinera. “Based on Infinera's unique PIC and super-channel technology, the Cloud Xpress provides scale and simplicity while utilizing little power.”

http://www.infinera.com
http://www.infinera.com/j7/servlet/NewsItem?newsItemID=462


VMware to Support Docker Containers on vSphere

VMware is previewing two key technologies for cloud-native apps running in Docker containers on VMware vSphere:

AppCatalyst -- an API- and command line interface (CLI)-driven hypervisor that replicates a private cloud locally on a desktop for building and testing containerized and microservices-based applications. The tool features Project Photon, an open source minimal Linux container host, along with Docker Machine and integration with Vagrant.

Project Bonneville -- software that enables the seamless integration of Docker containers into the VMware vSphere platform, allowing virtualization administrators to use their existing operational and management processes and tools, such as VMware vCenter Server, without the need for new tools or additional training. Project Bonneville downloads containers from Docker Hub, then isolates and starts up each container in a virtual machine with minimal overhead using the Instant Clone feature of VMware vSphere. Together, Project Bonneville and Instant Clone will make virtual machines lightweight enough to support one container per virtual machine.

"Developers are rapidly adopting Docker to ship apps faster, whether to their private or public cloud infrastructures," said Scott Johnston, senior vice president of product, Docker. "VMware AppCatalyst and Project Bonneville will make it easier for developers and IT/operations teams alike to take advantage of existing management, compliance, networking and security processes and tools to run Dockerized applications."

http://www.vmware.com

Portworx Targets Container-Aware Storage

Portworx, a start-up based in Redwood City, California, unveiled its solution for providing elastic, scale-out block storage natively to Docker containers.

Portworx PWX Converged Infrastructure for Containers allows Dockerized applications to execute directly on the storage infrastructure. It also enables Dockerized applications to be scheduled across machines and clouds, making possible the deployment of stateful, distributed applications.

Key Portworx PWX features:

●     Container-Aware storage enabling data persistence across nodes, container-level snapshots, and storage policies to be applied at container granularity.

●     Self-service IT through Software Defined Storage.

●     Converged Storage ensuring that containers run on storage nodes to maximize I/O performance.

●     An Elastic Storage Orchestrator that auto-scales block storage to meet application needs and moves underlying storage blocks across nodes to optimize performance and ensure high availability.
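
In practice, container-granular storage like this is consumed through Docker's named-volume interface. The sketch below, again using the Docker SDK for Python, creates a volume with a third-party volume driver and mounts it into a stateful container; the driver name "pwx" and the option keys are hypothetical stand-ins, not documented Portworx identifiers.

```python
# Illustrative only: consuming a third-party block-storage volume driver
# from Docker. The driver name "pwx" and its options are placeholders.
import docker

client = docker.from_env()

# Create a named volume backed by the vendor's driver, with per-volume
# (i.e. per-container-granularity) options.
client.volumes.create(name="ordersdb-data",
                      driver="pwx",                  # hypothetical driver name
                      driver_opts={"size": "10G"})   # option keys vary by driver

# Run a stateful service with that volume mounted. If the driver exposes the
# volume cluster-wide, the container can be rescheduled on another node and
# reattach to the same data.
client.containers.run(
    "postgres:9.4",
    name="ordersdb",
    volumes={"ordersdb-data": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
    detach=True)
```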

In addition, Portworx announced an initial funding round of $8.5 million led by the Mayfield Fund.

“Both stateless and stateful applications can benefit from containerization. Our goal was to build storage infrastructure for Docker containers from the ground up with a keen focus on the new requirements that containers and service oriented architectures create,” said Murli Thirumale, CEO of Portworx.  “Building storage with container level granularity ensures storage policies can be applied at the container level, enabling greater agility, application state consistency, high availability, rolling upgrades, and high performance.  In getting to this point, we had to break a few cherished notions about storage and traditional architectures.”

http://portworx.com/

Rancher Labs Launches its Container Infrastructure Platform

Rancher Labs, a start-up based in Cupertino, California, announced the beta release of its platform for running Docker in production. It includes a fully-integrated set of infrastructure services purpose built for containers, including networking, storage management, load balancing, service discovery, monitoring, and resource management. Rancher connects these infrastructure services with standard Docker plugins and application management tools, such as Docker Compose, to make it simple for organizations to deploy and manage containerized workloads on any infrastructure.

Key Rancher features:

  • Cross-host networking: Rancher creates a private software defined network for each user, allowing secure communication between containers across hosts and even clouds.
  • Container load balancing: Rancher provides an integrated, elastic load balancing service to distribute traffic between containers or services.
  • Storage Management: Rancher supports live snapshot and backup of Docker volumes, enabling users to backup stateful containers and stateful services.
  • Service discovery: Rancher implements a distributed DNS-based service discovery function with integrated health checking that allows containers to automatically register themselves as services, as well as services to dynamically discover each other over the network.
  • Service upgrades: Rancher makes it easy for users to upgrade existing container services, by allowing service cloning and redirection of service requests. This makes it possible to ensure services can be validated against their dependencies before live traffic is directed to the newly upgraded services.
  • Resource management: Rancher supports Docker Machine, a powerful tool for provisioning hosts directly from cloud providers. Rancher then monitors host resources and manages container deployment.
  • Native Docker Support: Rancher supports native Docker management of containers. Users can directly provision containers using the Docker API or CLI, as well as using Docker Compose for more complex application management functions. Third-party tools that are built on Docker API, such as Kubernetes, work seamlessly on Rancher.
  • Multi-tenancy and user management: Rancher is designed for multiple users and allows organizations to collaborate throughout the application lifecycle. By connecting with existing directory services, Rancher allows users to create separate development, testing, and production environments and invite their peers to collaboratively manage resources and applications.

“Much of the excitement around Docker is its use as a universal packaging and distribution format,” said Sheng Liang, co-founder and CEO of Rancher Labs. “However, as users deploy containers across different infrastructures, they quickly realize that different clouds, virtualization platforms and bare metal servers have dramatically different infrastructure capabilities. By building a common infrastructure backplane across any resource, Rancher implements an entirely new approach to hybrid cloud computing.”

Rancher Labs recently announced $10 million in Series A funding from Mayfield and Nexus Venture Partners.

http://www.rancher.com

Datawise.io Targets Network/Storage for Linux Containers

Datawise.io, a start-up based in San Jose, previewed its software for delivering network and storage services to Linux containers.

Project 6 is software for deploying and managing Docker containers across a cluster of hosts, with a focus on simplifying on-premises environments. Datawise.io is making it easy to pack stateless and stateful applications onto the same environment by integrating Docker and Google’s Kubernetes with additional capabilities that provide:

  • Highly-available cluster management, to aggregate resources and coordinate operations across multiple hosts 
  • Simplified networking, so that containers maintain distinct IP network identities independently from hosts, and can easily locate each other  
  • Persistent and temporary storage volumes, to make efficient use of local disks and minimize the need for external storage 
  • Dynamic scheduling augmented with network and storage, so that resources available are efficiently utilized with minimal administration  

“At Datawise.io, we believe containers will revolutionize application development and datacenter operations,” said Jeff Chou, Datawise.io Co-Founder and CEO. “We are providing a preview of Project 6 now to get early community feedback which will be incorporated into other projects our team is working on.”

http://www.datawise.io

Red Hat Appoints Frank Calderoni as CFO

Frank Calderoni has joined Red Hat as executive vice president, operations and chief financial officer.

Most recently, Calderoni served as executive vice president and CFO at Cisco Systems for seven years. From 2004 through 2014, he managed the financial strategy and operations of a company with more than 72,000 employees and total revenue for fiscal year 2014 of $47 billion. During his tenure at Cisco, the company went from $22 billion in revenue to $47 billion, and grew annual profits from $0.62 per share to $1.49 per share. With more than 30 years of experience, he has led high-performing finance organizations at global software and technology companies including Cisco, QLogic Corp., SanDisk Corp., and IBM.

Calderoni succeeds Red Hat executive vice president and CFO Charlie Peters, whose planned retirement was announced in December 2014, and who will remain with Red Hat until July 31.

http://www.redhat.com

Verizon Offers Pay Increase in Labor Negotiations

Verizon is offering a pay increase to approximately 38,000 Verizon East wireline employees as part of a three-year labor package it presented to the Communications Workers of America and the International Brotherhood of Electrical Workers.

Verizon said its offer includes several key proposals:

Wages -- Provided there is a signed agreement by Aug. 1, upon ratification of a new contract there would be a 2 percent wage increase effective Aug. 2, 2015; a 2 percent increase one year later; and a $1,000 lump sum payment in the third year. The average annual salary and benefit package for a Verizon associate in the East is $130,000.  Verizon technicians in the New York City/Long Island region currently have an average total wage-and-benefit package worth in excess of $160,000 a year.

Pensions -- Pension-eligible associates would be given a choice of continuing to earn pension benefits under the defined benefit plan with some limitations and forgoing the existing 401(k) company match, or opting for the enhanced 401(k) plan currently offered to management employees (which includes a bigger company match and a profit-sharing contribution) with a frozen pension benefit. With the exception of union-represented employees hired since Oct. 28, 2012, employees under these collective bargaining agreements currently have both a defined benefit pension plan AND a 401(k) savings plan with a generous company match, a benefit structure that's from another era.

Healthcare -- Negotiating cost controls for the company's healthcare plans is essential. The cost of medical coverage for an East associate and one or more family members currently averages nearly $20,000 a year. In one of the company's East plans, the annual cost for this coverage is over $23,000 annually. By contrast, the national average for family healthcare coverage is about $16,800. The company is proposing an increase of $8.10 per week next year for individual healthcare premiums. Other reasonable cost controls are also important to help keep this Wireline business unit competitive.

Workforce management -- The company is seeking more flexibility in terms of managing the workforce consistent with customer demands.

"More than ever, we need contractual changes that position us to compete with new and emerging technologies," said Tami Erwin, president of Verizon's Consumer and Mass Business unit. "American consumers are communicating in new and innovative ways. The way we work and respond to our customers demands flexibility. Our contract rules and provisions need to be updated to reflect those changes."

http://www.verizon.com

Sunday, June 21, 2015

Blueprint: 5G and the Need For SDN Flow Optimization

by Scott Sumner, VP Solutions Development and Marketing, Accedian Networks

As more subscribers run bandwidth-intensive applications on a variety of devices, mobile access networks are increasingly strained to maintain quality. According to Ericsson, annual mobile data traffic is predicted to grow from 58 exabytes in 2013 to roughly 335 exabytes by 2020. It’s clear that brute-force bandwidth over-provisioning is no longer an economically feasible solution.
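
For scale, a quick reading of those two figures, treating both as annual totals:

\[
\frac{335\ \text{EB}}{58\ \text{EB}} \approx 5.8,
\qquad
\left(\frac{335}{58}\right)^{1/7} - 1 \approx 0.28
\]

That is roughly a 5.8x increase over seven years, or about 28 percent compound annual growth, which is the kind of curve that makes simply adding capacity uneconomical.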

What strategies can operators implement to meet growing quality of experience (QoE) expectations, especially in the face of finite spectrum?

Part of the answer is improvements to 4G networks using technologies like LTE-A, LTE unlicensed, and voice over LTE (VoLTE). In the mobile space alone, the Groupe Speciale Mobile Association (GSMA) expects that 4G networks, as fast as they can be deployed, will reach their limits within five years, making this option a stopgap and a stepping stone to bigger and better things.

5G networks and standards are the inevitable answer, taking bandwidth another order of magnitude forward, supporting 1000% device densification and the seamless coexistence of the Internet of Things (IoT). Getting there requires understanding the real-world dynamics at play, the role of Software Defined Networking (SDN) in 5G, and requirements for performance assurance in a virtualized world.

5G Visions and Realities

As now envisioned, 5G will further tighten performance requirements: sub-millisecond latency bounds, minimal packet loss, and availability approaching 99.95%. These targets sound great in theory, but in the real world they are challenging to achieve.

Complicating planning and development efforts is the fact that 5G proposals like those published recently by GSMA and NGMN focus on multiple end use cases or applications, each with quite different performance demands on the network: some high bandwidth, some low latency, some both, some neither. These competing applications necessitate exceptional quality of service (QoS), meeting the diverse requirements of each service, while maintaining the most efficient use of precious capacity and infrastructure.

Together, all of this requires a new approach to networking and performance assurance.

SDN’s Role in 5G

It is generally agreed that SDN is implicit in 5G. SDN separates the control and data planes, allowing multiple frequency bands (such as millimeter wave combined with 4G spectrum) to be implemented without requiring changes to the control infrastructure. It also enables the sophisticated traffic delivery over multiple backhaul paths involved in coordinated multipoint (CoMP) arrangements, where multiple carriers simultaneously link to the same user equipment (UE).

SDN control enables the spin-up of virtual networks that address each application specifically, including the virtual network functions (VNFs) chained together to deliver the service. These application-specific virtual networks, together with path decisions based on their performance requirements, form distinct “layers” in the network, summed up in the NGMN-coined term “network slicing.”


Performance must be assured between chained VNFs, as well as between endpoints relying on ultra-low latency interaction.

However, the SDN controllers that handle multi-carrier aggregation, dynamic traffic engineering, and performance optimization require a real-time feed of network performance data to optimize QoE. Without this visibility, traffic may be sent over routes with the fewest hops rather than those with the lowest latency, for example. Optimizing performance for critical applications also means lower-priority services should use less-desired routes, to free up capacity. Performance optimization applications and self-organizing networks (SONs) therefore require immediate, continuous visibility into the ‘network state.’
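
A toy example makes the hop-count versus latency point concrete: on the same topology, the fewest-hop path and the lowest-latency path can differ, which is why a controller needs live latency measurements rather than hop counts alone. The topology and latency values below are invented for illustration, using the networkx library.

```python
# Toy example: fewest-hop routing vs. latency-aware routing on one topology.
# Link latencies (in ms) are invented; in practice they would come from a
# real-time instrumentation layer feeding the SDN controller.
import networkx as nx

G = nx.Graph()
G.add_edge("cell-site", "agg-1", latency=2.0)
G.add_edge("agg-1", "core", latency=9.0)    # few hops, but a slow link
G.add_edge("cell-site", "agg-2", latency=1.0)
G.add_edge("agg-2", "agg-3", latency=1.5)
G.add_edge("agg-3", "core", latency=1.0)

# Fewest hops: ignores link latency entirely.
by_hops = nx.shortest_path(G, "cell-site", "core")

# Lowest latency: weights each link by its measured delay.
by_latency = nx.shortest_path(G, "cell-site", "core", weight="latency")

print("fewest hops   :", by_hops)       # ['cell-site', 'agg-1', 'core'] -> 11.0 ms
print("lowest latency:", by_latency)    # ['cell-site', 'agg-2', 'agg-3', 'core'] -> 3.5 ms
```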

Performance Assurance in a Virtualized World

In a multi-slice, multi-application network that is continuously tuned by SDN and application optimization controllers, a real-time performance view—of Layer 2 and 3 metrics such as utilization, capacity, packet loss, and latency; and QoE metrics like VoLTE MOS—must cover every link and service to provide adequate performance feedback.


Optimal multi-path backhaul pathing relies on tight coordination between SDN controllers and an instrumentation layer.

To achieve this ‘instrumentation layer’ over all slices and sections of their network, operators can build on the performance monitoring capabilities and standards already supported by their network infrastructure, supplementing with cost-efficient virtualized test points.

Specific requirements for this level of network performance assurance include:

Performance assurance that is real-time, adaptive, directional, ubiquitous, embedded, and open/standards-based, with microsecond (µs)-precise delay metrics to ensure that ultra-tight synchronization and control signaling are delivered as required.

Monitoring metrics covering per-flow bandwidth utilization, available capacity, packet loss, latency, delay variation, and QoS/QoE KPIs for VoLTE and applications.

Network visibility that’s ubiquitous, covering all locations and layers, with “resolution on demand” to avoid drowning in the data lake of big analytics.

Affordable technology is now available to help operators gain this needed network visibility. For example, advances in NFV-based instrumentation replicate the full functionality of dedicated test sets. Powerful test probes in smart SFPs and miniaturized modules allow full network performance assurance coverage at savings up to 90% compared with legacy methods.

Using network-embedded instrumentation, LTE-A networks can approach 5G performance with proper optimization:

1. Assess network readiness for incremental capacity and service upgrades.
2. Localize performance pinch points to focus upgrades and optimization efforts.
3. Monitor utilization trends and variation, and tune the network with real-time feedback to get the most out of existing infrastructure.
4. Monitor performance over the migration phase to NFV / SDN for troubleshooting and to optimize network configuration as traffic load increases.

The path to 5G relies on optimizing latency and increasing network capacity, while allowing the assured coexistence of applications as diverse as the Internet of Things (IoT), security, streaming 8K video, and multi-caller VoLTE sessions. SDN flow optimization is the foundation needed to meet those requirements. Visibility into the network state is the first step. Operators can deploy this today and pave an assured path to the higher-capacity networks of tomorrow.

About the Author

Scott Sumner is VP of solutions marketing at Accedian Networks. He has over 15 years of experience in wireless, Carrier Ethernet and service assurance, including roles as GM of Performant Networks, Director of Program Management & Engineering at MPB Communications, VP of Marketing at Minacom (Tektronix) and Aethera Networks (Positron / Marconi), Partnership and M&A Program Manager at EXFO, and project and engineering management roles at PerkinElmer Optoelectronics (EG&G). He holds Master's and Bachelor's degrees in Engineering (M.Eng, B.Eng) from McGill University in Montreal, Canada, and completed professional business management training at the John Molson School of Business, the Alliance Institute, and the Project Management Institute.

About Accedian Networks 

Accedian Networks is the Performance Assurance Solution Specialist for mobile networks and enterprise-to-data center connectivity. Open, multi-vendor interoperable and programmable solutions go beyond standards-based performance assurance to deliver Network State+™, the most complete view of network health. Automated service activation testing and real-time performance monitoring feature unrivalled precision and granularity, while H-QoS traffic conditioning optimizes service performance. Since 2005, Accedian has delivered platforms assuring hundreds of thousands of cell sites globally. www.Accedian.com


Got an idea for a Blueprint column?  We welcome your ideas on next gen network architecture.
See our guidelines.

By open, we mean not controlled by a single party, says Dan Pitt

Customers love open... but "open" has many different flavors and varieties, says Dan Pitt, Executive Director of the Open Networking Foundation.

"We've been strong advocates of open SDN for a long time. "

"By open, we mean not just published, but not controlled by a single party. It is good that people are opening up and publishing. There are open standards, open specifications, and open interfaces.  It is important that they be community-defined."

http://open.convergedigest.com/2015/06/by-open-we-mean-not-controlled-by-singe.html

Everything that can be virtualized will be virtualized, says @Infinera's Stuart Elby

Open networking brings experts from across the industry together to focus on common problems, says Stuart Elby, SVP, Data Center  Business Group at Infinera.  This leads to faster time-to-market, more use cases, and more security, as more eyes can look out for vulnerabilities.

Disruptive innovations first occur through proprietary solutions but are later subsumed by the open source community.  We are on the verge of seeing that for SDN and NFV.

Everything that can be virtualized will be virtualized. However, no one has figured out how to virtualize photons, which means there is still a real optical layer with photons moving through ROADMs, transponders and amplifiers.

http://open.convergedigest.com/2015/06/everything-that-can-be-virtualized-will.html

In Memoriam: Ralph Roberts, founder of Comcast, 1920-2015

Comcast mourned the passing of its Founder and Chairman Emeritus, Ralph J. Roberts, who died of natural causes at age 95.

Ralph Roberts founded Comcast in 1963 with the purchase of a 1,200-subscriber cable system in Tupelo, Mississippi.  Over the following decades, he grew the company from its humble roots as a small, regional cable operator into a global Fortune 50 media and technology leader. The company went public in 1972.

In addition to his wife, Ralph is survived by four of his children and their spouses, along with eight grandchildren.  His son, Brian L. Roberts, serves as Comcast's CEO.

http://corporate.comcast.com/news-information/news-feed/ralph-j-roberts-statement


Saturday, June 20, 2015

Sogeti Partners with IBM for its Bluemix Cloud

IBM has signed Sogeti, a subsidiary of the Capgemini Group, as a partner for its Bluemix cloud platform-as-a-service (PaaS). Sogeti has more than 20,000 professionals in 15 countries offering IT services.

Bluemix will help Sogeti build hybrid cloud applications across public, private and on-premise infrastructure faster, leveraging IBM’s open-standards-based approach to cloud to streamline the integration of data. Bluemix runs on SoftLayer cloud infrastructure and combines the strength of IBM’s middleware software with open services and tools from IBM partners and its developer ecosystem to offer DevOps in the cloud.

In addition to opening access to Bluemix for its own developers, Sogeti will use it to power existing end-user solutions for commerce, the internet of things (IoT) and data analytics for clients in retail, healthcare, transportation, and energy and utilities.

“We’re constantly adding new services to the IBM Cloud to help our developer ecosystem collaborate easier, manage costs, speed time to market, communicate better with their clients and take advantage of their data to drive growth,” said Steve Robinson, General Manager of IBM Cloud Platform Services. “By partnering with IBM, Sogeti will ensure that its own developers and clients will be able to achieve efficiency through innovation with a scalable cloud platform that’s designed for the enterprise.”

http://www.ibm.com

Thursday, June 18, 2015

#ONS2015 - A Look Inside Google's Data Center Network

Networking is at an inflection point in driving next-gen computing architecture, said Amin Vahdat, Senior Fellow and Technical Lead for Networking at Google, in a keynote address at the Open Networking Summit in Santa Clara, California. Creating great computers will largely be determined by the network.

With its "Jupiter" fifth-generation data center network, Google has essentially built the bandwidth equivalent of the Internet under one roof.

Some key takeaways from the presentation:
  • Google will open source its gRPC load-balancing and application flow-control code
  • Google's B4 software-defined WAN links its global data centers and is bigger than its public-facing network
  • Andromeda Network Virtualization continues to advance as a means to slice the physical network into isolated, high-performance components
  • Google is deploying its "Jupiter" fifth-generation data center architecture.  Traditional designs and data center switches simply cannot keep up and require individual management, so Google decided to build its own gear.
  • Three principles in Google's data center network are: Clos Topologies, Merchant Silicon, and Centralized Control. Everything is designed for scale-out.
  • Load balancing is essential to ensure that resources are available and to manage cost.
  • Looking forward, a data center network may have 50,000 servers, each with 64 CPU cores, access to PBs of fast Flash storage, and equipped with 100G NICs.  This implies the need for a 5 Pb/s network core switch -- more than the Internet today!
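
That last figure follows directly from the numbers in the same bullet:

\[
50{,}000\ \text{servers} \times 100\ \text{Gb/s per NIC}
= 5{,}000{,}000\ \text{Gb/s}
= 5\ \text{Pb/s}
\]

of aggregate capacity needed if every server is to drive its NIC at line rate at the same time.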

The #ONS2015 keynote can be seen here:
https://youtu.be/FaAZAII2x0w



#ONS2015 - Microsoft Azure Puts SDN at Center of its Hyperscale Cloud

To handle its hyperscale growth, Microsoft Azure must integrate the latest compute and storage technologies into a truly software-defined infrastructure, said Mark Russinovich, Chief Technology Officer of Microsoft Azure, in a keynote presentation at the Open Networking Summit in Santa Clara, California.

The talk covered how Microsoft is building its hyperscale SDN, including its own scalable controllers and hardware-accelerated hosts.


Microsoft is making a massive bet on Azure.  It is the company's own infrastructure as well as the basis for many of its products going forward, including Office 365, Xbox and Skype.

Some highlights:
  • Microsoft Azure's customer-facing offerings include App Services, Data Services and Infrastructure Services
  • Over 500 new features were added to Azure in the past year, including better VMs, virtual networks and storage.
  • Microsoft is opening new data centers all over the world
  • Azure is running millions of compute instances
  • There are now more than 20 ExpressRoute locations for direct connect to Azure.  
  • Azure connects with 1,600 peered networks through 85 IXPs
  • One out of 5 VMs running on Azure is a Linux VM
  • A key principle for Microsoft's hyperscale SDN is to push as much of the logic processing as possible down to the servers (hosts)
  • Hyperscale controllers must be able to handle 500K+ servers (hosts) in a region
  • The controller must be able to scale down to smaller data centers as well
  • Microsoft Azure Service Fabric is a platform for micro-service-based applications
  • Microsoft has released a developer SDK for its Service Fabric
  • Azure is using a Virtual Filtering Platform (VFP) to act as a virtual switch inside the Hyper-V VMSwitch.  This provides core SDN functionality for Azure networking services. It uses programmable rule/flow tables to perform per-packet actions (a conceptual sketch of the match-action idea follows this list). This will also be extended to Windows Server 2016 for private clouds.
  • Azure will implement RDMA for very high performance memory transport between servers. It will be enabled at 40GbE for Azure Storage.  All the logic is in the server.
  • Server interface speeds are increasing: 10G to 40G to 50G and eventually to 100G
  • Microsoft is deploying FPGA-based Azure SmartNICs in its servers to offload SDN functions from the CPU. The SmartNICs can also perform crypto, QoS and storage acceleration.
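
As a purely conceptual sketch of the rule/flow-table idea mentioned in the VFP item above, and not Microsoft's actual interfaces, per-packet processing can be modeled as an ordered list of match-action rules where the first matching rule decides what happens to the packet. The field names and rules below are invented for illustration.

```python
# Conceptual sketch of a programmable match-action rule table.
# This illustrates the general idea of per-packet rule processing; it is not
# the actual Virtual Filtering Platform (VFP) API.

# Ordered rules: (match fields, action). The first match wins.
rules = [
    ({"dst_port": 22, "proto": "tcp"},        "drop"),          # block inbound SSH
    ({"dst_ip": "10.1.0.5", "dst_port": 80},  "nat:10.2.0.7"),  # redirect web traffic
    ({},                                      "allow"),         # empty match = default rule
]

def apply_rules(packet):
    """Return the action of the first rule whose fields all match the packet."""
    for match, action in rules:
        if all(packet.get(field) == value for field, value in match.items()):
            return action
    return "drop"  # not reached here, because the default rule matches everything

# Example per-packet lookups.
print(apply_rules({"dst_ip": "10.1.0.5", "dst_port": 80, "proto": "tcp"}))   # nat:10.2.0.7
print(apply_rules({"dst_ip": "10.3.0.9", "dst_port": 22, "proto": "tcp"}))   # drop
print(apply_rules({"dst_ip": "10.3.0.9", "dst_port": 443, "proto": "tcp"}))  # allow
```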

The #ONS2015 keynote can be seen here:
https://youtu.be/RffHFIhg5Sc



#ONS2015: AT&T Envisions its Future as a Software Company

Over the next few years, AT&T plans to virtualize and control more than 75% of its network functions via its new Domain 2.0 infrastructure.  The first 5% will be complete by the end of this year, laying the foundation for an accelerated rollout in 2016.

In a keynote at the Open Networking Summit 2015 in Santa Clara entitled "AT&T's Case for a Software-Centric Network", John Donovan provided an update on the company's Domain 2.0 campaign, saying this strategic undertaking is really about changing all aspects of how AT&T does business.

Donovan, who is responsible for almost all aspects of AT&T's IT and network infrastructure, said AT&T is deeply committed to open source software, including contributing back to open source communities. The goal is to "software-accelerate" AT&T's network.  In the process, AT&T itself becomes a software company.

Here are some key takeaways from the presentation:


  • Since 2007, AT&T has seen a 100,000% increase in mobile data traffic
  • Video represents the majority of traffic on the mobile network
  • Ethernet ports have grown 1,300% since 2010
  • AT&T's network vision is rooted in SDN and NFV
  • The first phase is about Virtualizing Functions.
  • AT&T's Network On-Demand service is its first SDN application to reach customers. It went from concept to trials in six months.
  • The second phase is about Disaggregation.
  • The initial target of disaggregation is the GPON Optical Line Terminals (OLTs), which are deployed in central offices for supporting its GigaPower residential broadband service.  AT&T will virtualize the physical equipment using less expensive hardware.  The company will release an open specification for these boxes.
  • AT&T will contribute its YANG custom design tool to the open source community.
  • AT&T is leading a Central Office Re-architected as Data Center (CORD) project.


http://www.att.com
http://opennetsummit.org/

The ONS2015 keynote can be seen here:
https://youtu.be/7gEvIHCps1Q


Blueprint: Open Standards Do Not Have to Be Open Source

by Frank Yue, Senior Technical Marketing Manager, F5 Networks

Network Functions Virtualization (NFV) is driving much of the innovation and work within the service provider community. The prospect of bringing cloud-like technologies and their benefits into carrier networks is driving service providers to radically alter how they architect and manage the services they deliver through their networks.

Different components from different vendors based on different technologies are required to create an NFV architecture. There are COTS servers, hypervisor management technologies, SDN and traditional networking solutions, management and orchestration products, and many distinct virtual network functions (VNFs). All of these components need to communicate with each other in a defined and consistent manner for the NFV ecosystem to succeed.


Source: Network Functions Virtualization (NFV); Architectural Framework

While ETSI has defined the labels for the interfaces between the various components of the NFV architecture, there are currently no agreed-upon standards. And although there are several open source projects to develop standards for these NFV interfaces, most have not matured to the point where they are ready for use in a carrier-grade network.

Are Pre-standards Solutions Premature?

In the meantime, various multi-vendor alliances are developing their own pre-standards solutions. Some are proprietary and others are derivations of the work done by open source groups. Almost all of the proof-of-concept (POC) trials underway today use these pre-standard variations. Each multi-vendor alliance is working with service providers to develop interface models and specifications that everyone within each POC is comfortable with.

It is possible and even likely that some of these pre-standards will become de facto standards based on their popularity and utility. There is nothing wrong with standards that are developed by the vendor or service provider community as long as they meet these criteria: 1) the standard must work in a multi-vendor environment since the NFV architecture model depends on multiple vendors delivering different components of the solution. 2) The standard needs to be published and open so that a new vendor can easily build its component to be compatible with the architecture.

Looking at the first of these points, the nature of the NFV architecture is to be an interactive and interdependent ecosystem of components and functions. It is unlikely that all of the pieces of the NFV ecosystem will be produced and delivered by a single vendor. In a mature NFV environment, many vendors will be involved. One multi-vendor NFV alliance currently has over 50 members. Another alliance has designed an NFV POC requiring the involvement of nine distinct vendors.

This multi-vendor landscape drives the need for the second point, for the standard to be published and open. No matter what interface model is developed by each vendor and alliance, it still needs to be published in an open form, allowing other vendors to create models to integrate their solutions into the NFV architecture. It is likely that in the mature NFV ecosystem, some components will be delivered by vendors that are not part of the majority alliance that delivered the NFV solution.

No two service provider networks are alike, and there are close to an infinite number of combinations of manufacturers and technologies that can be incorporated into each service provider’s NFV model.  Service providers will require all of the components in the network to interact in a relatively seamless fashion. This can only be accomplished if the interface pre-standards are open and available to the technology community at large.

Proprietary, but Open?

A proprietary, but open standard is one that has been developed without community consensus. While the standard has been developed by a vendor or alliance of vendors, the model is published to allow anybody interested in developing solutions to incorporate the standard without the need for licensing, partnership, or agreement in general.

Proprietary, but open standards can be developed by a single entity or a small community working together towards a common goal. This gives these proprietary standards some advantages. 1) They can be created quickly since universal consortium acceptance may not be required. 2) They can be adapted and adjusted quickly to meet the changing and evolving nature of NFV architectures.

While open source projects and products have the benefit of being available to everyone, there are some tradeoffs for the design of technologies by open committee. Open source projects are always in flux as multiple perspectives and methodologies are competing for a universal consensus. This is especially true when working with standards developing organizations (SDOs). Because of this, standards often take years, instead of months, to develop.

In the meantime, the current NFV alliances can develop interface models that are successful in the limited environment of the alliance ecosystem. This rapid development also allows for the tuning of these interfaces as NFV architectures develop and mature. These proprietary, but open, models can be used as a template within the SDOs to develop a standard that has the benefit of being tested and proven in real-world scenarios.

No Model is Perfect

Ultimately, the standards that are developed will probably be a mixture of open source solutions with customized enhancements and open proprietary standards developed by these alliances. It is likely that individual vendors and alliances will enhance the final standards, adding their unique value to improve functionality and differentiate their solution.

In an ideal world, standards are fixed in nature and in time, but networks are evolving and technologies like NFV continue to evolve and mature. In this world of dynamic architectures, it is essential to have standards that are dynamic and proprietary, but open. This type of standard offers a solution that can deliver functions today and adapt to the models of tomorrow.

About the Author

Frank Yue is the Senior Technical Marketing Manager for the Service Provider business at F5 Networks. In this role, Yue is responsible for evangelizing F5’s technologies and products before they come to market. Prior to joining F5, Yue was a sales engineer at BreakingPoint Systems, selling application-aware traffic and security simulation solutions for the service provider market. Yue also worked at Cloudshield Technologies supporting customized DPI solutions, and at Foundry Networks as a global overlay for the ServerIron application delivery controller and traffic management product line. Yue has a degree in Biology from the University of Pennsylvania.

About F5

F5 (NASDAQ: FFIV) provides solutions for an application world. F5 helps organizations seamlessly scale cloud, data center, telecommunications, and software defined networking (SDN) deployments to successfully deliver applications and services to anyone, anywhere, at any time. F5 solutions broaden the reach of IT through an open, extensible framework and a rich partner ecosystem of leading technology and orchestration vendors. This approach lets customers pursue the infrastructure model that best fits their needs over time. The world’s largest businesses, service providers, government entities, and consumer brands rely on F5 to stay ahead of cloud, security, and mobility trends. For more information, go to f5.com.


Got an idea for a Blueprint column?  We welcome your ideas on next gen network architecture.
See our guidelines.

Cisco's David Ward on Open Source Development

Open networking can only evolve with the support of a community of developers, says David Ward, CTO of Engineering and Chief Architect of Cisco. But you can't just launch a developer community, you have to build it. Open Source communities have now emerged.

Cisco is working on many open networking fronts, including OpenDaylight, OPNFV, ONOS and OpenStack.  In this video, Ward also highlights NETCONF and YANG, two standards seen as keys to infrastructure programmability.

http://open.convergedigest.com/2015/04/ciscos-david-ward-on-evolution-of-open.html

Nuage's Houman Modarres on the Value of Open

The move toward open networks is unstoppable, says Houman Modarres, VP of Marketing at Nuage Networks. The attraction of open networks is undeniable: who would want a stiff, inflexible, vertically integrated solution when the Internet has already shown that creativity coming from different parts of the user community and application ecosystem is the right answer?

The crux is this:  with freedom of choice comes complexity.  Nuage, a business unit of Alcatel-Lucent, is working to address this challenge by supporting a variety of deployment models.

http://open.convergedigest.com/2015/04/nuages-houman-modarres-on-value-of-open.html