Sunday, June 21, 2015

Blueprint: 5G and the Need For SDN Flow Optimization

by Scott Sumner, VP Solutions Development and Marketing, Accedian Networks

As more subscribers run bandwidth-intensive applications from a variety of devices, mobile access networks are increasingly strained to maintain quality. According to Ericsson, annual mobile data traffic is predicted to grow from 58 exabytes in 2013 to roughly 335 exabytes by 2020. It’s clear that brute-force bandwidth over-provisioning is no longer an economically feasible solution.
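For context, those two data points imply a steep compound growth rate; a quick back-of-envelope check (using only the figures quoted above):

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two traffic volumes."""
    return (end / start) ** (1 / years) - 1

# 58 EB in 2013 growing to ~335 EB by 2020
growth = cagr(58, 335, 2020 - 2013)
print(f"Implied growth rate: {growth:.1%} per year")  # roughly 28% per year
```

Sustaining that rate by adding raw capacity alone quickly becomes untenable, which is the column's point.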

What strategies can operators implement to meet growing quality of experience (QoE) expectations, especially in the face of finite spectrum?

Part of the answer is improvements to 4G networks using technologies like LTE-A, LTE unlicensed, and voice over LTE (VoLTE). In the mobile space alone, however, the Groupe Speciale Mobile Association (GSMA) expects that 4G networks—as fast as they can be deployed—will reach their limits within five years, making this option a stopgap and a stepping stone on the way to bigger and better things.

5G networks and standards are the inevitable answer, taking bandwidth another order of magnitude forward, supporting 1,000x device densification and the seamless coexistence of the Internet of Things (IoT). Getting there requires understanding the real-world dynamics at play, the role of Software Defined Networking (SDN) in 5G, and the requirements for performance assurance in a virtualized world.

5G Visions and Realities

As it is now envisioned, 5G will come with a further tightening of performance requirements: sub-millisecond latency bounds, minimal packet loss, and availability targets approaching 99.95%. These targets sound great in theory but are challenging to achieve in the real world.

Complicating planning and development efforts is the fact that 5G proposals like those published recently by GSMA and NGMN focus on multiple end use cases or applications, each with quite different performance demands on the network: some high bandwidth, some low latency, some both, some neither. These competing applications necessitate exceptional quality of service (QoS), meeting the diverse requirements of each service, while maintaining the most efficient use of precious capacity and infrastructure.

Together, all of this requires a new approach to networking and performance assurance.

SDN’s Role in 5G

It’s generally agreed that SDN is integral to 5G. SDN separates control and data planes, allowing multiple frequency bands (such as millimeter wave combined with 4G spectrum) to be implemented without requiring changes to the control infrastructure. It also enables the sophisticated traffic delivery over multiple backhaul paths involved in coordinated multipoint (CoMP) arrangements, where multiple carriers simultaneously link to the same user equipment (UE).

SDN control enables spin-up of virtual networks that address each application specifically—including the virtual network functions (VNFs) chained together to deliver the service. These application-specific virtual networks, each with path decisions based on its performance requirements, form unique “layers” in the network, summed up in the NGMN-coined term “network slicing.”

Performance must be assured between chained VNFs, as well as between endpoints relying on ultra-low latency interaction.

However, the SDN controllers required to support multi-carrier aggregation, dynamic traffic engineering, and performance optimization require a real-time feed of network performance to optimize QoE. Without this visibility, traffic may be sent over routes with the fewest hops, not those with the lowest latency, for example. Optimizing performance for critical applications also means lower-priority services should use less-desired routes, to free up capacity. Performance optimization applications and self-organizing networks (SONs), therefore, require immediate, continuous visibility into the ‘network state.’
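The hop-count-versus-latency distinction can be sketched with a minimal path computation over a hypothetical topology (the node names and per-link delay figures below are invented for illustration):

```python
import heapq

def best_path(graph, src, dst, weight):
    """Dijkstra's shortest path; `weight` picks the metric applied per link."""
    queue = [(0.0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, attrs in graph[node].items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + weight(attrs), nbr, path + [nbr]))
    return None

# Hypothetical backhaul topology; each link carries a measured latency in ms.
topo = {
    "cell": {"aggA": {"latency": 9.0}, "aggB": {"latency": 2.0}},
    "aggA": {"core": {"latency": 8.0}},
    "aggB": {"aggC": {"latency": 2.5}, "core": {"latency": 20.0}},
    "aggC": {"core": {"latency": 2.5}},
    "core": {},
}

hops, hop_path = best_path(topo, "cell", "core", weight=lambda a: 1)
delay, delay_path = best_path(topo, "cell", "core", weight=lambda a: a["latency"])
print(hop_path)    # fewest hops: two-hop route
print(delay_path)  # lowest latency: a longer but faster route
```

Without the measured latency feed, a controller can only optimize the first metric; with it, the three-hop route wins on delay.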

Performance Assurance in a Virtualized World

In a multi-slice, multi-application network that is continuously tuned by SDN and application optimization controllers, a real-time performance view—of Layer 2 and 3 metrics such as utilization, capacity, packet loss, and latency; and QoE metrics like VoLTE MOS—must cover every link and service to provide adequate performance feedback.

Optimal multi-path backhaul pathing relies on tight coordination between SDN controllers and an instrumentation layer.

To achieve this ‘instrumentation layer’ over all slices and sections of their network, operators can build on the performance monitoring capabilities and standards already supported by their network infrastructure, supplementing with cost-efficient virtualized test points.

Specific requirements for this level of network performance assurance include:

Performance assurance attributes characterized as real-time, adaptive, directional, ubiquitous, embedded, and open/standards-based, with microsecond (µs) precise delay metrics—ensuring that ultra-tight synchronization and control signaling are delivered as required.

Monitoring metrics covering per-flow bandwidth utilization, available capacity, packet loss, latency, delay variation, and QoS/QoE KPIs for VoLTE and applications.

Network visibility that’s ubiquitous, covering all locations and layers, with “resolution on demand” to avoid drowning in the data lake of big analytics.
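As a rough sketch of how such per-flow metrics might be checked against service targets (the threshold values and flow names below are hypothetical, not drawn from any standard):

```python
from dataclasses import dataclass

# Hypothetical SLA thresholds -- illustrative values only.
SLA = {"latency_ms": 10.0, "loss_pct": 0.1, "jitter_ms": 2.0}

@dataclass
class FlowKPI:
    """Per-flow measurements of the monitoring metrics listed above."""
    flow_id: str
    latency_ms: float  # one-way delay
    loss_pct: float    # packet loss ratio (%)
    jitter_ms: float   # delay variation

    def violations(self):
        """Names of the KPIs on this flow that breach their SLA threshold."""
        return [name for name, limit in SLA.items() if getattr(self, name) > limit]

flows = [
    FlowKPI("volte-001", latency_ms=8.2, loss_pct=0.02, jitter_ms=1.1),
    FlowKPI("video-042", latency_ms=14.6, loss_pct=0.30, jitter_ms=0.8),
]
for f in flows:
    print(f.flow_id, f.violations() or "within SLA")
```

A real instrumentation layer would feed such records to the SDN controller continuously, per link and per slice, rather than batch-checking them.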

Affordable technology is now available to help operators gain this needed network visibility. For example, advances in NFV-based instrumentation replicate the full functionality of dedicated test sets. Powerful test probes in smart SFPs and miniaturized modules allow full network performance assurance coverage at savings of up to 90% compared with legacy methods.

Using network-embedded instrumentation, LTE-A networks can approach 5G performance with proper optimization:

1. Assess network readiness for incremental capacity and service upgrades.
2. Localize performance pinch points to focus upgrades and optimization efforts.
3. Monitor utilization trends and variation, and tune the network with real-time feedback to get the most out of existing infrastructure.
4. Monitor performance over the migration phase to NFV / SDN for troubleshooting and to optimize network configuration as traffic load increases.
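Step 2, localizing pinch points, might look something like the following in simplified form (link names, readings, and the 80% threshold are illustrative assumptions):

```python
from statistics import mean

def pinch_points(samples, capacity, threshold=0.8, window=3):
    """Flag links whose moving-average utilization approaches capacity.

    `samples` maps link name -> recent throughput readings, in the same
    units as `capacity`; the last `window` readings are averaged.
    """
    flagged = {}
    for link, readings in samples.items():
        ratio = mean(readings[-window:]) / capacity[link]
        if ratio >= threshold:
            flagged[link] = ratio
    return flagged

# Hypothetical per-link throughput samples (Mb/s) from embedded test points.
samples = {"ring-1": [610, 640, 700], "ring-2": [850, 880, 905], "spur-7": [120, 130, 110]}
capacity = {"ring-1": 1000, "ring-2": 1000, "spur-7": 1000}
print(pinch_points(samples, capacity))  # ring-2 is nearing saturation
```

In practice the trend data would come from the network-embedded probes described above, and the flagged links would be the targets for upgrades or traffic engineering.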

The path to 5G relies on optimizing latency and increasing network capacity, while allowing the assured coexistence of applications as diverse as the Internet of Things (IoT), security, streaming 8K video, and multi-caller VoLTE sessions. SDN flow optimization is the foundation needed to meet those requirements. Visibility into the network state is the first step. Operators can deploy this today and pave an assured path to the higher-capacity networks of tomorrow.

About the Author

Scott Sumner is VP of solutions marketing at Accedian Networks. He has over 15 years of experience in wireless, Carrier Ethernet, and service assurance, including roles as GM of Performant Networks; Director of Program Management & Engineering at MPB Communications; VP of Marketing at Minacom (Tektronix) and Aethera Networks (Positron / Marconi); Partnership and M&A Program Manager at EXFO; and project and engineering management roles at PerkinElmer Optoelectronics (EG&G). He holds Masters and Bachelor degrees in Engineering (M.Eng, B.Eng) from McGill University in Montreal, Canada, and completed professional business management training at the John Molson School of Business, the Alliance Institute, and the Project Management Institute.

About Accedian Networks 

Accedian Networks is the Performance Assurance Solution Specialist for mobile networks and enterprise-to-data center connectivity. Open, multi-vendor interoperable and programmable solutions go beyond standards-based performance assurance to deliver Network State+™, the most complete view of network health. Automated service activation testing and real-time performance monitoring feature unrivalled precision and granularity, while H-QoS traffic conditioning optimizes service performance. Since 2005, Accedian has delivered platforms assuring hundreds of thousands of cell sites.

Got an idea for a Blueprint column?  We welcome your ideas on next gen network architecture.
See our guidelines.

By open, we mean not controlled by a single party, says Dan Pitt

Customers love open... but "open" has many different flavors and varieties, says Dan Pitt, Executive Director of the Open Networking Foundation.

"We've been strong advocates of open SDN for a long time."

"By open, we mean not just published, but not controlled by a single party. It is good that people are opening up and publishing. There are open standards, open specifications, and open interfaces.  It is important that they be community-defined."

Everything that can be virtualized will be virtualized, says @Infinera's Stuart Elby

Open networking brings experts from across the industry together to focus on common problems, says Stuart Elby, SVP, Data Center Business Group at Infinera. This leads to faster time-to-market, more use cases, and more security, as more eyes can look out for vulnerabilities.

Disruptive innovations first occur through proprietary solutions but are later subsumed by the open source community.  We are on the verge of seeing that for SDN and NFV.

Everything that can be virtualized will be virtualized. However, no one has figured out how to virtualize photons. This means there is still a real optical layer, with photons moving through ROADMs, transponders and amplifiers.

In Memoriam: Ralph Roberts, founder of Comcast, 1920-2015

Comcast mourned the passing of its Founder and Chairman Emeritus, Ralph J. Roberts, who died of natural causes at age 95.

Ralph Roberts founded Comcast in 1963 with the purchase of a 1,200-subscriber cable system in Tupelo, Mississippi.  Over the following decades, he grew the company from its humble roots as a small, regional cable company into a global Fortune 50 media and technology leader. The company went public in 1972.

In addition to his wife, Ralph is survived by four of his children and their spouses, along with eight grandchildren.  His son, Brian L. Roberts, serves as Comcast's CEO.

Saturday, June 20, 2015

Sogeti Partners with IBM for its Bluemix Cloud

IBM has signed Sogeti, a subsidiary of the Capgemini Group, as a partner for its Bluemix cloud platform-as-a-service (PaaS). Sogeti has more than 20,000 professionals in 15 countries offering IT services.

Bluemix will help Sogeti build hybrid cloud applications across public, private and on-premise infrastructure faster, leveraging IBM’s open-standards-based approach to cloud to streamline the integration of data. Bluemix runs on SoftLayer cloud infrastructure and combines the strength of IBM’s middleware software with other open services and tools from IBM partners and its developer ecosystem to offer DevOps in the cloud.

In addition to opening access to Bluemix for its own developers, Sogeti will use it to power existing end-user solutions for commerce, the internet of things (IoT) and data analytics for clients in retail, healthcare, transportation, and energy and utilities.

“We’re constantly adding new services to the IBM Cloud to help our developer ecosystem collaborate easier, manage costs, speed time to market, communicate better with their clients and take advantage of their data to drive growth,” said Steve Robinson, General Manager of IBM Cloud Platform Services. “By partnering with IBM, Sogeti will ensure that its own developers and clients will be able to achieve efficiency through innovation with a scalable cloud platform that’s designed for the enterprise.”


Thursday, June 18, 2015

#ONS2015 - A Look Inside Google's Data Center Network

Networking is at an inflection point in driving next-gen computing architecture, said Amin Vahdat, Senior Fellow and Technical Lead for Networking at Google, in a keynote address at the Open Networking Summit in Santa Clara, California. The ability to build great computers will largely be determined by the network.

In constructing its "Jupiter" fifth-generation data centers, Google has essentially built the bandwidth equivalent of the Internet under one roof.

Some key takeaways from the presentation:
  • Google will open source its gRPC load-balancing and application flow-control code
  • Google's B4 software-defined WAN links its global data centers and is bigger than its public-facing network
  • Andromeda Network Virtualization continues to advance as a means to slice the physical network into isolated, high-performance components
  • Google is deploying its "Jupiter" fifth-generation data center architecture.  Traditional designs and data center switches simply cannot keep up and require individual management, so Google decided to build its own gear.
  • Three principles in Google's data center network are: Clos Topologies, Merchant Silicon, and Centralized Control. Everything is designed for scale-out.
  • Load balancing is essential to ensure that resources are available and to manage cost.
  • Looking forward, a data center network may have 50,000 servers, each with 64 CPU cores, access to PBs of fast Flash storage, and equipped with 100G NICs.  This implies the need for a 5 Pb/s network core switch -- more than the Internet today!
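The closing figure in that last bullet follows directly from the server and NIC counts given; a quick sanity check of the arithmetic:

```python
# Forward-looking data center sketch from the keynote:
servers = 50_000   # servers in one data center network
nic_gbps = 100     # each equipped with a 100G NIC

aggregate_gbps = servers * nic_gbps          # total host-facing bandwidth
aggregate_pbps = aggregate_gbps / 1_000_000  # Gb/s -> Pb/s
print(f"Required core capacity: {aggregate_pbps:.0f} Pb/s")  # 5 Pb/s
```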


#ONS2015 - Microsoft Azure Puts SDN at Center of its Hyperscale Cloud

To handle its hyperscale growth, Microsoft Azure must integrate the latest compute and storage technologies into a truly software-defined infrastructure, said Mark Russinovich, Chief Technology Officer of Microsoft Azure in a keynote presentation at the Open Networking Summit in Santa Clara, California.

The talk covered how Microsoft is building its hyperscale SDN, including its own scalable controllers and hardware-accelerated hosts.

Microsoft is making a massive bet on Azure.  It is the company's own infrastructure as well as the basis for many of its products going forward, including Office 365, Xbox and Skype.

Some highlights:
  • Microsoft Azure's customer-facing offerings include App Services, Data Services and Infrastructure Services
  • Over 500 new features were added to Azure in the past year, including better VMs, virtual networks and storage.
  • Microsoft is opening new data centers all over the world
  • Azure is running millions of compute instances
  • There are now more than 20 ExpressRoute locations for direct connect to Azure.  
  • Azure connects with 1,600 peered networks through 85 IXPs
  • One out of 5 VMs running on Azure is a Linux VM
  • A key principle for Microsoft's Hyperscale SDN is to push as much of the logic processing as possible down to the servers (hosts)
  • Hyperscale controllers must be able to handle 500K+ servers (hosts) in a region
  • The controller must be able to scale down to smaller data centers as well
  • Microsoft Azure Service Fabric is a platform for micro-service-based applications
  • Microsoft has released a developer SDK for its Service Fabric
  • Azure is using a Virtual Filtering Platform (VFP) to act as a virtual switch inside Hyper-V VMSwitch.  This provides core SDN functionality for Azure networking services. It uses programmable rule/flow tables to perform per-packet actions. This will also be extended to Windows Server 2016 for private clouds.
  • Azure will implement RDMA for very high performance memory transport between servers. It will be enabled at 40GbE for Azure Storage.  All the logic is in the server.
  • Server interface speeds are increasing: 10G to 40G to 50G and eventually to 100G
  • Microsoft is deploying FPGA-based Azure SmartNICs in its servers to offload SDN functions from the CPU. The SmartNICs can also perform crypto, QoS and storage acceleration.
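The programmable rule/flow table idea behind VFP can be sketched in a few lines; the match fields, actions, and rules below are invented for illustration and are not Microsoft's actual VFP interface:

```python
import ipaddress

# A toy match-action table in the spirit of per-packet rule/flow processing.
# Rules are evaluated in priority order; the first match wins.
RULES = [
    ({"dst_port": 25},          "drop"),           # e.g. block outbound SMTP
    ({"dst_ip": "10.0.0.0/8"},  "encap_vnet"),     # tenant virtual-network traffic
    ({},                        "nat_to_public"),  # default: SNAT toward the Internet
]

def matches(packet: dict, match: dict) -> bool:
    """True if every field in `match` agrees with the packet (CIDR for dst_ip)."""
    for field, value in match.items():
        if field == "dst_ip":
            if ipaddress.ip_address(packet[field]) not in ipaddress.ip_network(value):
                return False
        elif packet.get(field) != value:
            return False
    return True

def classify(packet: dict) -> str:
    for match, action in RULES:
        if matches(packet, match):
            return action
    return "drop"

print(classify({"dst_ip": "10.1.2.3", "dst_port": 443}))  # encap_vnet
print(classify({"dst_ip": "8.8.8.8", "dst_port": 25}))    # drop
```

A production virtual switch compiles such tables into fast per-packet lookups rather than scanning them linearly, but the match-then-act model is the same.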


#ONS2015: AT&T Envisions its Future as a Software Company

Over the next few years, AT&T plans to virtualize and control more than 75% of its network functions via its new Domain 2.0 infrastructure.  The first 5% will be complete by the end of this year, laying the foundation for an accelerated rollout in 2016.

In a keynote at the Open Networking Summit 2015 in Santa Clara entitled "AT&T's Case for a Software-Centric Network", John Donovan provided an update on the company's Domain 2.0 campaign, saying this strategic undertaking is really about changing all aspects of how AT&T does business.

Donovan, who is responsible for almost all aspects of AT&T's IT and network infrastructure, said AT&T is deeply committed to open source software, including contributing back to open source communities. The goal is to "software-accelerate" AT&T's network.  In the process, AT&T itself becomes a software company.

Here are some key takeaways from the presentation:

  • Since 2007, AT&T has seen a 100,000% increase in mobile data traffic
  • Video represents the majority of traffic on the mobile network
  • Ethernet ports have grown 1,300% since 2010
  • AT&T's network vision is rooted in SDN and NFV
  • The first phase is about Virtualizing Functions.
  • AT&T's Network On-Demand service is its first SDN application to reach customers. It went from concept to trials in six months.
  • The second phase is about Disaggregation.
  • The initial target of disaggregation is the GPON Optical Line Terminals (OLTs), which are deployed in central offices for supporting its GigaPower residential broadband service.  AT&T will virtualize the physical equipment using less expensive hardware.  The company will release an open specification for these boxes.
  • AT&T will contribute its YANG custom design tool to the open source community.
  • AT&T is leading a Central Office Re-architected as Data Center (CORD) project.


Blueprint: Open Standards Do Not Have to Be Open Source

by Frank Yue, Senior Technical Marketing Manager, F5 Networks

Network Functions Virtualization (NFV) is driving much of the innovation and work within the service provider community. The concept of bringing the benefits of cloud-like technologies is driving service providers to radically alter how they architect and manage the services they deliver through their networks.

Different components from different vendors based on different technologies are required to create an NFV architecture. There are COTS servers, hypervisor management technologies, SDN and traditional networking solutions, management and orchestration products, and many distinct virtual network functions (VNFs). All of these components need to communicate with each other in a defined and consistent manner for the NFV ecosystem to succeed.

Source: Network Functions Virtualization (NFV); Architectural Framework

While ETSI has defined the labels for the interfaces between the various components of the NFV architecture, there are currently no agreed-upon standards. And although there are several open source projects to develop standards for these NFV interfaces, most have not matured to the point where they are ready for use in a carrier-grade network.

Are Pre-standards Solutions Premature?

In the meantime, various multi-vendor alliances are developing their own pre-standards solutions. Some are proprietary and others are derivations of the work done by open source groups. Currently, almost all of the proof of concept (POC) trials today are using these pre-standard variations. Each multi-vendor alliance is working in conjunction with the service providers to develop interface models and specifications that everyone within each POC will be comfortable with.

It is possible and even likely that some of these pre-standards will become de facto standards based on their popularity and utility. There is nothing wrong with standards that are developed by the vendor or service provider community as long as they meet these criteria: 1) the standard must work in a multi-vendor environment since the NFV architecture model depends on multiple vendors delivering different components of the solution. 2) The standard needs to be published and open so that a new vendor can easily build its component to be compatible with the architecture.

Looking at the first of these points, the nature of the NFV architecture is to be an interactive and interdependent ecosystem of components and functions. It is unlikely that all of the pieces of the NFV ecosystem will be produced and delivered by a single vendor. In a mature NFV environment, many vendors will be involved. One multi-vendor NFV alliance currently has over 50 members. Another alliance has designed an NFV POC requiring the involvement of nine distinct vendors.

This multi-vendor landscape drives the need for the second point, for the standard to be published and open. No matter what interface model is developed by each vendor and alliance, it still needs to be published in an open form, allowing other vendors to create models to integrate their solutions into the NFV architecture. It is likely that in the mature NFV ecosystem, some components will be delivered by vendors that are not part of the majority alliance that delivered the NFV solution.

No two service provider networks are alike, and there are close to an infinite number of combinations of manufacturers and technologies that can be incorporated into each service provider’s NFV model.  Service providers will require all of the components in the network to interact in a relatively seamless fashion. This can only be accomplished if the interface pre-standards are open and available to the technology community at large.

Proprietary, but Open?

A proprietary, but open standard is one that has been developed without community consensus. While the standard has been developed by a vendor or alliance of vendors, the model is published to allow anybody interested in developing solutions to incorporate the standard without the need for licensing, partnership, or agreement in general.

Proprietary, but open standards can be developed by a single entity or a small community working together towards a common goal. This gives these proprietary standards some advantages. 1) They can be created quickly since universal consortium acceptance may not be required. 2) They can be adapted and adjusted quickly to meet the changing and evolving nature of NFV architectures.

While open source projects and products have the benefit of being available to everyone, there are some tradeoffs for the design of technologies by open committee. Open source projects are always in flux as multiple perspectives and methodologies are competing for a universal consensus. This is especially true when working with standards developing organizations (SDOs). Because of this, standards often take years, instead of months, to develop.

In the meantime, the current NFV alliances can develop interface models that are successful in the limited environment of the alliance ecosystem. This rapid development also allows for the tuning of these interfaces as NFV architectures develop and mature. These proprietary, but open, models can be used as a template within the SDOs to develop a standard that has the benefit of being tested and proven in real-world scenarios.

No Model is Perfect

Ultimately, the standards that are developed will probably be a mixture of open source solutions with customized enhancements and open proprietary standards developed by these alliances. It is likely that individual vendors and alliances will enhance the final standards, adding their unique value to improve functionality and differentiate their solution.

In an ideal world, standards are fixed in nature and in time, but networks are evolving and technologies like NFV continue to evolve and mature. In this world of dynamic architectures, it is essential to have standards that are dynamic and proprietary, but open. This type of standard offers a solution that can deliver functions today and adapt to the models of tomorrow.

About the Author

Frank Yue is the Senior Technical Marketing Manager for the Service Provider business at F5 Networks. In this role, Yue is responsible for evangelizing F5’s technologies and products before they come to market. Prior to joining F5, Yue was sales engineer at BreakingPoint Systems, selling application aware traffic and security simulation solutions for the service provider market. Yue also worked at Cloudshield Technologies supporting customized DPI solutions, and at Foundry Networks as a global overlay for the ServerIron application delivery controller and traffic management product line. Yue has a degree in Biology from the University of Pennsylvania.

About F5

F5 (NASDAQ: FFIV) provides solutions for an application world. F5 helps organizations seamlessly scale cloud, data center, telecommunications, and software defined networking (SDN) deployments to successfully deliver applications and services to anyone, anywhere, at any time. F5 solutions broaden the reach of IT through an open, extensible framework and a rich partner ecosystem of leading technology and orchestration vendors. This approach lets customers pursue the infrastructure model that best fits their needs over time. The world’s largest businesses, service providers, government entities, and consumer brands rely on F5 to stay ahead of cloud, security, and mobility trends. For more information, go to

Got an idea for a Blueprint column?  We welcome your ideas on next gen network architecture.
See our guidelines.

Cisco's David Ward on Open Source Development

Open networking can only evolve with the support of a community of developers, says David Ward, CTO of Engineering and Chief Architect of Cisco. But you can't just launch a developer community; you have to build it. Open source communities have now emerged.

Cisco is working on many open networking fronts, including OpenDaylight, OPNFV, ONOS and OpenStack.  In this video, Ward also highlights NETCONF and YANG, two standards seen as keys for infrastructure programmability.

Nuage's Houman Modarres on the Value of Open

The move toward open networks is unstoppable, says Houman Modarres, VP of Marketing at Nuage Networks. The attraction of open networks is undeniable: who would want a stiff, inflexible, vertically integrated solution when the Internet has already shown that creativity from different parts of the user community and application ecosystem is the right answer?

The crux is this:  with freedom of choice comes complexity.  Nuage, a business unit of Alcatel-Lucent, is working to address this challenge by supporting a variety of deployment models.

Intel's John Healy on the End Goal of Open

The end goal of open networking is to have a scalable infrastructure that is also lower cost to manage, says John Healy, General Manager of the SDN Division at Intel.

Open means more open in terms of standards and vendors. It also means being capable of working with the open source community.

Novatel Wireless to Acquire DigiCore for M2M and Telematics

Novatel Wireless agreed to acquire DigiCore, a provider of advanced machine-to-machine (M2M) communication and telematics solutions based in South Africa, for approximately US$87 million.

The companies have been working together to commercialize a comprehensive end-to-end global service Software-as-a-Service (SaaS) platform.

This combines Novatel Wireless' hardware with DigiCore's Ctrack, a global telematics SaaS offering for the fleet management, user-based insurance, and asset tracking and monitoring markets.

"This combination is the result of a long-standing partnership between the two companies," said Alex Mashinsky, CEO of Novatel Wireless. "As a result of this relationship, we've already gone through an arduous process of integrating Novatel Wireless hardware into DigiCore's SaaS platform to create the industry's most complete IoT stack. These efforts are now bearing fruit as our successful joint venture has validated that the market demands a true end-to-end solution comprised of a comprehensive hardware portfolio, platform and cloud services, and integration and support."

Novatel Wireless said the merger will give it a foundation for developing and marketing comprehensive solutions for the commercial telematics industry. The collective vision is to simplify the delivery of telematics solutions from device deployment to big data collection, analytics, and reporting.

Dell to Resell NEC's SDN ProgrammableFlow Controller

NEC and Dell announced a new distribution agreement that will allow Dell to resell the NEC ProgrammableFlow Controller Software as one of the software options sold with Dell’s networking hardware. Dell’s S4810, S4820, S5000 and S6000 series of switches running Dell OS9 have been verified compatible with the NEC ProgrammableFlow Controller version 6, with additional validation of Dell switches planned.

“Unlocking the network from the tight coupling of hardware and software opens more customer choice to achieve better service levels at lower costs,” said Arpit Joshipura, vice president, Dell Enterprise Networking & NFV. “We are excited to work with NEC to address the market demand for automation and open standards.”

Wind River Opens App Store for VxWorks RTOS

Wind River is opening an app store for its VxWorks real-time operating system (RTOS). Wind River Marketplace helps customers find and evaluate best-of-breed add-on solutions from the Wind River partner ecosystem in order to enhance their VxWorks deployment.

The apps in the store are tested and validated by Wind River for seamless interoperability with VxWorks.  Target categories range from safety, security, and storage to connectivity, graphics, and development tools.

“With the synergy and validation already achieved for Wind River Marketplace products, customers get almost immediate access to best-in-class embedded technologies to easily enhance the operating environment and accelerate market deployment,” said Dinyar Dastoor, vice president and general manager of operating system platforms at Wind River.

Wednesday, June 17, 2015

VeloCloud Brings its SD-WAN to Equinix Data Centers

VeloCloud Networks announced a reference architecture that brings its Software-Defined Wide Area Networking (SD-WAN) to Equinix’s Performance Hubs, which are extensions of an enterprise corporate IT network co-located inside Equinix International Business Exchange (IBX) data centers. The service is currently available in Equinix IBX data centers located in Chicago, Dallas, San Francisco, Seattle and Washington, D.C., metro areas.

VeloCloud’s solution enables Equinix customers to leverage SD-WAN as a cost-effective option for connecting to their Performance Hubs with the same performance, reliability and control as if these services were within the customers’ private network.

“VeloCloud’s development of a reference architecture for connecting to enterprise Performance Hubs inside Equinix represents a key step forward for next-generation enterprise networks,” said Sanjay Uppal, CEO and co-founder of VeloCloud. “Equinix is the interconnection and data center leader, and this new architecture can help enterprises achieve unprecedented SaaS application performance levels.”


  • VeloCloud Networks, which is based in Mountain View, California, offers a subscription-based, virtualized WAN service for enterprises that aggregates multiple access lines (cable modem, DSL, LTE) into a single secure connection that is defined and controlled in the cloud. The VeloCloud service uses an Intel-based customer premise device at a branch office to communicate with a VeloCloud gateway in the cloud. The service analyzes network performance and application traffic to determine the best path and dynamically steer traffic to corporate data center or cloud services.

FCC Seeks Record $100 Million Fine from AT&T

The FCC is proposing a $100 million fine on AT&T Mobility for misleading consumers about its unlimited data plan.

The FCC said AT&T Mobility willfully and repeatedly violated its 2010 Open Internet Transparency Rule by: (1) using the misleading and inaccurate term “unlimited” to label a data plan that was in fact subject to prolonged speed reductions after a customer used a set amount of data; and (2) failing to disclose the express speed reductions that it applied to “unlimited” data plan customers once they hit a specified data threshold.

The FCC said it was unconvinced that AT&T's disclosures about its unlimited data policies provided consumers with sufficient information to make informed choices about their broadband service.

  • Two of the five FCC Commissioners issued statements opposing the fine.

ALU Debuts Home CPE

Alcatel-Lucent (Euronext Paris and NYSE: ALU) today launched a new device for residential and business use that allows operators to accelerate the deployment of ultra-broadband access to meet ever-growing demand for faster data speeds in homes and the workplace.

The Alcatel-Lucent residential gateway is a plug-and-play device that eliminates the need for operators’ engineers to enter a customer’s premises for installation. It also delivers concurrent dual-band Wi-Fi for greater signal strength and 1 Gbps data speeds, eliminating the need to run fiber into the home.

Highlights of Alcatel-Lucent’s residential gateway (7368 ISAM CPE):

  • Integrated reverse power will allow operators to reduce the costs and complexity of large-scale deployments.
  • Dual-band Wi-Fi (802.11ac/n on 5GHz and 802.11b/g/n on 2.4GHz) provides over 1 Gbps of bandwidth, offering customers the super-fast speeds they demand across all their connected devices.
  • Higher transmitting power of up to 1000 mW delivers the same quality of service inside any room or outdoor area.
  • SuperSpeed USB3.0 support for home network access to shared content such as music, videos and photos.
  • The 7368 ISAM CPE can be deployed with VDSL2 networking today and upgraded later without having to switch out the device.
  • Fully integrated, plug-and-play device reduces time and cost of installation.
  • Compatible with Alcatel-Lucent DPUs, DSL line cards and standard-compliant DPUs.
  • Leverages the same advanced home networking technology as the 7368 ISAM ONT G-240W-B residential gateway, delivering the same experience to both PON and copper subscribers.

Ciena Joins SARNET Research Initiative

Ciena announced its participation in SARNET, a four-year collaboration agreement between the University of Amsterdam (UvA), the Netherlands Organisation for Applied Scientific Research (TNO), and an airline to explore methods for autonomous Internet security. The focus is on how software defined networking (SDN) can help mitigate cyber-attacks and program networks to provide enhanced cyber-terror detection and defense. SARNET uses a unique, multi-purpose, high-capacity research network that lets researchers trial advanced detection and defensive functionalities that automatically reconfigure around anomalies, helping create and control agile, resilient and high-performing architectures.
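The "reconfigure around anomalies" idea can be pictured as a controller that flags a link whose traffic deviates sharply from its baseline, then prunes that link from the forwarding topology so paths route around it. The sketch below is a minimal illustration under assumed thresholds; it is not SARNET's actual design or any controller's real API.

```python
def detect_anomaly(baseline_pps: float, observed_pps: float,
                   factor: float = 10.0) -> bool:
    """Flag a link whose packet rate jumps far above its baseline."""
    return observed_pps > factor * baseline_pps

def reroute(topology: dict[str, list[str]],
            bad_link: tuple[str, str]) -> dict[str, list[str]]:
    """Return a topology with the suspect link removed in both directions,
    forcing traffic onto alternate paths around it."""
    a, b = bad_link
    return {node: [nbr for nbr in nbrs if (node, nbr) not in ((a, b), (b, a))]
            for node, nbrs in topology.items()}

# A small triangle topology: every node can still reach every other
# node after one link is pruned.
topo = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
if detect_anomaly(baseline_pps=1_000, observed_pps=50_000):
    topo = reroute(topo, ("A", "B"))
print(topo["A"])  # prints ['C']: traffic from A now avoids the anomalous link
```

A production SDN controller would express the same decision as flow-table updates pushed to the affected switches rather than a pruned adjacency map.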

“SARNET exemplifies the significant transformation that future networks need to make in order to reconfigure and self-provision instantly. This collaboration is exploring just one of the use-cases made possible by SDN. Autonomous Internet security would alleviate a lot of pressure on today’s networks, freeing up resources and opening the industry up to a raft of other innovations.”
- Cees de Laat, University of Amsterdam, Principal Investigator, SARNET

Breqwatr Delivers Hyper-Converged Cloud Appliance

Breqwatr, a start-up based in Toronto, introduced the second generation of its hyper-converged cloud appliance for scale-out infrastructure.

Breqwatr delivers a curated OpenStack deployment in a fully integrated, highly available and hyper-converged appliance. The Breqwatr Cloud Appliance includes core IaaS components for virtual machine management, as well as chargeback and resource based quota models, with support for external enterprise storage.

Key features include:

  • Linear Expansion: Online and automated expansion of compute and storage resources, allowing for seamless expansion of the cloud platform
  • Consumerized User Interface: Breqwatr’s simplified self-service UI provides users with on-demand access to the compute and storage resources required to do their job
  • Enterprise Resilience: Control-plane high-availability and the introduction of instance-level HA delivers the resiliency that enterprises demand
  • Hardware Management: Breqwatr’s hardware management interface makes managing and monitoring the physical components of the cloud a simple browser-based experience
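The resource-based quota model mentioned above generally means a tenant's virtual machine request is admitted only if every resource stays within its limit. The following is a hedged sketch of that check; the field names are illustrative, not Breqwatr's actual API.

```python
from dataclasses import dataclass

@dataclass
class Quota:
    vcpus: int
    ram_gb: int
    disk_gb: int

def can_launch(used: Quota, limit: Quota, request: Quota) -> bool:
    """Admit a new VM only if every resource stays within the tenant's quota."""
    return (used.vcpus + request.vcpus <= limit.vcpus
            and used.ram_gb + request.ram_gb <= limit.ram_gb
            and used.disk_gb + request.disk_gb <= limit.disk_gb)

limit = Quota(vcpus=16, ram_gb=64, disk_gb=500)
used = Quota(vcpus=12, ram_gb=48, disk_gb=300)

print(can_launch(used, limit, Quota(vcpus=4, ram_gb=16, disk_gb=100)))  # True
print(can_launch(used, limit, Quota(vcpus=8, ram_gb=16, disk_gb=100)))  # False: vCPU quota exceeded
```

Chargeback then follows naturally from the same accounting: the `used` figures per tenant can be priced per resource and billed back to the owning department.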

“At Breqwatr, we’re dedicated to empowering the enterprise to take full advantage of private clouds,” said John Kadianos, founder and CEO at Breqwatr. “With the Breqwatr Cloud Appliance, we are removing the guesswork and helping enterprises realize the quickest time to value available for private cloud today. We’re committed to making OpenStack more accessible and therefore have kept user experience at the forefront of our design principles. With this product, we’re bringing the value of public cloud computing into the comfort and security of one’s own datacenter.”