Tuesday, January 5, 2016

Blueprint: Three Predictions for Network Monitoring in 2016

by Tom Kelly, CEO, AccelOps

Why do armies set up look-outs all around their camps? Why do people read their horoscopes and shake magic eight-balls? Simple: they want to see what’s coming. In business, it’s incredibly helpful to be able to accurately forecast needs and set strategy. In the network security and performance arena of the business, it’s table stakes.

While there’s no crystal ball that can tell us everything, one thing is certain: organizations will need to fundamentally change the way they identify and manage threats. Below are my three predictions on this topic for the new year.

  1. It’s time to outsource security. With the unprecedented benefits and growth of the Internet of Things (IoT) and the vast number of touch points connecting to the network, new challenges and unknown risks associated with these tools will continue to multiply. Unknown risks include network and resource utilization, performance expectations and resource needs, interoperability with current systems and tools and, above all else, security risks and challenges to an organization’s livelihood. As IT budgets shrink and the pool of technical personnel shrinks with them, organizations will increasingly look outside their silos to managed security service providers (MSSPs) for expert help.
  2. Organizations will map the customer journey. Consumers today have access to nearly infinite sources of information through the click of a mouse, resulting in a higher level of expectation for rapid answers from a variety of engagement channels. From websites to social media to mobile and multi-media, organizations are tasked with keeping up with customer demands from an ever-increasing set of “touch-points.” To that end, organizations will turn to tools that map and analyze a “360 view” of their customers’ journey and the respective “touch-points” throughout their organizations. As this integrated security and performance management requirement transitions from a tactical IT expenditure-driven initiative to a mission-critical, strategic business initiative, the era of CIOs and CISOs reporting to CFOs will shift to stronger oversight by boards of directors and CEOs.
  3. Business intelligence sources will converge. Proprietary customer and financial data and intellectual property are high-value targets for hackers. The challenge in protecting these targets will continue to grow as organizations become more reliant on business intelligence and analytics (Big Data) to dissect their various channels of customer engagement and their worker, network and application productivity. As organizations store this valuable data onsite, offsite or in a combination of both, Big Data is seen as a big target. These rich and proprietary sources of corporate analytics will spawn new and additional targets for hackers. Current silo-based approaches will need to converge with other business intelligence initiatives to provide more rapid identification and mitigation of risks.
Today’s dynamic, data-driven businesses have never been more reliant on the performance of their networks in managing risk and in the pursuit of their strategic initiatives. Yet these same networks have never been more at risk for security breaches and their network performance impacts. With digital transformation in full swing, the pace of change is rapidly accelerating, and an organization’s ability to see into the network through solutions that provide a holistic, real-time view and correlation of the various elements in their network is becoming more critical than ever.

About the Author

Tom Kelly is CEO of AccelOps and a technology industry veteran having led companies through founding, growth, IPO and strategic acquisition. He has served as a CEO, COO or CFO at Cadence Design Systems, Frame Technology, Cirrus Logic, Epicor Software and Blaze Software. Tom led successful turnarounds at Bluestar Solutions, MonteVista Software and Moxie Software, having served as CEO in repositioning and rebranding the companies in advance of their new growth. He serves on the boards of directors of FEI, Fabrinet, and ReadyPulse. Tom is a graduate of Santa Clara University, where he is a member of the University’s Board of Regents.

Got an idea for a Blueprint column?  We welcome your ideas on next gen network architecture.
See our guidelines.

AT&T Makes a Push for Open Software, Big Data, Connected Health

AT&T aims to have 50% of the software running its systems based on open source code - up from 5% today, said John Donovan, Senior Executive Vice President—AT&T Technology and Operations, speaking at the annual AT&T Developer Summit ahead of CES in Las Vegas. AT&T is working with OpenDaylight, OPNFV, ON.Lab, the Linux Foundation, OpenStack and other industry groups to further these ambitions.

Donovan said the AT&T Integrated Cloud (AIC) project, which is based on OpenStack, is ahead of schedule. The plan was to deploy 69 AIC nodes in 2015 for running virtual network functions. In fact, the company deployed 74 AIC nodes in 2015.

Some other projects that AT&T is working on:

Nanocubes: a Big Data visualization tool developed by the AT&T Labs team. A Nanocube provides a real-time map of millions or even billions of data points from across the network.
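Tools like Nanocubes stay interactive at that scale by pre-aggregating points into spatial bins, so a map view only needs bucket counts rather than every raw record. The toy sketch below illustrates the binning idea only; it is not the actual Nanocubes data structure, and the coordinates are made up for the example.

```python
from collections import Counter

def bin_points(points, cell=1.0):
    """Aggregate (lat, lon) points into a grid of `cell`-degree buckets.

    Rendering a map then requires only the bucket counts, not the
    raw points -- the core idea behind pre-aggregated visualization.
    """
    counts = Counter()
    for lat, lon in points:
        # Floor-divide to find the grid cell each point falls into.
        counts[(int(lat // cell), int(lon // cell))] += 1
    return counts

# Hypothetical sample points (e.g. event locations on a network map)
pts = [(40.7, -74.0), (40.8, -74.5), (34.0, -118.2)]
print(bin_points(pts)[(40, -74)])  # → 1
```

Real systems apply the same trick hierarchically (multiple zoom levels of bins), which is what lets a browser pan and zoom over billions of records.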

M2X Data Service: a cloud-based data storage service for enterprise IoT developers that was launched last year. This year, AT&T is launching Flow Designer, a cloud-based tool developed at the AT&T Foundry that lets IoT developers quickly build new applications.

OpenDaylight's Internet of Things Data Management project: addressing interoperability across devices and networks.

AT&T Foundry for Connected Health: a new facility located at the Texas Medical Center Innovation Institute in Houston, Texas. The new AT&T Foundry will focus on digital health innovations that benefit those in and out of the clinical care environment.

SmartCities Framework: AT&T has formed alliances with Cisco, Deloitte, Ericsson, GE, IBM, Intel, and Qualcomm Technologies to create impactful solutions for cities. Areas of focus include infrastructure monitoring (the conditions of roads, bridges, buildings, parks and other venues); citizen engagement; digital signage for smarter public transportation; and public safety (including gunfire detection technology). AT&T is also developing a new digital dashboard that gives cities a high-level look at their communities’ conditions.

A list of the Top 20 Innovative apps presented at the 2016 AT&T Developer Summit Hackathon is here:

BMW Renews Connected Car Agreement with AT&T

BMW has extended a multi-year, exclusive agreement with AT&T for Connected Car services. Since 2008, AT&T has powered BMW’s ConnectedDrive services and apps. Under the new agreement, AT&T will also connect BMW “infotainment” features such as a Wi-Fi hot spot.

Beginning with the all-new BMW 7 Series, BMW customers now have the option of a Wi-Fi hot spot powered by AT&T’s 4G LTE network.

“We are thrilled to continue our long-standing relationship with BMW and to be a part of a brand that evokes a joy and passion for driving,” said Chris Penrose, senior vice president, Internet of Things, AT&T Mobility. “The new Wi-Fi hot spot lets you connect up to 8 devices at a time and allows passengers to access their favorite apps, play games and surf the net at fast 4G LTE speeds.”


BT Deploys Cisco FirePOWER for Threat-centric Security

BT recently announced a partnership with Cisco to deliver threat-centric security solutions for both its internal network and for customer services.

Specifically, BT is using Cisco's threat-centric technologies, such as ASA with FirePOWER Services, Advanced Malware Protection (AMP), and Next-Generation IPS (NGIPS), to provide a differentiated capability in the market. In a Cisco blog posting, BT said it has experienced a 1,000% increase in threats over the past 13 months. The trend includes an increasing number of transport-layer threats where network elements are targeted. BT's response involves a consolidation in the network architecture and deployment of Cisco's FirePOWER next-generation IPS tools along with Advanced Malware Protection.

The Cisco solution leverages its recent acquisitions of Sourcefire, ThreatGrid and Cognitive Security (COSE).

BT said the partnership enables it to sell advanced security solutions into complex IT infrastructures across the globe. BT has already sold the capability to a nation-state customer.


Cisco Targets "Security Everywhere," Intros Firepower 9300

Cisco is rolling out a "Security Everywhere" initiative aimed at embedding security throughout the extended network – from the data center out to endpoints, branch offices, and the cloud. The goal is pervasive threat visibility and control for enterprises and service provider networks. To get there, Cisco is adding more sensors to increase visibility; more control points to strengthen enforcement; and pervasive, advanced threat protection to reduce time-to-detection and time-to-response, limiting the impact of attacks.

Cisco is launching the following set of solutions across the entire networking portfolio:

• Endpoints: With Cisco AnyConnect Featuring Cisco AMP for Endpoints, customers using the Cisco AnyConnect 4.1 VPN client now can easily deploy and significantly expand their threat protection to VPN-enabled endpoints to continuously and retrospectively guard against advanced malware.

• Campus and Branch: FirePOWER Services solutions for Cisco Integrated Services Routers (ISR) provide centrally managed Next-Generation Intrusion Prevention System (NGIPS) and Advanced Malware Protection (AMP) at the branch office, integrated in the network fabric where dedicated security appliances may not be feasible.

• Network as a Sensor and Enforcer: Cisco has embedded multiple security technologies into the network infrastructure to provide broad threat visibility to rapidly identify users and devices associated with anomalies, threats and misuse of networks and applications. New capabilities include:

Broader Integration between Identity Services Engine (ISE) and Lancope StealthWatch: Enterprises can go beyond just mapping IP addresses to identifying threat vectors based on ISE’s context of who, what, where, when and how users and devices are connected and access network resources. This provides greater contextual threat visibility with StealthWatch for accelerated identification of threats.

NetFlow on Cisco UCS: Extending Cisco’s network-as-a-sensor capabilities to the physical and virtual servers, customers now have greater visibility into network traffic flow patterns and threat intelligence information in the data center.
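NetFlow itself is a published export format, which is what allows collectors to build traffic-flow visibility from any exporter. As a rough illustration (generic Python, not Cisco code), parsing the 24-byte NetFlow v5 export header from a raw UDP payload looks like this:

```python
import struct

# NetFlow v5 header: 24 bytes, network byte order, per the published
# v5 format: version, count, sysUptime, unix_secs, unix_nsecs,
# flow_sequence, engine_type, engine_id, sampling_interval.
V5_HEADER = struct.Struct("!HHIIIIBBH")

def parse_v5_header(packet: bytes) -> dict:
    """Parse the NetFlow v5 export header from a raw UDP payload."""
    (version, count, sys_uptime, unix_secs, _unix_nsecs,
     flow_sequence, _etype, _eid, _sampling) = V5_HEADER.unpack_from(packet)
    if version != 5:
        raise ValueError(f"not a NetFlow v5 packet (version={version})")
    return {
        "version": version,
        "count": count,              # number of 48-byte flow records following
        "sys_uptime_ms": sys_uptime,
        "unix_secs": unix_secs,
        "flow_sequence": flow_sequence,
    }

# Build a fake export packet for demonstration (2 records claimed)
demo = V5_HEADER.pack(5, 2, 123456, 1452000000, 0, 42, 0, 0, 0)
print(parse_v5_header(demo)["count"])  # → 2
```

A real collector would go on to unpack the 48-byte flow records that follow the header (source/destination addresses, ports, byte and packet counts), which is where the traffic-pattern visibility described above comes from.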

Using the new embedded security capabilities, Cisco networks now have the ability to automate and dynamically enforce security policies. Customers can segment applications and users across the extended enterprise, use policy to define which users can access which applications and what traffic can traverse the network, and then automate security operations.

TrustSec + ISE and StealthWatch Integration: StealthWatch can now block suspicious network devices by initiating segmentation changes, providing rapid response to identified malicious activity. ISE can then modify access policies for Cisco routers, switches, and wireless LAN controllers embedded with TrustSec technology.

Hosted Identity Services provide a secure, 24/7, cloud-delivered service for the Cisco Identity Services Engine, a security policy management platform that unifies and automates secure network access control. The new hosted service speeds time to deployment, supporting business growth and providing role-based, context-aware identity enforcement of users and devices permitted on the network, streamlining enterprise mobility experiences.

• pxGrid Ecosystem: Eleven new partners have joined the pxGrid Ecosystem with the addition of several new ecosystem technology categories, including cloud security and network/application performance management. pxGrid is Cisco’s security context information exchange fabric that enables security platforms to share information to drive better threat detection, mitigation and overall security operations.

Cisco is also expanding advanced threat-centric protection for its Evolved Programmable Network (EPN), which is its open network architecture designed to advance the adoption of Software Defined Networking (SDN) and Network Functions Virtualization (NFV). Cisco’s new service provider security solutions include the following:

• Cisco Firepower 9300 Integrated Security Platform is a carrier-grade, high-performance, scalable and modular multi-services security platform purpose-built for service providers. It can scale security for the increased data flows driven by accelerating service demands and carrier-class requirements.

• Expanded Advanced Orchestration and Cloud Capabilities enable Cisco’s new security solutions to integrate with the Cisco architecture and third-party SDN/NFV solutions, as well as Cisco’s Adaptive Security Appliance Virtual (ASAv) with Cisco’s Network Service Orchestrator (NSO) and Application-Centric Infrastructure (ACI). These orchestration and cloud capabilities also include open APIs for integration with orchestration, Operation Support Systems/Business Support Systems, and Cloud Security-as-a-Service solutions.

• Advanced features such as secure containers to accommodate future security services and applications. Additionally, Cisco ASA firewall and third-party DDoS mitigation from Radware are currently supported, with additional capabilities planned for the second half of 2015.

Cisco Integrates ACI with FirePOWER Intrusion Prevention

Cisco is integrating its FirePOWER Next Generation Intrusion Prevention System (NGIPS) into its Application Centric Infrastructure (ACI) architecture.

The integrated ACI + FirePOWER security solution, which will be available in June 2015, offers automated threat protection to combat emerging data center security threats. The idea is fine-grained control (including application level security), visibility and centralized automation all the way from infrastructure to the application level.

Cisco ACI also integrates third-party ecosystem solutions from Check Point Software Technologies, Fortinet, Infoblox, Intel Security, Radware, and Symantec.

Cisco said ACI integration with FirePOWER NGIPS (including Advanced Malware Protection) provides security before, during and after an attack, enabling organizations to dynamically detect and block advanced threats with continuous visibility and control across the full attack continuum. These new security capabilities deliver unprecedented control, visibility and centralized security automation in the data center.

Cisco also announced that independent qualified security assessors have validated ACI for deployment in payment card industry (PCI) compliant networks. Managing and simplifying the scope of compliance can help reduce costs for these organizations.


Panasonic Develops 300GB "freeze-ray" Optical Discs for Facebook Data Centers

Panasonic unveiled its freeze-ray, an Optical Disc-Based Data Archive System, developed in collaboration with Facebook, which is deploying the first-generation 100 GB Blu-ray Disc-based archive system into its data centers now.  Facebook expects deployment of the second-generation 300GB Archival Disc-based archive system later in 2016.

The technology is aimed at data in the world’s data centers that is accessed infrequently, or never, but must be stored for the long term.

Panasonic said its freeze-ray data archiving solution provides optimal cold storage for protecting data integrity and reducing costs.  Optical discs provide longevity, immutability, backward compatibility, low power consumption and tolerance to environmental changes.

Panasonic’s main contribution to the effort was its high-density optical technology, key devices (optical discs, drives and related robotics) and library software to control the system easily in the data center. Facebook collaborated by providing its unmatched expertise in designing, deploying, managing and servicing storage systems in data centers. In addition, Facebook provided extensive technical and real-world data center feedback at every stage of the development. Both companies have been working on two generations of the freeze-ray solution.

“As Facebook continues to grow, we needed to address some of our fundamental engineering challenges with an efficient, low-cost and sustainable solution that matches our speed and exabyte-scale of data,” said Jason Taylor, PhD, VP of Infrastructure, Facebook. "We're seeing exponential growth in the number of photos and videos being uploaded to Facebook, and the work we’ve done with Panasonic is exciting because optical storage introduces a medium that is immutable, which helps ensure that people have long-term access to their digital memories.”


Broadcom Intros Low-power Wi-Fi/Bluetooth Chip

Broadcom introduced its lowest power Wi-Fi/Bluetooth combo chip for mobile platforms and accessories, boasting up to 3X longer battery life compared to Broadcom's previous combo chips.

The company said its new BCM43012 chip allows OEMs to integrate Wi-Fi into platforms that have traditionally been powered by Bluetooth alone due to battery size or constrained power budgets.  In some applications, the BCM43012 Wi-Fi consumes 80 percent less power than the most common Bluetooth solutions today.


  • Highly-integrated 28nm dual-band 802.11n and Bluetooth 4.2 SoC
  • Integrated efficient power amplifiers (PAs), low noise amplifiers (LNAs), and power management unit (PMU) for low rest of bill of materials (RBOM) cost and small system footprint
  • Architectural improvements provide unrivaled low power in sleep and active states for both Wi-Fi and BT
  • Coexistence hardware and algorithms to ensure optimal Wi-Fi and BT performance
  • WLAN features include enhanced proximity and location features enabled by 802.11mc and TurboQAM data rates up to 96 Mbps
  • Bluetooth features include angle of arrival (AoA) and angle of departure (AoD) technology, wireless charging support for A4WP and AirFuel, and early adopter 2 Mbps Low Energy protocol capability

"For more than a decade, Broadcom has achieved a market leadership position in connectivity combos by setting the standard for performance, features, and power consumption," said Dino Bekis, Broadcom Vice President of Marketing, Wireless Connectivity Combos. "We have applied this expertise to launch a family of products for the promising mobile accessories markets with solutions that allow our customers to deliver a new generation of connected platforms with breakthrough capabilities."


Broadcom Samples 64-bit Quad-core Router Processor

Broadcom has begun sampling the industry's first 64-bit quad-core processor for high-end residential routers supporting smart home and Internet of Things applications.

The BCM4908 includes a 1.8 GHz 64-bit quad-core ARM CPU and uses Broadcom's Runner network packet processor to deliver more than 5 Gbps of system data throughput without taxing the CPU. It also supports the increased speeds coming into the home, including Google Fiber and Comcast 2 Gbps, via an interface for a 2.5 Gigabit Ethernet PHY.

Key Features:

  • Zero CPU Wi-Fi offload frees up CPU resources for other tasks
  • BroadStream iQoS acceleration
  • Dedicated security processor to enable hardware VPN acceleration
  • 2.5Gb Base-X Ethernet WAN/LAN port for supporting fast connectivity to multi-gigabit modem or a Network Attached Storage (NAS) device
  • Feature-rich connectivity with integrated SATA III, two USB 3.0 ports and three PCIe Gen 2 ports reduces external RBOM cost
  • Utilizes low power 28nm technology and advanced power management, offering power reductions of more than 50 percent as compared to previous solutions
  • Supports Broadcom's tri-band (AC5300) 5G WiFi XStream 802.11ac MU-MIMO:
  • Three BCM4366 4x4 radios, each with an integrated CPU for host offload processing
  • Providing a total of seven CPU cores ("Septacore") with more than 9.6 GHz of CPU horsepower
  • Powerful hardware acceleration for routing and USB storage

"With this new SoC, Broadcom is driving home network connectivity to the next level," said Manny Patel, Broadcom Director of Marketing, Wireless Connectivity. "By increasing the CPU performance and adding advanced features, we're enabling OEMs to build more powerful home routers that address the increased bandwidth requirements needed to support the continued consumption of high-bandwidth content, growing demand for UltraHD as well as the growing emergence of more IoT and smart home applications."


UHD Alliance Specs for Devices/Services

The UHD Alliance (UHDA) has begun promoting a new consumer-facing logo to identify devices, content and services capable of delivering a premium experience based on agreed specifications, including performance metrics for resolution, high dynamic range (HDR), peak luminance, black levels and wide color gamut among others. The specifications also make recommendations for immersive audio and other features.

“The diverse group of UHDA companies agreed that to realize the full potential of Ultra HD the specs need to go beyond resolution and address enhancements like HDR, expanded color and ultimately even immersive audio. Consumer testing confirmed this,” said UHD Alliance President Hanno Basse. “The criteria established by this broad cross section of the Ultra HD ecosystem enables the delivery of a revolutionary in-home experience, and the ULTRA HD PREMIUM logo gives consumers a single, identifying mark to seek out so they can purchase with confidence.”

For devices, key specs include:

  • Image Resolution: 3840x2160
  • Color Bit Depth: 10-bit signal
  • Color Palette (Wide Color Gamut):
      • Signal Input: BT.2020 color representation
      • Display Reproduction: More than 90% of P3 colors

High Dynamic Range

  • A combination of peak brightness and black level, either:
      • More than 1,000 nits peak brightness and less than 0.05 nits black level, or
      • More than 540 nits peak brightness and less than 0.0005 nits black level
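The two HDR tiers above (roughly corresponding to LCD and OLED panel characteristics) reduce to a simple either/or check. The sketch below is an illustrative helper, not text from the UHDA specification:

```python
def meets_uhd_premium_hdr(peak_nits: float, black_nits: float) -> bool:
    """Check the two UHD Alliance HDR display tiers:
    either >1000 nits peak with <0.05 nits black level,
    or >540 nits peak with <0.0005 nits black level."""
    bright_tier = peak_nits > 1000 and black_nits < 0.05
    dark_tier = peak_nits > 540 and black_nits < 0.0005
    return bright_tier or dark_tier

print(meets_uhd_premium_hdr(1100, 0.04))    # bright panel → True
print(meets_uhd_premium_hdr(600, 0.0004))   # deep-black panel → True
print(meets_uhd_premium_hdr(800, 0.01))     # neither tier → False
```

The two tiers exist because high peak brightness and very deep blacks are hard to achieve in the same display technology; either route delivers a comparable perceived dynamic range.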

The UHDA, which was established a year ago, has grown to more than 35 companies, including DIRECTV, Dolby Laboratories, LG Electronics, Netflix, Panasonic Corporation, Samsung Electronics, Sony Corporation, Technicolor, The Walt Disney Studios, Twentieth Century Fox, Universal Pictures and Warner Bros. Entertainment.


AudioCodes to Acquire Active Communications Europe

AudioCodes has agreed to acquire Active Communications Europe, a leading provider of communications solutions, for $3 million in cash plus an earn-out arrangement of up to an additional $2 million based on attaining certain sales targets over the next three years.

Active Communications Europe is a Microsoft Silver Partner specializing in Unified Communications. AudioCodes sells advanced solutions for the Unified Communications and Unified Communications as a Service (UCaaS) market.

"This agreement with Active Communications Europe places AudioCodes in a stronger position to serve the growing adoption of Microsoft Skype for Business Online, Office 365 and Cloud PBX," said Shabtai Adlersberg, President and CEO of AudioCodes. "The technology and expertise of Active Communications Europe effectively complement the AudioCodes One Voice portfolio."


Monday, January 4, 2016

Blueprint: Four SDN Predictions for 2016

by Carolyn Raab, VP of Product Management at Corsa

In 2015, service providers, telcos and national research and engineering consortiums went through a major transition as they began implementing software-defined networks (SDN) to deliver programmable high performance and massive scale in the WAN and data center edge. For network architects, operators and others involved in these next generation networks, the hard work is just beginning, because the pressure will be on in 2016 to ensure that these SDN deployments live up to and exceed the hype. As these deployments move forward, many architects will find themselves staring at a network that is completely different in size and shape from what they’re accustomed to. Fortunately, several new trends will help ensure greater control and scale across these networks, and they compel us to make the following four predictions about the key developments that will benefit internet-scale programmable networks in 2016.

1) FPGAs grow up and play a much larger role 

Network engineers need flexible, open hardware to create policy-driven, self-tuning networks. Hardware vendors need design cycles that can keep pace with the network innovations and changes the network engineers demand. FPGAs have advanced to the point where their underlying silicon process technology is in lock-step with ASICs, and they let each user benefit from the combined volume of all other users of the same platform. They match the performance level and affordability of ASICs while offering full flexibility and rapid design cycles. This shift to FPGAs will enable network architectures to evolve and scale more rapidly.

2) SDN will emerge from the hype cycle, based on real deployments

There are now confirmed, real deployments of SDN in service providers, Internet exchanges, ISPs, and data centers. One challenge they all share is that a top-to-bottom solution requires an involved integration of SDN orchestration, control and data plane elements. This clumsy stitching together of the various parts of the equation has delayed real deployments as much as the lack of controllers and real SDN hardware that are performant and open. However, with internet-scale programmable and open hardware now available, and with open source controllers getting broad support, the missing pieces are in place. This top-to-bottom offering of interworking parts means real deployments will expand beyond the early, most sophisticated users to a broader base of networks of different shapes and sizes.

3) Re-programmable networks and real-time analytics will be hot topics for 2016

Because you program the network, you can make it better by creating an agile, self-tuning, automated network that creates value for providers and users alike. This requires a virtuous circle of real-time statistics feeding into real-time analytics tools that trigger changes that are immediately programmed into the network.

To date, these tools have existed, but in isolation from each other. Now we see the beginnings of offerings that have created linkages to move towards closing the circle. Through industry partnerships or as vertically integrated solutions from a single vendor, the ability to re-program the network on the fly is generating significant interest on the part of numerous stakeholders including service providers, broadcasters, municipalities, and enterprises. All of them share a common requirement of needing to know what is going on in their networks so they can take the next appropriate action: Isolate? Allocate bandwidth? Add a new service? Look for much discussion and some innovative deployments of re-programmable networks.

4) 2016: “The year of 100G SDN”

100G will begin to ramp up aggressively because both the data drivers and the underlying network have reached a critical juncture. Traffic growth continues to put pressure on network infrastructure, and the pressure will become even more significant as 100G-attached storage deployments add to the massive growth in video and IoT-generated traffic. Operators will be able to answer with 100G SDN because of two key enablers:
  • Affordability – 100G SDN deployments are approaching a price point that is barely 3x what a 10G link would cost.  
  • Flexible feeds & speeds: QSFP28 for 100G, SFP+ for 10G and anything in between is possible with the same optics cage.  
Programmable SDN hardware designed with these cages can be deployed at 10G initially and then move rapidly from 10G to 100G with a software upgrade, not a new hardware purchase, to address data demands immediately.
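The affordability argument reduces to simple per-gigabit arithmetic. The sketch below uses a normalized price rather than a real dollar figure (the 3x multiplier comes from the text; the absolute price is an assumption):

```python
# Normalized prices: the article says a 100G SDN deployment is
# approaching "barely 3x" the cost of a 10G link.
price_10g = 1.0               # assumed unit price for a 10G link
price_100g = 3 * price_10g    # ~3x multiplier from the article

per_gbps_10g = price_10g / 10      # cost per Gbps at 10G
per_gbps_100g = price_100g / 100   # cost per Gbps at 100G

# 100G works out to roughly 30% of the per-Gbps cost of 10G,
# i.e. about 70% cheaper per gigabit.
print(round(per_gbps_100g / per_gbps_10g, 2))  # → 0.3
```

Whatever the actual list prices, any multiplier below 10x means the per-gigabit cost falls when stepping up to 100G, which is the economic driver behind the prediction.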

These and other trends highlight how large SDN deployments will require a more open and flexible approach at the software/firmware and hardware levels. It will be critical to ensure that networks can adapt and evolve as needed. We will be watching as networks take new innovative approaches to managing and orchestrating data in 2016.

About the Author

Carolyn Raab is VP of Product Management at Corsa.

About Corsa Technology 

Corsa Technology is a networking hardware company focused on performance Software Defined Networking (SDN). Corsa develops programmable, flexible, internet-scale switches that respond in real-time to network orchestration, directing and managing traffic for SDN and NFV deployments from the 100G SDN WAN edge to networks needing full subscriber awareness. For more information, please visit www.corsa.com.

Blueprint: 2016 and the Rise of NFV – Practicality Rules

by Martin Taylor, CTO, Metaswitch

With network function virtualization quickly moving into the mainstream and the proliferation of related technology offerings on the rise, clarity of purpose and ease of use are more critical than ever. Winning solutions in 2016 will combine purpose-built technology with turn-key simplicity, making it easy for network operators to understand, adopt and scale NFV deployment system-wide.

Here are some of my predictions for 2016:

1. Pragmatic network operators in 2016 will progress the fastest; those who deploy proven VNFs that are not too demanding on cloud, SDN, orchestration or OSS/BSS integration will usefully move the virtualization needle in 2016. Leading solutions will:

  • Deliver high availability on vanilla cloud infrastructures, rather than relying on a specially engineered cloud infrastructure to achieve it.
  • Require only basic IP connectivity from the NFV network fabric, vs. requiring a high degree of programmability to create service function chains. 
  • Have simple life-cycles and be able to deliver most of their value with little or no orchestration beyond initial deployment, vs. requiring sophisticated orchestration.
  • Have few and simple OSS / BSS touchpoints, rather than having complex configuration and management requirements and involving a lot of custom work to interface them to OSS and BSS. 

2. VoLTE and CPE will be the two most active areas of the network for NFV-based buildouts in the coming year.

  • VoLTE is a service that requires a number of network functions to be deployed including IMS, SBC, TAS and SCC-AS, all of which are available in virtualized form.
  • Many services offered by network operators require the deployment of multiple items of CPE, e.g. Metro Ethernet access device, firewall, WAN accelerator, intrusion detection system, enterprise SBC – each of which is deployed today as a separate physical appliance. NFV offers the opportunity to virtualize all these functions and deploy them as software in a generic CPE device based on a server, or in a service provider’s cloud in a metro data center, thus removing the need to ship and install a multiplicity of physical appliances on the customer premises.

3. While 2016 will see NFV cloud and orchestration solutions mature, OSS/BSS will emerge as the biggest brake on NFV progress.

  • There are two issues here. First, integration with OSS / BSS is usually the long pole in the tent when it comes to deploying any new network function. There are numerous backend systems that a network function needs to talk to for provisioning, configuration, alarms, performance reporting, etc., and integrating with a network function at each of these touchpoints often requires custom software work. This issue does not go away just because a network function is virtualized.
  • Secondly, traditional OSS / BSS is not well suited to managing virtualized network functions because its view of the world is appliance-centric and it doesn’t know how to handle shifting populations of different kinds of virtual machines that together do the work of a physical appliance. OSS / BSS needs to evolve very substantially to cope with the realities of NFV, and this will take time.

About the Author

Martin Taylor is chief technical officer of Metaswitch Networks. He joined the company in 2004, and headed up product management prior to becoming CTO. Previous roles have included founding CTO at CopperCom, a pioneer in Voice over DSL, where he led the ATM Forum standards initiative in Loop Emulation; VP of Network Architecture at Madge Networks, where he led the company’s successful strategy in Token Ring switching; and business general manager at GEC-Marconi, where he introduced key innovations in Passive Optical Networking. Martin has a degree in Engineering from the University of Cambridge. In January 2014, Martin was recognized by Light Reading as one of the top five industry “movers and shakers” in Network Functions Virtualization.

Got an idea for a Blueprint column?  We welcome your ideas on next gen network architecture.
See our guidelines.

Blueprint: 2016 is the Year SDN Finds its Home, and its Name is NFV

by Peter Margaris, Head of Service Provider Product Marketing at F5 Networks

For the past few years, the testing and adoption of Software Defined Networking (SDN) has progressed incrementally, while at the same time Service Providers (SPs) have made measurable progress toward the commercialization of network functions virtualization (NFV). SDN and NFV have been viewed as separate but complementary initiatives, yet SPs are now coupling them with the goal of transforming their entire networks. They are accelerating the adoption of NFV and SDN because of the speed at which they must adapt to the next generations of advanced devices, and due to the pressure to offer new and differentiated consumer and enterprise services. While there was significant progress in 2015, the continued evolution of industry standards and APIs, as well as the successful commercialization of multiple NFV use cases, will lead Service Providers to expand their SDN and NFV initiatives significantly in 2016.

Continued Evolution of Industry Standards and APIs 

The evolution of standards and Application Programming Interfaces (APIs) among vendors in 2016 will be critical for SPs to drive forward their network transformations. Previously, the lack of standardization and integration among architecture components slowed the adoption of both SDN and NFV. There is no doubt that SPs are committed to SDN and NFV. This is evidenced by the trials, PoCs, and standards coalescing around the commercialization of L4-L7 service offerings. In the past 12 months, the community of vendors and operators has made great strides on this front. In particular, the collaborative efforts governed by OPNFV (the Open Platform for NFV) along with ETSI NFV have accelerated the evolution of the NFV reference platform.

Also in 2015, we’ve seen greater collaboration between the Open Networking Foundation (ONF) for SDN standards, and the European Telecommunications Standards Institute (ETSI) for NFV standards. The result is that SPs are incorporating SDN architectures alongside specific NFV use cases in both trials and commercial deployments. ETSI PoC #38 is an example in which multiple vendors collaborated with Australian service provider Telstra to produce the ETSI-certified proof-of-concept around delivering customer premises equipment (CPE) to enterprise customers from the cloud, also referred to as virtual CPE (vCPE) services.1 Service providers are now in a better position to take advantage of the real gains that have been made, and the continued network transformation in 2016 will certainly drive a continued business transformation as well.

Entrance into New Markets

Opportunities for SPs to commercialize SDN/NFV architectures will expand in 2016 as more L4-L7 services are deployed with high-level NFV orchestration systems and SDN infrastructures. Because NFV enables them to deliver L4-L7 services on demand through an automated and policy-driven process, markets that otherwise were not accessible will open to SPs. NFV networks can flex on demand to incorporate a wider range of virtual network functions (VNFs) into their architectures. SPs will look to expand their use of VNFs with rich sets of APIs that are more easily deployable to support different use case scenarios, customizable service chains for customers, and efficient delivery of network services.

These are still only the early stages of a long migration that ultimately will enable service providers to transform their networks and their businesses with the flexibility and agility that only these new network architectures can deliver.

About the Author

As Head of Service Provider Product Marketing at F5 Networks, Peter Margaris is responsible for the company’s overall solution messaging, positioning and market strategy directed at F5’s service provider business segment. With a diverse background and over 25 years of experience in telecommunications and mobile technologies, he has held business leadership roles at Motorola, Nokia, and Alcatel-Lucent, as well as wireless start-up companies in Silicon Valley.
1 HP Press Release regarding ETSI PoC #38:

ETSI Web Site:    NFV ISG Proof of Concept #38:


Blueprint: The (Near) Future of Enterprise Apps, Analytics, Big Data and The Cloud

by Derek Collison, Founder and CEO of Apcera

In 2016, technical innovation, combined with evolutionary trends, will bring rapid organizational changes and new competitive advantages to enterprises capable of adopting new technologies. Not surprisingly, however, the same dynamics will mean competitive risk for organizations that have not positioned themselves to easily absorb (and profit from) new technological changes. The following predictions touch on some of the areas in IT that I think will see the biggest evolutions in 2016 and beyond.
  1. Hadoop: old news in 24 months. Within the next two years, no one will be talking about big data and Apache Hadoop—at least, not as we think of the technology today. Machine learning and AI will become so good and so fast that it will be possible to extract patterns, perform real-time predictions, and gain insight around causation and correlation without human intervention to model or prepare raw data. In order to function effectively, automated analytics typically need to be embedded in other systems that bring forth data. Next-generation AI-enabled machine learning systems (aka “big data,” even though this term will soon fade away) will be able to automatically assemble and deliver financial, marketing, scientific and other insights to managers, researchers, executive decision makers and consumers—giving them new levels of competitive advantage.
  2. Microservices will change how applications are developed. Containers will disrupt the industry by giving organizations the ability to build less and assemble more since the cost of the isolation context is so small, fast and cheap. While microservices are inherently complex, new platforms are emerging that will make it possible for IT organizations to innovate at speed without compromising security, or performing the undifferentiated heavy lifting to construct these micro-service systems in production. With robust auditing and logging tools, these platforms will be able to reason and decide how to effectively manage all IT resources, including containers, VMs and hybrids.
  3. The container ecosystem will continue to diversify and evolve. The coming year will see significant evolution in the container management space. Some container products will simply vanish from the market, while certain companies, not wanting to miss out on the hype, will simply acquire existing technology to claim a spot in the new ecosystem. This consolidation will shrink the size of the playing field, making viable container management choices easier for IT decision makers to identify. Over time, as container vendors seek to differentiate themselves, those that survive will be the ones that demonstrate the ability to orchestrate complex and blended workloads, in a manner that enterprises can manage with trust. The container will slowly become the most unimportant piece of the equation.
  4. True isolation and security will continue to push technology forward. Next year, look for creative advances in enabling technology, such as hybrid solutions consisting of fast and lightweight virtual machines (VMs) that wrap containers, micro-task virtualization and unikernels. This is already beginning to happen. For example, Intel's Clear Containers (which are actually stripped-down VMs) use no more than 20 MB of memory each, making them look more like containers in terms of server overhead, and spin up in just 100-200 milliseconds. The goal here is to provide the isolation and security required by the enterprise, combined with the speed of the minimalist “Clear Linux OS.” Unikernels, another emerging technology, offer meaningful security benefits for organizations because they have an extremely small code footprint, which, by definition, reduces the size of the “attack surface.” In addition, unikernels feature low boot times, a performance characteristic prized both by online customers with dollars to spend and by the burgeoning microservices crowd.
This coming year is set to be a busy one. Technology is advancing at a pace that has never been seen before. The rise of machine learning in agile enterprises will truly transform the way information is gathered, analyzed and used. Microservices and containers are going to change the way software systems are designed and built, and we’ll see a lot of movement and acquisitions within the container ecosystem. And, as always, security will be a prominent concern; however, much of the new technology adopted next year will be built upon a foundation of isolation and security, not bolted on as an afterthought. Innovation that doesn’t compromise security will be a welcome change. 2016 is shaping up to be an exciting year.

About the Author

Derek Collison is CEO and founder of Apcera, provider of the trusted cloud platform for global 2000 companies. An industry veteran and pioneer in large-scale distributed systems and enterprise computing, Derek has held executive positions at TIBCO Software, Google and VMware. While at Google, he co-founded the AJAX APIs group and went on to VMware to design and architect the industry’s first open PaaS, Cloud Foundry. With numerous software patents and frequent speaking engagements, Derek is a recognized leader in distributed systems design and architecture and emerging cloud platforms.


Blueprint: 2016 Predictions - Death of Traditional ADC, Ubiquitous SSL

by Sonal Puri, CEO of Webscale Networks

With the end of the year comes the inevitable flood of predictions about the evolution of various technologies in the coming year. And while predictions about upcoming security threats and malware are likely to come in droves, there’s one area that is sure to be overlooked: what’s going to evolve in companies’ back-end systems in the New Year. Organizations’ networks and infrastructures will see some changes in 2016, and those changes should come from surprising places.

The Death of the Traditional ADC

Everyone understands that the rise of cloud computing will mean the beginning of the end for on-premises data centers and server rooms. But one technology that gets overlooked is the application delivery controller (ADC). Traditional ADCs are physical boxes that sit in server rooms and control the distribution of website visitor requests to different servers. But as more companies move their servers to the cloud, there will no longer be a need for a physical product to handle web traffic distribution. Furthermore, if companies have a hybrid on-premises-plus-cloud deployment, it won’t make sense for them to use a box in their server room in addition to a SaaS version in the cloud. A SaaS version can transition across different cloud providers, managed services, and the data center, but for obvious reasons a physical ADC can’t.

The end of traditional ADCs will also be a bad sign for the middleware companies putting these boxes together. It’s likely that vendors such as F5 and Citrix will announce plans for cloud expansion next year, but it’s also likely that these plans will fizzle out. It’s highly possible that ADC vendors will become part of the trend of server vendors working directly with SaaS vendors, leaving appliance middleware creators out in the cold. In the business world, it’s almost impossible for the old players to take on the new, more nimble players and succeed. For example, look at what happened to Blockbuster when it tried to take on Netflix, or the fight between Amazon and Barnes & Noble.

SSL Encryption Becomes Ubiquitous

SEO has evolved from a buzzword to a business strategy, and everyone is looking for a boost in page ranking over their competitors. But one of the lesser known strategies for boosting page ranking is “forcing” SSL encryption. Last year, Google announced that it had started determining whether a website secures the entire user session with HTTPS and, if it does, raising the site owner’s page ranking. This news flew under the radar until recently, when Symantec published a white paper outlining how the process works. As Google is practically synonymous with search, we’ll see many more companies adopting SSL encryption in 2016. While this technology has been out there for a while, its new marketing potential will prove to be much more of an impetus than will its security benefits.
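Mechanically, “forcing” SSL means every plain-HTTP request is rewritten to its HTTPS equivalent (typically via a server-side 301 redirect) so the whole session stays encrypted. A minimal sketch of that rewrite rule, using Python’s standard URL parsing:

```python
# Sketch: rewrite any plain-HTTP URL to HTTPS, the same rule a
# server-side 301 redirect enforces so the entire user session
# stays on HTTPS (the signal Google's ranking check looks for).
from urllib.parse import urlsplit, urlunsplit

def force_https(url: str) -> str:
    """Return the HTTPS equivalent of a URL; HTTPS URLs pass through unchanged."""
    parts = urlsplit(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

print(force_https("http://example.com/cart?item=42"))
# -> https://example.com/cart?item=42
```

In production this logic lives in the web server or load balancer configuration rather than application code, but the transformation is the same.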

Slow is the New Down

One of the biggest problems for organizations’ websites is visitors’ growing impatience with poor performance. Website applications can degrade, and companies won’t realize it because their old, unsophisticated monitoring tools typically report only whether a website has crashed completely. When it comes to user satisfaction, however, a slow and unresponsive website is just as damaging to your brand as a downed one. Research has revealed that 47% of consumers expect a web page to load in two seconds or less, and 40% will completely abandon a website that takes more than three seconds to load. As was evident during this year’s Cyber Monday shopping, these experiences are not limited to small e-commerce businesses but extend to big players as well. With dynamic websites that require every user to see something different when visiting, infrastructure managers need to look at services that not only predict traffic surges and scale appropriately, but also self-heal in the event of a failure.
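The argument above is that a monitor should report degradation, not just hard failure. A minimal sketch of such a classifier (the function name, status levels and probe shape are hypothetical; the thresholds follow the two- and three-second figures cited in the text):

```python
# Sketch of "slow is the new down": classify each probe result into an
# alert level instead of reporting only hard failures, as legacy
# up/down monitors do.
def classify(status_code, latency_s, slow_after=2.0, abandon_after=3.0):
    """Map one synthetic probe (HTTP status, response time) to an alert level."""
    if status_code is None or status_code >= 500:
        return "down"           # the only state a legacy monitor reports
    if latency_s >= abandon_after:
        return "critical-slow"  # past the point where visitors abandon
    if latency_s >= slow_after:
        return "slow"           # degraded, but invisible to up/down checks
    return "ok"

print(classify(200, 2.4))  # -> slow
```

A real monitoring service would feed this from periodic synthetic probes and trend the latencies, but the key design point is the two intermediate states between “ok” and “down.”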

How to Prepare

What’s the best way to address these changes? More important than any particular solution is the need for organizations to be mindful of what users expect when visiting their websites. If an organization has cloud deployments or other infrastructure changes on the horizon, be sure that those changes can scale to meet future needs as well as present needs. Consider, for example, how social media and new customer engagement tools have transformed how we reach our customers, and the speed of their response. Think also about how your team will manage this growing infrastructure as you expand into new territories, new countries and across multiple public, private and hybrid cloud environments. In the end, improving an organization’s infrastructure is about thinking through how to meet the needs of users and customers, and how to ensure that if you do strike gold, with the right promotion at the right time, your website isn’t going down while your sales are going up.

About the Author

Sonal Puri serves as the Chief Executive Officer of Webscale Networks (previously Lagrange Systems). Prior to Lagrange, she was the Chief Marketing Officer at Aryaka Networks and led sales, marketing and alliances for the pioneer in SaaS for global enterprise networks. Sonal has more than 18 years of experience with Internet Infrastructure in sales, marketing, corporate and business development and channels. Previously, Sonal headed business development and corporate strategy for the Application Acceleration business unit, and the Western US Corporate Development team at Akamai Technologies working on partnerships, mergers and acquisitions. Sonal also ran global business operations, channels, business development and the acquisition for Speedera Networks (AKAM) and held key management roles in sales, marketing and IT at Inktomi, CAS and Euclid. Sonal holds a Master’s degree in Building Science from the University of Southern California, and an undergraduate degree in Architecture from the University of Mumbai, India.


ARRIS Completes Acquisition of Pace

ARRIS International plc completed its previously announced $2.1B (£1.4B) acquisition of Pace plc.

In addition to CPE, the combination further establishes ARRIS as a global leader in HFC/Optics, complementing its established CMTS leadership position.

"ARRIS is investing in our industry's next stage of growth. This acquisition enables us to scale our leadership and innovation to transform global entertainment and communications for millions of people," said Bob Stanzione, Chairman and CEO of ARRIS. "Our combined organization unites two of the strongest leadership and engineering teams in the industry—giving us the scale, expertise, and technology to make ARRIS, more than ever before, the partner of choice for the world's leading service providers. Together with our customers, we're creating a world of connected, personalized entertainment and communications that blend seamlessly into our everyday lives."


ARRIS to Acquire Pace for $2.1 Billion

ARRIS Group agreed to acquire Pace plc, a supplier of networking equipment for cable operators, for US$2.1 billion (£1.4 billion) in stock and cash.

Under the agreed upon terms, Pace shareholders will receive £1.325 of cash and a fixed exchange ratio of 0.1455 New ARRIS shares for each Pace share, reflecting aggregate consideration as of April 21, 2015 of £4.265 per share, representing a 28% premium to the Pace closing share price as of April 21, 2015.

The transaction will result in the formation of New ARRIS, which will be incorporated in the U.K., with its operational and worldwide headquarters in Suwanee, GA USA. New ARRIS is expected to be listed on the NASDAQ stock exchange under the ticker ARRS. In connection with the formation of New ARRIS, each current share of ARRIS will be exchanged for one share in New ARRIS.

Intel Completes Acquisition of Altera

Intel completed its previously announced acquisition of Altera, a provider of field-programmable gate array (FPGA) technology.

Altera will operate as a new Intel business unit called the Programmable Solutions Group (PSG), led by Altera veteran Dan McNamara. Intel said it is committed to a smooth transition for Altera customers and will continue the support and future product development of Altera's many products, including FPGA, ARM-based SoC and power products. In addition to strengthening the existing FPGA business, PSG will work closely with Intel's Data Center Group and IoT Group to deliver the next generation of highly customized, integrated products and solutions.

"Altera is now part of Intel, and together we will make the next generation of semiconductors not only better but able to do more," said Brian Krzanich, Intel CEO. "We will apply Moore's Law to grow today's FPGA business, and we'll invent new products that make amazing experiences of the future possible – experiences like autonomous driving and machine learning."


Intel to Acquire Altera for its Programmable Logic Devices

Intel agreed to acquire Altera for $54 per share in an all-cash transaction valued at approximately $16.7 billion.

Altera, which is based in San Jose, California, offers programmable logic, process technologies, IP cores and development tools. Its portfolio includes its Stratix series FPGAs with embedded memory, digital signal processing (DSP) blocks, high-speed transceivers, and high-speed I/O pins. Altera's Arria system-on-chip solutions integrate an ARM-based hard processor and memory interfaces with the FPGA fabric using a high-bandwidth interconnect. These devices include additional hard logic such as PCI Express Gen2, multiport memory controllers, error correction code (ECC), memory protection and high-speed serial transceivers.

Altera had 2014 revenue of $1.9 billion, of which 44% of sales were for telecom/wireless, 22% for industrial/military/automotive, and 16% for networking/computer/storage. Altera holds about 39% market share of the PLD segment compared to 49% for Xilinx. The company was founded in 1983 and has approximately 3,000 employees.

"Intel's growth strategy is to expand our core assets into profitable, complementary market segments," said Brian Krzanich, CEO of Intel. "With this acquisition, we will harness the power of Moore's Law to make the next generation of solutions not just better, but able to do more. Whether to enable new growth in the network, large cloud data centers or IoT segments, our customers expect better performance at lower costs. This is the promise of Moore's Law and it's the innovation enabled by Intel and Altera joining forces."

"Given our close partnership, we've seen firsthand the many benefits of our relationship with Intel—the world's largest semiconductor company and a proven technology leader, and look forward to the many opportunities we will have together," said John Daane, President, CEO and Chairman of Altera. "We believe that as part of Intel we will be able to develop innovative FPGAs and system-on-chips for our customers in all market segments."

  • In February 2013, Altera announced that its next generation FPGAs will be based on Intel’s 14 nm tri-gate transistor technology. These next-generation products target ultra high-performance systems for military, wireline communications, cloud networking, and compute and storage applications. Under a partnership deal announced by the firms, Altera’s next-generation products will now include 14 nm, in addition to previously announced 20 nm technologies.

Nokia Obtains Majority Share in Alcatel-Lucent

The French financial regulator, the Autorité des marchés financiers, confirmed that as a result of its ongoing public exchange offer, Nokia has now obtained over 71% of the share capital of Alcatel-Lucent, including over 76% of American depositary shares and over 89% of OCEANE convertible bonds issued by the company.

Philippe Camus, Chairman and interim CEO of Alcatel-Lucent stated: “With the Board of directors of Alcatel-Lucent, we are pleased that the combination of Nokia and Alcatel-Lucent has reached a decisive step, since Nokia obtained a large majority of the share capital on a fully diluted basis. We reaffirm our unanimous support to this industrial project which, by creating a global powerhouse in next-generation communications technologies and services, creates value for our shareholders, as well as for all our stakeholders. On behalf of the Board, I strongly encourage the investors in Alcatel-Lucent that have retained their securities to tender them into the re-opened offer in order to benefit from this creation of value and to fully participate in a major project for our industry.”


NVIDIA Develops Supercomputer for Self-Driving Cars

NVIDIA unveiled an artificial-intelligence supercomputer for self-driving cars.

In a pre-CES keynote in Las Vegas, NVIDIA's CEO Jen-Hsun Huang said the onboard processing needs of future automobiles far exceed the silicon capabilities currently on the market.

NVIDIA's DRIVE PX 2 will pack the processing equivalent of 150 MacBook Pros -- 8 teraflops of power -- enough to process data from multiple sensors in real time, providing 360-degree detection of lanes, vehicles, pedestrians, signs, etc. The design will use the company's next gen Tegra processors plus two discrete, Pascal-based GPUs. NVIDIA is also developing a suite of software tools, libraries and modules to accelerate the development and testing of autonomous vehicles.

Volvo will be the first company to deploy the DRIVE PX 2. A public test of 100 autonomous cars using this technology is planned for Gothenburg, Sweden.


Zayo Completes Viatel Acquisition

Zayo completed its previously announced acquisition of Viatel for EUR 98.8 million.  The acquisition adds an 8,400 kilometer fiber network across eight countries to Zayo’s European footprint, including 12 new metro networks, seven data centers and connectivity to 81 on-net buildings.

“The acquisition of Viatel’s European network business strengthens our strategic position in Europe and provides customers with access to our fiber network and expanded connectivity to key international markets,” said Dan Caruso, Zayo chairman and CEO. “Because of the complementary nature of the acquisition, we will begin cross-selling our full suite of services to both Zayo and Viatel customers immediately.”


AT&T Introduces Family of LTE Modules For IOT

AT&T introduced a new family of LTE modules for Internet of Things (IoT) applications, optimized for battery life.

AT&T worked with Wistron NeWeb Corp. (WNC), a module and device manufacturer. The modules are expected to become available from WNC at prices planned as low as $14.99 each, plus applicable taxes, starting in the second quarter. Samples will be available for testing in the first quarter.

“Businesses depend on IoT solutions for gathering real-time information on assets across the world,” said Chris Penrose, senior vice president, Internet of Things, AT&T Mobility. “We’re pleased to be able to facilitate the availability of cost-effective modules so our customers can deploy IoT solutions over the AT&T 4G LTE network. The new LTE modules help the battery life of IoT devices last longer so businesses can better serve their customers.”