Friday, July 3, 2015

Emerson to Spin Off Network Power Business

Emerson recently announced plans to spin off its Network Power business and to streamline its portfolio.
The company is also exploring "other strategic options" to drive growth and accelerate value creation for shareholders.

The spinoff of Network Power will result in two separate companies with distinct strategies and investment profiles. Following completion of all actions, Emerson will continue to be a global leader in bringing technology and engineering together to provide solutions for customers in the process, industrial, commercial and residential markets.  These actions offer significant opportunities for enhanced growth, profitability, cash flow, and returns to shareholders.

As a publicly traded company, Network Power will be the world’s leading stand-alone provider of thermal management, AC and DC power, transfer switches, services and infrastructure management systems for the data center and telecommunications industries.

“Emerson has a proven record of taking decisive actions to enhance shareholder value while providing an unmatched level of service to customers around the world,” said Chairman and Chief Executive Officer David Farr. “We are aligning ourselves with the changing global marketplace and our customers’ evolving needs to drive Emerson and Network Power forward. Creating two independent companies will position both businesses to continue as leaders and to pursue distinct strategies to drive profitable growth. Emerson and Network Power will each have sharper strategic focus, enabling both companies to better allocate resources, incentivize employees and allocate capital to capture the significant long-term opportunities in their respective markets.”

  • Emerson reported $24.5 billion in revenue for fiscal 2014. The company is based in St. Louis and currently has about 115,000 employees worldwide.

Alcatel-Lucent Lands Contracts with China Mobile and China Unicom

Alcatel-Lucent announced comprehensive frame agreements with China Mobile and China Unicom, two of the world’s largest telecommunications operators,  to facilitate the transition to dynamic cloud-based networks.

Under the agreements - both spanning a year and valued at up to RMB4.53 billion (EUR 656 million) and RMB3.59 billion (EUR 520 million) for China Mobile and China Unicom respectively - Alcatel-Lucent will deliver its mobile and fixed ultra-broadband access, IP routing, agile optical networking and network functions virtualization (NFV) capabilities, as well as Nuage Networks’ software defined networking (SDN) technologies.

The agreements, signed in Toulouse during a visit by Li Keqiang, China’s Premier of the State Council, build on long-standing collaboration between Alcatel-Lucent and the two companies.

"This announcement is highly significant as it furthers Alcatel-Lucent’s role as a key technology provider in China and aligns perfectly with our strategy of bringing high-speed ultra-broadband access to open up new opportunities for both service providers and their customers alike. We are very pleased to continue working with both China Mobile and China Unicom to help them deliver on their commitments under the 'Broadband China' initiative," stated Michel Combes, CEO of Alcatel-Lucent.

Thursday, July 2, 2015

Renée James to Step Down as President of Intel

Renée James will be stepping down as President of Intel to pursue an external CEO role, effective January 2016.

Intel also announced a number of other executive changes.

  • Arvind Sodhani, President of Intel Capital, will retire in January after a distinguished 35-year career with the company. President of Mergers and Acquisitions Wendell Brooks will take an expanded role to also become President of Intel Capital. Merging these teams under one leader will allow clear focus across all investment opportunities for Intel.
  • On July 1, the Intel Security organization – formerly the independent McAfee division – was formally integrated into Intel operations under the leadership of General Manager Chris Young. This integration will deliver better technologies for our customers and more effective operations that enable Intel Security to advance the state of security across the industry.
  • Intel Communication and Devices Group General Manager Aicha Evans has been elevated to the company's Management Committee, reflecting the leadership role she plays across Intel's business and the importance of communication and mobility to the company's growth strategy and product portfolio.
  • Josh Walden, General Manager of Intel's New Technology Group, now leads all product and research teams that create and deploy new technology categories, such as interactive computing devices, perceptual computing and wearable devices. 
  • Intel executives Hermann Eul and Mike Bell will leave the company after a transition period.

Wednesday, July 1, 2015

5G Novel Radio Multiservice Adaptive Network Project Gets Underway

As part of the 5GPPP initiative, vendors, operators, IT companies, enterprises and academia in Europe are joining forces to launch the 5G NORMA (5G Novel Radio Multiservice adaptive network Architecture) project to develop a novel, adaptive and future-proof mobile network architecture for the 5G era.

The 5G NORMA project, which is expected to run for 30 months, will propose an end-to-end architecture that takes into consideration both Radio Access Network (RAN) and Core Network aspects. The consortium envisions the architecture will enable unprecedented levels of network customizability to ensure that stringent performance, security, cost and energy requirements are met. It will also provide an API-driven architectural openness, fueling economic growth through over-the-top innovation.

Consortium members
Vendors and IT: Alcatel-Lucent, NEC, Nokia Networks, ATOS
Operators: Deutsche Telekom, Orange, Telefonica
Small and Medium-sized Enterprises: Azcom Technology, Nomor Research, Real Wireless
Academia: University of Kaiserslautern in Germany, King's College London, University Carlos III of Madrid

The technical approach is based on the innovative concept of adaptive (de)composition and allocation of network functions, which flexibly decomposes the network functions and places the resulting functions in the most appropriate location. By doing so, access and core functions may no longer reside in different locations, which is exploited to jointly optimize their operation whenever possible. The adaptability of the architecture is further strengthened by the innovative software-defined mobile network control and mobile multi-tenancy concepts and underpinned by corroborating demonstrations.
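The placement idea above can be made concrete with a toy sketch. The Python below is purely illustrative and not 5G NORMA code: it assigns each decomposed network function to an edge site or a central cloud based on an assumed latency budget, and the function names, budgets and round-trip figures are all invented for the example.

```python
# Toy placement sketch (not 5G NORMA code): put a network function at the
# edge when its latency budget is tighter than the round trip to a central
# data center. RTT figures are assumptions for illustration only.
EDGE_RTT_MS = 2       # assumed round trip to an edge site
CENTRAL_RTT_MS = 20   # assumed round trip to a central data center

def place_functions(functions):
    """functions: dict mapping function name -> max tolerable latency (ms)."""
    placement = {}
    for name, budget_ms in functions.items():
        if budget_ms < EDGE_RTT_MS:
            placement[name] = "infeasible"   # even the edge is too far away
        elif budget_ms < CENTRAL_RTT_MS:
            placement[name] = "edge"
        else:
            placement[name] = "central"
    return placement

# Hypothetical functions with hypothetical latency budgets.
demo = {"radio-scheduler": 5, "mobility-mgmt": 50, "video-cache": 15}
print(place_functions(demo))
```

In a real architecture the decision would also weigh cost, energy and tenant isolation, not latency alone, which is why the project treats placement as an adaptive, policy-driven step rather than a fixed rule like this one.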

"5G is not only about new radio access technology, network architecture will play an important role as well. 5G networks will have to be programmable, software driven and managed holistically to enable a diverse range of services in a profitable way. With 5G NORMA, the consortium aims to ensure economic sustainability of the network operation and open opportunities for new players, while leveraging a future-proof architecture in a cost- and energy-effective way," stated Dr. Werner Mohr, Chairman of the 5GPPP Association.

Level 3 Acquires Black Lotus for DDoS Mitigation

Level 3 Communications has acquired Black Lotus, a start-up offering global Distributed Denial of Service (DDoS) mitigation services. Details on the all-cash deal were not disclosed.

Black Lotus, which is based in San Francisco, operates a global DDoS mitigation network that monitors traffic on edge routers to generate sample data, called flows, which are then sent to an analysis platform. The traffic sample is evaluated to determine whether there is a DDoS attack against the destination IP; if so, traffic to that IP is diverted into one or more scrubbing centers. Once traffic is in the scrubbing center it can be filtered based on signatures (predefined traffic patterns known to be DDoS attacks) or heuristics (abnormalities in traffic patterns that may be indicative of a DDoS attack).
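The heuristic half of that pipeline can be sketched in a few lines. The Python below is an illustrative toy, not Black Lotus code: the flow-record format and the fixed packets-per-second threshold are assumptions made for the example.

```python
from collections import defaultdict

def detect_ddos(flows, pps_threshold=100_000):
    """Flag destination IPs whose aggregate packet rate looks attack-like.

    flows: iterable of (dst_ip, packets, duration_s) tuples, standing in for
    sampled flow records exported by edge routers. The fixed threshold is a
    toy heuristic; real systems combine signatures of known attack patterns
    with adaptive per-destination baselines.
    """
    pps = defaultdict(float)
    for dst_ip, packets, duration_s in flows:
        pps[dst_ip] += packets / max(duration_s, 1e-9)
    return {ip for ip, rate in pps.items() if rate > pps_threshold}

flows = [
    ("203.0.113.7", 6_000_000, 10.0),  # 600k pps: divert to scrubbing
    ("198.51.100.2", 5_000, 10.0),     # 500 pps: normal traffic
]
print(detect_ddos(flows))  # {'203.0.113.7'}
```

A production mitigation network would then redirect the flagged destination's traffic (for example, via a routing announcement) into a scrubbing center for filtering.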

Level 3 said the acquisition of Black Lotus will add additional capabilities to the existing Level 3 DDoS service, which launched earlier in the year, including adding extra scrubbing centers. The Level 3 DDoS Mitigation service provides an enhanced network-based detection and mitigation scrubbing solution alongside network routing, rate limiting and IP filtering abilities. Black Lotus adds proxy-based DDoS mitigation services to the portfolio providing additional capabilities for application layer attacks, along with advanced behavioral analytics technology. The application layer is a prime target for DDoS attacks that often impact Web servers and Web hosting providers.

"At Level 3, we value security and are committed to protecting our customers and our network," said Chris Richter, senior vice president of managed security services at Level 3. "Black Lotus' proxy and behavioral technologies, combined with their experienced team of DDoS experts, perfectly complements Level 3's DDoS mitigation and threat intelligence capabilities. With this acquisition, Level 3 continues its commitment of investing in a comprehensive portfolio of services that enhance the growth, efficiency and security of our customers' operations, helping enterprises combat the cybersecurity challenges they face every day."

Interoute Claims Fastest Transatlantic Cloud

Interoute claims that its global cloud platform, Interoute Virtual Data Centre (VDC), delivers nearly double the transatlantic throughput of the next best cloud provider, according to research conducted by Cloud Spectator.

The research from March 2015 compared Interoute VDC with three leading cloud providers (Amazon AWS, Rackspace and Microsoft Azure), testing network throughput and latency between Europe and the USA and between the providers' European data centres. In all of the comparisons, Interoute VDC demonstrated the highest throughput and lowest latency.

Key research findings:


  • Interoute VDC delivered 1.1 Gbps throughput, which was 96% better than Amazon AWS, 141% better than Rackspace, and 195% better than Microsoft Azure.
  • Interoute VDC had the lowest latency between its London and New York data centres. Interoute was the only provider in the comparison with both of its transatlantic data centres located in key business cities, meaning that VDC users can access compute and storage resources, and deliver data to their customers, from two centres of European and US business activity. 

Within Europe:

  • Interoute VDC achieved 1.3 Gbps throughput between its London and Amsterdam data centres. This was 52% better than Amazon AWS (Dublin – Frankfurt) and 73% better than Microsoft Azure (Dublin – Amsterdam).
  • Interoute VDC achieved a latency of 6 milliseconds between London and Amsterdam, over three times better than the inter-data centre latency of the comparison providers.

“This independent report confirms and validates our networked cloud strategy. Building cloud into a world class network provides our customers with significantly better performance when compared with the traditional cloud models. Businesses looking to grow between Europe and US should definitely be looking at the importance of these network characteristics for their ability to shift workloads into the cloud. Interoute’s fourteen global zones are all built into high performance network with over 300 interconnects in Europe alone. So wherever you choose to put your data and connect to us, your services are typically going to perform faster on Interoute than on many other global providers,” stated Matthew Finnie, CTO of Interoute.

Ixia Ships its Private and Hybrid Cloud Visibility Solution

Ixia has enhanced its Net Tool Optimizer (NTO) platform with single-pane-of-glass visibility into hybrid cloud deployments, allowing a unified view of the entire network.

The new major capabilities delivered in this release include:

  • Unified visibility for virtual and physical networks under a single management interface
  • Intelligent routing of application traffic
  • Double the available interconnects with existing hardware
  • High speed visibility with the 4x40Gbps Advanced Feature Module (AFM) hardware
  • Enabling visibility without borders: insight into hybrid cloud deployments

Ixia said its graphical user interface, patented dynamic filtering technology and "virtual tap" enable visibility into hybrid cloud deployments. Ixia’s NTO now controls a consolidated data flow from physical and virtual networks that can then be shared with existing security and application performance tools. This extends those tools' life through the virtualization migration, leveraging existing infrastructure and reducing transition risk.

Ixia is also announcing the availability of a new advanced 160Gbps feature module, with 4x40Gbps interfaces. This module supports all existing packet grooming features such as de-duplication, slicing, stripping and tunnel termination.

June was busy for Start-ups + Networking + Cloud + Security

Cisco to acquire OpenDNS for $635 Million

Cisco agreed to acquire OpenDNS, a privately held security company based in San Francisco, for approximately $635 million in cash and assumed equity awards. OpenDNS provides a secure DNS offering with advanced threat protection for "any device, across any port, protocol or app." Its predictive security model is designed to anticipate malicious activity, including botnets and phishing. Its DNSCrypt technology converts regular DNS traffic into encrypted...

Distil Raises $21M for Bot Detection and Mitigation

Distil Networks, a start-up with offices in Arlington, Virginia and San Francisco, raised $21 million in Series B funding for its bot detection and mitigation solution. Distil helps to defend websites against malicious bots used for web scraping, brute force attacks, competitive data mining, account hijacking, unauthorized vulnerability scans, spam, man-in-the-middle attacks and click fraud. Its unique approach monitors every single Web request...

HackerOne Raises $25 Million for Vulnerability Tracking

HackerOne, a start-up based in San Francisco with offices in the Netherlands, raised $25 million in Series B funding for its vulnerability management and bug bounty platform. HackerOne, which was created by people who scaled a new security approach at Facebook, Microsoft and Google, relies on the worldwide hacker community to find and disclose software security holes. The company said it can identify security vulnerabilities on a continuous basis,...

AtScale Raises Funding for Business Intelligence Interface for Hadoop

AtScale, a start-up based in San Mateo, California, announced $7 million in Series A funding for its business intelligence solution for Hadoop. AtScale aims to be the glue between two fast-growing but currently disconnected markets: Big Data, estimated at $50B by Wikibon, and the Business Analytics industry, a space IDC predicts will reach $59.2B in 2018. AtScale requires no data movement and no new visualization interface to act as the business...

Enigma Raises $28 Million for its Analytics Engine

Enigma, a start-up based in New York City, announced a $28.2 million Series B funding round to support its work in data discovery and analytics. Enigma has developed an analytics engine that can search, discover and connect billions of previously unlinked public records from thousands of governments and organizations across the world. Enigma is now focusing these capabilities on the enterprise market, bringing its unique corpus of public data...

Portworx Targets Container-Aware Storage

Portworx, a start-up based in Redwood City, California, unveiled its solution for providing elastic, scale-out block storage natively to Docker containers. Portworx PWX Converged Infrastructure for Containers allows Dockerized applications to execute directly on the storage infrastructure. It also enables Dockerized applications to be scheduled across machines and clouds, making possible the deployment of stateful, distributed applications. Key...

Rancher Labs Launches its Container Infrastructure Platform

Rancher Labs, a start-up based in Cupertino, California, announced the beta release of its platform for running Docker in production. It includes a fully-integrated set of infrastructure services purpose-built for containers, including networking, storage management, load balancing, service discovery, monitoring, and resource management. Rancher connects these infrastructure services with standard Docker plugins and application management tools,...

Start-up Targets Network/Storage for Linux Containers

A start-up based in San Jose previewed its solution for delivering network and storage solutions for Linux containers. Project 6 is software for deploying and managing Docker containers across a cluster of hosts, with a focus on simplifying on-premises environments. The company is making it easy to pack stateless and stateful applications onto the same environment by integrating Docker and Google’s Kubernetes with additional capabilities...

Corsa Introduces SDN Metering and QoS for Big Data

Corsa Technology, a start-up based in Ottawa, unveiled new SDN metering and QoS (Quality of Service) capability for its line of performance SDN hardware. Bandwidth reservation is seen as especially interesting for organizations running Big Data workloads. The traffic engineering function, which is based on OpenFlow 1.3, allows network architects to better manage bandwidth across their network with dynamic, policy-aware metering and QoS. Metering...

PLUMgrid Names Larry Lang as CEO

PLUMgrid, a start-up offering an Open Networking Suite (ONS) for clouds based on OpenStack, named Larry Lang as chief executive officer. Founder Awais Nemat has been appointed chairman of the board of directors. Lang has held executive positions including president and CEO of Quorum Labs, vice president and general manager of the mobile internet business unit at Cisco Systems, and vice president of product management at Ipsilon Networks, now part...

AppFormix Promises Better Orchestration of VMs and Docker Containers

AppFormix, a start-up based in San Jose, California, emerged from stealth mode to unveil its platform for managing physical infrastructure and orchestrating virtual machines and Docker containers by leveraging OpenStack, Kubernetes and Mesos. The company said its goal is to create a fully optimized software-defined data center by bridging the gap between application requirements and the underlying resources. The AppFormix software solution provides...

Menlo Security Raises $25 million for Isolation Platform

Menlo Security emerged from stealth to unveil its Isolation Platform, a new technology that eliminates the threat of malware from key attack vectors, including Web and email. The solution does not use endpoint software. Instead, the Menlo Security Isolation Platform isolates and executes all Web content in the cloud and away from the endpoint. It uses patent-pending, clientless rendering technology, Adaptive Clientless Rendering (ACR), to deliver...

Avi Networks Integrates Cloud ADC with Cisco ACI

Avi Networks, a start-up based in Sunnyvale, California, has integrated its Cloud Application Delivery Controller with Cisco's Application Centric Infrastructure (ACI). Avi Networks, which was founded by key engineers behind Cisco's Nexus data center platforms, offers a software-only load balancer that adopts the same approach taken by large cloud service providers, such as Amazon, Facebook and Google, in that it runs entirely on x86.  The...

Cisco to Acquire Piston Cloud for OpenStack

Cisco agreed to acquire Piston Cloud Computing, a start-up based in San Francisco, for its enterprise OpenStack solutions. Financial terms were not disclosed. Piston Enterprise OpenStack is designed for building, scaling and managing a private Infrastructure-as-a-Service (IaaS) cloud on bare-metal, converged commodity hardware.  Piston Cloud enables Cloud Foundry's Platform-as-a-Service (PaaS) offering to run on OpenStack. It also supports...

IBM Acquires Blue Box for OpenStack Cloud Migration

IBM has acquired Blue Box Group, a managed private cloud provider built on OpenStack. Financial terms were not disclosed. Blue Box, which is based in Seattle, provides a private cloud as a service platform designed to enable easier deployment of workloads across hybrid cloud environments. IBM said the acquisition reinforces its commitment to deliver flexible cloud computing models that make it easier for customers to move to data and applications...

AccelOps Builds Threat Intelligence into its Actionable Security Platform

AccelOps, a start-up based in Santa Clara, California, introduced threat intelligence capabilities for its integrated IT and operational visibility platform. The existing AccelOps virtual appliance software monitors security, performance and compliance in cloud and virtualized infrastructures on a single screen. It automatically discovers, analyzes and automates IT issues in machine and big data across organizations’ data centers and cloud resources,...

Hedvig Raises $18 Million for Distributed, Software-Defined Storage

Hedvig, a start-up based in Santa Clara, California, announced $18 million in Series B funding for its software-defined storage solution designed to bring "the power of Amazon and Facebook-like infrastructure to any enterprise data center." Hedvig has developed a Distributed Storage Platform that combines cloud and commodity infrastructure. The system creates a virtualized pool that provisions storage with a few clicks, scales to petabytes and...

Tuesday, June 30, 2015

Azure Service Fabric Powers Microsoft's Cloud

Microsoft's Azure Service Fabric is a microservice application platform that allows developers to decompose their work into logical subsystems that are loosely coupled and can be updated independently.

In this video, Mark Russinovich, Chief Technology Officer for Microsoft Azure, talks about how Azure Service Fabric is becoming a key differentiator for the company's cloud initiatives.

Recorded at Open Networking Summit 2015 in Santa Clara, California.

#ONS2015 - Microsoft Azure Puts SDN at Center of its Hyperscale Cloud

To handle its hyperscale growth, Microsoft Azure must integrate the latest compute and storage technologies into a truly software-defined infrastructure, said Mark Russinovich, Chief Technology Officer of Microsoft Azure in a keynote presentation at the Open Networking Summit in Santa Clara, California. The talk covered how Microsoft is building its hyperscale SDN, including its own scalable controllers and hardware-accelerated hosts.  Microsoft...

More on core technologies for enabling hyperscale clouds

See Brad Booth on Hierarchical SDN, the move toward on-board optics, and Flexible Ethernet for data center operations.

Blueprint: Two-factor Authentication Signals the Death of the Password and Physical Token

by Andy Kemshall, Co-founder and CTO of SecurEnvoy

Considering the frequency and severity of data breaches today, we have reached a point of Cybercrime 2.0, and that requires an approach of Security 2.0. The challenge of protecting company data and systems is compounded by a continually evolving IT infrastructure. Companies need enhanced authentication solutions that allow them to protect remote access to the data and resources critical for operations. With that, the case for multi-factor authentication becomes stronger.

According to the Ponemon “2015 Cost of Data Breach Study: Global Analysis,” the average total cost of a data breach increased from $3.52 million to $3.79 million. The average cost paid for each lost or stolen record containing sensitive and confidential information increased from $145 in 2014 to $154 in this year’s study.

Once considered only for high-end companies such as banks, today companies large and small in the government, healthcare, energy, financial services, insurance, manufacturing, marketing, retail, telecommunications, charity, legal and construction sectors are turning to two-factor authentication (2FA) for their internal security needs. Although the evolution is slow, a change in attitude is taking place due to growing concern over what a breach can result in: company downtime, lawsuits, lost business and a damaged reputation. This is motivating executives to pay closer attention to their company’s security.

Within a work environment, most companies rely on standard security measures: either a simple username and password, or a physical token, to enable employees to access important data and applications.

The Password

Over the years, we’ve trusted the password and its ability to keep our companies safe from thieves and those who would do us harm. But passwords reached an impasse years ago: today they need 12 characters, or you need to write them down in order to keep track of them. Moore’s law tells us that every two years computing power doubles – meaning every two years the amount of time it takes to crack a password using a brute-force attack is cut in half. It has now reached the point where a password can be cracked in minutes, sometimes seconds. The antidote: 2FA. This incorporates something you know, such as a password or PIN; something you are, such as a fingerprint or retinal scan; and something you own, which can be either a physical token or a soft token on a device you use every day, such as a mobile phone. The idea behind 2FA is to bring two of these separate methods together for a stronger level of security, should one of the methods become compromised.
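The "something you own" factor is typically realized as a one-time password derived from a shared secret, as standardized in RFC 4226 (HOTP) and RFC 6238 (TOTP). The sketch below shows the standard algorithm itself, not any particular vendor's implementation; the secret shown is the RFC test-vector key, and a real deployment would provision a random per-user seed.

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)                       # 8-byte counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, for_time=None, step=30, digits=6):
    """RFC 6238 time-based OTP: HOTP over a 30-second time counter."""
    t = time.time() if for_time is None else for_time
    return hotp(secret, int(t // step), digits)

# RFC test-vector secret; real deployments use a random per-user seed.
secret = b"12345678901234567890"
print(hotp(secret, 0))  # 755224 (RFC 4226 test vector)
```

Because both the server and the device derive the code from the same stored seed, whoever holds the seed records effectively holds the keys, which is why the custody of seed records matters when choosing a deployment model.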

The Physical Token

Companies employing traditional physical tokens are likely to experience several downsides: contractors and employees can misplace them, overloading the IT department with replacements; physical tokens do not scale well; they can be expensive; deployment of a newer version can take a while (three months to a year); and they are less secure than mobile-based 2FA.

These are non-issues with a mobile-device 2FA approach, which is extremely simple to deploy and easy to use, and employee adoption is quick. There are seven billion GSM devices in the world, and people are very attached to their mobile devices. Also, if employees want to upgrade their mobile device, the user self-enrolls the new device, rendering the old one safe for disposal.

Lastly, consider the cost of tokens versus a mobile 2FA approach. The life of a token is three to five years, and replacing all of them in a medium or large-sized company can cost hundreds of thousands of dollars, plus it can take three to twelve months to complete the rollout. This holds companies back in terms of productivity. A mobile 2FA approach simply leverages devices employees already carry, saving companies the money and time of changing over to new systems.

Implementation of 2FA

If a company wishes to implement a mobile 2FA approach for its network architecture, it can deploy it in three different ways: on-premise, through a managed service provider (MSP) or via the cloud.

On-premise allows direct integration within your own network, dovetailing seamlessly with existing infrastructure. A major benefit is that user data resides within the company and leverages existing replication infrastructure such as Active Directory.

Some solution providers have a partner network for MSP deployment. Utilizing a dedicated MSP partner allows greater choice of integration to suit your network. This approach also allows a security vendor to take over the overall operation and day-to-day administration of your tokenless two-factor authentication system. By reducing the burden on your own resources, this approach makes it easy for the vendor to provide 2FA solutions for the cloud, integrating the login seamlessly into your environment.

Although on-premise is the ideal approach, the cloud should be considered for different setups, for SMBs, and for companies with several servers across several locations. Although a lot of companies turn to the cloud as a solution, when it comes to security there are drawbacks. These include:

  • The need for constant synchronization with user information any time it changes;
  • A cloud environment can be seized by any government; and
  • The cloud environment stores the seed records (with sensitive information and passwords), which can be hacked.

An additional advantage of the on-premise approach is that the seed records remain under your company’s control, as security providers like SecurEnvoy do not hold any seed records.

In conclusion, two-factor authentication via mobile devices is evolving into an ideal method for authenticating the end user and should be considered today. It is stronger; adoption is easy, as end users can pick which mobile device they use (and, in some cases, how they receive a passcode: via SMS, email or voice); it is simple to deploy; and, overall, it costs less.

About the Author

Andy Kemshall, Co-Founder and CTO at SecurEnvoy, is one of the leading European experts in two-factor authentication. He brings nearly 20 years of IT security authentication experience to SecurEnvoy. Andy is the inventor of both SMS-based and secure email recipient-based two-factor authentication, and more recently NFC-based one-swipe authentication. Prior to his role at SecurEnvoy, Andy was one of the original customer-facing technical experts at RSA Europe. While at RSA, he served as the Sales Engineering Manager, where he managed high-level customer relationships, developed the product and advised RSA HQ on new and emerging technologies from Europe.

About SecurEnvoy

SecurEnvoy is the trusted global leader in mobile phone-based Tokenless® two-factor authentication. Its innovative approach to the multi-factor authentication market now sees millions of users benefitting from its solutions all over the world. Controlling endpoints located across five continents, SecurEnvoy designs innovative two-step verification solutions that leverage both the device the user already carries and their existing infrastructure. The solutions are the fastest to deploy and the most secure in the industry. With no hardware or deployment issues, costs are dramatically reduced and easily managed.

Ponemon’s 2015 Cost of Data Breach Study: Global Analysis

Got an idea for a Blueprint column?  We welcome your ideas on next gen network architecture.
See our guidelines.