Showing posts with label Conference. Show all posts

Sunday, January 12, 2020

MPLS SDN NFV World : Announcing the Programme

The 22nd Edition of the MPLS + SDN + NFV World Congress will take place from 31st March to 3rd April 2020. Here is a summary of the programme.

 AI and 5G Impact for Networks: Status & Perspectives 

The 2020 agenda covers (1) the impact of AI and 5G on IP/MPLS networks; (2) whether SD-WAN and MPLS are complementary technologies; (3) current encoding options for SR; (4) the impact of 5G and IoT on IP networks; (5) how far disaggregation must go; (6) AIOps: evolution or revolution?

The 1st plenary session will be shared with the Third Edition of the AI Net conference. 
The Sessions: Disaggregation, Service Assurance, AI, Segment Routing, SD-WAN, 5G Architectures, Network Slicing, Automation 
Track 1 of the conference covers in detail the recent evolutions and perspectives of Segment Routing. Also addressed: the SD-WAN phenomenon and the global path to its deployment at scale, including Automation aspects. 
Track 2 addresses 5G and its impact on the connectivity network as well as Network Slicing challenges, NFV and IP/Optical integration. 
Track 3 explores AI/ML potential for network operations, Reinforcement Learning and Self Healing Networks. 
A Strong Presence of Service Providers and OTTs 
As in previous years, the agenda benefits from numerous contributions from SPs and OTTs. 
Verizon, Orange, Telefonica, BT, Vodafone, Deutsche Telekom, Telia, Turk Telekom, SFR, Telecom Argentina, China Mobile, Colt, Line Corporation, Charter, Brazilian National Research Network will describe their current deployments and explain their expectations.

Sunday, July 20, 2014

Hot Interconnects Symposium - August 26-28

The 22nd annual IEEE Hot Interconnects symposium will be held at Google headquarters in Mountain View, California on August 26-28.

The event is an international forum for researchers and developers of state-of-the-art hardware and software architectures and implementations for interconnection networks of all scales, ranging from multi-core on-chip interconnects to those within systems, clusters, and data centers.

Early registration ends July 31.

Thursday, April 3, 2014

MEF GEN14 Conference to Showcase Global Ethernet - Nov 2014

Kevin Vachon introduces MEF GEN14, a global gathering of the Ethernet community planned for November 17-20, 2014 in Washington, D.C.  The conference will cover the full range of Carrier Ethernet topics, from enterprise business services and cloud services to mobile backhaul.  The exhibition will showcase proofs-of-concept for future Carrier Ethernet capabilities.

Dates: November 17-20, 2014
Venue: Gaylord National in Washington, D.C.

Converge! Network Digest is a media partner for GEN14.

See 2-min introduction to GEN14

Sunday, March 9, 2014

Where is SDN today? #ONS2014 participants share views...

See one-minute views on the state of SDN in the market today from:

  • Niel Viljoen, CEO & Founder, Netronome
  • Marc Cohn, Chair of Market Education Committee, Open Networking Foundation
  • Umesh Kukreja, Director of Product Marketing, Riverbed
  • Lillian Withrow, Chief Financial Officer, Adara
  • Richard Platt, CTO & VP of Engineering, Netsocket
  • Vish Nandlall, Ph.D, SVP Strategy & CTO, RNAM, Ericsson
  • Chris Swan, SVP - Global Field Organization, Overture Networks
  • Andrew Randall, GM, Networking Business Unit, Metaswitch Networks
  • Michael Capuano, VP of Corporate Marketing, Infinera
  • Steve West, Chief Technology Officer, Cyan Inc
  • Karl May, President and CEO, Vello Systems


Thursday, January 9, 2014

OpenDaylight Summit Scheduled for February 4-5 in Santa Clara

The OpenDaylight Project will host a conference in Santa Clara, California, February 4-5, 2014, to unite developers and users across enterprises, carriers and equipment providers for a collaborative and educational SDN and NFV experience.

Keynote speakers for The OpenDaylight Summit include:

  • Neela Jacques, executive director for OpenDaylight, “Commencement.”
  • Christos Kolias, senior research scientist, OpenFlow/SDN technical lead, network architecture at Orange.
  • Jun Park (Ph.D), senior systems architect at Bluehost.
  • Erik Ekudden, vice president and head of technology strategies, Ericsson, “Accelerating the Network with Open Source Software.”
  • Vijoy Pandey, chief technology officer of Network OS and a distinguished engineer at IBM, “Building an Open Adaptive and Responsive Data Center using OpenDaylight.” 
  • A user panel “Forming and Norming for SDN/NFV: Where to Support Innovation and Where to Simplify Life with Standards” featuring Open Networking User Group co-founder Nick Lippis, Open Networking Foundation MEC chair Marc Cohn, Open Networking Research Center’s executive director Dr. Guru Parulkar and ETSI’s NFV group leader Christos Kolias, moderated by Neela Jacques.
  • Inder Gopal, chairman, OpenDaylight board of directors, “OpenDaylight: What’s Next?”

A panel of OpenDaylight developers will discuss what’s on the road map for 2014, moderated by David M. Meyer, chair of the OpenDaylight Technical Steering Committee.

Tuesday, November 12, 2013

Amazon's "AWS re:invent" Sold Out in Las Vegas

Amazon's "AWS re:invent" conference this week in Las Vegas (Sands Expo Hall) is completely sold out.

The event has over 175 scheduled sessions covering best practices for AWS services, training bootcamps, hands-on labs, and a hackathon.  The registration fee was $1,299.

Wednesday, November 6, 2013

Ericsson Investor Conference: Growing Twice as Fast as Market, Services & Support

Ericsson presented an update on its business fundamentals, market outlook, competitive positioning and investment strategy at its annual Investor Day conference in Stockholm.

"Looking at sales growth in a longer perspective, it is encouraging to see that we grew twice as fast as the market in 2010-2012, currency adjusted. This is proof that our strategy is effective and that we are delivering real value to our customers,"
stated Hans Vestberg, Ericsson's CEO.

Some notes on the event:

  • Ericsson's key competitive assets are (1) Technology leadership (2) Services leadership (3) Global scale.
  • Ericsson expects steady growth across all areas with no major changes in figures for the main compound annual growth rates (CAGR), compared with last year. Ericsson estimates that the total network equipment market will show a CAGR of 3-5%; telecom services is estimated to show a CAGR of 5-7%; and the market for support solutions is forecasted to show a CAGR of 9-11%.
  • Over 1 billion subscribers are currently directly managed by Ericsson, and 2.5 billion subscribers are supported.
  • Ericsson has 64,000 service professionals in the field.  It has 114,000 employees in total in 180 countries. About 24,000 employees are in R&D.
  • 50% of LTE smartphone traffic traverses Ericsson equipment.
  • In the TV & Media Business, where Ericsson has recently acquired Microsoft's Mediaroom division, Ericsson claims 25% IPTV market share with over 13 million subscribers.
  • In patent licensing, Ericsson earned SEK 6.6 billion (US$1.02 billion) in 2012.  LTE smartphone uptake is a key driver going forward.
  • Services revenue will likely continue to increase as a % of total sales over time.  Software sales will also gradually increase as the company evolves.
  • Managed Services deals are characterized by long-term engagements, multi-vendor deployments, customer OPEX savings, and high renewal rates.  Ericsson's global scale is a key advantage.
  • By 2019, Ericsson expects about 65% of the world's population will be covered by LTE.
  • Ericsson invests 14.4% of its revenue in R&D.  About 80% of R&D is now focused on software.
  • The concept of "network slicing" will be key for how Service Providers leverage SDN and the cloud.
  • Service Provider networks need to become media-aware.
  • 5G, which is now in the earliest development stages, will aim for high-capacity, coverage everywhere, super low-latency, super high-speed and cost efficiency.
  • 70% of mobile traffic is generated indoors.
  • Ericsson now has 78 contracts for its SSR and 39 live networks.
  • Ericsson now has 21 VoLTE/IMS contracts.
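
The CAGR figures in the notes above compound over multi-year horizons. As a quick back-of-the-envelope illustration of how such a rate projects forward (the base value and rate below are hypothetical, chosen only to show the arithmetic):

```python
def project(base, cagr, years):
    """Compound a base market size at a given CAGR over a number of years."""
    return base * (1 + cagr) ** years

# A market of 100 (arbitrary units) growing at a 5% CAGR for three years:
size = project(100.0, 0.05, 3)  # 100 * 1.05**3 = 115.7625
```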

Sunday, October 27, 2013

Huawei Demos SDN Virtual Transport Service

Huawei demonstrated a transport SDN prototype capable of delivering Network on Demand (NoD) services.

Huawei's SDN virtual transport service solution is based on its Path Computation Element (PCE) architecture.

The goal of the virtual transport service platform is to provide users with multiple modeled virtual topologies through programmable interfaces, meeting user requirements for on-demand virtual network services (such as NoD) in real time.

Huawei said its platform can help carriers deliver customized services and deploy value-added services efficiently and flexibly. An intelligent Transport Path Engine (TPE) algorithm ensures dynamic network resource optimization.

The announcement follows Huawei's demonstration of a transport SDN controller prototype launched last year.

SoftCOM is Huawei's end-to-end ICT network architecture based on concepts such as cloud computing, SDN, NFV, and network openness.

The demonstration occurred at the recent Layer123 conference in Bad Homburg, Germany.  In a keynote address at the conference, Justin Dustzadeh, Huawei's CTO & VP Technology Strategy, said Carrier SDN ultimately will be an end-to-end framework spanning Last Mile, Access, Aggregation, Metro & Core, and Data Center equipment. He presented several SDN use cases, including:

  • SDN-based Mobile Backhaul
  • Virtualized Residential Gateways with virtual set-top boxes
  • Traffic Steering for Gi-LANs
  • Cloud IMS
  • SDN-based Transport Network Virtualization

Huawei is also continuing work on its Protocol-Oblivious Forwarding (POF) technology, a software-defined networking (SDN) forwarding implementation.  The goal is to leverage the OpenFlow protocol in a programming model where forwarding devices are no longer limited by pre-defined packet protocols or forwarding rules.  Data plane hardware would not be limited by hard-wired protocol implementations. POF has been prototyped on Huawei's NE5000E Core Router.
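
The protocol-oblivious idea can be sketched as matching on raw byte offsets rather than named header fields. The toy below is illustrative only, not Huawei's POF API; the rule format and action strings are invented for the example:

```python
def match(packet: bytes, rules, default="drop"):
    """Return the action of the first rule whose (offset, length, value)
    tuples all match the raw packet bytes -- no named protocol fields."""
    for fields, action in rules:
        if all(packet[off:off + ln] == val for off, ln, val in fields):
            return action
    return default

# A rule matching 2 bytes at offset 12 (where the EtherType happens to sit in
# an Ethernet frame) equal to 0x0800 -- with no built-in notion of "EtherType":
rules = [([(12, 2, b"\x08\x00")], "forward:port1")]
```

The point of the sketch is that a new protocol needs only new (offset, length) rules, not new hardware parsing logic.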

Wednesday, October 23, 2013

Open Server Summit: Cloud Networking for Warehouse-sized Data Centers

The big six cloud companies are on CAPEX growth trajectories that will take them past the traditional telcos in a few years, said Andy Bechtolsheim, Chairman of Arista Networks.
There is a race to build gigantic data centers that are more efficient and powerful than anything seen before.  Speaking at the Open Server Summit in Santa Clara, California, Bechtolsheim said the expected gains from Moore's Law, coupled with promising developments in silicon photonics and virtualization technologies, make it likely that hyper-scale data centers will continue to hold a competitive advantage over the coming decade.

So, how do you build a network for a data center with 100,000+ servers and millions of VMs?  Bechtolsheim said his company has already seen (and won) competitive bids for 100,000+ ports of non-blocking 10 GigE server connections.  The ideal network, he said, should be truly transparent, flat and universal: the bandwidth and latency between any two servers in the data center should be the same.  A top-of-rack switching architecture is preferred.  To scale to the biggest of data centers, Arista has developed a "Spine-of-Spine" network architecture that could link up to 884,000 servers at 10G with 3:1 oversubscription.
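
As a rough illustration of how such two-tier leaf/spine fabrics scale, the server count is bounded by the spine port count and the per-leaf oversubscription ratio. The port counts below are hypothetical, not Arista's actual figures:

```python
def two_tier_capacity(spine_radix, leaf_uplinks, leaf_server_ports):
    """Servers reachable in a two-tier leaf/spine fabric.

    Each leaf runs one uplink to each spine, so the number of leaves is
    bounded by the spine port count (radix). Oversubscription is the ratio
    of server-facing to uplink bandwidth per leaf, assuming equal port speeds.
    """
    leaves = spine_radix
    oversub = leaf_server_ports / leaf_uplinks
    return leaves * leaf_server_ports, oversub

# e.g. 576-port spines, leaves with 48 x 10G server ports and 16 x 10G uplinks:
servers, ratio = two_tier_capacity(576, 16, 48)  # 27648 servers at 3:1
```

Growing the spine radix (or adding a tier, as in the "Spine-of-Spine" design) is what pushes the reachable server count toward the hundreds of thousands.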

Sunday, October 20, 2013

Layer123 SDN & OpenFlow World Congress: DT's Terastream Project

"The fundamental premise of Terastream is simplification," said Axel Clauberg, VP Aggregation, Transport, IP & Fixed Access, Deutsche Telekom AG, speaking at last week's Layer123 conference in Bad Homburg, Germany.

The Terastream project, which is a next generation network project of Deutsche Telekom, in many ways reflects the business drivers and next steps for SDN and NFV.  Clauberg said operators must continue to invest in their networks in order to handle explosive traffic growth and deliver an excellent customer experience. Competition is tough not only because of traditional challengers but because of over-the-top (OTT) players who are able to move very fast because they live in a software-based world.

The Terastream architecture answers these challenges, said Clauberg, by radically simplifying the infrastructure and eliminating legacy protocols.  What should the network look like if all the traffic were IP?  What if all of the equipment for delivering services could be centralized in an infrastructure cloud data center?  Once the network operator has a data center that serves as an infrastructure cloud, it is easy to see how other network functions could be virtualized.

Key enabling technologies for Terastream include 100G coherent optics, IP/DWDM, IPv6, a real-time OSS, KVM hypervisors, and OpenStack.  IPv4 is only a service produced in the network. Clauberg said one early decision was to push for open standard solutions. The infrastructure cloud will be run on Layer 3 networking.  There is a strong push to make everything fully automated and secure.

Terastream has already been deployed in Croatia -- part of the DT global footprint -- as a proof-of-concept.  The first rollout occurred in a matter of months.  Over 500 customers are already getting up to Gigabit access speeds with native IPv6 service.

"We are on a path toward becoming a software-defined operator," said Clauberg.  However, even though operators are moving from hardware to these software business models, his call to the industry is to keep innovating in both domains.

A DT slide deck from last year.

  • In September, Deutsche Telekom confirmed multi-vendor interoperability on a 100 Gigabit Ethernet long haul link on TeraStream pilot network in Croatia.  The tests, which were a joint effort between Alcatel-Lucent, Cisco Systems, Hrvatski Telekom and Deutsche Telekom, represent an important milestone towards the future of generally available 100GbE networking.
    The link, the industry’s first of its kind, consisted of 600 km of Standard Single Mode Fiber (SMF) between the cities of Split and Varazdin, leveraging the TeraStream colorless, open-spectrum, passive network. The 100 Gbps signal, carrying live production traffic, used ITU-standard 50 GHz spacing with a high-gain forward error correction algorithm capable of achieving distances comparable to 10 Gbps transmission. The implementation used pre-standard technology aligned with work in ITU-T Study Group 15.  Multi-vendor interoperability on the 100 Gigabit Ethernet (GbE) long-haul DWDM links was established between IP routers from Cisco Systems and Alcatel-Lucent. Cortina Systems also provided technology for the tests.

Monday, July 22, 2013

Intel: Re-Architecting the Data Center

Intel unveiled its plans to "re-architect the data center" with a new generation of 22nm Atom processors, future 14nm system-on-chip (SoC) products, smart storage options, new rack designs and virtualized network technologies.

In a press event in San Francisco, Intel executives said the new data center infrastructure strategy arrives just in time to handle the massive growth of information technology services in the data center. These mega trends include the global proliferation of smartphones, online video, cloud-based software, and big data applications.

"Datacenters are entering a new era of rapid service delivery," said Diane Bryant, senior vice president and general manager of the Datacenter and Connected Systems Group at Intel. "Across network, storage and servers we continue to see significant opportunities for growth. In many cases, it requires a new approach to deliver the scale and efficiency required, and today we are unveiling the near and long-term actions to enable this transformation."

The next-generation Intel Atom processor C2000 product family is aimed at low-energy, high-density microservers and storage (codenamed "Avoton") and network devices (codenamed "Rangeley"). This second generation of Intel's 64-bit SoCs is expected to become available later this year and will be based on the company's 22nm process technology and Silvermont microarchitecture. It will feature up to eight cores with integrated Ethernet and support for up to 64GB of memory. Intel estimates the new chips will deliver up to four times the energy efficiency and up to seven times more performance than the first-generation Intel Atom processor-based server SoCs introduced in December last year. Sampling is underway.

Intel also outlined its roadmap for products based on its forthcoming 14nm process technology, which is scheduled for 2014 and beyond. These products, which are aimed at microservers, storage and network devices, will include the next generation of Intel Xeon processors E3 family (codenamed "Broadwell").  It also includes the next generation of Intel Atom processor SoCs (codenamed "Denverton").

Inside the data center, Intel's Rack Scale Architecture (RSA) promises to dramatically increase the utilization and flexibility of the datacenter by moving to pooled compute, memory and I/O resources in a rack. These resources will have shared power, cooling and rack management software.  Optical interconnects could be used as a "rack fabric" uniting all these resources. Each component would be modular, enabling easy upgrade paths for compute, memory or I/O components.  Rackspace Hosting is already deploying server racks based on this RSA vision.  Rackspace is also a big backer of OpenStack.

On the networking front, Intel is backing SDN to maximize network bandwidth, significantly reduce cost and provide the flexibility to offer new services. The goal is to move from manually configured networks to flexible, open systems for rapid provisioning of specialized services.

Intel introduced Open Network Platform reference designs to help OEMs build and deploy this new generation of networking equipment.

In April 2013, Intel introduced three platforms for software defined networking (SDN) and network function virtualization (NFV):

  • The Intel Open Network Platform Switch Reference Design, previously codenamed "Seacliff Trail," is based on scalable Intel processors, the Intel Ethernet Switch 6700 series and the Intel Communications Chipset 89xx series.  It will include Wind River Open Network Software (ONS), an open and fully customizable network switching software stack using Wind River Linux. Wind River ONS allows for key networking capabilities such as advanced tunneling, as well as a modular, open control plane and management interface supporting SDN standards such as OpenFlow and Open vSwitch. Common, open programming interfaces allow for automated network management, and coordination between the server switching elements and network switches enables more cost-effective, secure, efficient and extensible services.
  • The Intel Data Plane Development Kit (DPDK) Accelerated Open vSwitch -- a project aimed at improving the small-packet throughput and workload performance that can be achieved on Open vSwitch.  Intel is specifically re-creating the kernel forwarding module (data plane) to take advantage of the Intel DPDK library. The Intel DPDK Accelerated Open vSwitch is planned to initially be released with the Intel ONP Server Reference Design in the third quarter of this year.
  • The Intel Open Network Platform Server Reference Design, previously codenamed "Sunrise Trail," is based on the Intel Xeon processor, the Intel 82599 Ethernet Controller and the Intel Communications Chipset 89xx series. The ONP Server Reference Design enables virtual appliance workloads on standard Intel architecture servers using SDN and NFV open standards for datacenter and telecom. Wind River Open Network Software includes an Intel DPDK Accelerated Open vSwitch, fast packet acceleration and deep packet inspection capabilities, as well as support for open SDN standards such as OpenFlow, Open vSwitch and OpenStack. The project is in development now; the first alpha release is slated to be available in the second half of this year.

"SDN and NFV are critical elements of Intel's vision to transform the expensive, complex networks of today to a virtualized, programmable, standards-based architecture running commercial off-the-shelf hardware," said Rose Schooler, vice president of Intel Architecture Group and general manager of Intel's Communications and Storage Infrastructure Group. "The reference designs announced today enable a new phase in the evolution of the network and represent Intel's commitment to driving an open environment that fosters business agility and smart economics."

In a keynote address at the Open Networking Summit conference in Silicon Valley, Schooler cited a number of companies planning to build products based on these platforms, including Big Switch, HP, NEC, NTT Data, Quanta, Super Micro, VMware and Vyatta (a Brocade company). 

Some other points from the ONS event:

  • Intel is working with NEC and Telefonica to develop a network virtualization of the Evolved Packet Core.  The design puts MME and S/P GW functions on an ATCA Chassis.
  • VMware is working with Intel on a network virtualization solution for software defined data centers (SDDC).
  • Intel is using SDN concepts in its own data centers.
  • Intel is working with HP and Verizon to test a cross-country, cloud bursting between distant data centers.  The trial involves an Intel private cloud in Portland, OR, and HP Lab in Plano, TX, and a Verizon Public Cloud lab in Waltham, MA.

Wednesday, May 29, 2013

D11: Mary Meeker’s Internet Trends Report

Mary Meeker, a partner at the venture investment firm Kleiner Perkins Caufield & Byers, presented her annual Internet Trends report at the D11 conference organized by The Wall Street Journal.  Her series of 117 slides covers a wide range of key trends in mobile networking, smartphone adoption, advertising and social issues.

Here are a few highlights on the networking side:

  • 2.4 billion Internet users in 2012, up 8% Y/Y.
  • 80% of Top Ten Global Internet sites are "Made in USA" while 81% of users are outside USA.
  • 500 million photos are now uploaded and shared every day on Flickr, Snapchat, Instagram and Facebook.
  • 100 hours of video are uploaded per minute to YouTube as of May 2013.
  • More than 1.1 billion Facebook users, 68% on mobiles, 60% log-in daily, average 200+ friends
  • Global mobile traffic is about 15% of total Internet traffic. Rising rapidly.
  • In China, more users are now accessing the web via mobiles than via desktop PCs.
  • In Korea, mobile search queries surpassed PC search queries in Q4 2012.
  • There are currently about 1.5 billion smartphone users, representing about 21% penetration. The growth rate is 31% Y/Y.

Monday, April 1, 2013

Cyber 3.0 - Where the Semantic Web and Cyber Meet

by John Trobough, President, Narus

The term “Cyber 3.0” has been used mostly in reference to the strategy described by U.S. Deputy Defense Secretary William Lynn at an RSA conference. In his Cyber 3.0 strategy, Lynn stresses a five-part plan as a comprehensive approach to protect critical assets. The plan involves equipping military networks with active defenses, ensuring civilian networks are adequately protected, and marshaling the nation’s technological and human resources to maintain its status in cyberspace.

Cyber 3.0 technologies will be key to enabling such protection, which is achieved when the semantic Web’s automated, continuous machine learning is applied to cybersecurity and surveillance.

Cyber 3.0 will be the foundation for a future in which machines drive decision-making. But Cyber 3.0’s ability to deliver greater visibility, control and context has far-reaching implications in our current, hyper-connected environment, where massive amounts of information move easily and quickly across people, locations, time, devices and networks. It is a world where human intervention and intelligence alone simply can’t sift through and analyze information fast enough. Indeed, arming cybersecurity organizations with the incisive intelligence afforded by this machine learning means cybersecurity incidents are identified and security policies are enforced before critical assets are compromised.


In order to stress the full weight of the meaning of Cyber 3.0, it is important to first put the state of our networked world into perspective. We can start by stating categorically that the Internet is changing: Access, content, and application creation and consumption are growing exponentially.

From narrowband to broadband, from kilobits to gigabits, from talking people to talking things, our networked world is changing forever. Today, the Internet is hyper-connecting people who are now enjoying super-fast connectivity anywhere, anytime and via any device. They are always on and always on the move, roaming seamlessly from network to network. Mobile platforms and applications only extend this behavior. As people use a growing collection of devices to stay connected (i.e., laptops, tablets, smartphones, televisions), they change the way they work and collaborate, the way they socialize, the way they communicate, and the way they conduct business.

Add to this the sheer enormity of digital information and devices that now connect us: Cisco estimates that by 2015, the amount of data crossing the Internet every five minutes will be equivalent to the total size of all movies ever made, and that annual Internet traffic will reach a zettabyte — roughly 200 times the total size of all words ever spoken by humans. On a similar note, the number of connected devices will explode in the next few years, reaching an astonishing 50 billion by 2020. By this time, connected devices could even outnumber connected people by a ratio of 6-to-1. This interconnectedness indeed presents a level of productivity and convenience never before seen, but it also tempts fate: the variety and number of endpoints — so difficult to manage and secure — invite cyber breaches, and their hyper-connectivity guarantees the spread of cyber incidents as well as a safe hiding place for malicious machines and individuals engaged in illegal, dangerous or otherwise unsavory activities.


Cyber is nonetheless integral to our everyday lives. Anything we do in the cyber world can be effortlessly shifted across people, locations, devices and time. While on one hand, cyber is positioned to dramatically facilitate the process of knowledge discovery and sharing among people (increasing performance and productivity and enabling faster interaction), on the other, companies of all sizes must now secure terabytes and petabytes of data. That data enters and leaves enterprises at unprecedented rates, and is often stored and accessed from a range of locations, such as from smartphones and tablets, virtual servers, or the cloud.

On top of all this, all the aforementioned endpoints have their own security needs, and the cybersecurity challenge today lies in how to control, manage and secure large volumes of data in increasingly vulnerable and open environments. Specifically, cybersecurity organizations need answers to how they can:

• Ensure visibility by keeping pace with the unprecedented and unpredictable progression of new applications running in their networks

• Retain control by staying ahead of the bad guys (for a change), who breach cybersecurity perimeters to steal invaluable corporate information or harm critical assets

• Position themselves to better define and enforce security policies across every aspect of their network (elements, content and users) to ensure they are aligned with their mission and gain situational awareness

• Understand context and slash the investigation time and time-to-resolution of a security problem or cyber incident

Unfortunately, cybersecurity organizations are impeded from realizing any of these. This is because their current solutions require human intervention to manually correlate growing, disparate data and identify and manage all cyber threats. And human beings just don’t scale.


Indeed, given the great velocity, volume and variety of data generated now, the cyber technologies that rely on manual processes and human intervention — which worked well in the past — no longer suffice to address cybersecurity organizations’ current and future pain points, which correlate directly with the aforementioned confluence of hyper-connectivity, mobility and big data. Rather, next-generation cyber technology that can deliver visibility, control and context despite this confluence is the only answer. This technology is achieved by applying machine learning to cybersecurity and surveillance, and is called Cyber 3.0.

In using Cyber 3.0, human intervention is largely removed from the operational lifecycle, and processes, including decision-making, are tackled by automation: Data is automatically captured, contextualized and fused at an atomic granularity by smart machines, which then automatically connect devices to information (extracted from data) and information to people, and then execute end-to-end operational workflows. Workflows are executed faster than ever, and results are more accurate than ever. More and more facts are presented to analysts, who will be called on only to make a final decision, rather than to sift through massive piles of data in search of hidden or counter-intuitive answers. And analysts are relieved from taking part in very lengthy investigation processes to understand the after-the-fact root cause.

In the future, semantic analysis and sentiment analysis will be implanted into high-powered machines to:

• Dissect and analyze data across disparate networks

• Extract information across distinct dimensions within those networks

• Fuse knowledge and provide contextualized and definite answers

• Continuously learn the dynamics of the data to ensure that analytics and data models are promptly refined in an automated fashion

• Compound previously captured information with new information to dynamically enrich models with discovered knowledge

Ultimately, cybersecurity organizations are able to better control their networks via situational awareness gained through a complete understanding of network activity and user behavior. This level of understanding is achieved by integrating data from three different planes: the network plane, the semantic plane and the user plane. The network plane mines traditional network elements like applications and protocols; the semantic plane extracts the content and relationships; and the user plane establishes information about the users. By applying machine learning and analytics to the dimensions extracted across these three planes, cybersecurity organizations have the visibility, context and control required to fulfill their missions and business objectives.

Visibility: Full situational awareness across hosts, services, applications, protocols and ports, traffic, content, relationships, and users to determine baselines and detect anomalies

Control: Alignment of networks, content and users with enterprise goals, ensuring information security and intellectual property protection

Context: Identification of relationships and connectivity among network elements, content and end users

Clearly, these three attributes are essential to keeping critical assets safe from cybersecurity incidents or breaches in security policy. However, achieving them in the face of constantly changing data that is spread across countless sources, networks and applications is no small task — and definitely out of reach for any principles or practices that rely even partly on human interference. Moreover, without visibility, control and context, one can never be sure what type of action to take.

Cyber 3.0 is not a mythical direction of what “could” happen. It’s the reality we will face as the Web grows, as new technologies are put into practice, and as access to more and more devices continues to grow. The future is obvious. The question is: How will we respond?

By virtue of machine learning capabilities, Cyber 3.0 is the only approach that can rise to these challenges and deliver the incisive intelligence required to protect our critical assets and communities now and into the future.

About the Author

John Trobough is president of Narus, Inc., a subsidiary of The Boeing Company (NYSE: BA).  Trobough previously was president of Teleca USA, a leading supplier of software services to the mobile device communications industry and one of the largest global Android commercialization partners in the Open Handset Alliance (OHA). He also held executive positions at Openwave Systems, Sylantro Systems, AT&T and Qwest Communications.

About the Company

Narus, a wholly owned subsidiary of The Boeing Company (NYSE:BA), is a pioneer in cybersecurity.  Narus is one of the first companies to apply patented advanced analytics to proactively identify cyber threats from insiders and outside intruders. The innovative Narus nSystem of products and applications is based on the principles of Cyber 3.0, where the semantic Web and cyber intersect. Using incisive intelligence culled from big data analytics, Narus nSystem identifies, predicts and characterizes the most advanced security threats, empowering organizations to better protect their critical assets. Narus counts governments, carriers and enterprises around the world among its growing customer base. The company is based in the heart of Silicon Valley, in Sunnyvale, California.

Thursday, February 14, 2013

Ericsson Aims for Converged Content Delivery Network

Ericsson is introducing a new unified content delivery network (CDN) solution for both fixed and mobile networks.

The new Ericsson Media Delivery Network solution aims to integrate the company's advanced packet core and radio capabilities with a converged cache. The solution also adds management and service exposure layers for intelligent control and business model enablement.

Ericsson said its goal is to enable operators to enter the media value chain with profitable video delivery and to truly leverage their established consumer relationships. At the same time it offers content providers and enterprises cost-effective accessibility and guaranteed quality of experience across all networks, enabling the delivery of video, web content, and app downloads while accelerating commerce.

"Yesterday our mobile devices were telephones, today they are everything; our TVs, our banks, our conference rooms. The recent Ericsson Mobility Report shows that mobile data traffic will grow 12 times by 2018, and this is only the start of the tremendous transformation we will witness. The Ericsson Media Delivery Network solution breaks the boundaries of traditional CDN solutions, offering operators a single, intelligent, and agile management platform for superior efficiency, optimization, velocity of service, and monetization opportunities," stated Per Borgklint, Senior Vice President and Head of Business Unit Support Solutions, Ericsson. 

  • In 2011, Ericsson and Akamai Technologies announced an exclusive strategic alliance focused on bringing to market mobile cloud acceleration solutions aimed at improving end-user Internet experiences such as mobile ecommerce, enterprise applications and internet content. The companies will jointly develop solutions for delivering content and applications to mobile devices.
  • Ericsson has subsequently introduced CDN capabilities in its Smart Services Routers.

Sunday, December 16, 2012

World Conference on Telecommunications Ends without Support from Major Players

The World Conference on International Telecommunications in Dubai ended with delegates from 89 countries agreeing to a new treaty that encompasses a range of communications issues, including addenda on the development and growth of the Internet.  However, the United States, Australia, Canada, Germany, Italy, Japan, the United Kingdom and other western European nations rejected the treaty on grounds that new government regulations would be a threat to the open Internet.

ITU Secretary-General Dr Hamadoun Touré said the agreement was not meant to stem the free flow of information but to assist developing countries, promote accessibility for persons with disabilities, and address spam and other Internet traffic issues. He noted that provisions concerning the Internet were removed from the actual treaty text and annexed as a non-binding resolution.

Opponents of the treaty were not convinced. They said granting new powers of authority to the ITU was unwanted and unneeded, and that regulating spam messaging could open the door to sanctioned government surveillance and censorship, especially of political or religious content.

The United States said it could not support the treaty. U.S. Ambassador Terry Kramer stated: "The Internet has given the world unimaginable economic and social benefits during these past 24 years – all without UN regulation. We candidly cannot support an ITU treaty that is inconsistent with a multi-stakeholder model of Internet governance. As the ITU has stated, this conference was never meant to focus on internet issues; however, today we are in a situation where we still have text and resolutions that cover issues on spam and also provisions on internet governance. These past two weeks, we have of course made good progress and shown a willingness to negotiate on a variety of telecommunications policy issues, such as roaming and settlement rates, but the United States continues to believe that internet policy must be multi-stakeholder driven. Internet policy should not be determined by member states but by citizens, communities, and broader society, and such consultation from the private sector and civil society is paramount. This has not happened here."

Tuesday, December 4, 2012

World Conference on International Telecommunications Underway in Dubai

The ITU's World Conference on International Telecommunications (WCIT) is underway this week in Dubai to renegotiate the International Telecommunication Regulations (ITRs), a binding global treaty that facilitates global interconnection and interoperability of information and communication services.

The ITRs have not been revised since 1988.

ITU Secretary-General Dr Hamadoun I. Touré said the conference should not be seen as undercutting freedom of expression. In his opening speech on Monday, Touré said: "One of the most persistent myths [about WCIT-12] concerns freedom of expression, and it has been suggested that this conference might in some way act to restrict the open and free flow of information. In Article 33 of the ITU’s Constitution, however, Member States recognize the right of the public to correspond by means of the international service of public correspondence. And the ITRs cannot contradict that provision, or indeed any other article in the ITU Constitution."

The U.S. government has previously stated its opposition to significant changes to the ITRs, saying that the ITRs should apply only to “recognized operating agencies,” which are those entities providing telecommunications services to the public, thus preventing the treaty from expanding to include private networks, data processing and other activities.

This week, Google launched a public campaign to "Keep the Internet Free and Open."

Wednesday, November 28, 2012

Amazon Web Services Positions Itself as the Infrastructure of Innovation

Amazon Web Services is growing rapidly thanks to a virtuous circle: as it gains more customers, server usage grows; AWS builds more infrastructure to keep up, which yields greater economies of scale; lower infrastructure costs let it reduce prices, which attracts even more customers. This virtuous cycle is in full motion, giving AWS a strategic advantage over others who were late to enter the market, said Andy Jassy in a keynote at the company's first AWS re:Invent conference in Las Vegas. The company has lowered prices 23 times since launching cloud services in 2006, largely without competitive pressure to do so. The latest cut: AWS is lowering the price of its S3 cloud storage service by 25%.

Jassy said Amazon is injecting new energy into its virtuous circle flywheel by adding services/features and opening up to third party integrators, network solution providers and app vendors.  This propels AWS forward to be the "infrastructure for innovation."
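To see how repeated cuts compound, the arithmetic of the virtuous circle multiplies out as a product of reductions. The per-cut percentages below (beyond the announced 25% S3 cut) are illustrative, not AWS's actual price history.

```python
def effective_price(start_price, cuts):
    """Apply a sequence of fractional price cuts and return the
    resulting price, e.g. cuts=[0.25] for a single 25% reduction."""
    price = start_price
    for cut in cuts:
        price *= (1.0 - cut)
    return price

# A single 25% cut, like the S3 reduction announced at re:Invent
# (starting price per GB-month is a made-up figure)
print(effective_price(0.125, [0.25]))  # 0.09375

# Twenty-three modest 5% cuts compound to roughly a 69% total reduction
print(round(1 - effective_price(1.0, [0.05] * 23), 3))  # 0.693
```

The point of the compounding view is that many small, steady cuts add up to a dramatically lower price over a few years, reinforcing the flywheel Jassy describes.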

Here is a current snapshot of AWS:

  • Hundreds of thousands of customers using its cloud services
  • Over 300 government agencies and 1,500 academic institutions
  • AWS has introduced over 150 new cloud services or features during 2012
  • Amazon's S3 storage service is currently holding over 1.3 trillion objects and handling peak loads of 835,000 requests per second
  • The Amazon Elastic Map Reduce service (Hadoop running on EC2) has now launched 3.7 million clusters
  • In 2003, Amazon's retail business generated $5.2 billion in revenue. Now, AWS adds enough server capacity every day to power the entire operations of Amazon's 2003 retail business
  • AWS Global Infrastructure now encompasses 9 regions (US East, two US West coast regions, Europe, Brazil, Tokyo, Singapore, Sydney), 25 availability zones and 38 edge locations. There is also a separate U.S. government AWS cloud

Jassy's keynote, along with partner presentations from NASA, Netflix, NASDAQ and SAP, is now on YouTube.

Monday, August 6, 2012

U.S. Opposes Changes to International Telecommunications Regulations

The U.S. State Department has submitted its first group of proposals to the World Conference on International Telecommunications (WCIT), which will be held at the end of this year in Dubai.

 WCIT intends to review and potentially revise the treaty-level International Telecommunications Regulations (ITRs), which govern the flow of traffic between nations and which have not been amended since 1988.

The U.S. proposals include:
  • Minimal changes to the preamble of the ITRs;
  • Alignment of the definitions in the ITRs with those in the ITU Constitution and Convention, including no change to the definitions of telecommunications and international telecommunications service;
  • Maintaining the voluntary nature of compliance with ITU-T Recommendations;
  • Continuing to apply the ITRs only to recognized operating agencies or RoAs; i.e., the ITRs’ scope should not be expanded to address other operating agencies that are not involved in the provision of authorized or licensed international telecommunications services to the public; and
  • Revisions of Article 6 to affirm the role played by market competition and commercially negotiated agreements for exchanging international telecommunication traffic.
The U.S. WCIT Head of Delegation, Ambassador Terry Kramer, stated: “The ITRs have served well as a foundation for growth in the international market. We want to preserve the flexibility contained in the current ITRs, which has helped create the conditions for rapid evolution of telecommunications technologies and markets around the world... We will not support any effort to broaden the scope of the ITRs to facilitate any censorship of content or blocking the free flow of information and ideas. The United States also believes that the existing multi-stakeholder institutions, incorporating industry and civil society, have functioned effectively and will continue to ensure the health and growth of the Internet and all of its benefits.” 03-Aug-12

Monday, April 16, 2012

NEC: "SDN is Ready to Go!"

NEC has made an early bet on OpenFlow and software defined networking (SDN), said Kaoru Yano, Chairman of the Board of NEC Corporation, in a keynote address at the Open Networking Summit in Santa Clara, California. The company has been involved in SDN since its early days at Stanford and has played a leading role in many OpenFlow projects in Japan and around the world. This pioneering effort is really an extension of the company's long term research in computers and communications.

Some key points from his presentation:
  • The value proposition of SDN can be summed up as follows: simple, fast, scalable, and open.

  • SDN's key function is to automatically manage network traffic and distribute it as needed.

  • SDN enables network traffic to be scaled and managed because network control is separated from network hardware.

  • SDN allows users to upgrade the network independently of the hardware.

  • NEC's OpenFlow formula is: Simplification + Virtualization + Visualization.

  • NEC's unique SDN proposition is a logical abstraction called the Virtual Tenant Network -- this enables the complete separation of the logical plane from the physical plane. Users can design the Layer 2/3 network as they wish and then this design will be automatically mapped to the physical hardware.

  • NEC's ProgrammableFlow enables drag-and-drop configuration of virtual tenants in a data center.

  • NEC currently has 100+ SDN system trials underway.

  • Target SDN applications include academic networks, data centers, enterprise networks and the backbone infrastructure of telecom carriers.

  • Genesis Hosting Solutions is using SDN to provide flexible global IP address assignment. The deployment has achieved a 60% reduction in global IP address usage and saved 100 hours per week of service support. The network has achieved 99.999% availability.

  • Nippon Express is using NEC's ProgrammableFlow to achieve virtualization flexibility and reduce service delivery time. The deployment reduced the rack space required for core switches by 70% and reduced power consumption by 80%.

  • NEC is using ProgrammableFlow for its own multi-purpose data center. Following last year's earthquake and tsunami, NEC needed to relocate data center facilities to western Japan due to power restrictions. ProgrammableFlow simplified this transfer process.

  • Software-defined networking can provide much smarter congestion control and recovery following a major disaster as experienced last year.
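The Virtual Tenant Network idea above separates a tenant's logical design from the physical fabric it runs on. Here is a minimal sketch of that mapping step; all names and the round-robin placement policy are illustrative assumptions, not NEC's actual ProgrammableFlow implementation.

```python
from itertools import cycle

def map_tenant_to_physical(logical_switches, physical_switches):
    """Place each logical switch onto a physical switch, round-robin,
    returning the logical-to-physical assignment. A real mapper would
    also consider capacity, locality, and link constraints."""
    placement = cycle(physical_switches)
    return {ls: next(placement) for ls in logical_switches}

# One tenant's logical Layer 2/3 design, and the physical fabric beneath it
tenant_a = ["vswitch-1", "vswitch-2", "vswitch-3"]
fabric = ["pswitch-east", "pswitch-west"]

mapping = map_tenant_to_physical(tenant_a, fabric)
print(mapping)
# {'vswitch-1': 'pswitch-east', 'vswitch-2': 'pswitch-west', 'vswitch-3': 'pswitch-east'}
```

Because the tenant only ever sees the logical names, the physical assignment can change (for instance, during a data center relocation) without the tenant redesigning anything, which is the flexibility the deployments above rely on.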

The Open Networking Summit is planning to post a video of their conference following the event.

Google Links Data Centers with Software-defined Networking

WAN economics to date have not made Google happy, said Urs Hoelzle, SVP of Technical Infrastructure and Google Fellow, speaking at the Open Networking Summit 2012 in Santa Clara, California. Ideally, the cost per bit should fall as the network scales, but this does not hold in a massive backbone like Google's: such scale requires more expensive hardware and manual management of very complex software. The goal should be to manage the WAN as a fabric, not as a collection of individual boxes, and current equipment and protocols do not allow this. Google's ambition is to build a WAN that is higher performance, more fault tolerant and cheaper.

Some notes from his presentation:
  • Google currently operates two WAN backbones. I-Scale is the Internet facing backbone that carries user traffic. It must have bulletproof performance. G-Scale is the internal backbone that carries traffic between Google's data centers worldwide. The G-Scale network has been used to experiment with SDN.

  • Google chose to pursue SDN in order to separate hardware from software. This enables it to choose hardware based on necessary features and to choose software based on protocol requirements.

  • SDN provides logically centralized network control. The goal is to be more deterministic, more efficient and more fault-tolerant.

  • SDN enables better centralized traffic engineering, such as the ability for the network to converge quickly to the target optimum after a link failure.

  • Deterministic behavior should simplify planning versus overprovisioning for worst-case variability.

  • The SDN controller uses modern server hardware, giving it more flexibility than conventional routers.

  • Switches are virtualized with real OpenFlow and the company can attach real monitoring and alerting servers. Testing is vastly simplified.

  • The move to SDN is really about picking the right tool for the right job.

  • Google's OpenFlow WAN activity really started moving in 2010. Less than two years later, Google is now running the G-Scale network on OpenFlow-controlled switches. 100% of its production data center to data center traffic is now on this new SDN-powered network.

  • Google built its own OpenFlow switch because none were commercially available. The switch was built from merchant silicon and has scaled to hundreds of nonblocking 10GE ports.

  • Google's practice is to simplify every software stack and hardware element as much as possible, removing anything that is not absolutely necessary.

  • Multiple switch chassis are used in each domain.

  • Google is using open source routing stacks for BGP and ISIS.

  • The OpenFlow-controlled switches look like regular routers. BGP/ISIS/OSPF now interface with the OpenFlow controller to program the switch state.

  • All data center backbone traffic is now carried by the new network. The old network is turned off.

  • Google started rolling out centralized traffic engineering in January.

  • Google is already seeing higher network utilization and gaining the benefit of flexible management of end-to-end paths for maintenance.

  • Over the past six months, the new network has seen a high degree of stability with minimal outages.

  • The new SDN-powered network is meeting the company's SLAs.

  • It is still too early to quantify the economics.

  • A key benefit is the unified view of the network fabric -- higher QoS awareness and predictability.

  • The OpenFlow protocol is really barebones at this point, but it is good enough for real world networks at Google scale.

The Open Networking Summit is planning to post a video of their conference following the event.
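The centralized traffic engineering described in the talk can be sketched as a controller that, with a global view of link utilization, picks a path and pushes forwarding state to every hop. The topology, the min-max path selection, and the install stub are illustrative assumptions, not Google's actual B4/OpenFlow implementation.

```python
def pick_path(paths, utilization):
    """Choose the candidate path whose most-loaded link is least
    utilized (min-max), approximating the global optimization a
    single router cannot perform on its own."""
    return min(paths, key=lambda p: max(utilization[link] for link in p))

def install_path(path, flows):
    """Record forwarding state per link, standing in for the OpenFlow
    rules a controller would push to each switch along the path."""
    for link in path:
        flows.setdefault(link, []).append(path)

# Candidate paths between dc1 and dc3, expressed as lists of links
paths = [
    [("dc1", "dc2"), ("dc2", "dc3")],  # two-hop detour
    [("dc1", "dc3")],                  # direct but congested
]
utilization = {("dc1", "dc2"): 0.2, ("dc2", "dc3"): 0.3, ("dc1", "dc3"): 0.9}

flows = {}
best = pick_path(paths, utilization)
install_path(best, flows)
print(best)  # the two-hop path: its busiest link is only 30% utilized
```

This is the sense in which centralized control can "converge to a target optimum": the controller sees all links at once, so it can steer traffic onto the detour that per-hop routing protocols, each with only a local view, would not choose.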