Thursday, November 17, 2016

Intel Nervana Aims for AI

Intel introduced its "Nervana" platform and outlined its broad strategy for artificial intelligence (AI), encompassing a range of new products, technologies and investments from the edge to the data center.

Intel currently powers 97 percent of data center servers running AI workloads with its Intel Xeon processors and Intel Xeon Phi processors, along with more workload-optimized accelerators, including FPGAs (field-programmable gate arrays).

Intel said the breakthrough technology acquired from Nervana earlier this summer will be integrated into its product roadmap. Intel will test first silicon (code-named “Lake Crest”) in the first half of 2017 and will make it available to key customers later in the year. In addition, Intel announced a new product (code-named “Knights Crest”) on the roadmap that tightly integrates best-in-class Intel Xeon processors with the technology from Nervana. Lake Crest is optimized specifically for neural networks to deliver the highest performance for deep learning and offers unprecedented compute density with a high-bandwidth interconnect.

“We expect the Intel Nervana platform to produce breakthrough performance and dramatic reductions in the time to train complex neural networks,” said Diane Bryant, executive vice president and general manager of the Data Center Group at Intel. “Before the end of the decade, Intel will deliver a 100-fold increase in performance that will turbocharge the pace of innovation in the emerging deep learning space.”

Bryant also announced that Intel expects the next generation of Intel Xeon Phi processors (code-named “Knights Mill”) will deliver up to 4x better performance than the previous generation for deep learning and will be available in 2017.

In addition, Intel announced it is shipping a preliminary version of the next generation of Intel Xeon processors (code-named “Skylake”) to select cloud service providers. With AVX-512, an integrated acceleration advancement, these Intel Xeon processors will significantly boost the performance of inference for machine learning workloads. Additional capabilities and configurations will be available when the platform family launches in mid-2017 to meet the full breadth of customer segments and requirements.
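AVX-512 support surfaces to software as CPU feature flags, so deployment tooling can gate AVX-512-optimized inference builds on their presence. A minimal sketch (the flag names and the Linux /proc/cpuinfo text format are assumptions, not from the announcement):

```python
def supports_avx512(cpuinfo_text: str) -> bool:
    """Return True if any 'flags' line lists an AVX-512 feature bit
    (e.g. avx512f, avx512dq) in /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if any(f.startswith("avx512") for f in flags):
                return True
    return False

# Illustrative flags lines: an AVX-512-capable part vs. an older one
newer = "flags\t\t: fpu sse2 avx avx2 avx512f avx512dq avx512cd"
older = "flags\t\t: fpu sse2 avx avx2"
print(supports_avx512(newer), supports_avx512(older))  # True False
```

On a Linux host one would pass `open("/proc/cpuinfo").read()` instead of the sample strings.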

Intel also highlighted its Saffron Technology, which leverages memory-based reasoning techniques and transparent analysis of heterogeneous data.

https://newsroom.intel.com/editorials/krzanich-ai-day/

Intel to Acquire Nervana for AI

Intel agreed to acquire Nervana Systems, a start-up based in San Diego, California, for its work in machine learning. Financial terms were not disclosed.

Nervana, which was founded in 2014, developed a software and hardware stack for deep learning.

Intel said the Nervana technology would help optimize the Intel Math Kernel Library and its integration into industry standard frameworks, advancing the deep learning performance and TCO of the Intel Xeon and Intel Xeon Phi processors.

http://www.intel.com

Infinera Upgrades and Expands DTN-X with Infinite Capacity Engine

Infinera introduced a number of new platforms within its DTN-X Family designed to power cloud scale networks. The rollout includes two new platforms, the XT-3300 and XT-3600 meshponders, as well as significant upgrades to the XTC-4 and XTC-10 Packet Optical Transport Node (P-OTN) platforms.

Infinera also announced the MTC-6 FlexILS chassis and 20-port super-channel Flexible Grid Reconfigurable Optical Add-Drop Multiplexer (FlexROADM), a key element within one of the most widely deployed flexible grid open line systems.

Infinera said cloud scale networks must be designed to efficiently address both the large N x 100 Gigabit Ethernet (GbE) linear connectivity requirements driven by web scale operators and the diverse mesh connectivity requirements driven by more traditional telco, enterprise and residential customers.
Key attributes of cloud scale networks include:

  • Scalable and sliceable – multi-carrier super-channel technology delivering massive bandwidth per optical engine along with the ability to use software defined networking (SDN) control to independently tune, modulate and route each wavelength
  • Integrated and disaggregated – integrated dense wavelength division multiplexing (DWDM) and switching platforms working in harmony with disaggregated server-like platforms to build the most cost effective networks possible
  • Secure and open – completely open and programmable solutions while featuring in-flight line-rate encryption and other critical security features

The DTN-X Family platforms now integrate the ground-breaking Infinera Infinite Capacity Engine featuring the Advanced Coherent Toolkit. The new server-like DTN-X XT-3300 and XT-3600 are the industry’s first meshponder platforms, which combine sliceable photonics and muxponder functionality to deliver hyper-scalability up to 2.4 terabits per second (Tb/s) along with fine-grained granularity for optical mesh networks. The server-like small form factor meshponder platforms, developed from experiences in the web scale market, seamlessly interoperate with the chassis-based DTN-X XTC switching platforms.
Some highlights of the announcement:

  • The Infinera DTN-X XTC-4 and XTC-10 platforms have been upgraded to support 1.2 Tb/s per slot and more than double the switching and transmission capacity through non-disruptive, in-service upgrades. The new DWDM modules, powered by the Infinera Infinite Capacity Engine, co-exist with the deployed modules, thereby offering complete investment protection. The DTN-X XTC now offers up to 12 Tb/s of non-blocking switching capacity and, unlike competitive systems, has no tradeoffs between client side tributary capacity and line side capacity.
  • The Infinera FlexILS open line system, which supports over 50 Tb/s of fiber capacity, takes super-channels from the DTN-X platforms and routes wavelengths to the appropriate destination for flexible optical mesh networking. The new 20-port super-channel FlexROADM supports flexible grid super-channels with single-channel granularity and full CDC (colorless, directionless, contentionless) functionality while using six times fewer fibers than conventional ROADMs. FlexILS now includes the new compact MTC-6 chassis and is fully open and interoperable with Infinera and third-party terminals.
  • All of the new capabilities in the FlexILS and DTN-X Family are controlled by the Xceed Software Suite and managed by Infinera’s Digital Network Administrator (DNA). This comprehensive software portfolio enables service providers to simply and easily optimize the network including keeping traffic at the optical layer longer and only grooming and switching when necessary.
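The per-slot and total switching figures above are consistent with simple arithmetic, assuming (not stated in the announcement) that the XTC-10 name reflects ten payload slots:

```python
# Hedged sanity check: ten assumed slots at the quoted 1.2 Tb/s per slot
slots = 10
tbps_per_slot = 1.2
print(slots * tbps_per_slot)  # 12.0, matching the quoted 12 Tb/s capacity
```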
“Infinera is transforming transport networks to be open and cloud scale,” said Dr. Dave Welch, Infinera Co-founder and President. “We are bringing the power of web scale to service provider networking with the unified architecture of meshponders and multi-terabit switches managed by a common, open control layer. The upgraded DTN-X Family and the new CDC FlexROADM deliver a new architecture that enables network operators to cost effectively provide scalable and secure end-user services. This architecture demonstrates Infinera’s commitment to innovation to enable our customers to build the next generation cloud scale infrastructure.”

The MTC-6 FlexILS open line system chassis is shipping now. The XT-3300 platform is planned for availability in the first quarter of 2017 with the other platforms to follow starting in the second quarter of 2017.

https://www.infinera.com/infinera-powers-cloud-scale-networks-with-new-dtn-x-platforms/

Infinera's Infinite Capacity Engine Revs to 2.4 Tbps Capacity



Infinera unveiled its Infinite Capacity Engine, a multi-terabit optical subsystem combining the company's next generation FlexCoherent Processor and the photonics of its fourth generation photonic integrated circuit (PIC). The Infinite Capacity Engine, which will be integrated into the Infinera portfolio of long-haul terrestrial, subsea, metro and data center interconnect platforms, breaks new ground for optical transport by being the first in the...


LinkedIn Inaugurates Hyperscale Data Center in Oregon

LinkedIn opened a massively scalable, next-generation data center in Oregon.

The facility, which is hosted by Infomart, is now fully live and ramped. The location benefits from a direct access contract for 100% renewable energy, network diversity, expansion capabilities, and talent opportunities. LinkedIn expects to achieve a PUE (Power Usage Effectiveness) of 1.06 during full economization mode.
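PUE is total facility power divided by IT equipment power, so a PUE of 1.06 implies only about 6 percent overhead for cooling and power distribution. A trivial illustration (the 10 MW load is hypothetical, not a LinkedIn figure):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# A hypothetical 10 MW IT load at PUE 1.06 draws ~10.6 MW at the utility meter.
print(round(pue(10_600, 10_000), 2))  # 1.06
```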

The network in the new data center uses LinkedIn's own programmable switch design (Pigeon) based on Broadcom's Tomahawk silicon and its own software stack.

https://engineering.linkedin.com/blog/2016/11/linkedin_s-oregon-data-center-goes-live

LinkedIn Develops its Own Data Center Switch

LinkedIn's engineering team has developed its own data center switch to keep up with the rapidly growing traffic demands of its professional social network.

The new switch, dubbed "Pigeon", is a 3.2 Tb/s switching platform that can be used as a leaf or spine switch. It uses Broadcom's latest Tomahawk silicon (32x100G) and switch software developed in house.
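The headline capacity follows directly from the Tomahawk port configuration of 32 ports at 100 Gb/s each:

```python
ports = 32            # Tomahawk: 32 x 100GbE front-panel ports
gbps_per_port = 100
total_tbps = ports * gbps_per_port / 1_000
print(total_tbps)  # 3.2
```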

In a blog post, Zaid Ali Kahn describes why the company decided to take on the difficult task of developing its own switch software rather than relying on an existing network equipment supplier. Key reasons include ongoing bugs, unneeded features, lack of support for Linux-based platform tools such as Chef/Puppet/CFEngine, out-of-date monitoring capabilities, and the high cost of scaling software licenses.

The Pigeon switch will be deployed in LinkedIn's upcoming data center in Oregon.

https://engineering.linkedin.com/blog/2016/02/falco-decoupling-switching-hardware-and-software-pigeon

Intel and Google Alliance Looks to Multi-Cloud Enterprise IT

Intel and Google announced a strategic alliance to help enterprise IT deliver an open, flexible and secure multi-cloud infrastructure for their businesses. The companies agreed to collaborate on Kubernetes (containers), machine learning, security and IoT.

One area of focus is the optimization of the open source TensorFlow library to benefit from the performance of Intel architecture. The companies said their "joint work will provide software developers an optimized machine learning library to drive the next wave of AI innovation across a range of models including convolutional and recurrent neural networks."
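The convolutional networks mentioned reduce, at their core, to repeated 2D convolutions. A toy, deliberately unoptimized pure-Python sketch of that kernel (for illustration only; an MKL-optimized TensorFlow build replaces loops like these with vectorized, cache-blocked code):

```python
def conv2d(image, kernel):
    """Naive 'valid'-mode 2D cross-correlation over nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    return [
        [
            sum(image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw))
            for j in range(ow)
        ]
        for i in range(oh)
    ]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
box = [[1, 1],
       [1, 1]]  # 2x2 box filter: each output is the sum of a 2x2 patch
print(conv2d(img, box))  # [[12, 16], [24, 28]]
```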

Intel and Google are also working on the Kubernetes open source container management platform in multicloud environments. The goal is to drive optimizations to Kubernetes to take full advantage of Intel architecture including performance optimizations, improved infrastructure management, and further enhanced security for enterprise workloads.

http://itpeernetwork.intel.com/intel-announces-strategic-alliance-google-accelerate-multi-cloud-adoption-democratize-ai/

Google Builds a Custom ASIC for Machine Learning


Google has developed a custom ASIC for machine learning and artificial intelligence. The Tensor Processing Unit (TPU) is tailored for TensorFlow, which is an open source software library for machine learning that was developed at Google. In a blog posting, Norm Jouppi, Distinguished Hardware Engineer at Google, discloses that the TPUs have already been in deployment in Google data centers for over a year, where they "deliver an order of magnitude...

Versa Networks Hires Kelly Ahuja and Pankaj Patel

Versa Networks named Kelly Ahuja as its new CEO, replacing company co-founder Kumar Mehta, who now takes on the role of Chief Development Officer. Kelly is an eighteen-year veteran of Cisco, serving most recently as senior vice president (SVP) of the Service Provider Business.  He held multiple leadership roles during his tenure at Cisco, including SVP/GM of the Mobility Business Group and SVP/GM of the Service Provider Routing Technology Group.

Versa also announced that Pankaj Patel has joined the company as lead director on Versa’s board of directors. Most recently, Pankaj was Executive Vice President and Chief Development Officer at Cisco and drove the business and technology strategy across Cisco’s $38 billion Routing, Switching, Wireless, Security, Mobility, Video, Collaboration, Data Center and Cloud offerings.

“With Versa Networks experiencing immense demand for its platform for SD-WAN and SD-Security, and the wide-area network market now entering the next phase of explosive growth, there are no better people to lead our company than Kelly Ahuja and Pankaj Patel. They are the right leaders at the right time,” said Versa Networks founder, Kumar Mehta. “Over the last four years, I’ve had the privilege of working with the most talented team in the industry at Versa, and I know their passion and hunger for greatness will only grow under Kelly and Pankaj’s direction.”

Versa Networks offers a carrier-grade software-defined WAN (SD-WAN) and security (SD-Security) solution that is purely software- and NFV-based, and fully multi-tenant. The solution provides a wide range of virtualized networking and security functions that can be used to create highly scalable and high-value managed services that run on low-cost white box and x86 hardware.

http://www.versa-networks.com

OIF Commences Work on Four New Projects

Following its Q4 meeting this month in Auckland, New Zealand, the Optical Internetworking Forum (OIF) announced four new projects:

  • 400ZR Interoperability Project -- will develop an implementation agreement for 400G ZR and short-reach DWDM multi-vendor interoperability.  It is relevant for router-to-router interconnect use cases and is targeted at (passive) single channel and amplified DWDM applications with distances up to 120 km. This project should ensure a cost-effective and long-term relevant implementation using single-carrier 400G, coherent detection and advanced DSP/FEC algorithms.  
  • Common ACO Electrical I/O Project -- will define the ACO electrical I/O independent of the choice of form factor and optical carrier count for 45 Gbaud and 64 Gbaud per-carrier applications. This project builds upon the success of the CFP2-ACO but is form factor agnostic, so it could be applied to multiple applications such as CFP4, CFP8, QSFP, microQSFP and OSFP.
  • Coherent Modem Management Interface Project -- members have requested that the industry combine the coherent modem management interface specifications [4"x5" LH MSA, CFP2-ACO, CFP2-DCO, Flex-Coherent, etc.] into a standalone document.  OIF leadership, working in conjunction with the CFP MSA group, is inviting companies to participate in creating a complementary Normative document.
  • High Baud Rate Coherent Modulation Function Project -- will define a small form factor component implementation agreement that combines the high baud-rate PMQ (HB-PMQ) modulator plus the RF drive functions into a single component. This new component will be used in conjunction with a high baud Integrated Coherent Receiver (ICR), a micro Integrable Tunable Laser Assembly (ITLA) and a coherent DSP, to implement a high performance coherent modem.

“The OIF continues to work with other standards bodies and the industry to identify a wide range of technology needs that cross the entire optical and electrical ecosystem,” said Karl Gass of Qorvo, optical vice chair of the OIF’s Physical and Link Layer Working Group. “The OIF remains committed to providing technology direction that provides a path to interoperability in a pre-competitive environment. The projects started during the Q4 meeting demonstrate the OIF’s commitment to work with other standards bodies in the industry.”

In addition, OIF announced the following election results: Dave Brown of Nokia was re-elected to the Board for a two-year term and appointed as president.  Re-elected to one-year terms were Junjie Li of China Telecom and Dave Stauffer of Kandou Bus. Stauffer will continue to serve as secretary/treasurer. Jonathan Sadler of Coriant and Nathan Tracy of TE Connectivity were elected to the board for two-year terms.  Tracy was appointed as vice president of marketing. Tom Issenhuth of Microsoft was appointed as vice president and Ian Betty of Ciena continues to serve on the Board.

Newly elected were Klaus-Holger Otto of Nokia as Technical Committee chair, Ed Frlan of Semtech as Technical Committee vice chair and Jeffery Maki of Juniper Networks as chair of the Physical Layer User Group Working Group.

Brian Holden of Kandou Bus (Market Awareness and Education Committee co-chair, Physical and Link Layer) and Lyndon Ong of Ciena (MA&E Committee co-chair, Networking) were both re-elected.

http://www.oiforum.com/
