Showing posts with label OCP.

Thursday, August 1, 2019

TE intros straddle-mount connectors for OCP NICs

TE Connectivity (TE) introduced its new Sliver straddle-mount connectors, a standard form factor supporting faceplate-pluggable Open Compute Project (OCP) NIC 3.0 cards, including low-profile designs. OCP NIC 3.0 cards are horizontal and faceplate-pluggable, which increases airflow through the enclosure and simplifies system design. TE positions its Sliver straddle-mount products among the most cost-effective and highest-performing solutions on the market.

TE said its Sliver straddle-mount connectors for SFF-TA-1002 support high speeds through PCIe Gen 5, with a roadmap to 112G. SFF-TA-1002 is a proposed alternative or replacement to many form factors, including M.2, U.2, and PCIe. The high-density, 0.6mm pitch of the Sliver straddle-mount connectors also supports next-gen silicon PCIe lane counts, which is where current products in the market begin to max out.
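For a sense of the bandwidth these connectors must carry, PCIe Gen 5 signals at 32 GT/s per lane with 128b/130b encoding, so a full x16 link moves roughly 63 GB/s per direction. The figures below are standard PCIe numbers, not TE specifications:

```python
# Back-of-envelope PCIe Gen 5 bandwidth: 32 GT/s per lane, 128b/130b encoding.
def pcie_bw_gbytes_per_s(lanes: int, gt_per_s: float = 32.0) -> float:
    """Approximate one-direction PCIe bandwidth in GB/s."""
    encoding_efficiency = 128 / 130   # 128b/130b line coding overhead
    return lanes * gt_per_s * encoding_efficiency / 8  # bits -> bytes

assert round(pcie_bw_gbytes_per_s(16)) == 63  # x16 Gen 5 link: ~63 GB/s
```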

“OCP-compliant designs are taking the data center equipment industry by storm, and TE Connectivity is a major supplier of connectors for these designs,” said Ann Ou, product manager at TE Connectivity. “Our Sliver straddle-mount products deliver high performance and density in a standardized form factor to facilitate design and manufacturing for our data center equipment partners.”

https://www.te.com/usa-en/products/connectors/pcb-connectors/sliver-connectors.html?source=header-match&tab=pgp-story

Monday, May 27, 2019

Wiwynn offers edge server based on OCP OpenEDGE

Taiwan-based Wiwynn introduced an Edge Platform based on the Nokia-led OCP OpenEDGE specification.

Wiwynn EP100 is a 3U edge system that supports up to five 1U half-width servers and can flexibly configure with 2U half-width and 1U full-width server sleds. Communication service providers can also scale computing power by adding more EP100 systems for applications ranging from base stations to regional central offices.

“We are thrilled to embrace edge cloud opportunities in the 5G era by applying Open Compute Project (OCP) hardware and initiating an open firmware development kit,” said Dr. Sunlai Chang, Senior Vice President and CTO of Wiwynn. “Wiwynn EP100 enables communication service providers to address diverse low-latency data processing demands of Cloud RAN and modern central offices with a flexible and high-efficiency architecture at a balanced cost.”

“Nokia AirFrame open edge, launched April 2018, welcomes Wiwynn’s adoption of the Nokia-led OCP OpenEDGE specification, with this announcement of a new equipment provider for enclosure and sled designs. Wiwynn’s contribution to OCP OpenEDGE is an important step forward in the creation of a healthy ecosystem and providing Far Edge Data Center Equipment consumers with multi-source procurement options to avoid vendor lock-in,” said Hannu Nikurautio, Head of Cloud RAN Product Management of Nokia.

Tuesday, March 19, 2019

Huawei adopts Open Rack in its cloud data centers

Huawei Technologies will adopt Open Rack in its new public cloud data centers worldwide.

The Open Rack initiative proposed by the Open Compute Project (OCP) seeks to redefine the data center rack and is one of the most promising developments in the scale computing environment. It is the first rack standard that is designed for data centers, integrating the rack into the data center infrastructure, a holistic design process that considers the interdependence of everything from the power grid to the gates in the chips on each motherboard. Adopted by some of the world’s largest hyperscale internet service providers such as Facebook, Google and Microsoft, Open Rack is helping to lower total cost of ownership (TCO) and improve energy efficiency in the scale compute space.

“Huawei’s engineering and business leaders recognized the efficiency and flexibility that Open Rack offers, and the support that is available from a global supplier base. Providing cloud services to a global customer base creates certain challenges. The flexibility of the Open Rack specification and the ability to adapt for liquid cooling allows Huawei to service new geographies. Huawei’s decision to choose Open Rack is a great endorsement!” stated Bill Carter, Chief Technology Officer for the Open Compute Project Foundation.

“Huawei’s strategic investment and commitment to OCP is a win-win,” said Mr. Kenneth Zhang, General Manager of FusionServer, Huawei Intelligent Computing Business Department. “Combining Huawei’s extensive experience in Telco and Cloud deployments together with the knowledge of the vast OCP community will help Huawei to provide cutting edge, flexible and open solutions to its global customers. In turn, Huawei can leverage its market leadership and global datacenter infrastructure to help introduce OCP to new geographies and new market segments worldwide.”

Friday, March 15, 2019

OCP 2019: Edgecore debuts "Minipack" Switch for 100G and 400G

At OCP Summit 2019, Edgecore Networks introduced an open modular switch for 100G and 400G networking that conforms to the Minipack Fabric Switch design contributed by Facebook to the Open Compute Project (OCP).

Minipack is a disaggregated whitebox system providing a flexible mix of 100GbE and 400GbE ports up to a system capacity of 12.8Tbps.

The Minipack switch can support a mix of 100G and 400G Ethernet interfaces up to a maximum of 128x100G or 32x400G ports. Minipack is based on Broadcom StrataXGS Tomahawk 3 Switch Series silicon capable of line rate 12.8Tbps Layer2 and Layer3 switching.
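The quoted port counts line up with the ASIC capacity; a quick sanity check (the mixed configuration below is an illustrative example, not a Facebook or Edgecore reference design):

```python
# Sanity-check Minipack's port math against the Tomahawk 3's 12.8 Tbps budget.
def aggregate_gbps(port_count: int, port_speed_gbps: int) -> int:
    """Total front-panel capacity in Gbps for a uniform port configuration."""
    return port_count * port_speed_gbps

TOMAHAWK3_GBPS = 12_800  # 12.8 Tbps

assert aggregate_gbps(128, 100) == TOMAHAWK3_GBPS  # 128 x 100G
assert aggregate_gbps(32, 400) == TOMAHAWK3_GBPS   # 32 x 400G

# A mixed load, e.g. seven PIM-16Q modules plus one PIM-4DD, also fits exactly:
mixed = aggregate_gbps(7 * 16, 100) + aggregate_gbps(4, 400)
assert mixed == TOMAHAWK3_GBPS
```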

The Minipack front panel has eight slots for port interface modules (PIM). The first PIM options available for the Edgecore Minipack switch are the PIM-16Q with 16x100G QSFP28 ports, and the PIM-4DD with 4x400G QSFP-DD ports. The Minipack modular switch is a 4U form factor, power optimized for data center deployments, and includes hot-swappable redundant power supplies and fans for high availability.

Edgecore said its Minipack AS8000 Switch enables network operators to select disaggregated NOS and SDN software options from commercial partners and open source communities to address different use cases and operational requirements. Edgecore has ported and validated Software for Open Networking in the Cloud (SONiC), the OCP open source software platform, on the Minipack AS8000 Switch as an open source option for high capacity data center fabrics. In addition, Cumulus Networks announced the availability of its Cumulus Linux operating system for the Edgecore Minipack switch.

“Network operators are demanding open network solutions to increase their network capacities with 400G and higher density 100G switches based on open technology. The Edgecore Minipack switch broadens our full set of OCP Accepted open network switches, and enables data center operators to deploy higher capacity fabrics with flexible combinations of 100G and 400G interfaces and pay-as-you-grow expansion,” said George Tchaparian, CEO, Edgecore Networks. “The open and modular design of Minipack will enable Edgecore and partners to address more data center and service provider use cases in the future by developing innovative enhancements such as additional interface modules supporting encryption, multiple 400G port types, coherent optical ports and integrated optics, plus additional Minipack Switch family members utilizing deep-buffer or highly programmable or next-generation switching silicon in the same flexible modular form factor.”

“Facebook designed Minipack as a fabric switch with innovative performance, power optimization and modularity to enable our deployment of the next generation data center fabrics,” said Hans-Juergen Schmidtke, Director of Engineering, Facebook. “We have contributed the Minipack design to OCP in order to stimulate additional design innovation and to facilitate availability of the platform to network operators. We welcome Edgecore’s introduction of Minipack as a commercial whitebox product.”

The Minipack AS8000 Switch with PIM-16Q 100G QSFP28 interface modules will be available from Edgecore resellers and integrators worldwide in Q2. PIM-4DD 400G QSFP-DD interface modules will be available in Q3. SONiC open source software, including platform drivers for the Edgecore Minipack AS8000 Switch, is available from the SONiC GitHub.

OCP 2019: Netronome unveils 50GbE SmartNICs

Netronome unveiled its Agilio CX 50GbE SmartNICs in OCP Mezzanine 2.0 form factor with line-rate advanced cryptography and 2GB onboard DDR memory.

The Agilio CX SmartNIC platform fully and transparently offloads virtual switch, virtual router, eBPF and P4-based datapath processing for networking functions such as overlays, security, load balancing and telemetry, enabling cloud and SDN-enabled compute and storage servers to free up critical server CPU cores for application processing while delivering significantly higher performance.

Netronome said its new SmartNIC reduces tail latency significantly enabling high-performance Web 2.0 applications to be deployed in cost and energy-efficient servers. With advanced Transport Layer Security (TLS/SSL)-based cryptography support at line-rate and up to two million stateful sessions per SmartNIC, web and data storage servers in hyperscale environments can now be secured tighter than ever before, preventing hacking of networks and precious user data.

Deployable in OCP Yosemite servers, the Agilio CX 50GbE SmartNICs implement a standards-based and open advanced buffer management scheme enabled by the unique many-core multithreaded processing memory-based architecture of the Netronome Network Flow Processor (NFP) silicon. This improves application performance and enables hyperscale operators to maintain high levels of service level agreements (SLAs). Dynamic eBPF-based programming and hardware acceleration enables intelligent scaling of networking workloads across multiple host CPU cores, improving server efficiency. The solution also enhances security and data center efficiencies by offloading TLS, a widely deployed protocol used for encryption and authentication of applications that require data to be securely exchanged over a network.

“Securing user data in Web 2.0 applications and preventing malicious attacks such as the BGP hijacking recently experienced in hyperscale operator infrastructures are critical needs that have been exacerbated significantly in recent years,” said Sujal Das, chief marketing and strategy officer at Netronome. “Netronome developed the Agilio CX 50GbE SmartNIC solution to address these vital industry requirements by meticulously optimizing the hardware with open source and hyperscale operator applications and infrastructures.”

Agilio CX 50GbE SmartNICs in OCP Mezzanine 2.0 form factor are sampling today and include the generally available NFP-5000 silicon. The production version of the board and software is expected in the second half of this year.

OCP 2019: Inspur and Intel contribute 4-socket Crane Mountain design

Inspur and Intel will contribute a jointly-developed, cloud-optimized platform code named "Crane Mountain" to the OCP community.

The four-socket platform is a high-density, flexible and powerful 2U server, validated for Intel Xeon (Cascade Lake) processors and optimized with Intel Optane DC persistent memory.

Inspur said its NF8260M5 system is being used by Intel as a lead platform for introducing the “high-density cloud-optimized” four-socket server solution to the cloud service provider (CSP) market.

At OCP Summit 2019, Inspur also showcased three new artificial intelligence (AI) computing solutions, and announced the world’s first NVSwitch-enabled 16-GPU fully connected GPU expansion box, the GX5, which is also part of an advanced new architecture that combines the 16-GPU box with an Inspur 4-socket Olympus server. This solution features 80 CPU cores, making it suitable for deep-learning applications that require maximum throughput across multiple workloads. The Inspur NF8360M5 4-socket Olympus server is going through the OCP Contribution and OCP Accepted recognition process.

Inspur also launched the 8-GPU box ON5388M5 with NVLink 2.0, as a new OCP contribution-in-process for 8-GPU box solutions. The Inspur solution offers two new topologies for different AI applications, such as autonomous driving and voice recognition.




Alan Chang discusses Inspur's contributions to the Open Compute Project, including a High-density Cloud-optimized platform code-named “Crane Mountain”.

This four-socket platform is a high-density, flexible and powerful 2U server, validated for Cascade Lake processors and optimized with Intel Optane DC persistent memory.  It is designed and optimized for cloud Infrastructure-aaS, Function-aaS, and Bare-Metal-aaS solutions.

https://youtu.be/JZj-arumtD0


OCP 2019: Toshiba tests NVM Express over Fabrics

At OCP Summit 2019, Toshiba Memory America demonstrated proof-of-concept native Ethernet NVMe-oF (NVM Express over Fabrics) SSDs.

Toshiba Memory also showed its KumoScale software, a key NVMe-oF enabler for disaggregated storage cloud deployments. First introduced last year, KumoScale has recently been enhanced with support for TCP-based networks.

OCP 2019: Wiwynn intros Open19 server based on Project Olympus

At OCP 2019, Wiwynn introduced an Open19 server based on Microsoft’s Project Olympus server specification.

The SV6100G3 is a 1U double-wide brick server that complies with the LinkedIn-led Open19 Project standard, which defines a cross-industry common form factor applicable to EIA 19” racks. With the Open19-defined brick servers, cages and snap-on cables, operators can blind-mate both data and power connections to speed up rack deployment and enhance serviceability.

Based on the open source cloud hardware specification of Microsoft’s Project Olympus, the SV6100G3 features two Intel Xeon Scalable family processors, up to 1.5TB of memory and one OCP Mezzanine NIC.

“Wiwynn has extensive experience in open IT gear design to bring TCO improvements for hyperscale data centers,” said Steven Lu, Vice President of Product Management at Wiwynn. “We are excited to introduce the Open19-based SV6100G3, which assists data center operators of all sizes to benefit from next-generation high-efficiency open standards with a lower entry barrier.”

Thursday, March 14, 2019

OCP 2019: Microsoft's Project Zipline offers better data compression

At OCP 2019, Microsoft unveiled Project Zipline, a new compression standard for data sets spanning edge to cloud applications.

Project Zipline promises "compression without compromises," where always-on compression achieves high compression ratios with high throughput and low latency. Zipline encompasses algorithms, software, and silicon engines.

Microsoft estimates Zipline data set sizes at 4~8% of uncompressed sizes. Over time, Microsoft anticipates Project Zipline compression technology will make its way into network data processing, smart SSDs, archival systems, cloud appliances, general-purpose microprocessors, IoT, and edge devices.
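Taking Microsoft's 4~8% estimate at face value, the space saving is easy to quantify. This is illustrative arithmetic only, not a Zipline benchmark:

```python
# Illustrative only: what a 4-8% compressed-size estimate means in practice.
def zipline_compressed_bytes(uncompressed_bytes: int, ratio_pct: float) -> float:
    """Estimated compressed size given a compressed-size percentage."""
    return uncompressed_bytes * ratio_pct / 100

ONE_TB = 10**12
assert zipline_compressed_bytes(ONE_TB, 4) == 40e9  # 1 TB -> ~40 GB
assert zipline_compressed_bytes(ONE_TB, 8) == 80e9  # 1 TB -> ~80 GB
```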

A number of big name silicon and storage companies are already on board as partners.

https://azure.microsoft.com/en-us/blog/hardware-innovation-for-data-growth-challenges-at-cloud-scale/


OCP 2019: Arista's 12.8Tbps switch developed with Facebook

At OCP 2019, Arista Networks announced a high-radix 12.8Tbps switching system developed with Facebook with the goal of simplifying 100/400G networking.

The Arista 7360X Series doubles system density while reducing power consumption and cost by doubling the network radix and reducing the number of required leaf-spine tiers. Full manageability via FBOSS (Facebook Open Switching Software) is supported for controlling power and thermal efficiency along with the control plane.

The new platform is a compact, four rack unit design and all active components are removable. It delivers a 60% reduction in power at under 10 watts per 100G port. Standards-based, the system comes with support for 100G QSFP and 400G OSFP or QSFP-DD optics and cables. Arista EOS delivers the advanced traffic management, automation and telemetry features needed to build and maintain modern cloud networks.
The Arista 7368X4 Series is available as an 8-slot modular system with a choice of 100G and 400G modules based on industry-standard interfaces and support for EOS.

It is currently shipping with 100G interfaces. Price per 100G port is under $600.
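The per-port power figure implies an upper bound on chassis power: a fully loaded 12.8 Tbps system is 128 100G-equivalent ports, so under 10 W each puts the whole system under about 1.28 kW. This is a derived bound, not a measured Arista figure:

```python
# Derived upper bound, not a datasheet number: 12.8 Tbps = 128 x 100G equivalents.
WATTS_PER_100G_PORT = 10          # "under 10 watts per 100G port"
PORTS_100G_EQUIV = 12_800 // 100  # 128 100G-equivalent ports

system_power_bound_w = WATTS_PER_100G_PORT * PORTS_100G_EQUIV
assert system_power_bound_w == 1280  # under ~1.28 kW fully loaded
```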

“The Arista solution has helped Facebook to gain significant improvements in power and space efficiency, reducing the number of switch chips in the network stack and allowing power to be freed up for compute resources,” said Najam Ahmad, Vice President Network Engineering for Facebook. “Having both an internally developed Minipack and the Arista solution allows Facebook to remain multi-sourced, with an option to run Arista EOS or FBOSS on both, where either system can be deployed in multiple tiers of networks.”

OCP 2019: New Open Domain-Specific Architecture sub-project

The Open Compute Project is launching an Open Domain-Specific Architecture (ODSA) sub-project to define an open interface and architecture that enables the mixing and matching of available silicon die from different suppliers onto a single SoC for data center applications. The goal is to define a process to integrate best-of-breed chiplets onto a SoC.

Netronome played a lead role initiating the new project.

“The open architecture for domain-specific accelerators being proposed by the ODSA Workgroup brings the benefits of disaggregation to the world of SoCs. The OCP Community led by hyperscale operators has been at the forefront driving disaggregation of server and networking systems. Joining forces with OCP, the ODSA Workgroup brings the next chapter of disaggregation for domain-specific accelerator SoCs as it looks toward enabling proof of concepts and deployable products leveraging OCP’s strong ecosystem of hardware and software developers,” said Sujal Das, chief marketing and strategy officer at Netronome.

"Coincident with the decline of Moore's law, the silicon industry is facing longer development times and significantly increased complexity. We are pleased to see the ODSA Workgroup become a part of the Open Compute Project. We hope workgroup members will help to drive development practices and adoption of best-of-breed chiplets and SoCs. Their collaboration has the potential to further democratize chip development, and ultimately reduce design overhead of domain-specific silicon in emerging use cases,” said Aaron Sullivan, Director Hardware Engineering at Facebook."

https://2019ocpglobalsummit.sched.com/event/JxrZ/open-domain-specific-architecture-odsa-sub-project-launch

Wiki page: https://www.opencompute.org/wiki/Server/ODSA

Mailing list: https://ocp-all.groups.io/g/OCP-ODSA

Netronome proposes open "chiplets" for domain-specific workloads

Netronome unveiled its open architecture for domain-specific accelerators.

Netronome is collaborating with six leading silicon companies, Achronix, GLOBALFOUNDRIES, Kandou, NXP, Sarcina and SiFive, to develop this open architecture and related specifications for developing chiplets that promise to reduce silicon development and manufacturing costs.

The idea is for chiplet-based silicon to be composed from best-of-breed components such as processors, accelerators, and memory and I/O peripherals, each built on its optimal process node. The open architecture will provide a complete stack of components (known good die, packaging, interconnect network, software integration stack) that lowers the hardware and software costs of developing and deploying domain-specific accelerator solutions. By implementing open specifications contributed by participating companies, any vendor’s silicon die can become a building block in a chiplet-based SoC design.

“Netronome’s domain-specific architecture as used in its Network Flow Processor (NFP) products has been designed from the ground up keeping modularity, and economies of silicon development and manufacturing costs as top of mind,” said Niel Viljoen, founder and CEO at Netronome. “We are extremely excited to collaborate with industry leaders and contribute significant intellectual property and related open specifications derived from the proven NFP products and apply that effectively to the open and composable chiplet-based architecture being developed in the ODSA Workgroup.”

OCP 2019: Broadcom intros OCP NIC 3.0 adapters

Broadcom introduced OCP NIC 3.0 adapters supporting the full range of data rates and interfaces from 1GbE to 200GbE. The portfolio includes a wide selection of Ethernet adapter cards with 1-, 2- and 4-port configurations.

The new 100GbE and 200GbE adapters, which are based on Broadcom's NetXtreme E-Series Ethernet controllers, also feature Broadcom’s Thor multi-host controller that has the industry’s best performing 56G PAM-4 SerDes and PCIe 4.0 interface. Sampling is underway.

“OCP NIC 3.0 provides a unified specification and form factor for connecting server and storage systems,” said Ed Redmond, senior vice president and general manager of the Compute and Connectivity Division at Broadcom. “With strong customer demand and virtually unanimous industry backing for this unified solution, our complete portfolio of OCP NIC 3.0 adapters facilitates broad adoption of this new form factor and drives further innovation in high performance computing and storage applications to address an ever-increasing demand for bandwidth.”

Tuesday, March 20, 2018

Netronome intros 25/50GbE SmartNICs based on the OCP v2.0 mezzanine spec

Netronome unveiled its Agilio CX 25 and 50Gb/s SmartNICs featuring support for the OCP v2.0 mezzanine specification.

The Agilio CX SmartNIC platform fully and transparently offloads virtual switch, router, P4 and eBPF-based datapath processing for networking functions such as overlays, security, load balancing and telemetry, enabling cloud and SDN-enabled compute and storage servers to free up critical server CPU cores for application processing while delivering significantly higher performance.

Netronome said its new Agilio SmartNICs are deployable in multiple OCP server and storage platforms, and pack 60 processing cores within stringent OCP v2.0 form factor and power profiles to deliver nine times higher kernel datapath processing capabilities per server to enhance security and data access efficiencies.

The Agilio CX 25/50GbE SmartNICs utilize open sourced, Linux-based, upstreamed drivers and compilers to enable seamless offload of Enhanced Berkeley Packet Filter/Express Data Processing (eBPF/XDP) applications.

“OCP designs are known to deliver size- and cost-effective scale and performance. SoC silicon used in SmartNICs has typically involved much larger size, cost and power profiles while delivering lower performance,” said Sujal Das, chief marketing and strategy officer at Netronome. “Netronome’s unique SoC technology with open-source programming, available in Agilio CX 25/50GbE SmartNICs, enables the industry to realize the confluence of the value of OCP designs with the much sought-after capabilities of SmartNICs.”

Agilio CX 25GbE SmartNICs for OCP are sampling today, and Agilio CX 50GbE SmartNICs for OCP will sample in Q3 2018.

https://www.netronome.com/products/agilio-cx/

Saturday, March 3, 2018

TE Connectivity targets higher density switches with zQSFP+ Stacked Belly-to-Belly Cages

TE Connectivity (TE) introduced its zQSFP+ stacked belly-to-belly cages designed for high-density switches with 48 or 64 silicon ports. The cage supports a single printed circuit board (PCB) architecture (versus two PCBs) in each line card.

The company said this new design addresses the requirements for higher density switches, including Open Compute Project (OCP) reference designs. The cages support up to 28G NRZ and 56G PAM-4 data rates to achieve faster speeds in these high-density switches. TE's zQSFP+ stacked belly-to-belly cages are dual-sourced with Molex and are drop-in replacements.

"These new zQSFP+ cages allow us to design denser switches while reducing costs by using just one PCB per line card," said Melody Chiang, product manager at Accton. "TE continually supports our efforts to design faster, denser switches, and this belly-to-belly configuration is just the latest example."

Tuesday, February 6, 2018

Open Compute Project measures its market impact

The Open Compute Project Foundation (OCP) has engaged IHS Markit to determine the adoption and impact of OCP gear in the technology industry.

IHS Markit interviewed OCP members, suppliers and service providers, as well as incorporated their own in-depth industry research to determine non-board member revenue by region and vertical, as well as provide a forecast through 2021. OCP Board member companies include Facebook, Goldman Sachs, Intel, Microsoft and Rackspace. Equipment markets explored in this study included servers, storage, network, rack, power and peripherals.

Some preliminary findings:
  • 2017 OCP YoY growth from non-board member companies was 103%
  • The 5-year CAGR (compound annual growth rate) is 59%, while the total market growth is expected to be in the low single digits
  • Servers account for almost 75% of non-board OCP revenue in 2017, with rack, power, peripherals and other (primarily WiFi and PON, or passive optical networks) expected to see the highest growth rates
  • The Americas represented the majority of non-board OCP revenue in 2017, through hyperscaler, telco and financial industry adoption, while EMEA has a forecasted CAGR of 70%, primarily driven by telecommunications firms
  • EMEA revenue from non-board member companies is expected to surpass $1 billion (US) by 2021, while Asia Pacific is expected to surpass EMEA in adoption as early as 2020
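For context on how those growth figures compound (illustrative arithmetic, not IHS Markit data):

```python
# How compound growth rates translate into revenue multiples.
def growth_multiple(rate: float, years: int) -> float:
    """Revenue multiple after `years` of compound growth at `rate`."""
    return (1 + rate) ** years

assert round(growth_multiple(0.59, 5), 1) == 10.2   # 59% CAGR: ~10x over 5 years
assert round(growth_multiple(1.03, 1), 2) == 2.03   # 103% YoY roughly doubles revenue
```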

“OCP is excited to work with IHS Markit to get an independent view of our ability to influence the market through adoption. This study creates a baseline for us to measure our progress against, as well as gives us insight into projected growth in regions and markets. It also provides a view into perceived value as well as barriers for adoption. While we are pleased with the initial indicators, we also recognize we have much to do to continue our momentum,” stated Rocky Bullock, CEO for the Open Compute Project Foundation.


Wednesday, September 6, 2017

Radisys Announces Next-Generation DCEngine Hardware

Radisys released its next-generation DCEngine hardware, a pre-packaged rack solution based on Open Compute Project (OCP) principles and designed to transition Communications Service Providers (CSPs) to virtualized data centers.

DCEngine, which is based on the OCP-ACCEPTED CG-OpenRack-19 specification, leverages the Intel Xeon Scalable processors. It supports Intel Xeon Scalable Architecture-based compute and storage sleds, with a wide range of processing options that can be installed and tuned up inside existing DCEngine systems in minutes. DCEngine meets CSP requirements for an enhanced, scalable power system that delivers 25,000W per feed for higher processor density, greater efficiency and lowered expenses, as well as DC and AC power entry options suitable for a wide range of environments. It also offers an in-rack Uninterruptible Power Supply (UPS) option to support simplified infrastructure, easy maintenance and lower overhead. Radisys delivers the final pre-assembled DCEngine rack with no on-site setup.

Radisys said its next-gen DCEngine supports CSPs’ transition away from proprietary hardware and vendor lock-in to a data center environment built with open source software and hardware components. The enhanced rack design, combined with operations and support modeled after Facebook practices, can bring an annual OpEx saving of nearly 40 percent compared to traditional data center offerings, while reducing deployment time from months to just days.

“Our CSP customers are requiring open telecom solutions to support their data center transformations, easing their pain points around power and costs, while simplifying their operational complexities,” said Bryan Sadowski, vice president, FlowEngine and DCEngine, Radisys. “With the next-generation DCEngine, combined with Radisys’ deep telco expertise and OCP’s operations/support model, service providers not only get innovation and service agility, but also gain significant TCO savings.”

http://www.radisys.com

Friday, June 30, 2017

AT&T to launch software-based 10G XGS-PON trial

AT&T announced it will conduct a 10 Gbit/s XGS-PON field trial in late 2017 as it progresses with plans to virtualise access functions within the last mile network.

The next-generation PON trial is designed to deliver multi-gigabit Internet speeds to consumer and business customers, and to enable all services, including 5G wireless infrastructure, to be converged onto a single network.

AT&T noted that XGS-PON is a fixed wavelength symmetrical 10 Gbit/s passive optical network technology that can coexist with the current GPON technology. The technology can provide 4x the downstream bandwidth of the existing system, and is as cost-effective to deploy as GPON. As part of its network virtualisation initiative, AT&T plans to place some XGS-PON functions in the cloud, with software leveraging open hardware and software designs to speed development.
AT&T has worked with ON.Lab to develop and test ONOS (Open Network Operating System) and VOLTHA (Virtual Optical Line Terminator Hardware Abstraction) software. This technology allows the lower level details of the silicon to be hidden. AT&T stated that it has also submitted a number of open white box XGS OLT designs to the Open Compute Project (OCP) and is currently working with the project to gain approval for the solutions.
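The "4x the downstream bandwidth" claim follows directly from the nominal line rates: GPON runs 2.488 Gbit/s downstream, while XGS-PON is symmetric at 9.953 Gbit/s. These are standard ITU-T figures, not AT&T-specific numbers:

```python
# Nominal PON line rates (ITU-T): GPON vs. XGS-PON.
GPON_DOWNSTREAM_GBPS = 2.488   # GPON is asymmetric: 2.488 down / 1.244 up
XGS_PON_GBPS = 9.953           # XGS-PON is symmetric at ~10G

# "4x the downstream bandwidth of the existing system":
assert round(XGS_PON_GBPS / GPON_DOWNSTREAM_GBPS) == 4
```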

The company noted that interoperability is a key element of its Open Access strategy, and prompted the creation of an OpenOMCI specification, which provides an interoperable interface between the OLT and the home devices. This specification, which forms a key part of software-defined network (SDN) and network function virtualisation (NFV), has been distributed to standards and open source communities.



  • AT&T joined OCP in January 2016 to support its network transformation program. Earlier this year at the OCP Summit, Edgecore Networks, a provider of open networking solutions and a subsidiary of Accton Technology, announced design contributions to OCP including a 25 Gigabit Ethernet top-of-rack switch and a high-density 100 Gigabit Ethernet spine switch. The company also showcased new open hardware platforms.
  • At the summit, Edgecore displayed a disaggregated virtual OLT for PON deployment at up to 10 Gbit/s, based on the AT&T Open XGS-PON 1 RU OLT specification that was contributed to the OCP Telco working group.
  • Edgecore's ASFvOLT16 disaggregated virtual OLT is based on the AT&T Open XGS-PON 1 RU OLT specification and features Broadcom StrataDNX switch and PON MAC SOC silicon, offering 16 ports of XGS-PON or NG-PON2, with 4 x QSFP28 ports and designed for next generation PON deployments and R-CORD telecom infrastructure.

Friday, March 24, 2017

Microsoft's Project Olympus provides an opening for ARM

A key observation from this year's Open Compute Summit is that the hyper-scale cloud vendors are indeed calling the shots in terms of hardware design for their data centres. This extends all the way from the chassis configurations to storage, networking, protocol stacks and now customised silicon.

To recap, Facebook's newly refreshed server line-up now has 7 models, each optimised for different workloads: Type 1 (Web); Type 2 - Flash (database); Type 3 – HDD (database); Type 4 (Hadoop); Type 5 (photos); Type 6 (multi-service); and Type 7 (cold storage). Racks of these servers are populated with a ToR switch followed by sleds with either the compute or storage resources.

In comparison, Microsoft, which was also a keynote presenter at this year's OCP Summit, is taking a slightly different approach with its Project Olympus universal server. Here the idea is likewise to reduce the cost and complexity of its Azure rollout in hyper-scale data centres around the world, but to do so using a single universal server platform design. Project Olympus uses either a 1 RU or 2 RU chassis and various modules for adapting the server to different workloads or electrical inputs. Significantly, it is the first OCP server to support both Intel and ARM-based CPUs.

Not surprisingly, Intel is looking to continue its role as the mainstay CPU supplier for data centre servers. Project Olympus will use the next generation Intel Xeon processors, code-named Skylake, and with its new FPGA capability in-house, Intel is sure to supply more silicon accelerators for Azure data centres. Jason Waxman, GM of Intel's Data Center Group, showed off a prototype Project Olympus server integrating Arria 10 FPGAs. Meanwhile, in a keynote presentation, Microsoft Distinguished Engineer Leendert van Doorn confirmed that ARM processors are now part of Project Olympus.

Microsoft showed Olympus versions running Windows Server on Cavium's ThunderX2 and Qualcomm's 10 nm Centriq 2400, which offers 48 cores. AMD is another CPU partner for Olympus with its x86 processor, code-named Naples. In addition, there are other ARM licensees waiting in the wings with designs aimed at data centres, including MACOM (AppliedMicro's X-Gene 3 processor) and Nephos, a spin-out from MediaTek. For Cavium and Qualcomm, the case for ARM-powered servers comes down to optimised performance for certain workloads; in OCP Summit presentations, both companies cited web indexing and search among the first applications Microsoft is using to test their processors.

Project Olympus is also putting forward an OCP design aimed at accelerating AI in next-gen cloud infrastructure. Microsoft, together with NVIDIA and Ingrasys, is proposing a hyper-scale GPU accelerator chassis for AI. The design, code-named HGX-1, packages eight of NVIDIA's latest Pascal GPUs connected via NVIDIA's NVLink technology. NVLink can scale to provide extremely high connectivity between as many as 32 GPUs, conceivably four HGX-1 boxes linked as one. A standardised AI chassis would enable Microsoft to rapidly roll out the same technology to all of its Azure data centres worldwide.

In tests published a few months ago, NVIDIA said its earlier DGX-1 server, which uses Pascal-powered Tesla P100 GPUs and an NVLink implementation, was delivering 170x the performance of standard Xeon E5 CPUs when running Microsoft’s Cognitive Toolkit.

Meanwhile, Intel has introduced the second generation of its Rack Scale Design for OCP. This brings improvements in the management software for integrating OCP systems in a hyper-scale data centre, and adds open APIs to the Snap open source telemetry framework so that other partners can contribute to the management of each rack as an integrated system. This concept of easier data centre management was illustrated in an OCP keynote by Yahoo Japan, which delivers an astonishing 62 billion page views per day to its users and remains the most popular website in that nation. The Yahoo Japan presentation focused on an OCP-compliant data centre it operates in the state of Washington, its only overseas facility. The remote data centre is manned by only a skeleton crew that, thanks to streamlined OCP designs, is able to perform most hardware maintenance tasks, such as replacing a disk drive, memory module or CPU, in less than two minutes.

One further note on Intel’s OCP efforts relates to its 100 Gbit/s CWDM4 silicon photonics modules, which it states are ramping up in shipment volume. These are lower-cost 100 Gbit/s optical interfaces that reach up to 2 km for cross-data-centre connectivity.

On the OCP-compliant storage front, not everything is flash: spinning HDDs are still in play. Seagate recently announced a 12 Tbyte 3.5-inch HDD engineered to accommodate workloads of 550 Tbytes annually. The company claims an MTBF (mean time between failures) of 2.5 million hours, and the drive is designed to operate 24/7 for five years. These 12 Tbyte drives enable a single 42U rack to hold over 10 Pbytes of storage, quite an amazing density considering how much bandwidth would be required to move this volume of data.
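The rack-level figure is straightforward arithmetic once an enclosure density is assumed. A rough sketch (the 102-drives-per-4U JBOD figure is an illustrative assumption, not from the announcement):

```python
# Back-of-the-envelope check of the "over 10 Pbytes per 42U rack" claim.
# Assumes top-loading JBOD enclosures of ~102 drives in 4U, a common
# high-density design at the time; the exact enclosure is an assumption.
drive_tb = 12
drives_per_4u = 102
enclosures_per_rack = 42 // 4                        # 10 x 4U enclosures fit a 42U rack
total_drives = drives_per_4u * enclosures_per_rack   # 1,020 drives
capacity_pb = total_drives * drive_tb / 1000         # ~12.2 Pbytes raw
print(f"{total_drives} drives -> {capacity_pb:.1f} Pbytes raw")
```

Under these assumptions the rack lands comfortably above the quoted 10 Pbytes, even allowing for a few U lost to switching and power.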


Google did not make a keynote appearance at this year’s OCP Summit, but had its own event underway in nearby San Francisco. The Google Cloud Next event gave the company an even bigger stage to present its vision for cloud services and the infrastructure needed to support it.

Wednesday, March 22, 2017

Facebook shows its progress with Open Compute Project

The latest instalment of the annual Open Compute Project (OCP) Summit, which was held March 8-9 in Silicon Valley, brought new open source designs for next-generation data centres. It is six years since Facebook launched OCP and it has grown into quite an institution. Membership in the group has doubled over the past year to 195 companies and it is clear that OCP is having an impact in adjacent sectors such as enterprise storage and telecom infrastructure gear.

The OCP was never intended to be a traditional standards organisation, serving more as a public forum in which Facebook, Microsoft and potentially other big buyers of data centre equipment can share their engineering designs with the industry. The hyper-scale cloud market, which also includes Amazon Web Services, Google, Alibaba and potentially others such as IBM and Tencent, is where the growth is. IDC, in its Worldwide Quarterly Cloud IT Infrastructure Tracker, estimates total spending on IT infrastructure products (servers, enterprise storage and Ethernet switches) for deployment in cloud environments will increase by 18% in 2017 to reach $44.2 billion. Of this, IDC estimates that 61% of spending will be by public cloud data centres, while off-premises private cloud environments constitute 15% of spending.

It is clear from previous disclosures that all Facebook data centres have adopted the OCP architecture, including its primary facilities in Prineville (Oregon), Forest City (North Carolina), Altoona (Iowa) and Luleå (Sweden). Meanwhile, the newest Facebook data centres, under construction in Fort Worth (Texas) and Clonee (Ireland), are pushing OCP boundaries even further in terms of energy efficiency.

Facebook's ambitions famously extend to connecting all people on the planet and it has already passed the billion monthly user milestone for both its mobile and web platforms. The latest metrics indicate that Facebook is delivering 100 million hours of video content every day to its users; 95+ million photos and videos are shared on Instagram on a daily basis; and 400 million people now use Messenger for voice and video chat on a routine basis.

At this year's OCP Summit, Facebook is rolling out refreshed designs for all of its 'vanity-free' servers, each optimised for a particular workload type, and Facebook engineers can choose to run their applications on any of the supported server types. Highlights of the new designs include:

  • Bryce Canyon, a very high-density storage server for photos and videos that features 20% higher hard disk drive density and a 4x increase in compute capability over its predecessor, Honey Badger.
  • Yosemite v2, a compute server that supports 'hot' service, meaning servers do not need to be powered down when the sled is pulled out of the chassis for components to be serviced.
  • Tioga Pass, a compute server with dual-socket motherboards and more I/O bandwidth (i.e. more bandwidth to flash, network cards and GPUs) than its predecessor, Leopard, enabling larger memory configurations and faster compute times.
  • Big Basin, a server designed for artificial intelligence (AI) and machine learning, optimised for image processing and training neural networks. Compared with its predecessor, Big Basin can train machine learning models that are 30% larger, thanks to greater arithmetic throughput and an increase in memory from 12 to 16 Gbytes.

Facebook currently has web server capacity to deliver 7.5 quadrillion instructions per second. Its 10-year roadmap for data centre infrastructure, also highlighted at the OCP Summit, predicts that AI and machine learning will be applied to a wide range of applications hosted on the Facebook platform. Photos and videos uploaded to any Facebook service will routinely go through machine-based image recognition, and to handle this load Facebook is pursuing additional OCP designs that bring fast storage capabilities closer to its compute resources. It will also leverage silicon photonics to provide fast connectivity between resources inside its hyper-scale data centres, along with new open source models designed to speed innovation in both hardware and software.

Monday, March 13, 2017

Aricent unveils ConvergedOS Open Hardware Operating System

Aricent, a global design and engineering company, announced the introduction of its intelligent network operating system, Aricent ConvergedOS, designed to provide network equipment and technology system providers with a ready-to-deploy, open hardware and Open Compute Project (OCP)-compatible software solution.

In addition, through its established partnership with Inventec, Aricent is introducing the new network operating system on the Inventec D7032Q28B 100 Gigabit Ethernet spine switch targeting data centre applications and enterprise and service provider network deployments.

The Aricent ConvergedOS provides support for a total of 32 x 100 Gigabit Ethernet QSFP28 interfaces with line-rate Layer 2/3 performance of up to 3.2 Tbit/s in a PHY-less design to meet growing traffic demands in data centres.
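The quoted 3.2 Tbit/s line-rate figure follows directly from the port count, as a quick sanity check shows:

```python
# Aggregate line-rate capacity of a 32-port 100 Gigabit Ethernet switch
ports = 32
gbit_per_port = 100
aggregate_tbit = ports * gbit_per_port / 1000  # 3.2 Tbit/s
print(f"{aggregate_tbit} Tbit/s")
```

A PHY-less design connects the QSFP28 cages directly to the switching ASIC, so the full 3.2 Tbit/s is available without retimer-induced cost or latency.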

ConvergedOS is based on Aricent's Intelligent Switching Solution (ISS), a switching, routing and network optimisation software platform designed to enable connectivity in the data centre for storage area networking, 100 Gbit/s links and distribution of workloads across data centres via Ethernet VPN services.

Key features of Aricent's ConvergedOS solution include:

1. Data centre networking, with support for L2 switching VLAN, L2 multicast IGMP/MLD snooping, IGMP/MLD proxy, link aggregation, LLDP-MED, data centre bridging (DCB)-PFC, ETS, QCN and DCBX, LLDP.
2. BGP spine-leaf architecture, enabling faster convergence and a cloud-ready management interface.
3. Support for L3 (IPv4/v6) unicast and multicast routing: RIP, OSPF, IS-IS, BGP4, IGMP (v1/v2/v3), MLD router, PIM-SM, PIM-DM, PIM-Bidirectional, DVMRP and MSDP.
4. Platform protection via hot redundancy, VRRP (IPv4/v6), uplink failure detection (UFD), multi-chassis LAG, split horizon.
5. Data centre virtualisation and overlay, with VxLAN gateways, Ethernet VPN (VxLAN), edge virtualisation via 802.1Qbg, S-channel, MPLS VPN.
6. Data centre convergence, with support for Fibre Channel over Ethernet (FCoE), FIP snooping, FC direct attach.
7. Data centre telemetry with BroadView and agent software for collecting ASIC statistics and counters for diagnosis.

Aricent recently announced new capabilities for its Autonomous Network Solution (ANS) for the automation of next-generation virtualised networks, with new components based on standards including ETSI NFV, ETSI AFI GANA, MEF LSO and TM Forum's ZOOM.

http://www.aricent.com
