
Sunday, March 1, 2020

OCP Summit in San Jose is cancelled

The Open Compute Project Foundation (OCP) has decided to cancel the OCP Global Summit due to the COVID-19 situation. The event was scheduled to take place March 3-5 at the San Jose Convention Center in California. Also canceled were associated events including the Future Technology Symposium, the OCP SONiC/SAI Pre-Summit Workshop, and the Open System Firmware Hack event.

The OCP Summit is an annual event with an active and broad community following.

https://www.opencompute.org/summit/global-summit

Sunday, September 29, 2019

AT&T contributes Distributed Disaggregated Chassis white box to OCP

AT&T has contributed its specifications for a Distributed Disaggregated Chassis (DDC) white box architecture to the Open Compute Project (OCP). The contributed design aims to define a standard set of configurable building blocks to construct service provider-class routers, ranging from single line card systems, a.k.a. “pizza boxes,” to large, disaggregated chassis clusters.  AT&T said it plans to apply the design to the provider edge (PE) and core routers that comprise its global IP Common Backbone (CBB).

“The release of our DDC specifications to the OCP takes our white box strategy to the next level,” said Chris Rice, SVP of Network Infrastructure and Cloud at AT&T. “We’re entering an era where 100G simply can’t handle all of the new demands on our network. Designing a class of routers that can operate at 400G is critical to supporting the massive bandwidth demands that will come with 5G and fiber-based broadband services. We’re confident these specifications will set an industry standard for DDC white box architecture that other service providers will adopt and embrace.”

AT&T’s DDC white box design, which is based on Broadcom’s Jericho2 chipset, calls for three key building blocks:

  • A line card system that supports 40 x 100G client ports, plus 13 x 400G fabric-facing ports.
  • A line card system that supports 10 x 400G client ports, plus 13 x 400G fabric-facing ports.
  • A fabric system that supports 48 x 400G ports. A smaller, 24 x 400G fabric system is also included.

AT&T points out that the line cards and fabric cards are implemented as stand-alone white boxes, each with their own power supplies, fans and controllers, and the backplane connectivity is replaced with external cabling. This approach enables massive horizontal scale-out as the system capacity is no longer limited by the physical dimensions of the chassis or the electrical conductance of the backplane. Cooling is significantly simplified as the components can be physically distributed if required. The strict manufacturing tolerances needed to build the modular chassis and the possibility of bent pins on the backplane are completely avoided.

Four typical DDC configurations include:

  • A single line card system that supports 4 terabits per second (Tbps) of capacity.
  • A small cluster that consists of 1+1 (for added reliability) fabric systems and up to 4 line card systems. This configuration would support 16 Tbps of capacity.
  • A medium cluster that consists of 7 fabric systems and up to 24 line card systems. This configuration supports 96 Tbps of capacity.
  • A large cluster that consists of 13 fabric systems and up to 48 line card systems. This configuration supports 192 Tbps of capacity.

The links between the line card systems and the fabric systems operate at 400G and use a cell-based protocol that distributes packets across many links. The design inherently supports redundancy in the event fabric links fail. A short sketch of the capacity arithmetic behind these configurations follows.
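
For illustration only, here is a minimal sketch of that arithmetic, assuming (as described above) that each line card system contributes 40 x 100G = 4 Tbps of client-facing capacity; the script is hypothetical and not part of AT&T's contribution.

    # Rough DDC capacity arithmetic (illustrative only).
    # Assumes each line card system provides 40 x 100G = 4 Tbps of client capacity.
    LINE_CARD_CLIENT_TBPS = 40 * 100 / 1000  # 4.0 Tbps per line card system

    configs = {
        "single line card system": 1,
        "small cluster (up to 4 line card systems)": 4,
        "medium cluster (up to 24 line card systems)": 24,
        "large cluster (up to 48 line card systems)": 48,
    }

    for name, line_cards in configs.items():
        print(f"{name}: {line_cards * LINE_CARD_CLIENT_TBPS:.0f} Tbps")
    # Prints 4, 16, 96 and 192 Tbps, matching the configurations listed above.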


“We are excited to see AT&T's white box vision and leadership resulting in growing merchant silicon use across their next generation network, while influencing the entire industry,” said Ram Velaga, SVP and GM of Switch Products at Broadcom. “AT&T's work toward the standardization of the Jericho2 based DDC is an important step in the creation of a thriving eco-system for cost effective and highly scalable routers.”   

“Our early lab testing of Jericho2 DDC white boxes has been extremely encouraging,” said Michael Satterlee, vice president of Network Infrastructure and Services at AT&T. “We chose the Broadcom Jericho2 chip because it has the deep buffers, route scale, and port density service providers require. The Ramon fabric chip enables the flexible horizontal scale-out of the DDC design. We anticipate extensive applications in our network for this very modular hardware design.”

https://about.att.com/story/2019/open_compute_project.html

Broadcom's Jericho2 switch-routing chip boasts 10 Tbps capacity

Broadcom announced commercial availability of its Jericho2 and FE9600 chips, the next generation of its StrataDNX family of system-on-chip (SoC) Switch-Routers.

The Jericho2 silicon boasts 10 Terabits per second of Switch-Router performance and is designed for high-density, industry standard 400GbE, 200GbE, and 100GbE interfaces. Key features include the company's "Elastic Pipe" packet processing, along with large-scale buffering with integrated High Bandwidth Memory (HBM).

The new device is shipping within 24 months of its predecessor, Jericho+. Jericho2 delivers 5X higher bandwidth at 70% lower power per gigabit.

In addition to Jericho2, Broadcom is shipping FE9600, the new fabric switch device with 192 links of the industry's best performing and longest-reach 50G PAM-4 SerDes. This device offers 9.6 Terabits per second of fabric capacity and delivers a 50% reduction in power per gigabit compared to its predecessor, the FE3600.

“The Jericho franchise is the industry’s most innovative and scalable silicon used today in various Switch-Routers by leading carriers,” said Ram Velaga, Broadcom senior vice president and general manager, Switch Products. “I am thrilled with the 5X increase in performance Jericho2 was able to achieve over a single generation. Jericho2 will accelerate the transition of carrier-grade networks to merchant silicon-based systems with best-in-class cost/performance.”

Arrcus scales out with Broadcom's Jericho2, raises $30m 

Arrcus, a start-up that offers a hardware-agnostic network operating system for white box switches, announced multiple high-density 100GbE and 400GbE routing solutions for hyperscale cloud, edge, and 5G networks.

The company says its ArcOS software architecture has the foundational attributes to scale-out to an open aggregated routing solution, enabling operators to design, deploy, operationalize, and manage their infrastructure across multiple domains in the network.

"Our mission is to democratize the networking industry by providing best-in-class software, the most flexible consumption model, and the lowest total cost of ownership for our customers; we are now extending this by providing leading-edge open integration solutions for routing. ArcOS is the essential link to fully realize the unparalleled advancements in the 10Tbps Jericho2 SoC family and the resulting systems," Devesh Garg, co-founder and CEO of Arrcus.


The new ArcOS-based platforms, based on Broadcom’s 10Tbps, highly-flexible and programmable StrataDNX Jericho2 switch-router system-on-a-chip (SoC), include:

  • 24 ports of 100G + 6 ports of 400G
  • 40 ports of 100G
  • 80 ports of 100G
  • 96 ports of 100G

Edgecore contributes Cell Site Gateways work to OCP and TIP

Edgecore Networks announced a series of cross-contributions of Cell Site Gateways across the Telecom Infra Project (TIP) and the Open Compute Project (OCP) communities.

The AS7316-26XB cell site gateway design and specification, which was contributed to OCP in October 2018, has now been contributed to TIP’s Open Optical & Packet Transport project group. The AS7315-27X-DCSG cell site gateway specification, which was developed as part of TIP’s Disaggregated Cell Site Gateways (DCSG) initiative and contributed to TIP, is now also being contributed to the OCP community. This family of contributed designs will accelerate service provider adoption of open networking options to meet the increasing bandwidth and service demand in the upcoming 5G rollouts.

The AS7316-26XB and AS7315-27X-DCSG are temperature hardened and optimized for deployment in outside plant enclosures. Both support base stations with full IEEE 1588 timing and GPS functions, provide backhaul uplinks at 25G or 100G Ethernet, and offer airflow and stacking port options. The gateways incorporate Broadcom® StrataDNX™ switch silicon, deep packet buffer memory, and offer Intel® Xeon® and Atom® Processor options. Both models support commercial and open source network operating system options.

Edgecore said the contributed products enable service providers to deploy 4G and 5G services with the economics of disaggregated open network technology.

“With the latest cell site gateway contributions, Edgecore continues to expand our leadership position in both the OCP and TIP communities, building upon previous contributions of open network leaf/spine switches, disaggregated OLTs, and optical transport systems. We fully support the recent announcement of further collaboration between TIP and OCP and the path for solutions within their communities to be made readily available across both organizations. This collaboration will ultimately provide operators with more choice and flexibility,” said George Tchaparian, President and CEO of Edgecore.

“TIP is creating a new approach to building and deploying telecom network infrastructure, and we thank Edgecore Networks for their continuous contributions to TIP, including the Cassini open packet transponder, and now the family of disaggregated cell site gateways. These open, innovative new designs will provide flexibility and choice to network operators," said Attilio Zani, Executive Director, TIP Foundation.

ONF partners with Edgecore on SEBA, ODTN and Trellis projects

ONF reached an agreement with Edgecore Networks under which Edgecore will dedicate significant engineering resources to accelerating and ensuring the success of the ONF projects SEBA, ODTN and Trellis.

Specifically, ONF and Edgecore have jointly created the new Onsite Immersion Engineering program (ONF-OIE) to embed engineers within the ONF lab team. Edgecore engineers will work closely with the ONF and its community of developers to help mature the functionality, robustness, scalability, and reliability of these platforms so they are ready for production deployments. Edgecore is the first ONF Partner Member to be making use of the new ONF-OIE program, building a dedicated team of engineers to work at ONF’s facilities under ONF’s direction.

“Edgecore Networks and the ONF are now harnessing a significant opportunity with operators that have fully embraced open source to power their edge networks,” said George Tchaparian, president and CEO for Edgecore. “Edgecore Networks is committed to the vision of open platforms, and is ensuring that SEBA, VOLTHA and Trellis run seamlessly on our Edgecore hardware.”

“We are very pleased to strengthen our collaboration with Edgecore Networks with the launch of the ONF-OIE program, especially as our exemplar platforms based on open source and white box hardware are gaining significant traction worldwide,” said Guru Parulkar, executive director for the ONF. “This group of developers will play an important role maturing SEBA, VOLTHA and Trellis and readying these platforms for production; first on Edgecore Networks hardware, followed by others, and with deployment by operators around the globe.”

https://www.opennetworking.org/news-and-events/press-releases/onf-and-edgecore-networks-enter-key-agreement-to-invest-in-success-of-open-source-deployments/

Thursday, August 1, 2019

TE intros straddle-mount connectors for OCP NICs

TE Connectivity (TE) introduced its new Sliver straddle-mount connectors, the new standard form factor supporting faceplate-pluggable Open Compute Project (OCP) NIC 3.0 cards. Applications include low-profile OCP NIC 3.0 cards. OCP NIC 3.0 cards are horizontal and faceplate-pluggable, which helps increase airflow through the enclosure and simplifies system design. TE’s Sliver straddle-mount products are among the most cost-effective and highest-performing solutions on the market.

TE said its Sliver straddle-mount connectors for SFF-TA-1002 support high speeds through PCIe Gen 5, with a roadmap to 112G. SFF-TA-1002 is a proposed alternative or replacement to many form factors, including M.2, U.2, and PCIe. The high-density, 0.6mm pitch of the Sliver straddle-mount connectors also supports next-gen silicon PCIe lane counts, which is where current products in the market begin to max out.

“OCP-compliant designs are taking the data center equipment industry by storm, and TE Connectivity is a major supplier of connectors for these designs,” said Ann Ou, product manager at TE Connectivity. “Our Sliver straddle-mount products deliver high performance and density in a standardized form factor to facilitate design and manufacturing for our data center equipment partners.”

https://www.te.com/usa-en/products/connectors/pcb-connectors/sliver-connectors.html?source=header-match&tab=pgp-story

Monday, May 27, 2019

Wiwynn offers edge server based on OCP OpenEDGE

Taiwan-based Wiwynn introduced an Edge Platform based on the Nokia-led OCP OpenEDGE specification.

Wiwynn EP100 is a 3U edge system that supports up to five 1U half-width servers and can be flexibly configured with 2U half-width and 1U full-width server sleds. Communication service providers can also scale computing power by adding more EP100 systems for applications ranging from base stations to regional central offices.

“We are thrilled to embrace edge cloud opportunities in the 5G era by applying Open Compute Project (OCP) hardware and initiating an open firmware development kit,” said Dr. Sunlai Chang, Senior Vice President and CTO of Wiwynn. “Wiwynn EP100 enables communication service providers to address diverse low-latency data processing demands of Cloud RAN and modern central offices with a flexible and high-efficiency architecture at a balanced cost.”

“Nokia, which launched AirFrame open edge in April 2018, welcomes Wiwynn’s adoption of the Nokia-led OCP OpenEDGE specification and this announcement of a new equipment provider for enclosure and sled designs. Wiwynn’s contribution to OCP OpenEDGE is an important step forward in the creation of a healthy ecosystem and providing Far Edge Data Center Equipment consumers with multi-source procurement options to avoid vendor lock-in,” said Hannu Nikurautio, Head of Cloud RAN Product Management of Nokia.

Tuesday, March 19, 2019

Huawei adopts Open Rack in its cloud data centers

Huawei Technologies will adopt Open Rack in its new public cloud data centers worldwide.

The Open Rack initiative proposed by the Open Compute Project (OCP) seeks to redefine the data center rack and is one of the most promising developments in the scale computing environment. It is the first rack standard that is designed for data centers, integrating the rack into the data center infrastructure, a holistic design process that considers the interdependence of everything from the power grid to the gates in the chips on each motherboard. Adopted by some of the world’s largest hyperscale internet service providers such as Facebook, Google and Microsoft, Open Rack is helping to lower total cost of ownership (TCO) and improve energy efficiency in the scale compute space.

“Huawei’s engineering and business leaders recognized the efficiency and flexibility that Open Rack offers, and the support that is available from a global supplier base. Providing cloud services to a global customer base creates certain challenges. The flexibility of the Open Rack specification and the ability to adapt for liquid cooling allows Huawei to service new geographies. Huawei’s decision to choose Open Rack is a great endorsement!” stated Bill Carter, Chief Technology Officer for the Open Compute Project Foundation.

“Huawei’s strategic investment and commitment to OCP is a win-win,” said Mr. Kenneth Zhang, General Manager of FusionServer, Huawei Intelligent Computing Business Department. “Combining Huawei’s extensive experience in Telco and Cloud deployments together with the knowledge of the vast OCP community will help Huawei to provide cutting edge, flexible and open solutions to its global customers. In turn, Huawei can leverage its market leadership and global datacenter infrastructure to help introduce OCP to new geographies and new market segments worldwide.”

Friday, March 15, 2019

OCP 2019: Edgecore debuts "Minipack" Switch for 100G and 400G

At OCP Summit 2019, Edgecore Networks introduced an open modular switch for 100G and 400G networking that conforms to the Minipack Fabric Switch design contributed by Facebook to the Open Compute Project (OCP).

Minipack is a disaggregated whitebox system providing a flexible mix of 100GbE and 400GbE ports up to a system capacity of 12.8Tbps.

The Minipack switch can support a mix of 100G and 400G Ethernet interfaces up to a maximum of 128x100G or 32x400G ports. Minipack is based on Broadcom StrataXGS Tomahawk 3 Switch Series silicon capable of line rate 12.8Tbps Layer2 and Layer3 switching.

The Minipack front panel has eight slots for port interface modules (PIM). The first PIM options available for the Edgecore Minipack switch are the PIM-16Q with 16x100G QSFP28 ports, and the PIM-4DD with 4x400G QSFP-DD ports. The Minipack modular switch is a 4U form factor, power optimized for data center deployments, and includes hot-swappable redundant power supplies and fans for high availability.
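
As a quick, purely illustrative check of the headline figures, the following sketch shows how the eight PIM slots account for the 128 x 100G, 32 x 400G and 12.8 Tbps totals quoted above (port counts taken from this article; the script itself is hypothetical).

    # Minipack capacity check from the figures quoted above (illustrative only).
    PIM_SLOTS = 8

    ports_100g = PIM_SLOTS * 16   # PIM-16Q: 16 x 100G QSFP28 per module -> 128 ports
    ports_400g = PIM_SLOTS * 4    # PIM-4DD: 4 x 400G QSFP-DD per module -> 32 ports

    print(ports_100g * 100 / 1000)   # 12.8 Tbps fully populated with 100G modules
    print(ports_400g * 400 / 1000)   # 12.8 Tbps fully populated with 400G modules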

Edgecore said its Minipack AS8000 Switch enables network operators to select disaggregated NOS and SDN software options from commercial partners and open source communities to address different use cases and operational requirements. Edgecore has ported and validated Software for Open Networking in the Cloud (SONiC), the OCP open source software platform, on the Minipack AS8000 Switch as an open source option for high capacity data center fabrics. In addition, Cumulus Networks announced the availability of its Cumulus Linux operating system for the Edgecore Minipack switch.

“Network operators are demanding open network solutions to increase their network capacities with 400G and higher density 100G switches based on open technology. The Edgecore Minipack switch broadens our full set of OCP Accepted open network switches, and enables data center operators to deploy higher capacity fabrics with flexible combinations of 100G and 400G interfaces and pay-as-you-grow expansion,” said George Tchaparian, CEO, Edgecore Networks. “The open and modular design of Minipack will enable Edgecore and partners to address more data center and service provider use cases in the future by developing innovative enhancements such as additional interface modules supporting encryption, multiple 400G port types, coherent optical ports and integrated optics, plus additional Minipack Switch family members utilizing deep-buffer or highly programmable or next-generation switching silicon in the same flexible modular form factor.”

“Facebook designed Minipack as a fabric switch with innovative performance, power optimization and modularity to enable our deployment of the next generation data center fabrics,” said Hans-Juergen Schmidtke, Director of Engineering, Facebook. “We have contributed the Minipack design to OCP in order to stimulate additional design innovation and to facilitate availability of the platform to network operators. We welcome Edgecore’s introduction of Minipack as a commercial whitebox product.”

The Minipack AS8000 Switch with PIM-16Q 100G QSFP28 interface modules will be available from Edgecore resellers and integrators worldwide in Q2. PIM-4DD 400G QSFP-DD interface modules will be available in Q3. SONiC open source software, including platform drivers for the Edgecore Minipack AS8000 Switch, is available from the SONiC GitHub.

OCP 2019: Netronome unveils 50GbE SmartNICs

Netronome unveiled its Agilio CX 50GbE SmartNICs in OCP Mezzanine 2.0 form factor with line-rate advanced cryptography and 2GB onboard DDR memory.

The Agilio CX SmartNIC platform fully and transparently offloads virtual switch, virtual router, eBPF and P4-based datapath processing for networking functions such as overlays, security, load balancing and telemetry, enabling cloud and SDN-enabled compute and storage servers to free up critical server CPU cores for application processing while delivering significantly higher performance.

Netronome said its new SmartNIC significantly reduces tail latency, enabling high-performance Web 2.0 applications to be deployed in cost- and energy-efficient servers. With advanced Transport Layer Security (TLS/SSL)-based cryptography support at line rate and up to two million stateful sessions per SmartNIC, web and data storage servers in hyperscale environments can now be secured more tightly than ever before, preventing hacking of networks and precious user data.

Deployable in OCP Yosemite servers, the Agilio CX 50GbE SmartNICs implement a standards-based and open advanced buffer management scheme enabled by the unique many-core multithreaded processing memory-based architecture of the Netronome Network Flow Processor (NFP) silicon. This improves application performance and enables hyperscale operators to maintain high levels of service level agreements (SLAs). Dynamic eBPF-based programming and hardware acceleration enables intelligent scaling of networking workloads across multiple host CPU cores, improving server efficiency. The solution also enhances security and data center efficiencies by offloading TLS, a widely deployed protocol used for encryption and authentication of applications that require data to be securely exchanged over a network.

“Securing user data in Web 2.0 applications and preventing malicious attacks such as BGP hijacking as experienced recently in hyperscale operator infrastructures are critical needs that have exacerbated significantly in recent years,” said Sujal Das, chief marketing and strategy officer at Netronome. “Netronome developed the Agilio CX 50GbE SmartNIC solution to address these vital industry requirements by meticulously optimizing the hardware with open source and hyperscale operator applications and infrastructures.”

Agilio CX 50GbE SmartNICs in OCP Mezzanine 2.0 form factor are sampling today and include the generally available NFP-5000 silicon. The production version of the board and software is expected in the second half of this year.

OCP 2019: Inspur and Intel contribute 4-socket Crane Mountain design

Inspur and Intel will contribute a jointly-developed, cloud-optimized platform code-named "Crane Mountain" to the OCP community.

The four-socket platform is a high-density, flexible and powerful 2U server, validated for Intel Xeon (Cascade Lake) processors and optimized with Intel Optane DC persistent memory.

Inspur said its NF8260M5 system is being used by Intel as a lead platform for introducing the “high-density cloud-optimized” four-socket server solution to the cloud service provider (CSP) market.

At OCP Summit 2019, Inspur also showcased three new artificial intelligence (AI) computing solutions, and announced the world’s first NVSwitch-enabled 16-GPU fully connected GPU expansion box, the GX5, which is also part of an advanced new architecture that combines the 16-GPU box with an Inspur 4-socket Olympus server. This solution features 80 CPU cores, making it suitable for deep-learning applications that require maximum throughput across multiple workloads. The Inspur NF8360M5 4-socket Olympus server is going through the OCP Contribution and OCP Accepted recognition process.

Inspur also launched the 8-GPU box ON5388M5 with NVLink 2.0, as a new OCP contribution-in-process for 8-GPU box solutions. The Inspur solution offers two new topologies for different AI applications, such as autonomous driving and voice recognition.

Alan Chang discusses Inspur's contributions to the Open Compute Project, including a High-density Cloud-optimized platform code-named “Crane Mountain”.

This four-socket platform is a high-density, flexible and powerful 2U server, validated for Cascade Lake processors and optimized with Intel Optane DC persistent memory. It is designed and optimized for cloud Infrastructure-as-a-Service, Function-as-a-Service, and Bare Metal-as-a-Service solutions.

https://youtu.be/JZj-arumtD0


OCP 2019: Toshiba tests NVM Express over Fabrics

At OCP Summit 2019, Toshiba Memory America demonstrated proof-of-concept native Ethernet NVMe-oF (NVM Express over Fabrics) SSDs.

Toshiba Memory also showed its KumoScale software, which is a key NVMe-oF enabler for disaggregated storage cloud deployments. KumoScale, first introduced last year, has recently been enhanced with support for TCP-based networks.

OCP 2019: Wiwynn intros Open19 server based on Project Olympus

At OCP 2019, Wiwynn introduced an Open19 server based on Microsoft’s Project Olympus server specification.

The SV6100G3 is a 1U double-wide brick server that complies with the LinkedIn-led Open19 Project standard, which defines a cross-industry common form factor applicable to EIA 19” racks. With the Open19-defined brick servers, cages and snap-on cables, operators can blind-mate both data and power connections to speed up rack deployment and enhance serviceability.

Based on the open source cloud hardware specification of Microsoft’s Project Olympus, the SV6100G3 features two Intel Xeon Processor Scalable family processors, up to 1.5TB of memory and one OCP Mezzanine NIC.

“Wiwynn has extensive experience in open IT gear design to bring TCO improvements to hyperscale data centers,” said Steven Lu, Vice President of Product Management at Wiwynn. “We are excited to introduce the Open19-based SV6100G3, which helps data center operators of all sizes benefit from the next generation of high-efficiency open standards with a lower entry barrier.”

Thursday, March 14, 2019

OCP 2019: Microsoft's Project Zipline offers better data compression

At OCP 2019, Microsoft unveiled Project Zipline, a new compression standard for data sets covering edge-to-cloud applications.

Project Zipline promises "compression without compromises," where always-on processing achieves high compression ratios with high throughput and low latency. Zipline encompasses algorithms, software, and silicon engines.

Microsoft estimates Zipline-compressed data set sizes at 4~8% of uncompressed sizes. Over time, Microsoft anticipates Project Zipline compression technology will make its way into network data processing, smart SSDs, archival systems, cloud appliances, general purpose microprocessors, IoT, and edge devices.
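
Zipline's algorithms and silicon engines are not publicly available to demonstrate here; purely to illustrate how a figure like "4~8% of uncompressed size" is measured, the following sketch uses Python's standard zlib module as a stand-in codec on highly repetitive data.

    # Compression-ratio illustration only; zlib is a stand-in codec,
    # not the Project Zipline algorithm.
    import zlib

    data = b"timestamp=2019-03-14 level=INFO msg=request-served bytes=512\n" * 10000

    compressed = zlib.compress(data, level=9)
    ratio = len(compressed) / len(data)
    print(f"compressed size is {ratio:.1%} of the original")
    # Repetitive data sets (logs, telemetry) routinely compress to a few percent
    # of their original size, the kind of ratio Microsoft cites for Zipline.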

A number of big name silicon and storage companies are already on board as partners.

https://azure.microsoft.com/en-us/blog/hardware-innovation-for-data-growth-challenges-at-cloud-scale/


OCP 2019: Arista's 12.8Tbps switch developed with Facebook

At OCP 2019, Arista Networks announced a high-radix 12.8Tbps switching system developed with Facebook with the goal of simplifying 100/400G networking.

The Arista 7360X Series doubles system density while reducing power consumption and cost by doubling the network diameter and reducing the number of required leaf-spine tiers. Full manageability via FBOSS (Facebook Open Switching Software) is supported for controlling power and thermal efficiency along with the control plane.

The new platform is a compact, four rack unit design and all active components are removable. It delivers a 60% reduction in power at under 10 watts per 100G port. Standards-based, the system comes with support for 100G QSFP and 400G OSFP or QSFP-DD optics and cables. Arista EOS delivers the advanced traffic management, automation and telemetry features needed to build and maintain modern cloud networks.

The Arista 7368X4 Series is available as an 8-slot modular system with a choice of 100G and 400G modules based on industry-standard interfaces and support for EOS.

It is currently shipping with 100G interfaces. Price per 100G port is under $600.

“The Arista solution has helped Facebook to gain significant improvements in power and space efficiency, reducing the number of switch chips in the network stack and allowing power to be freed up for compute resources,” said Najam Ahmad, Vice President Network Engineering for Facebook. “Having both an internally developed Minipack and the Arista solution allows Facebook to remain multi-sourced, with an option to run Arista EOS or FBOSS on both, where either system can be deployed in multiple tiers of networks.”

OCP 2019: New Open Domain-Specific Architecture sub-project

The Open Compute Project is launching an Open Domain-Specific Architecture (ODSA) sub-project to define an open interface and architecture that enables the mixing and matching of available silicon die from different suppliers onto a single SoC for data center applications. The goal is to define a process to integrate best-of-breed chiplets onto a SoC.

Netronome played a lead role initiating the new project.

“The open architecture for domain-specific accelerators being proposed by the ODSA Workgroup brings the benefits of disaggregation to the world of SoCs. The OCP Community led by hyperscale operators has been at the forefront driving disaggregation of server and networking systems. Joining forces with OCP, the ODSA Workgroup brings the next chapter of disaggregation for domain-specific accelerator SoCs as it looks toward enabling proof of concepts and deployable products leveraging OCP’s strong ecosystem of hardware and software developers,” said Sujal Das, chief marketing and strategy officer at Netronome.

"Coincident with the decline of Moore's law, the silicon industry is facing longer development times and significantly increased complexity. We are pleased to see the ODSA Workgroup become a part of the Open Compute Project. We hope workgroup members will help to drive development practices and adoption of best-of-breed chiplets and SoCs. Their collaboration has the potential to further democratize chip development, and ultimately reduce design overhead of domain-specific silicon in emerging use cases,” said Aaron Sullivan, Director Hardware Engineering at Facebook."

https://2019ocpglobalsummit.sched.com/event/JxrZ/open-domain-specific-architecture-odsa-sub-project-launch

Wiki page: https://www.opencompute.org/wiki/Server/ODSA

Mailing list: https://ocp-all.groups.io/g/OCP-ODSA

Netronome proposes open "chiplets" for domain specific workloads

Netronome unveiled its open architecture for domain-specific accelerators.

Netronome is collaborating with six leading silicon companies, Achronix, GLOBALFOUNDRIES, Kandou, NXP, Sarcina and SiFive, to develop this open architecture and related specifications for developing chiplets that promise to reduce silicon development and manufacturing costs.

The idea is for chiplet-based silicon to be composed from best-of-breed components such as processors, accelerators, and memory and I/O peripherals built on optimal process nodes. The open architecture will provide a complete stack of components (known good die, packaging, interconnect network, software integration stack) that lowers the hardware and software costs of developing and deploying domain-specific accelerator solutions. Implementing open specifications contributed by participating companies, any vendor’s silicon die can become a building block that can be utilized in a chiplet-based SoC design.

“Netronome’s domain-specific architecture as used in its Network Flow Processor (NFP) products has been designed from the ground up keeping modularity, and economies of silicon development and manufacturing costs as top of mind,” said Niel Viljoen, founder and CEO at Netronome. “We are extremely excited to collaborate with industry leaders and contribute significant intellectual property and related open specifications derived from the proven NFP products and apply that effectively to the open and composable chiplet-based architecture being developed in the ODSA Workgroup.”

OCP 2019: Broadcom intros OCP NIC 3.0 adapters

Broadcom introduced OCP NIC 3.0 adapters supporting the full range of data rates and interfaces from 1GbE to 200GbE. The portfolio includes a wide selection of Ethernet adapter cards with 1-, 2- and 4-port configurations.

The new 100GbE and 200GbE adapters, which are based on Broadcom's NetXtreme E-Series Ethernet controllers, also feature Broadcom’s Thor multi-host controller that has the industry’s best performing 56G PAM-4 SerDes and PCIe 4.0 interface. Sampling is underway.

“OCP NIC 3.0 provides a unified specification and form factor for connecting server and storage systems,” said Ed Redmond, senior vice president and general manager of the Compute and Connectivity Division at Broadcom. “With strong customer demand and virtually unanimous industry backing for this unified solution, our complete portfolio of OCP NIC 3.0 adapters facilitates broad adoption of this new form factor and drives further innovation in high performance computing and storage applications to address an ever-increasing demand for bandwidth.”

Tuesday, March 20, 2018

Netronome intros 25/50GbE SmartNICs based on the OCP v2.0 mezzanine spec

Netronome unveiled its Agilio CX 25 and 50Gb/s SmartNICs featuring support for the OCP v2.0 mezzanine specification.

The Agilio CX SmartNIC platform fully and transparently offloads virtual switch, router, P4 and eBPF-based datapath processing for networking functions such as overlays, security, load balancing and telemetry, enabling cloud and SDN-enabled compute and storage servers to free up critical server CPU cores for application processing while delivering significantly higher performance.

Netronome said its new Agilio SmartNICs are deployable in multiple OCP server and storage platforms, and pack 60 processing cores within stringent OCP v2.0 form factor and power profiles to deliver nine times higher kernel datapath processing capabilities per server to enhance security and data access efficiencies.

The Agilio CX 25/50GbE SmartNICs utilize open sourced, Linux-based, upstreamed drivers and compilers to enable seamless offload of Enhanced Berkeley Packet Filter/Express Data Processing (eBPF/XDP) applications.

“OCP designs are known to deliver size and cost-effective scale and performance. SoC silicon used in SmartNICs has typically involved much larger size, cost and power profiles while delivering lower performance,” said Sujal Das, chief marketing and strategy officer at Netronome. “Netronome’s unique SoC technology with open-source programming, available in Agilio CX 25/50GbE SmartNICs, enables the industry to realize the confluence of the value of OCP designs with the much sought-after capabilities of SmartNICs.”

Agilio CX 25GbE SmartNICs for OCP are sampling today, and Agilio CX 50GbE SmartNICs for OCP will sample in Q3 2018.

https://www.netronome.com/products/agilio-cx/

Saturday, March 3, 2018

TE Connectivity targets higher density switches with zQSFP+ Stacked Belly-to-Belly Cages

TE Connectivity (TE) introduced its zQSFP+ stacked belly-to-belly cages designed for high-density switches with 48 or 64 silicon ports. The cage supports a single printed circuit board (PCB) architecture (versus two PCBs) in each line card.

The company said this new design addresses the requirements for higher density switches, including Open Compute Project (OCP) reference designs. The cages support up to 28G NRZ and 56G PAM-4 data rates to achieve faster speeds in these high-density switches. TE's zQSFP+ stacked belly-to-belly cages are dual-sourced with Molex and are drop-in replacements.
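
For readers comparing the two rates: NRZ carries one bit per symbol while PAM-4 carries two, so a 56G PAM-4 lane runs at roughly the same symbol rate as a 28G NRZ lane. A small, illustrative sketch of that arithmetic:

    # Line-rate arithmetic for NRZ vs PAM-4 signalling (illustrative).
    def lane_rate_gbps(symbol_rate_gbaud: float, bits_per_symbol: int) -> float:
        return symbol_rate_gbaud * bits_per_symbol

    print(lane_rate_gbps(28, 1))  # NRZ:   1 bit per symbol  -> 28 Gbps
    print(lane_rate_gbps(28, 2))  # PAM-4: 2 bits per symbol -> 56 Gbps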

"These new zQSFP+ cages allow us to design denser switches while reducing costs by using just one PCB per line card," said Melody Chiang, product manager at Accton. "TE continually supports our efforts to design faster, denser switches, and this belly-to-belly configuration is just the latest example."

Tuesday, February 6, 2018

Open Compute Project measures its market impact

The Open Compute Project Foundation (OCP) has engaged IHS Markit to determine the adoption and impact of OCP gear in the technology industry.

IHS Markit interviewed OCP members, suppliers and service providers, as well as incorporated their own in-depth industry research to determine non-board member revenue by region and vertical, as well as provide a forecast through 2021. OCP Board member companies include Facebook, Goldman Sachs, Intel, Microsoft and Rackspace. Equipment markets explored in this study included servers, storage, network, rack, power and peripherals.

Some preliminary findings:
  • 2017 OCP YoY growth from non-board member companies was 103%
  • The 5-year CAGR (compound annual growth rate) is 59%, while the total market growth is expected to be in the low single digits (see the worked example after this list)
  • Servers account for almost 75% of non-board OCP revenue in 2017, with rack, power, peripherals and other (primarily WiFi and PON, or passive optical networks) expecting the highest growth rates
  • The Americas represented the majority of non-board OCP revenue in 2017, through hyperscaler, telco and financial industry adoption, while EMEA has a forecasted CAGR of 70%, primarily driven by telecommunications firms
  • EMEA revenue from non-board member companies is expected to surpass $1 billion (US) by 2021, while Asia Pacific is expected to surpass EMEA in adoption as early as 2020
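
For reference, a compound annual growth rate such as the five-year figure above is computed as (end / start)^(1/years) - 1. A short sketch with placeholder revenue numbers (not IHS Markit data):

    # CAGR = (end_value / start_value) ** (1 / years) - 1  (placeholder numbers).
    def cagr(start_value: float, end_value: float, years: int) -> float:
        return (end_value / start_value) ** (1 / years) - 1

    # Example: revenue growing from 1.0 to 10.0 (arbitrary units) over 5 years.
    print(f"{cagr(1.0, 10.0, 5):.0%}")  # ~58%, in the ballpark of the CAGR cited above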

“OCP is excited to work with IHS Markit to get an independent view of our ability to influence the market through adoption. This study creates a baseline for us to measure our progress against, as well as gives us insight into projected growth in regions and markets. It also provides a view into perceived value as well as barriers for adoption. While we are pleased with the initial indicators, we also recognize we have much to do to continue our momentum,” stated Rocky Bullock, CEO for the Open Compute Project Foundation.


Wednesday, September 6, 2017

Radisys Announces Next-Generation DCEngine Hardware

Radisys released its next-generation DCEngine hardware, a pre-packaged rack solution based on Open Compute Project (OCP) principles and designed to transition Communications Service Providers (CSPs) to virtualized data centers.

DCEngine, which is based on the OCP-ACCEPTED CG-OpenRack-19 specification, leverages Intel Xeon Scalable processors. It supports Intel Xeon Scalable Architecture-based compute and storage sleds, with a wide range of processing options that can be installed and tuned up inside existing DCEngine systems in minutes. DCEngine meets CSP requirements for an enhanced, scalable power system that delivers 25,000W per feed for higher processor density, greater efficiency and lower expenses, as well as DC and AC power entry options suitable for a wide range of environments. It also offers an in-rack Uninterruptible Power Supply (UPS) option to support simplified infrastructure, easy maintenance and lower overhead. Radisys delivers the final pre-assembled DCEngine rack with no on-site setup.

Radisys said its next-gen DCEngine supports CSPs’ transition away from proprietary hardware and vendor lock-in to a data center environment built with open source software and hardware components. The enhanced rack design, combined with operations and support modeled after Facebook practices, can bring an annual OpEx saving of nearly 40 percent compared to traditional data center offerings, while reducing deployment time from months to just days.

“Our CSP customers are requiring open telecom solutions to support their data center transformations, easing their pain points around power and costs, while simplifying their operational complexities,” said Bryan Sadowski, vice president, FlowEngine and DCEngine, Radisys. “With the next-generation DCEngine, combined with Radisys’ deep telco expertise and OCP’s operations/support model, service providers not only get innovation and service agility, but also gain significant TCO savings.”

http://www.radisys.com

Friday, June 30, 2017

AT&T to launch software-based 10G XGS-PON trial

AT&T announced it will conduct a 10 Gbit/s XGS-PON field trial in late 2017 as it progresses with plans to virtualise access functions within the last mile network.

The next-generation PON trial is designed to deliver multi-gigabit Internet speeds to consumer and business customers, and to enable all services, including 5G wireless infrastructure, to be converged onto a single network.

AT&T noted that XGS-PON is a fixed wavelength symmetrical 10 Gbit/s passive optic network technology that can coexist with the current GPON technology. The technology can provide 4x the downstream bandwidth of the existing system, and is as cost-effective to deploy as GPON. As part of its network virtualisation initiative, AT&T plans to place some XGS-PON in the cloud with software leveraging open hardware and software designs to speed development.
AT&T has worked with ON.Lab to develop and test ONOS (Open Network Operating System) and VOLTHA (Virtual Optical Line Terminator Hardware Abstraction) software. This technology allows the lower level details of the silicon to be hidden. AT&T stated that it has also submitted a number of open white box XGS OLT designs to the Open Compute Project (OCP) and is currently working with the project to gain approval for the solutions.

The company noted that interoperability is a key element of its Open Access strategy, and prompted the creation of an OpenOMCI specification, which provides an interoperable interface between the OLT and the home devices. This specification, which forms a key part of software-defined network (SDN) and network function virtualisation (NFV), has been distributed to standards and open source communities.



  • AT&T joined OCP in January 2016 to support its network transformation program. Earlier this year at the OCP Summit Edgecore Networks, a provider of open networking solutions and a subsidiary of Accton Technology, announced design contributions to OCP including a 25 Gigabit Ethernet top-of-rack switch and high-density 100 Gigabit Ethernet spine switch. The company also showcased new open hardware platforms.
  • At the summit, Edgecore displayed a disaggregated virtual OLT for PON deployment at up to 10 Gbit/s, based on the AT&T Open XGS-PON 1 RU OLT specification that was contributed to the OCP Telco working group.
  • Edgecore's ASFvOLT16 disaggregated virtual OLT is based on the AT&T Open XGS-PON 1 RU OLT specification and features Broadcom StrataDNX switch and PON MAC SoC silicon, offering 16 ports of XGS-PON or NG-PON2 plus 4 x QSFP28 ports, and is designed for next-generation PON deployments and R-CORD telecom infrastructure.
