Friday, March 15, 2019

Microsoft and Facebook collaborate on co-packaged optics

Microsoft and Facebook have established a Co-Packaged Optics (CPO) Collaboration with the goal of encouraging common design elements for bridging optics and ASICs.

The collaboration intends to provide open specifications for design elements, including the electrical signaling interface, optical standard, optical module management interface and reliability requirements. When complete, the open specifications will enable the industry to develop a set of solutions in which switch and ASIC manufacturers, optics suppliers, contract manufacturers (CMs) and others create the final package, which can then be attached to the switch PCB. The collaboration has targeted the 51.2T switch generation as the tipping point for industry adoption of co-packaged optics.

"The Co-Packaged Optics Collaboration will provide a customer-driven, system-level view of requirements for co-packaged optics," said Katharine Schmidtke, director, Technology Sourcing, Facebook, responsible for the company's optical technology strategy. "By sharing the specifications, we aim to develop a diverse and innovative supplier ecosystem."

"Providing the industry with a customer-supported set of requirements will create a stable, cooperative environment where suppliers can address one of the optical industry's most important technical challenges," said Jeff Cox, partner director, Network Architecture, Microsoft and executive director of the CPO Collaboration. "As co-founders of the Co-Packaged Optics Collaboration, Microsoft and Facebook invite customers and suppliers to join and collaborate with us."

OCP 2019: Edgecore debuts "Minipack" Switch for 100G and 400G

At OCP Summit 2019, Edgecore Networks introduced an open modular switch for 100G and 400G networking that conforms to the Minipack Fabric Switch design contributed by Facebook to the Open Compute Project (OCP).

Minipack is a disaggregated whitebox system providing a flexible mix of 100GbE and 400GbE ports up to a system capacity of 12.8Tbps.

The Minipack switch can support a mix of 100G and 400G Ethernet interfaces up to a maximum of 128x100G or 32x400G ports. Minipack is based on Broadcom StrataXGS Tomahawk 3 Switch Series silicon capable of line rate 12.8Tbps Layer2 and Layer3 switching.

The Minipack front panel has eight slots for port interface modules (PIM). The first PIM options available for the Edgecore Minipack switch are the PIM-16Q with 16x100G QSFP28 ports, and the PIM-4DD with 4x400G QSFP-DD ports. The Minipack modular switch is a 4U form factor, power optimized for data center deployments, and includes hot-swappable redundant power supplies and fans for high availability.
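
Those headline figures are easy to sanity-check: eight PIM slots at 16x100G or 4x400G per module multiply out to the stated 128x100G and 32x400G maximums. The short sketch below simply works through that arithmetic using the numbers quoted above (an illustration, not something taken from Edgecore documentation):

```python
# Back-of-the-envelope check of the Minipack capacity figures quoted above:
# 8 PIM slots, with PIM-16Q = 16x100G and PIM-4DD = 4x400G per module.

PIM_SLOTS = 8
PIM_OPTIONS = {
    "PIM-16Q": {"ports_per_module": 16, "gbps_per_port": 100},
    "PIM-4DD": {"ports_per_module": 4, "gbps_per_port": 400},
}

for name, pim in PIM_OPTIONS.items():
    total_ports = PIM_SLOTS * pim["ports_per_module"]
    total_tbps = total_ports * pim["gbps_per_port"] / 1000
    print(f"{name}: {total_ports} x {pim['gbps_per_port']}G ports = {total_tbps} Tbps")

# PIM-16Q: 128 x 100G ports = 12.8 Tbps
# PIM-4DD: 32 x 400G ports = 12.8 Tbps
```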

Edgecore said its Minipack AS8000 Switch enables network operators to select disaggregated NOS and SDN software options from commercial partners and open source communities to address different use cases and operational requirements. Edgecore has ported and validated Software for Open Networking in the Cloud (SONiC), the OCP open source software platform, on the Minipack AS8000 Switch as an open source option for high capacity data center fabrics. In addition, Cumulus Networks announced the availability of its Cumulus Linux operating system for the Edgecore Minipack switch.

“Network operators are demanding open network solutions to increase their network capacities with 400G and higher density 100G switches based on open technology. The Edgecore Minipack switch broadens our full set of OCP Accepted open network switches, and enables data center operators to deploy higher capacity fabrics with flexible combinations of 100G and 400G interfaces and pay-as-you-grow expansion,” said George Tchaparian, CEO, Edgecore Networks. “The open and modular design of Minipack will enable Edgecore and partners to address more data center and service provider use cases in the future by developing innovative enhancements such as additional interface modules supporting encryption, multiple 400G port types, coherent optical ports and integrated optics, plus additional Minipack Switch family members utilizing deep-buffer or highly programmable or next-generation switching silicon in the same flexible modular form factor.”

“Facebook designed Minipack as a fabric switch with innovative performance, power optimization and modularity to enable our deployment of the next generation data center fabrics,” said Hans-Juergen Schmidtke, Director of Engineering, Facebook. “We have contributed the Minipack design to OCP in order to stimulate additional design innovation and to facilitate availability of the platform to network operators. We welcome Edgecore’s introduction of Minipack as a commercial whitebox product.”

The Minipack AS8000 Switch with PIM-16Q 100G QSFP28 interface modules will be available from Edgecore resellers and integrators worldwide in Q2. PIM-4DD 400G QSFP-DD interface modules will be available in Q3. SONiC open source software, including platform drivers for the Edgecore Minipack AS8000 Switch, is available from the SONiC GitHub.

OCP 2019: Netronome unveils 50GbE SmartNICs

Netronome unveiled its Agilio CX 50GbE SmartNICs in OCP Mezzanine 2.0 form factor with line-rate advanced cryptography and 2GB onboard DDR memory.

The Agilio CX SmartNIC platform fully and transparently offloads virtual switch, virtual router, eBPF and P4-based datapath processing for networking functions such as overlays, security, load balancing and telemetry. This allows cloud and SDN-enabled compute and storage servers to free up critical server CPU cores for application processing while delivering significantly higher networking performance.

Netronome said its new SmartNIC significantly reduces tail latency, enabling high-performance Web 2.0 applications to be deployed in cost- and energy-efficient servers. With line-rate Transport Layer Security (TLS/SSL) cryptography support and up to two million stateful sessions per SmartNIC, web and data storage servers in hyperscale environments can be secured more tightly than before, helping protect networks and user data from attack.

Deployable in OCP Yosemite servers, the Agilio CX 50GbE SmartNICs implement a standards-based, open advanced buffer management scheme enabled by the many-core, multithreaded, memory-based processing architecture of the Netronome Network Flow Processor (NFP) silicon. This improves application performance and enables hyperscale operators to maintain high levels of service level agreements (SLAs). Dynamic eBPF-based programming and hardware acceleration enable intelligent scaling of networking workloads across multiple host CPU cores, improving server efficiency. The solution also enhances security and data center efficiency by offloading TLS, a widely deployed protocol used for encryption and authentication of applications that require data to be securely exchanged over a network.

“Securing user data in Web 2.0 applications and preventing malicious attacks such as BGP hijacking as experienced recently in hyperscale operator infrastructures are critical needs that have exacerbated significantly in recent years,” said Sujal Das, chief marketing and strategy officer at Netronome. “Netronome developed the Agilio CX 50GbE SmartNIC solution to address these vital industry requirements by meticulously optimizing the hardware with open source and hyperscale operator applications and infrastructures.”

Agilio CX 50GbE SmartNICs in OCP Mezzanine 2.0 form factor are sampling today and include the generally available NFP-5000 silicon. The production version of the board and software is expected in the second half of this year.

OCP 2019: Ixia and Marvell conduct 12.8Tbps 400GE test

Keysight Technologies announced a joint demonstration of Ixia’s AresOne-400 Gigabit Ethernet (GE) test system and the Marvell Prestera CX 8580 Ethernet switch.

The Marvell Prestera CX 8580 switch, a 12.8Tbps, 256x50G device, is part of a new family of switches from Marvell that offers workflow visibility and analytics with its Storage Aware Flow Engine (SAFE) technology, along with a reduction in network layers through its high-radix switch core technology known as FASTER.

“We are excited to showcase our newly announced Marvell Prestera CX 8500 family with a powerful RFC compliant 32x400G demonstration. This feature-rich family leverages the testing capability of Ixia’s AresONE test equipment to test the scale, 12.8Tbps, and wide range of packet encapsulations that are supported by the switch pipeline,” said Guy Azrad, vice president of engineering, Networking Business Unit and general manager at Marvell Israel. “This collaboration testifies to both organizations’ ability to support the design, deployment and testing of the next generation, high-speed network infrastructure that will be needed to keep pace with ever-growing data demands.”

“This demonstration highlights the capabilities of the AresONE-400GE test system and the Marvell Prestera CX 8580 switch to support real-world applications in the data center,” said Sunil Kalidindi, vice president of product management at Keysight’s Ixia Solutions Group. “We are proud to showcase the world’s first and only full-box 12.8Tbps 400GE test with Marvell and demonstrate the maturity of our solutions as 400GE rapidly becomes mainstream.”

OCP 2019: Inspur and Intel contribute 4-socket Crane Mountain design

Inspur and Intel will contribute a jointly-developed, cloud-optimized platform code named "Crane Mountain" to the OCP community.

The four-socket platform is a high-density, flexible and powerful 2U server, validated for Intel Xeon (Cascade Lake) processors and optimized with Intel Optane DC persistent memory.

Inspur said its NF8260M5 system is being used by Intel as a lead platform for introducing the “high-density cloud-optimized” four-socket server solution to the cloud service provider (CSP) market.

At OCP Summit 2019, Inspur also showcased three new artificial intelligence (AI) computing solutions, and announced the world’s first NVSwitch-enabled 16-GPU fully connected GPU expansion box, the GX5, which is also part of an advanced new architecture that combines the 16-GPU box with an Inspur 4-socket Olympus server. This solution features 80 CPU cores, making it suitable for deep-learning applications that require maximum throughput across multiple workloads. The Inspur NF8360M5 4-socket Olympus server is going through the OCP Contribution and OCP Accepted recognition process.

Inspur also launched the 8-GPU box ON5388M5 with NVLink 2.0, as a new OCP contribution-in-process for 8-GPU box solutions. The Inspur solution offers two new topologies for different AI applications, such as autonomous driving and voice recognition.




Alan Chang discusses Inspur's contributions to the Open Compute Project, including a high-density, cloud-optimized platform code-named “Crane Mountain”.

This four-socket platform is a high-density, flexible and powerful 2U server, validated for Cascade Lake processors and optimized with Intel Optane DC persistent memory. It is designed and optimized for cloud Infrastructure-as-a-Service, Function-as-a-Service and bare-metal-as-a-service solutions.

https://youtu.be/JZj-arumtD0


OCP 2019 video: Introducing Carrier Open Infrastructure



Bob Lamb of CBTS introduces Carrier Open Infrastructure (COI), a reference architecture based on frameworks from the Open Networking Foundation (ONF), open source hardware from the Open Compute Project (OCP) and Open Source VNFs.

CBTS said its goal is to help carriers leverage open source virtual networking functions (VNFs) and common, off-the-shelf (COTS) hardware to grow revenue as broadband speeds increase and average revenue per-subscriber (ARPU) declines.

The COI architecture leverages the ONF's Central Office Redefined as a data center (CORD) framework for enabling gigabit access over copper, fiber and wireless.


OCP 2019: Toshiba tests NVM Express over Fabrics

At OCP Summit 2019, Toshiba Memory America demonstrated proof-of-concept native Ethernet NVMe-oF (NVM Express over Fabrics) SSDs.

Toshiba Memory also showed its KumoScale software, a key NVMe-oF enabler for disaggregated storage cloud deployments. KumoScale, first introduced last year, has recently been enhanced with support for TCP-based networks.

OCP 2019: Wiwynn intros Open19 server based on Project Olympus

At OCP 2019, Wiwynn introduced an Open19 server based on Microsoft’s Project Olympus server specification.

The SV6100G3 is a 1U double-wide brick server that complies with the LinkedIn-led Open19 Project standard, which defines a cross-industry common form factor applicable to EIA 19” racks. With the Open19-defined brick servers, cages and snap-on cables, operators can blind-mate both data and power connections to speed up rack deployment and enhance serviceability.

Based on the open source cloud hardware specification of Microsoft’s Project Olympus, the SV6100G3 features two Intel Xeon Scalable processors, up to 1.5TB of memory and one OCP Mezzanine NIC.

“Wiwynn has extensive experience in open IT gears design to bring TCO improvement for hyperscale data centers,” said Steven Lu, Vice President of Product Management at Wiwynn. “We are excited to introduce the Open19 based SV6100G3 which assists data center operators of all sizes to benefit from the next generation high-efficiency open standards with lower entry barrier.”

FCC seeks innovation in spectrum above 95 GHz

The FCC adopted new rules allowing for the development of new services in the spectrum above 95 GHz.

Specifically, the FCC is creating a new category of experimental licenses for use of frequencies between 95 GHz and 3 THz. The goal is to give innovators the flexibility to conduct experiments lasting up to 10 years, and to more easily market equipment during the experimental period.

The item also makes a total of 21.2 gigahertz of spectrum available for use by unlicensed devices. The FCC said it selected bands with propagation characteristics that will permit large numbers of unlicensed devices to use the spectrum, while limiting the potential for interference to existing governmental and scientific operations in the above-95 GHz bands, such as space research and atmospheric sensing.

https://www.fcc.gov/document/fcc-opens-spectrum-horizons-new-services-technologies

CBRS milestone: Commscope and Google pass test

The Institute for Telecommunication Sciences (ITS) has given a passing grade to a Citizens Broadband Radio Service (CBRS) Environmental Sensing Capability (ESC) system developed by CommScope and Google.

ITS, which is part of the National Telecommunications and Information Administration (NTIA), is the official test lab that has been tasked with confirming the performance of ESCs.

CBRS provides 150 MHz of spectrum in the 3.5 GHz band in the U.S. CBRS spectrum is managed by Spectrum Access Systems (SASs) but will require an ESC network to detect federal radar operations. The ESC will alert the SASs of federal radar activity, and SASs will then reconfigure nearby CBRS devices to operate without interfering with federal operations.
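
The coordination flow described above is simple to sketch in code. The following is a heavily simplified, hypothetical illustration of the ESC-to-SAS logic; the class and method names are invented for clarity and do not represent the WInnForum SAS-CBSD protocol or the CommScope/Google implementations:

```python
# Hypothetical sketch of the ESC -> SAS -> CBSD coordination described above.
# Names and data structures are illustrative, not the actual protocol.

from dataclasses import dataclass

@dataclass
class CbsdGrant:
    cbsd_id: str
    channel_mhz: tuple  # (low, high) within the 3550-3700 MHz CBRS band
    near_coast: bool    # inside an ESC-protected coastal zone

class SpectrumAccessSystem:
    def __init__(self, grants):
        self.grants = grants

    def on_esc_alert(self, protected_channel_mhz):
        """ESC reported federal radar activity on this channel; move affected CBSDs."""
        low, high = protected_channel_mhz
        for grant in self.grants:
            overlaps = not (grant.channel_mhz[1] <= low or grant.channel_mhz[0] >= high)
            if grant.near_coast and overlaps:
                # In practice the SAS would suspend or relinquish the grant and
                # issue a new one on a clear channel; here we simply reassign
                # to an arbitrarily chosen clear channel for the sketch.
                grant.channel_mhz = (3650, 3660)
                print(f"Reassigned {grant.cbsd_id} away from the radar-occupied channel")

sas = SpectrumAccessSystem([CbsdGrant("cbsd-001", (3550, 3560), near_coast=True)])
sas.on_esc_alert(protected_channel_mhz=(3550, 3570))
```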

“Our ESC sensor has passed all required testing for certification - demonstrating that we can detect all current and future radar waveforms and our respective SASs can protect incumbent users,” said Mat Varghese, Senior Product Manager, Wireless Services, Google. “This is an important milestone and we are looking ahead toward commercial operations in CBRS.”


“We are pleased that our ESC sensor, as expected, has passed all testing from the lab and is on track for the next phase,” said Mike Guerin, vice president of Integrated Solutions, CommScope. “We look forward to initial commercial deployment and working with customers and federal agencies to ensure success.”

The joint CommScope/Google ESC network is currently being deployed and is expected to be completed by the end of the year. CommScope and Google will each own and operate independent SAS systems which will provide service using the jointly operated ESC network.

CommScope and Google develop Environmental Sensing for CBRS

CommScope and Google agreed to jointly develop, deploy and operate an Environmental Sensing Capability (ESC) network for the Citizens Broadband Radio Service (CBRS) market.

CBRS spectrum is managed by Spectrum Access Systems (SASs), which require an ESC network to sense radar operation. The ESC will alert the SASs of naval radar operations so the connected SAS systems can reconfigure spectrum allocations for nearby CBRS devices to operate without interfering with naval activity.


The companies said they will each provide independent SAS services and jointly operate the ESC network. The ESC network is engineered for high availability with the built-in redundancy and fault detection necessary to provide this key enabling capability. As part of this collaboration, both companies share responsibility for overall network design.

Google has developed the ESC sensor and cloud decision engine and will operate the cloud that communicates with each SAS. CommScope will deploy and manage the operation of the physical network. CommScope and Google are working with the FCC and other governmental agencies to obtain certification of the ESC.

Thursday, March 14, 2019

OCP 2019: Facebook rethinks data center fabric

At this week's OCP Summit in San Jose, Facebook released details on how it is rethinking its data center fabric. FBOSS still binds the data centers together, but there are significant changes to ensure that a single code image and the same overall systems can support multiple generations of data center topologies and an increasing number of hardware platforms.

Facebook's next-generation "F16" data center fabric design offers 4x the capacity of its predecessor while promising to be more scalable and simpler to operate and evolve. The fabric leverages commercially available 100G CWDM4-OCP optics, achieving the desired 4x capacity increase that a jump to 400G link speeds would have provided, but with 100G optics and a larger number of parallel links.
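
As a rough illustration of that trade-off, the sketch below compares per-rack uplink bandwidth for a few fan-out choices. The specific link counts (4 uplinks previously, 16 x 100G in F16) are assumptions used for the arithmetic, not figures from this article:

```python
# Illustrative per-rack uplink comparison. The link counts are assumptions
# for the sake of the arithmetic, not Facebook's published design figures.

def rack_uplink_gbps(links, gbps_per_link):
    return links * gbps_per_link

previous_fabric   = rack_uplink_gbps(4, 100)   # 400 Gbps per rack
f16_fabric        = rack_uplink_gbps(16, 100)  # 1,600 Gbps, using 100G optics only
hypothetical_400g = rack_uplink_gbps(4, 400)   # 1,600 Gbps, but requires 400G optics

print(f"F16 vs. previous fabric: {f16_fabric / previous_fabric:.0f}x capacity")    # 4x
print(f"F16 vs. 4 x 400G design: {f16_fabric / hypothetical_400g:.0f}x capacity")  # 1x (parity)
```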

The refreshed fabric includes Minipack, a new modular switch developed by Facebook. Minipack integrates easily into various data center topologies, and Facebook calculates that it will consume 50 percent less power and space than its predecessor. Its modularity enables it to serve multiple roles in the new topologies.

In addition to Minipack, Facebook also jointly developed Arista Networks’ 7368X4 switch.

Both Minipack and the Arista 7368X4 are being contributed to OCP, and both run FBOSS.

Facebook has also developed HGRID as the evolution of Fabric Aggregator to handle the doubling of buildings per region.

https://code.fb.com/data-center-engineering/f16-minipack/


OCP 2019: Microsoft's Project Zipline offers better data compression

At OCP 2019, Microsoft unveiled Project Zipline, a new compression standard for data sets spanning edge to cloud applications.

Project Zipline promises "compression without compromises," with always-on compression achieving high compression ratios at high throughput and low latency. Zipline encompasses algorithms, software, and silicon engines.

Microsoft estimates Zipline-compressed data sets at roughly 4 to 8 percent of their uncompressed sizes. Over time, Microsoft anticipates Project Zipline compression technology will make its way into network data processing, smart SSDs, archival systems, cloud appliances, general-purpose microprocessors, IoT, and edge devices.
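
Read as a compression ratio, that 4 to 8 percent estimate works out as follows (simple arithmetic on the figure quoted above, not a Microsoft benchmark):

```python
# Convert "compressed size is 4-8% of the uncompressed size" into ratio terms.
for pct in (4, 8):
    ratio = 100 / pct          # e.g. 4% -> 25:1
    savings = 100 - pct        # e.g. 4% -> 96% space saved
    print(f"{pct}% of original size -> {ratio:.1f}:1 compression ratio ({savings}% space saved)")
```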

A number of big name silicon and storage companies are already on board as partners.

https://azure.microsoft.com/en-us/blog/hardware-innovation-for-data-growth-challenges-at-cloud-scale/


OCP 2019: Arista's 12.8Tbps switch developed with Facebook

At OCP 2019, Arista Networks announced a high-radix 12.8Tbps switching system developed with Facebook with the goal of simplifying 100/400G networking.

The Arista 7360X Series doubles system density while reducing power consumption and cost by doubling the network radix and reducing the number of required leaf-spine tiers. Full manageability via FBOSS (Facebook Open Switching Software) is supported for controlling power and thermal efficiency along with the control plane.

The new platform is a compact, four-rack-unit design in which all active components are removable. It delivers a 60% reduction in power at under 10 watts per 100G port. Standards-based, the system supports 100G QSFP and 400G OSFP or QSFP-DD optics and cables. Arista EOS delivers the advanced traffic management, automation and telemetry features needed to build and maintain modern cloud networks.

The Arista 7368X4 Series is available as an 8-slot modular system with a choice of 100G and 400G modules based on industry-standard interfaces and support for EOS.

It is currently shipping with 100G interfaces. Price per 100G port is under $600.
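
Scaling those per-port figures to a fully populated chassis gives a feel for the system-level ceilings. The sketch below assumes a full 128 x 100G configuration; the totals are illustrative upper bounds implied by the quoted numbers, not Arista specifications:

```python
# Upper bounds implied by the per-port figures quoted above, assuming a fully
# populated 128 x 100G configuration (an assumption for illustration only).

ports_100g = 128
max_watts_per_port = 10
max_price_per_port_usd = 600

print(f"Power ceiling:      {ports_100g * max_watts_per_port:,} W")       # 1,280 W
print(f"List-price ceiling: ${ports_100g * max_price_per_port_usd:,}")    # $76,800
```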

“The Arista solution has helped Facebook to gain significant improvements in power and space efficiency, reducing the number of switch chips in the network stack and allowing power to be freed up for compute resources,” said Najam Ahmad, Vice President Network Engineering for Facebook. “Having both an internally developed Minipack and the Arista solution allows Facebook to remain multi-sourced, with an option to run Arista EOS or FBOSS on both, where either system can be deployed in multiple tiers of networks.”

OCP 2019: Facebook plans own fiber linking data centers in Ohio, VA, NC

As previously disclosed, Facebook built its own 200-mile underground fiber cable between its data centers in New Mexico and Texas. The cable is described as "one of the highest-capacity systems in the United States, with state-of-the-art optical fiber."

Facebook now plans new fiber builds between its data centers in Ohio, Virginia, and North Carolina.

This blog posting by Facebook's Kevin Salvadori discusses the company's fiber deployments.


Facebook presentation: Optics Inside the Data Center

Mark McKillop, Network Engineer at Facebook, and Katharine Schmidtke, Sourcing Manager of Network Hardware at Facebook, talk about challenges in Facebook's optical networks, both in backbone and in data centers.

The first part of the video covers the optical systems used to connect Facebook's POPs and data centers.

The second part discusses optical scaling challenges inside the data centers, including the potential for onboard optics in future systems.

This 30-minute video presentation was recorded at Facebook's Networking@Scale 2018 event in June in California.

See video:
https://www.facebook.com/atscaleevents/videos/2090069407932819/

OCP 2019: New Open Domain-Specific Architecture sub-project

The Open Compute Project is launching an Open Domain-Specific Architecture (ODSA) sub-project to define an open interface and architecture that enables the mixing and matching of available silicon die from different suppliers onto a single SoC for data center applications. The goal is to define a process for integrating best-of-breed chiplets into an SoC.

Netronome played a lead role initiating the new project.

“The open architecture for domain-specific accelerators being proposed by the ODSA Workgroup brings the benefits of disaggregation to the world of SoCs. The OCP Community led by hyperscale operators has been at the forefront driving disaggregation of server and networking systems. Joining forces with OCP, the ODSA Workgroup brings the next chapter of disaggregation for domain-specific accelerator SoCs as it looks toward enabling proof of concepts and deployable products leveraging OCP’s strong ecosystem of hardware and software developers,” said Sujal Das, chief marketing and strategy officer at Netronome.

"Coincident with the decline of Moore's law, the silicon industry is facing longer development times and significantly increased complexity. We are pleased to see the ODSA Workgroup become a part of the Open Compute Project. We hope workgroup members will help to drive development practices and adoption of best-of-breed chiplets and SoCs. Their collaboration has the potential to further democratize chip development, and ultimately reduce design overhead of domain-specific silicon in emerging use cases,” said Aaron Sullivan, Director Hardware Engineering at Facebook."

https://2019ocpglobalsummit.sched.com/event/JxrZ/open-domain-specific-architecture-odsa-sub-project-launch

Wiki page: https://www.opencompute.org/wiki/Server/ODSA

Mailing list: https://ocp-all.groups.io/g/OCP-ODSA

Netronome proposes open "chiplets" for domain specific workloads

Netronome unveiled its open architecture for domain-specific accelerators.

Netronome is collaborating with six leading silicon companies (Achronix, GLOBALFOUNDRIES, Kandou, NXP, Sarcina and SiFive) to develop this open architecture and related specifications for chiplets that promise to reduce silicon development and manufacturing costs.

The idea is for chiplet-based silicon to be composed from best-of-breed components, such as processors, accelerators, and memory and I/O peripherals, each built on its optimal process node. The open architecture will provide a complete stack of components (known good die, packaging, interconnect network, software integration stack) that lowers the hardware and software costs of developing and deploying domain-specific accelerator solutions. By implementing open specifications contributed by participating companies, any vendor’s silicon die can become a building block for a chiplet-based SoC design.

“Netronome’s domain-specific architecture as used in its Network Flow Processor (NFP) products has been designed from the ground up keeping modularity, and economies of silicon development and manufacturing costs as top of mind,” said Niel Viljoen, founder and CEO at Netronome. “We are extremely excited to collaborate with industry leaders and contribute significant intellectual property and related open specifications derived from the proven NFP products and apply that effectively to the open and composable chiplet-based architecture being developed in the ODSA Workgroup.”

OCP 2019: CBTS brings Carrier Open Infrastructure based on OpenCORD

CBTS (formerly Cincinnati Bell Technology Solutions) announced its Carrier Open Infrastructure (COI) reference architecture based on frameworks from the Open Networking Foundation (ONF), open source hardware from the Open Compute Project (OCP) and Open Source VNFs.

CBTS said its goal is to help carriers leverage open source virtual networking functions (VNFs) and common, off-the-shelf (COTS) hardware to grow revenue as broadband speeds increase and average revenue per-subscriber (ARPU) declines.

The COI architecture leverages the ONF's Central Office Redefined as a data center (CORD) framework for enabling gigabit access over copper, fiber and wireless.



CBTS 10G XGS-PON Access Solutions
OpenOLT
CO-OLT24XG-PON is a powerful next-generation OpenCORD-compatible 1RU PON access platform designed for remote terminal (RT) and/or central office (CO) applications. It features:

• High-performance processor to ensure device stability and OpenFlow control plane performance
• Interoperability with SDN controllers, including OpenDaylight, ONOS and commercial controllers
• 24 x XFP XGS-PON ports + 6 x 100GE ports
• G.9807.1 10G PON MAC
• Up to 256 ONTs and 2,048 service flows per PON port (see the quick scale arithmetic after this list)
• Non-blocking, line-rate architecture to forward packet flows at wire speed on all ports
• Deep packet buffers for high-speed packet processing
• HQoS support
• Service rate limiting for both upstream (U/S) and downstream (D/S) traffic
• Flexibility to define a wide range of match-action table processing (OpenFlow 1.3 with multi-table pipelines)
• Guaranteed fast failover (link or device) by supporting a large number of flow modifications per second
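
For a sense of scale, the per-port limits above multiply out as follows. The sketch uses the nominal 10 Gbps XGS-PON line rate; real usable throughput is lower once PON framing and FEC overhead are taken into account, so treat the per-ONT figure as an upper bound:

```python
# Rough scale figures for the OLT described above: 24 XGS-PON ports, with up
# to 256 ONTs and 2,048 service flows per port. The nominal 10 Gbps line rate
# is used; usable throughput is lower after PON framing/FEC overhead.

pon_ports = 24
onts_per_port = 256
flows_per_port = 2048
nominal_line_rate_mbps = 10_000

print(f"Max ONTs per chassis:          {pon_ports * onts_per_port:,}")    # 6,144
print(f"Max service flows per chassis: {pon_ports * flows_per_port:,}")   # 49,152
print(f"Per-ONT share at full split:   ~{nominal_line_rate_mbps / onts_per_port:.0f} Mbps")  # ~39 Mbps
```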

OpenONU
• XGS-PON SFU (Single Family Unit) XG-99K – ITU-T G.9807 compliant, with a symmetric 10 Gbps downstream and upstream XGS-PON interface supporting triple-play services including voice, video, and high-speed internet access
  ◦ Compliant with the standard OMCI definition, manageable remotely, and supports the full range of FCAPS functions including supervision, monitoring and maintenance
• XGS-PON ONT SFP XG-99S Plug-in – ITU-T G.9807 compliant XGS-PON interface that replaces the existing Ethernet SFP+ module in Ethernet gateways, switches, routers and backhaul equipment
  ◦ OMCI stack provides all XGS-PON functionalities and the full range of FCAPS management features including supervision, monitoring and maintenance

“We developed COI and our new OCP-based optical access solutions in response to the pressing need we see to support carriers straining to conduct national expansion initiatives in the face of ongoing subscriber demand for higher bandwidth and increased competition driving subscription fees down,” said Greg Harrison, SVP of Service Provider at OnX/CBTS. “These developments build on our long history of success in SDN projects for the world’s largest carriers and on our deep commitment to industry open source initiatives including ONF and OCP. We look forward to continuing to build on the momentum we have created with COI, and to innovating even further with the help of our growing community.” 

OCP 2019: Juniper integrates with SONiC

Juniper Networks will offer native integration of its platforms with Software for Open Networking in the Cloud (SONiC), which was developed and contributed to the Open Compute Project (OCP) Foundation by Microsoft.

SONiC is an extensible network switch operations and management platform with a large and growing ecosystem of hardware and software partners.

Juniper said native integration with SONiC underscores its commitment to open programmability, complete disaggregation and expanded solutions to support cloud-first enterprises. Specifically, the integration will offer cloud and service provider customers:


  • Open programmability: Allows for the rapid integration, agility and flexibility necessary for enterprise end users looking to swiftly adapt to market changes.
  • Disaggregation: Highly modular architecture decouples integrated components and software, thereby offering customers the ultimate freedom of choice and flexibility.
  • Automation: Network operations have traditionally been tedious and repetitive. By combining the power of open programmability and disaggregation, Juniper streamlines network diagnostics, automates complex workflows and optimizes network infrastructure operation.
  • Broad ecosystem: Native SONiC integration will provide the broad networking community and cloud providers with the latest routing, switching and analytics solutions from Juniper.

“At Juniper Networks, we recognize how important open programmability is to our customers, already evidenced in our support of OpenConfig, Open/R and P4. To continue this support, we’re excited to announce the native integration of Juniper’s platforms with SONiC to offer hyperscale data center customers another option in data center architecture,” stated Manoj Leelanivas, Chief Product Officer, Juniper Networks.

“The integration of Juniper’s platforms with SONiC shows the company’s commitment to open networking and is an important step in our mission to revolutionize networking for today and into the future. Customers will be able to take advantage of this simplified and automated switch management platform, enhanced by rich routing and deep telemetry innovations,” stated Yousef Khalidi, CVP, Azure Networking, Microsoft Corp.

OCP 2019: Big Switch demos SONiC + Open Network Linux

At OCP 2019, Big Switch Networks demonstrated an open-source network operating system (NOS) through an integration with Microsoft-led Software for Open Networking in the Cloud (SONiC) and Big Switch-led Open Network Linux (ONL). The demonstration highlights automation, zero-touch provisioning and visibility leveraging a DevOps-centric Ansible workflow and SDN-centric controller workflows.

The SONiC + ONL NOS comprises the following open-source software components, each of which is widely deployed independently:
  • ONL, a base platform OS, including ONLP platform APIs
  • SONiC, higher-layer NOS stack, including forwarding agent/Switch Abstraction Interface (SAI) management, telemetry and programmable API layers
  • Free Range Routing (FRR), integrated through SONiC, for the L3 control plane functionality (BGP, OSPF)
The SONiC + ONL demo stack is available for download from the SONiC + ONL technology page (linked below). Examples from the demo include:
  • Configuration automation and visibility with Ansible
  • Zero-touch installation and visibility via an SDN controller
  • Ease of deploying a BGP switching fabric with 10G, 25G and 100G open networking switches from Edgecore Networks, leveraging Broadcom’s StrataXGS Trident II and StrataXGS Tomahawk networking ASICs




https://www.bigswitch.com/solutions/technology/open-network-linux/onl-sonic
