Showing posts with label OCP. Show all posts

Wednesday, September 6, 2017

Radisys Announces Next-Generation DCEngine Hardware

Radisys released its next-generation DCEngine hardware, a pre-packaged rack solution based on Open Compute Project (OCP) principles and designed to transition Communications Service Providers (CSPs) to virtualized data centers.

DCEngine, which is based on the OCP-ACCEPTED CG-OpenRack-19 specification, leverages Intel Xeon Scalable processors. It supports Intel Xeon Scalable Architecture-based compute and storage sleds, with a wide range of processing options that can be installed and tuned inside existing DCEngine systems in minutes. DCEngine meets CSP requirements for an enhanced, scalable power system that delivers 25,000W per feed for higher processor density, greater efficiency and lower expenses, as well as DC and AC power entry options suitable for a wide range of environments. It also offers an in-rack Uninterruptible Power Supply (UPS) option to support simplified infrastructure, easy maintenance and lower overhead. Radisys delivers the final pre-assembled DCEngine rack with no on-site setup required.

Radisys said its next-gen DCEngine supports CSPs' transition away from proprietary hardware and vendor lock-in to a data center environment built with open source software and hardware components. The enhanced rack design, combined with operations and support modeled after Facebook practices, can bring an annual OpEx saving of nearly 40 percent compared to traditional data center offerings, while reducing deployment time from months to just days.

“Our CSP customers are requiring open telecom solutions to support their data center transformations, easing their pain points around power and costs, while simplifying their operational complexities,” said Bryan Sadowski, vice president, FlowEngine and DCEngine, Radisys. “With the next-generation DCEngine, combined with Radisys’ deep telco expertise and OCP’s operations/support model, service providers not only get innovation and service agility, but also gain significant TCO savings.”

http://www.radisys.com

Friday, June 30, 2017

AT&T to launch software-based 10G XGS-PON trial

AT&T announced it will conduct a 10 Gbit/s XGS-PON field trial in late 2017 as it progresses with plans to virtualise access functions within the last mile network.

The next-generation PON trial is designed to deliver multi-gigabit Internet speeds to consumer and business customers, and to enable all services, including 5G wireless infrastructure, to be converged onto a single network.

AT&T noted that XGS-PON is a fixed-wavelength, symmetrical 10 Gbit/s passive optical network technology that can coexist with current GPON technology. The technology provides 4x the downstream bandwidth of the existing system and is as cost-effective to deploy as GPON. As part of its network virtualisation initiative, AT&T plans to place some XGS-PON functions in the cloud, leveraging open hardware and software designs to speed development.
AT&T has worked with ON.Lab to develop and test ONOS (Open Network Operating System) and VOLTHA (Virtual OLT Hardware Abstraction) software, which hides the lower-level details of the OLT silicon. AT&T stated that it has also submitted a number of open white box XGS OLT designs to the Open Compute Project (OCP) and is currently working with the project to gain approval for the solutions.
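The "4x downstream" figure can be sanity-checked from the nominal ITU-T line rates; a quick back-of-the-envelope calculation (standard published rates, not figures from AT&T's announcement):

```python
# Nominal PON line rates in Gbit/s (standard ITU-T figures).
GPON_DOWN, GPON_UP = 2.488, 1.244   # ITU-T G.984 (asymmetric)
XGS_DOWN, XGS_UP = 9.953, 9.953     # ITU-T G.9807.1 (symmetric)

down_gain = XGS_DOWN / GPON_DOWN
up_gain = XGS_UP / GPON_UP
print(f"Downstream gain: {down_gain:.1f}x")  # 4.0x
print(f"Upstream gain:   {up_gain:.1f}x")    # 8.0x
```

The symmetric 10 Gbit/s upstream is arguably the bigger jump, which is what makes XGS-PON attractive for backhauling 5G cell sites as well as serving residential subscribers.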

The company noted that interoperability is a key element of its Open Access strategy, and prompted the creation of an OpenOMCI specification, which provides an interoperable interface between the OLT and the home devices. This specification, which forms a key part of software-defined network (SDN) and network function virtualisation (NFV), has been distributed to standards and open source communities.



  • AT&T joined OCP in January 2016 to support its network transformation program. Earlier this year at the OCP Summit, Edgecore Networks, a provider of open networking solutions and a subsidiary of Accton Technology, announced design contributions to OCP including a 25 Gigabit Ethernet top-of-rack switch and a high-density 100 Gigabit Ethernet spine switch. The company also showcased new open hardware platforms.
  • At the summit, Edgecore displayed a disaggregated virtual OLT for PON deployment at up to 10 Gbit/s, based on the AT&T Open XGS-PON 1 RU OLT specification that was contributed to the OCP Telco working group.
  • Edgecore's ASFvOLT16 disaggregated virtual OLT is based on the AT&T Open XGS-PON 1 RU OLT specification and features Broadcom StrataDNX switch and PON MAC SoC silicon, offering 16 ports of XGS-PON or NG-PON2 plus 4 x QSFP28 ports, and is designed for next-generation PON deployments and R-CORD telecom infrastructure.

Friday, March 24, 2017

Microsoft's Project Olympus provides an opening for ARM

A key observation from this year's Open Compute Summit is that the hyper-scale cloud vendors are indeed calling the shots in terms of hardware design for their data centres. This extends all the way from the chassis configurations to storage, networking, protocol stacks and now customised silicon.

To recap, Facebook's newly refreshed server line-up now has 7 models, each optimised for different workloads: Type 1 (Web); Type 2 – Flash (database); Type 3 – HDD (database); Type 4 (Hadoop); Type 5 (photos); Type 6 (multi-service); and Type 7 (cold storage). Racks of these servers are populated with a ToR switch followed by sleds with either the compute or storage resources.

In comparison, Microsoft, which was also a keynote presenter at this year's OCP Summit, is taking a slightly different approach with its Project Olympus universal server. Here the idea is also to reduce the cost and complexity of its Azure rollout in hyper-scale data centres around the world, but to do so using a universal server platform design. Project Olympus uses either a 1 RU or 2 RU chassis and various modules for adapting the server to different workloads or electrical inputs. Significantly, it is the first OCP server to support both Intel and ARM-based CPUs.

Not surprisingly, Intel is looking to continue its role as the mainstay CPU supplier for data centre servers. Project Olympus will use the next generation Intel Xeon processors, code-named Skylake, and with its new FPGA capability in-house, Intel is sure to supply more silicon accelerators for Azure data centres. Jason Waxman, GM of Intel's Data Center Group, showed off a prototype Project Olympus server integrating Arria 10 FPGAs. Meanwhile, in a keynote presentation, Microsoft Distinguished Engineer Leendert van Doorn confirmed that ARM processors are now part of Project Olympus.

Microsoft showed Olympus versions running Windows Server on Cavium's ThunderX2 and Qualcomm's 10 nm Centriq 2400, which offers 48 cores. AMD is another CPU partner for Olympus with its x86 server processor, code-named Naples. In addition, there are other ARM licensees waiting in the wings with designs aimed at data centres, including MACOM (AppliedMicro's X-Gene 3 processor) and Nephos, a spin-out from MediaTek. For Cavium and Qualcomm, the case for ARM-powered servers comes down to optimised performance for certain workloads, and in OCP Summit presentations both companies cited web indexing and search as among the first applications that Microsoft is using to test their processors.

Project Olympus is also putting forward an OCP design aimed at accelerating AI in its next-gen cloud infrastructure. Microsoft, together with NVIDIA and Ingrasys, is proposing a hyper-scale GPU accelerator chassis for AI. The design, code named HGX-1, will package eight of NVIDIA's latest Pascal GPUs connected via NVIDIA’s NVLink technology. The NVLink technology can scale to provide extremely high connectivity between as many as 32 GPUs - conceivably 4 HGX-1 boxes linked as one. A standardised AI chassis would enable Microsoft to rapidly rollout the same technology to all of its Azure data centres worldwide.

In tests published a few months ago, NVIDIA said its earlier DGX-1 server, which uses Pascal-powered Tesla P100 GPUs and an NVLink implementation, delivered 170x the performance of standard Xeon E5 CPUs when running Microsoft’s Cognitive Toolkit.

Meanwhile, Intel has introduced the second generation of its Rack Scale Design for OCP. This brings improvements in the management software for integrating OCP systems in a hyper-scale data centre, and also adds open APIs to the Snap open source telemetry framework so that other partners can contribute to the management of each rack as an integrated system. This concept of easier data centre management was illustrated in an OCP keynote by Yahoo Japan, which delivers an astonishing 62 billion page views per day and remains the most popular website in that nation. The Yahoo Japan presentation focused on an OCP-compliant data centre it operates in the state of Washington, its only overseas data centre. The remote facility is staffed by only a skeleton crew which, thanks to streamlined OCP designs, is able to perform most hardware maintenance tasks, such as replacing a disk drive, memory module or CPU, in less than two minutes.

One further note on Intel’s OCP efforts relates to its 100 Gbit/s CWDM4 silicon photonics modules, which it states are ramping up in shipment volume. These are lower cost 100 Gbit/s optical interfaces that run over up to 2 km for cross data centre connectivity.

On the OCP-compliant storage front, not everything is flash: spinning HDDs are still in play. Seagate recently announced a 12 Tbyte 3.5-inch HDD engineered to accommodate workloads of 550 Tbytes annually. The company claims an MTBF (mean time between failures) of 2.5 million hours, and the drive is designed to operate 24/7 for five years. These 12 Tbyte drives enable a single 42U rack to deploy over 10 Pbytes of storage, quite an amazing density considering how much bandwidth would be required to move this volume of data.
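The 10 Pbytes-per-rack figure is easy to reproduce with a rough rack-packing calculation. The enclosure dimensions below (4U top-loading JBODs with 96 drive bays) are illustrative assumptions, not details from Seagate's announcement:

```python
DRIVE_TB = 12                 # drive capacity cited in the article, in Tbytes
JBOD_U, JBOD_BAYS = 4, 96     # assumed top-loading JBOD enclosure (illustrative)
RACK_U = 42

enclosures = RACK_U // JBOD_U            # 10 enclosures fit in the rack
drives = enclosures * JBOD_BAYS          # 960 drives
capacity_pb = drives * DRIVE_TB / 1000   # decimal Pbytes
print(f"{drives} drives -> {capacity_pb:.2f} PB per rack")  # 11.52 PB
```

Even with a few rack units reserved for switching and power, the total comfortably clears 10 Pbytes.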


Google did not make a keynote appearance at this year’s OCP Summit, but had its own event underway in nearby San Francisco. The Google Cloud Next event gave the company an even bigger stage to present its vision for cloud services and the infrastructure needed to support it.

Wednesday, March 22, 2017

Facebook shows its progress with Open Compute Project

The latest instalment of the annual Open Compute Project (OCP) Summit, which was held March 8-9 in Silicon Valley, brought new open source designs for next-generation data centres. It is six years since Facebook launched OCP and it has grown into quite an institution. Membership in the group has doubled over the past year to 195 companies and it is clear that OCP is having an impact in adjacent sectors such as enterprise storage and telecom infrastructure gear.

The OCP was never intended to be a traditional standards organisation, serving more as a public forum in which Facebook, Microsoft and potentially other big buyers of data centre equipment can share their engineering designs with the industry. The hyper-scale cloud market, which also includes Amazon Web Services, Google, Alibaba and potentially others such as IBM and Tencent, is where the growth is. IDC, in its Worldwide Quarterly Cloud IT Infrastructure Tracker, estimates total spending on IT infrastructure products (servers, enterprise storage and Ethernet switches) for deployment in cloud environments will increase by 18% in 2017 to reach $44.2 billion. Of this, IDC estimates that 61% of spending will be by public cloud data centres, while off-premises private cloud environments constitute 15% of spending.

It is clear from previous disclosures that all Facebook data centres have adopted the OCP architecture, including its primary facilities in Prineville (Oregon), Forest City (North Carolina), Altoona (Iowa) and Luleå (Sweden). Meanwhile, the newest Facebook data centres, under construction in Fort Worth (Texas) and Clonee (Ireland), are pushing OCP boundaries even further in terms of energy efficiency.

Facebook's ambitions famously extend to connecting all people on the planet and it has already passed the billion monthly user milestone for both its mobile and web platforms. The latest metrics indicate that Facebook is delivering 100 million hours of video content every day to its users; 95+ million photos and videos are shared on Instagram on a daily basis; and 400 million people now use Messenger for voice and video chat on a routine basis.

At this year's OCP Summit, Facebook is rolling out refreshed designs for all of its 'vanity-free' servers, each optimised for a particular workload type, and Facebook engineers can choose to run their applications on any of the supported server types. Highlights of the new designs include:

  • Bryce Canyon, a very high-density storage server for photos and videos that features 20% higher hard disk drive density and a 4x increase in compute capability over its predecessor, Honey Badger.
  • Yosemite v2, a compute server that features 'hot' service, meaning servers do not need to be powered down when the sled is pulled out of the chassis in order for components to be serviced.
  • Tioga Pass, a compute server with dual-socket motherboards and more IO bandwidth (i.e. more bandwidth to flash, network cards and GPUs) than its predecessor, Leopard, enabling larger memory configurations and faster compute time.
  • Big Basin, a server designed for artificial intelligence (AI) and machine learning, optimised for image processing and training neural networks. Compared to its predecessor, Big Basin can train machine learning models that are 30% larger, thanks to greater arithmetic throughput and more memory (from 12 to 16 Gbytes).

Facebook currently has web server capacity to deliver 7.5 quadrillion instructions per second and its 10-year roadmap for data centre infrastructure, also highlighted at the OCP Summit, predicts that AI and machine learning will be applied to a wide range of applications hosted on the Facebook platform. Photos and videos uploaded to any of the Facebook services will routinely go through machine-based image recognition and to handle this load Facebook is pursuing additional OCP designs that bring fast storage capabilities closer to its compute resources. It will leverage silicon photonics to provide fast connectivity between resources inside its hyper-scale data centres and new open source models designed to speed innovation in both hardware and software.

Monday, March 13, 2017

Aricent unveils ConvergedOS Open Hardware Operating System

Aricent, a global design and engineering company, announced the introduction of its intelligent network operating system, Aricent ConvergedOS, designed to provide network equipment and technology system providers with a ready-to-deploy, open hardware and Open Compute Project (OCP)-compatible software solution.

In addition, through its established partnership with Inventec, Aricent is introducing the new network operating system on the Inventec D7032Q28B 100 Gigabit Ethernet spine switch targeting data centre applications and enterprise and service provider network deployments.

The Aricent ConvergedOS provides support for a total of 32 x 100 Gigabit Ethernet QSFP28 interfaces with line-rate Layer 2/3 performance of up to 3.2 Tbit/s in a PHY-less design to meet growing traffic demands in data centres.

ConvergedOS is based on Aricent's Intelligent Switching Solution (ISS), a switching, routing and network optimisation software platform designed to enable connectivity in the data centre for storage area networking, 100 Gbit/s links and distribution of workloads across data centres via Ethernet VPN services.

Key features of Aricent's ConvergedOS solution include:

1. Data centre networking, with support for L2 switching, VLANs, L2 multicast IGMP/MLD snooping, IGMP/MLD proxy, link aggregation, LLDP and LLDP-MED, and data centre bridging (DCB): PFC, ETS, QCN and DCBX.
2. BGP spine-leaf architecture, enabling faster convergence and a cloud-ready management interface.
3. Support for L3 (IPv4/v6) unicast and multicast routing: RIP, OSPFv, IS-IS, BGP4, IGMP (v1/v2/v3), MLD router, PIM-SM, PIM-DM, PIM-Bidirectional, DVMRP and MSDP.
4. Platform protection via hot redundancy, VRRP (IPv4/v6), uplink failure detection (UFD), multi-chassis LAG and split horizon.
5. Data centre virtualisation and overlay, with VxLAN gateways, Ethernet VPN (VxLAN), and edge virtualisation via 802.1Qbg, S-channel and MPLS VPN.
6. Data centre convergence, with support for Fibre Channel over Ethernet (FCoE), FIP snooping and FC direct attach.
7. Data centre telemetry with BroadView agent software for collecting ASIC statistics and counters for diagnostics.

Aricent recently announced new capabilities for its Autonomous Network Solution (ANS) for the automation of next-generation virtualised networks with new components based on standards including ETSI NFV, ETSI AFI GANA, MEF LSO and TM-Forum's Zoom.

http://www.aricent.com

Wednesday, March 8, 2017

Facebook Refreshes its OCP Server Designs

At this year's Open Compute Summit in Santa Clara, California, Facebook unveiled a number of new server designs to power the wide variety of workloads it now handles.

Some updated Facebook metrics:

  • People watch 100 million hours of video every day on Facebook; 
  • 95M+ photos and videos are posted to Instagram every day; 
  • 400M people now use voice and video chat every month on Messenger. 

Highlights of the new servers:

  • Bryce Canyon is a storage server primarily used for high-density storage, including photos and videos. The server is designed with more powerful processors and increased memory, and provides increased efficiency and performance. Bryce Canyon has 20% higher hard disk drive density and a 4x increase in compute capability over its predecessor, Honey Badger.
  • Yosemite v2 is a compute server that provides the flexibility and power efficiency needed for scale-out data centers. The power design supports hot service, meaning servers don't need to be powered down when the sled is pulled out of the chassis in order for components to be serviced; these servers can continue to operate.
  • Tioga Pass is a compute server with dual-socket motherboards and more IO bandwidth (i.e. more bandwidth to flash, network cards, and GPUs) than its predecessor Leopard. This design enables larger memory configurations and speeds up compute time.
  • Big Basin is a server used to train neural networks, a technology that can perform a number of research tasks, including learning to identify images by examining enormous numbers of them. With Big Basin, Facebook can train machine learning models that are 30% larger (compared to its predecessor, Big Sur), thanks to the greater arithmetic throughput now available and more memory (12GB to 16GB). In tests with the image classification model ResNet-50, Facebook reached almost a 100% improvement in throughput compared to Big Sur.

http://www.opencompute.org/wiki/Files_and_Specs
https://www.facebook.com/Engineering/

Microsoft's Project Olympus OCP Server Runs Qualcomm's ARM Processor

Qualcomm Datacenter Technologies (QCT) is working with Microsoft to enable a variety of Azure cloud workloads using its 10 nanometer Qualcomm Centriq 2400 ARM-based processor.

QCT has now joined the Open Compute Project and submitted a server specification using Centriq 2400, which offers up to 48 cores optimized for highly parallelized data center workloads.

Specifically, the Qualcomm Centriq 2400 Open Compute Motherboard server specification is based on the latest version of Microsoft’s Project Olympus. The companies have demonstrated Windows Server, developed for Microsoft’s internal use, powered by the Centriq 2400 processor.

“QDT is accelerating innovation in datacenters by delivering the world’s first 10nm server platform,” said Ram Peddibhotla, vice president, product management, Qualcomm Datacenter Technologies, Inc. “Our collaboration with Microsoft and contribution to the OCP community enables innovations such as Qualcomm Centriq 2400 to be designed in and deployed into the data centers rapidly. In collaborating with Microsoft and other industry leading partners, we are democratizing system design and enabling a broad-based ARM server ecosystem.”

“Microsoft and QDT are collaborating with an eye to the future addressing server acceleration and memory technologies that have the potential to shape the data center of tomorrow,” said Dr. Leendert van Doorn, distinguished engineer, Microsoft Azure, Microsoft Corp. “Our joint work on Windows Server for Microsoft’s internal use, and the Qualcomm Centriq 2400 Open Compute Motherboard server specification, compatible with Microsoft’s Project Olympus, is an important step toward enabling our cloud services to run on QDT-based server platforms.”

http://www.qualcomm.com

Cavium's ARM-based ThunderX2 Powers Microsoft's Project Olympus Server

At the Open Compute Summit in Santa Clara, California, Cavium announced that its ThunderX2 ARMv8-A Data Center processor is being tested by Microsoft for running a variety of workloads on the Microsoft Azure cloud platform.

The ThunderX2 product family is Cavium's second generation 64-bit ARMv8-A server processor SoCs for Data Center, Cloud and High Performance Computing applications. The family integrates fully out-of-order high performance custom cores supporting single and dual socket configurations. ThunderX2 is optimized to drive high computational performance delivering outstanding memory bandwidth and memory capacity.

Cavium said its hardware platform is fully compliant with Microsoft's Project Olympus which is one of the most modular and flexible cloud hardware designs in the data center industry. The platform integrates two ThunderX2 processors in a dual socket configuration. ThunderX2 SoC integrates a large number of fully out-of-order custom ARMv8-A cores with rich IO connectivity for accommodating a variety of peripherals for Azure, delivering excellent throughput and latency for cloud applications. The platform has been designed in collaboration with a leading server ODM supplier for Microsoft.

"Cavium is excited to work with Microsoft on ThunderX2," said Gopal Hegde, VP/GM, Data Center Processor Group at Cavium. "ARM-based servers have come a long way with first generation ThunderX-based server platforms being deployed at multiple data centers, which enabled a critical mass of ecosystem partners for ARM. We see the second generation products helping to drive a tipping point for ARM server deployment across a mainstream set of volume applications. Microsoft's support will help accelerate commercial deployment of ARMv8 server platforms for Data Centers and Cloud."

http://www.cavium.com

Radisys Announces DCEngine Release 1.0 Management Software

Radisys announced its DCEngine Management Software Release 1.0 for optimizing resources for hyperscale data centers.

The software is now available and shipping integrated with Radisys’ DCEngine product line, which is an open hardware platform based on the Open Compute Project (OCP) CG-OpenRack-19 specification. The specification is a scalable carrier-grade rack level system that integrates high performance compute, storage and networking in a standard 19 inch rack. Future DCEngine Management Software releases will extend these capabilities with a focus on facilitating the deployment and integration of the DCEngine hyperscale data center solution into existing SDN-enabled ecosystems.

Highlights of DCEngine Management Software Release 1.0

  • Intel Rack Scale Design APIs to enable dynamic composition of resources based on workload specific demands
  • Modules for leading data center orchestration frameworks, such as Ansible, to make firmware updates easy and convenient
  • Redfish Interface 1.0 protocol support
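Redfish, referenced in the last bullet, is the DMTF's RESTful management API: every resource is a JSON document reachable by HTTP GET under `/redfish/v1`. As a minimal sketch of what a management client consumes, the payload below shows the shape of a Redfish 1.0 systems collection (the sled names are hypothetical, not from Radisys' release):

```python
import json

# Trimmed example of a Redfish ComputerSystemCollection, as a rack manager
# might return from GET /redfish/v1/Systems (sled names are illustrative).
SYSTEMS_COLLECTION = json.loads("""
{
  "@odata.id": "/redfish/v1/Systems",
  "Name": "Computer System Collection",
  "Members@odata.count": 2,
  "Members": [
    {"@odata.id": "/redfish/v1/Systems/Sled1"},
    {"@odata.id": "/redfish/v1/Systems/Sled2"}
  ]
}
""")

def member_ids(collection):
    """Return the resource path of every member in a Redfish collection."""
    return [m["@odata.id"] for m in collection["Members"]]

print(member_ids(SYSTEMS_COLLECTION))
# ['/redfish/v1/Systems/Sled1', '/redfish/v1/Systems/Sled2']
```

Each returned path can then be fetched with an authenticated HTTP GET to read per-sled inventory, power state and firmware versions, which is what makes the interface convenient for orchestration tools such as the Ansible modules mentioned above.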


“CSPs are evolving their virtualization strategies and deploying data center infrastructure to support high availability applications such as virtualized network functions and real-time data analytics,” said Bryan Sadowski, vice president, FlowEngine and DCEngine, Radisys. “Our new management software for DCEngine delivers essential hardware resource management capabilities that are increasingly needed in this new ecosystem. We’ve reduced the operational pain points for rack scale deployments and operations by building a suite of tools that enable automated and convenient configuration as well as resource management to meet CSPs’ evolving requirements.”

http://www.radisys.com

Tuesday, March 7, 2017

Open Compute Project Summit Kicks Off in Silicon Valley

The 2017 Open Compute Project (OCP) Summit kicks off on March 8th and is expected to attract 2,000 attendees interested in next-gen data center design.

Keynote speakers include Kushagra Vaid, General Manager, Azure Cloud Hardware Infrastructure, Microsoft; Masaharu Miyamoto, Senior Server Engineer, Yahoo! JAPAN; Jason Waxman, Vice President, Data Center Group (DCG) General Manager, Datacenter Solutions Group (DSG), Intel; and Vijay Rao, Director of Technology Strategy, Facebook.

http://opencompute.org/ocp-u.s.-summit-2017/agenda/

Wednesday, January 25, 2017

Apstra Demos Wedge Switch Running its OS

Apstra, a start-up based in Menlo Park, California, released its Apstra Operating System (AOS) 1.1.1 and an integration with Wedge 100, Facebook’s second generation top-of-rack network switch.

Apstra said its distributed operating system for the data center network disaggregates the operational plane from the underlying device operating systems and hardware. Sitting above both open and traditional vendor hardware, AOS provides the abstraction required to automatically translate a data center network architect’s intent into a closed-loop, continuously validated infrastructure. The intent, network configurations, and telemetry are stored in a distributed, system-wide state repository.
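The closed-loop idea is that declared intent is continuously compared against live telemetry, and deviations surface as anomalies. A conceptual sketch (this is an illustration of the pattern, not Apstra's actual implementation; the device and counter names are invented):

```python
# Declared intent: each leaf switch should have 4 BGP sessions established.
intent = {"leaf1": {"bgp_sessions_up": 4}, "leaf2": {"bgp_sessions_up": 4}}

def validate(intent, telemetry):
    """Compare intent against observed state; return a list of deviations
    as (device, key, expected, observed) tuples."""
    anomalies = []
    for device, expected in intent.items():
        observed = telemetry.get(device, {})
        for key, want in expected.items():
            got = observed.get(key)
            if got != want:
                anomalies.append((device, key, want, got))
    return anomalies

# Telemetry collected from the network: leaf2 has a session down.
telemetry = {"leaf1": {"bgp_sessions_up": 4}, "leaf2": {"bgp_sessions_up": 3}}
print(validate(intent, telemetry))
# [('leaf2', 'bgp_sessions_up', 4, 3)]
```

In a real system this check runs continuously against the state repository, so the operator learns about drift from intent without polling devices by hand.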

“At Apstra we believe in giving network engineers choice and control in operating their network and we are excited to be part of the network disaggregation movement,” said Mansour Karam, CEO and Founder of Apstra, Inc. “We are delighted to have been invited to demonstrate AOS integrated with Wedge 100 today. AOS provides network engineers with advanced operational control and situational awareness of network services, and enables them to design, deploy, and operate a truly Self-Operating Network™ (SON) without vendor lock-in.”

http://www.apstra.com

Facebook Deploys Backpack -- its 2nd Gen Data Center Switch

Facebook unveiled Backpack, its second-generation modular switch platform developed in house at Facebook for 100G data center infrastructure. It leverages Facebook's recently announced Wedge switch.

Backpack is designed with a clear separation of the data, control, and management planes. It uses simple building blocks called switch elements. The Backpack chassis is equivalent to a set of 12 Wedge 100 switches connected together. The orthogonal direct chassis architecture opens up more air channel space for better thermal performance in managing the heat from 100G ASICs and optics. Facebook will use the BGP routing protocol for the distribution of routes between the different line cards in the chassis.

The design has already entered production and deployment in Facebook data centers.  The company plans to submit the design to the Open Compute Project.

https://code.facebook.com/posts/864213503715814/introducing-backpack-our-second-generation-modular-open-switch/

Tuesday, January 24, 2017

Barefoot Contributes Wedge 100B Switch Designs to OCP

Barefoot Networks unveiled two Wedge 100B switch designs based on its Tofino 6.5 Tb/s Ethernet switch chip: Wedge100BF-32X, a 3.2Tb/s 1RU 32x100GE switch and Wedge100BF-65X, a 6.5Tb/s 2RU 65x100GE switch.

The Wedge 100B switches support FBOSS, SONiC and several other switch operating systems, and can be controlled by the OCP's Switch Abstraction Interface (SAI) API, switchAPI (an extensible, open API) or APIs designed by the user. The default "switch.p4" program running on Tofino turns the Wedge 100B switches into a top-of-rack switch with all the standard features expected in a data center. Users may add or remove features as they choose, add new protocols, change table sizes, gain greater visibility, and fold in middlebox functions such as Layer-4 load balancing. The Wedge 100B platforms also introduce several enhancements, including an optimized power supply unit, a lower-cost PCB design, improved design for manufacturability, and a more powerful CPU module. The switches run an updated version of OpenBMC.
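What a program like switch.p4 expresses is the match-action model: the programmer declares tables that match on packet header fields and invoke actions, and the control plane populates the entries. A toy illustration of that model in Python (P4 itself is a domain-specific language; this sketch and its MAC addresses are purely illustrative):

```python
# Toy match-action table: matches a key (e.g. a header field) against
# installed entries and runs the corresponding action, else the default.
class MatchActionTable:
    def __init__(self, default_action):
        self.entries = {}
        self.default_action = default_action

    def insert(self, key, action):
        """Control-plane call: install a table entry."""
        self.entries[key] = action

    def apply(self, key):
        """Data-plane call: look up the key and execute the action."""
        return self.entries.get(key, self.default_action)()

# An L2 forwarding table: destination MAC -> egress port, default flood.
l2_fwd = MatchActionTable(default_action=lambda: "flood")
l2_fwd.insert("aa:bb:cc:dd:ee:01", lambda: "port 1")
l2_fwd.insert("aa:bb:cc:dd:ee:02", lambda: "port 2")

print(l2_fwd.apply("aa:bb:cc:dd:ee:02"))  # port 2
print(l2_fwd.apply("ff:ff:ff:ff:ff:ff"))  # flood
```

On Tofino the tables and actions compile down to the switch pipeline at line rate; the point of the programmability claims above is that users can redefine these tables rather than accept a fixed feature set.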

"The Open Compute Networking Project is excited to see Barefoot Networks share two Wedge 100B hardware designs with the community," said Omar Baldonado, OCP Networking Project Co-Lead. "We look forward to seeing the new innovations enabled by these Wedge 100B designs and the flexibility that their programmable switching silicon brings to the industry."

"Barefoot Networks is delighted to share its Tofino based Wedge 100B switch designs with the Open Compute Project community," said Martin Izzard, Co-Founder & CEO, Barefoot Networks. "With Wedge 100B platforms, the OCP ecosystem, network owners and architects have unprecedented access to a fully disaggregated networking stack down to the forwarding plane, enabling them to build networks that best suit their needs."

http://www.barefootnetworks.com

Tuesday, December 6, 2016

Radisys’ CG-OpenRack-19 Spec Accepted by Open Compute Project

Radisys announced that its CG-OpenRack-19 specification has been accepted by the Open Compute Project Foundation. The design is a scalable carrier-grade rack-level system that integrates high-performance compute, storage and networking in a standard rack.

Radisys said the specification provides a high level of modularity and simplicity by defining interoperability requirements, thereby reducing system set-up time and operator costs. Radisys’ contribution addresses key requirements needed to extend the base Open Rack model into telecom environments to accommodate the needs of existing central offices, including physical conditions, content/workload elements, management needs and networking/interconnect options. CG-OpenRack-19 promotes an agile approach to creating deployable system instantiations. It is based on a standard 19” rack width to leverage traditional telecom common equipment practices, while also being well suited for new virtualized data centers.

“Service providers are transitioning their network infrastructure from closed proprietary platforms to next-gen data centers built with open source software and hardware components,” said Andrew Alleman, CTO, Radisys. “Radisys is actively contributing to open systems and networks, with open source contributions in both software and hardware. With OCP, we saw a great approach and technology that is bigger than just one company. We worked closely with service providers and other vendors on the CG-OpenRack-19 contribution to bring a carrier-grade version of OCP Open Rack hardware to service provider networks. We are pleased that the specification is OCP-ACCEPTED™ and already achieving traction in service provider data centers.”

http://www.radisys.com/

Monday, October 31, 2016

Microsoft Announces Project Olympus - OCP Hardware Designs

Microsoft, in collaboration with the Open Compute Project (OCP), announced Project Olympus – a next generation hyperscale cloud hardware design and a new model for open source hardware development with the OCP community.

With Project Olympus, Microsoft said it hopes to foster a model of open source collaboration that has been embraced for software but has historically been at odds with the physical demands of developing hardware. Rather than contributing a fully completed design to OCP, under this new approach Microsoft will contribute its next-generation cloud hardware designs when they are approximately 50% complete. This is intended to encourage community involvement in the iterative design process.

“Microsoft is opening the door to a new era of open source hardware development. Project Olympus, the re-imagined collaboration model and the way they’re bringing it to market, is unprecedented in the history of OCP and open source datacenter hardware,” said Bill Carter, Chief Technology Officer, Open Compute Project Foundation.

The building blocks that Project Olympus will contribute consist of a new universal motherboard, high-availability power supply with included batteries, 1U/2U server chassis, high-density storage expansion, a new universal rack power distribution unit (PDU) for global datacenter interoperability, and a standards compliant rack management card.

Microsoft noted that over 90% of the servers it currently purchases are based on OCP contributed specifications.

https://azure.microsoft.com/en-us/blog/microsoft-reimagines-open-source-cloud-hardware/

Thursday, August 4, 2016

Facebook and Google Agree on 48v Open Rack Standard Architecture

Google and Facebook have collaborated on an Open Rack v2.0 Standard, which specifies a 48V power architecture with a modular, shallow-depth form factor that enables high-density deployment of Open Compute Project (OCP) racks into data centers with limited space.

In a blog posting, Google's Debosmita Das and Mike Lau note that Google developed a 48V ecosystem with payloads utilizing 48V to Point-of-Load technology, and has deployed it extensively in its data centers since 2010. Google said its experience with 48V has resulted in a significant reduction in losses and increased efficiency compared to 12V solutions, saving millions of dollars and millions of kilowatt hours.

https://cloudplatform.googleblog.com/


Wednesday, April 6, 2016

OpenPOWER Advances in HyperScale Data Center Race

Since its founding two years ago, the OpenPOWER Foundation, which is an open development alliance based on IBM's POWER microprocessor architecture, has grown to more than 200 participating companies and organizations. The goal is to build advanced server, networking, storage and GPU-acceleration technology for next-generation, hyperscale data centers.

At the second annual OpenPOWER Summit, held in San Jose this week, more than 50 new infrastructure and software innovations were showcased, spanning the entire system stack, including systems, boards, cards and accelerators.

Some highlights:

  • New Servers for High Performance Computing and Cloud Deployments – Foundation members introduced more than 10 new OpenPOWER servers, offering expanded services for high performance computing and server virtualization.
  • Google is developing a next-generation OpenPOWER and Open Compute Project form factor server. Google is working with Rackspace to co-develop an open server specification based on the new POWER9 architecture, and the two companies will submit a candidate server design to the Open Compute Project.
  • Rackspace announced that “Barreleye” has moved from the lab to the data center. Rackspace anticipates “Barreleye” will move into broader availability throughout the rest of the year, with the first applications on the Rackspace Public Cloud powered by OpenStack. Rackspace and IBM collectively contributed the “Barreleye” specifications to the Open Compute Project in January 2016.
  • IBM, in collaboration with NVIDIA and Wistron, plans to release its second-generation OpenPOWER server, which includes support for the NVIDIA Tesla Accelerated Computing platform. The server will leverage POWER8 processors connected directly to the new NVIDIA Tesla P100 GPU accelerators via the NVIDIA NVLink high-speed interconnect technology. Early systems will be available in Q4 2016. Additionally, IBM and NVIDIA plan to create global acceleration labs to help developers and ISVs port applications on the POWER8 and NVIDIA NVLink-based platform.
  • Expanded use of CAPI for Acceleration Technology – Foundation members, including Bittware, IBM, Mellanox and Xilinx, unveiled more than a dozen new accelerator solutions based on the Coherent Accelerator Processor Interface (CAPI). Alpha Data also unveiled a Xilinx FPGA-based CAPI hardware card at the Summit. These new accelerator technologies leverage CAPI to provide performance, cost and power benefits when compared to application programs running on a core or custom acceleration implementation attached via non-coherent interfaces. This is a key differentiator in building infrastructure to accelerate computation of big data and analytics workloads on the POWER architecture.
  • A Continued Commitment to Genomics Research – Following successful collaborations with LSU and tranSMART, OpenPOWER Foundation members continue to develop new advancements for genomics research. Edico Genome announced the DRAGEN Genomics Platform, a new appliance that enables ultra-rapid analysis of genomic data, reducing the time to analyze an entire genome from hours to just minutes, allowing healthcare providers to identify patients at higher risk for cancer before conditions worsen.

“To meet the demands of today’s data centers, businesses need open system design that provides greater flexibility and speed at a lower cost,” said Calista Redmond, President of the OpenPOWER Foundation and Director of OpenPOWER Global Alliances, IBM. “The innovations introduced today demonstrate OpenPOWER members’ commitment to building technology infrastructures that provide customers with more choice, allowing them to leverage increased data workloads and analytics to drive better business outcomes.”

https://cloudplatform.googleblog.com/2016/04/Google-and-Rackspace-co-develop-open-server-architecture-based-on-new-IBM-POWER9-hardware.html
http://openpowerfoundation.org/

Sunday, March 13, 2016

Video: Equinix Joins Telecom Infrastructure Project

Equinix has just joined OCP's Telecom Infrastructure Project, which aims to bring the same principles of openness and disaggregation to the telco domain.


In this video, Ihab Tarazi, CTO of Equinix, discusses expectations of the project.  Equinix will support customers in its data centers as they deploy the next generation of OCP-compliant systems.

YouTube: https://youtu.be/ViWLx1lFnx8


Wednesday, March 9, 2016

Here's what Happened at Open Compute Project Summit

In the five years since its launch, the Open Compute Project (OCP) has chalked up dozens of innovations and technical specification contributions that have been implemented by hyperscale data center operators. Its ambitions have now expanded beyond rack hardware to include switching, storage, silicon photonics, a telemetry framework, an open-source analytics platform and a new domain of solutions adapted for telecom operators.

Here are some highlights from this week's OCP Summit in San Jose, California:

  • Hundreds of companies and thousands of developers are now participating in Open Compute
  • Facebook estimates it has saved several billion dollars thanks to OCP
  • The OCP Board of Directors includes Jason Taylor (Facebook), Bill Laing (Microsoft), Don Duet (Goldman Sachs), Mark Roenick (Rackspace), Jason Waxman (Intel), Andy Bechtolsheim, and Frank Frankovsky.
  • OCP has big ambitions for the telecom world, which needs efficient infrastructure too. OCP has launched a Telecom Infrastructure Project that includes initial participation from AT&T, DT, EE, Equinix, Nokia, SK Telecom and Verizon.
  • Facebook is moving quickly to bring 100G technology into the network backbone of its hyperscale data centers
  • Intel said its OCP Rack Scale Architecture, with compute shelves, NVMe storage shelves, and network shelves, has gained traction from a robust ecosystem of partners.
  • OCP rack servers are moving to a standard 19" design. This enables systems to pack up to 256GB of DDR4 DIMMs on a single motherboard.
  • OCP foresees that specialized workloads, such as security or SDN, will benefit from acceleration boards based on FPGAs.
  • Silicon photonics will soon be a requirement in hyperscale data centers, especially when backbones eventually upgrade beyond 100G.
  • OCP is developing an open telemetry framework for data centers.
  • OCP open telemetry will be matched by an OCP open analytics platform.
  • Microsoft highlighted its role in developing an open switch abstraction interface (SAI) to remove complexity in hyperscale data centers.
  • Microsoft's next OCP contribution is Software for Open Networking in the Cloud (SONiC), which allows sharing of the same software stack across hardware from multiple switch vendors. With its modular architecture and lean stack, SONiC allows data center operators to debug, fix, and test software bugs much faster. It also allows the flexibility to scale down the software and develop features that are required for datacenter and networking needs.
  • Artificial Intelligence applications will require highly efficient and massively scalable hardware blocks. Facebook is bringing these AI requirements to OCP. Its Big Sur AI hardware has already been deployed across thousands of machines in just a few months. Big Sur is Open Rack-compatible and incorporates eight high-performance GPUs of up to 300 watts each, with the flexibility to configure between multiple PCIe topologies. It uses NVIDIA's Tesla Accelerated Computing Platform.

  • Google has just joined OCP. The biggest cloud participants now include Facebook, Microsoft and Rackspace, though Amazon, Alibaba, Apple and LinkedIn are still absent.
  • Google's first contribution to OCP is its design for a 48v data center rack.  Google said its design is 30% more energy efficient, in part by minimizing AC-DC conversions.
  • Equinix has announced it is adopting Facebook’s Wedge network switch design and open-source architecture.
  • Goldman Sachs announced that more than 80% of servers that the company has acquired since last summer are based on OCP standards.
  • OCP is introducing Lightning, a flexible NVMe JBOF (just a bunch of flash) design. It is designed to provide a PCIe gen 3 connection from end to end (CPU to SSD). It leverages the existing Open Vault (Knox) SAS JBOD infrastructure to provide a faster time to market, maintain a common look and feel, enable modularity in PCIe switch solutions, and enable flexibility in SSD form factors.

http://www.opencompute.org/


Panasonic's "freeze-ray" Long-term Storge Uses 300 GB Optical Discs

Panasonic unveiled its enhanced "freeze-ray" series Data Archiver, which uses 300GB Optical Discs for long-term storage. The system was developed in collaboration with Facebook and shown at this week's Open Compute Project Summit in San Jose.


A fully-loaded freeze-ray system can pack 1.9 petabytes of data in a standard 19-inch data center rack.

In the future, Panasonic plans to increase the capacity of the Archival Disc to 500GB and eventually 1TB per disc.

http://panasonic.net/avc/archiver/freeze-ray/

Radisys Launches DCEngine Frame Inspired by Open Compute

Radisys launched its "DCEngine" - a 42 RU frame based on principles of the Open Compute Project (OCP) with enhancements for communication service providers.

Radisys said its DCEngine provides a disruptive cloud platform that goes well beyond software-defined networking to create a truly open, software-defined services delivery infrastructure that fully embraces open source hardware and software technologies and offers DevOps agility to service providers.

DCEngine highlights:

  • A pre-installed, ready-to-deploy frame inspired by OCP as an open architecture for Radisys and third-party partners. 
  • Performance and scalability through a collection of easily deployed and serviced compute, storage and networking sleds. DCEngine offers up to 2.4 petabytes of storage and up to 152 Intel Xeon class processors in a standard 42 RU frame.
  • A flexible software and management framework enabling the most demanding service provider workloads. This framework is based on leading open source projects including the Open Network Operating System (ONOS) and Central Office Re-architected as a Datacenter (CORD) initiatives being led by On.LAB.
  • A design that meets next-generation central office and service provider data center specifications, including NEBS temperature, EMI and seismic requirements where required.
  • Professional services and cradle-to-grave lifecycle management of DCEngine hardware and fully validated and supported software stacks, including design, installation, configuration, deployment and ongoing maintenance and validation.
Radisys confirmed initial orders and shipment of DCEngine to one of the world’s largest mobile operators in late 2015.

“Radisys’ 25-plus years of communication software DNA, coupled with our leadership in open telecommunication platforms, positions us as the ideal partner for communication service providers looking to evolve from existing central office architecture to the cloud,” said Brian Bronson, President and Chief Executive Officer at Radisys. “DCEngine’s seamless scalability and efficient lifecycle management, coupled with our proven track record of understanding and addressing the needs of the service provider, positions Radisys to be a leader in enabling the evolution of next-generation operator infrastructure.”

Separately, Radisys noted that it has joined the Open Compute Project as a silver member.

See also