
Sunday, July 19, 2020

Multi-access Edge Computing spec expected in Q3

The 5G Future Forum expects to release its first specification for Multi-access Edge Computing (MEC) in the third quarter of 2020.


  • The “MEC Experience Management” technical specification defines a set of intent-based APIs for functional exposure of edge discovery and workload discovery, with potential expansion to cover future MEC functions and capabilities driven by network intelligence (a hypothetical API sketch follows this list).
  • The “MEC Deployment” technical specification defines the set of requirements that enable hyperscalers and service providers to deploy and integrate global MEC physical frameworks, including facilities (e.g., power and cooling), monitoring, operational considerations, and security.
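
The 5GFF APIs were not yet public at the time of this post, so the following is only a rough, hypothetical sketch of what an intent-based workload/edge discovery call might look like. The endpoint path, field names, and response handling are assumptions made for illustration, not the 5GFF specification.

    # Hypothetical sketch of an intent-based MEC workload/edge discovery request.
    # The endpoint path, field names, and response shape are assumptions for
    # illustration only; the 5GFF "MEC Experience Management" APIs may differ.
    import json
    from urllib import request

    def discover_edge_zone(api_base: str, token: str, intent: dict) -> dict:
        """POST a discovery intent and return the selected edge zone (assumed schema)."""
        req = request.Request(
            url=f"{api_base}/edge-discovery/v1/intents",  # hypothetical path
            data=json.dumps(intent).encode("utf-8"),
            headers={"Content-Type": "application/json",
                     "Authorization": f"Bearer {token}"},
            method="POST",
        )
        with request.urlopen(req) as resp:
            return json.load(resp)

    # An intent describes what the workload needs, not which edge site to use.
    example_intent = {
        "application": "ar-rendering",
        "requirements": {"maxLatencyMs": 20, "minThroughputMbps": 50},
        "clientLocation": {"lat": 40.71, "lon": -74.00},
    }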

The 5G Future Forum was established in January 2020 by América Móvil, KT Corp., Rogers, Telstra, Verizon, and Vodafone to develop 5G interoperability specifications to accelerate the delivery of 5G and MEC solutions around the world. Over the past six months, the Forum’s founding members have been working to create the governance structure for the 5GFF, as well as develop both technical and commercial workstreams.

The specifications will be released in August 2020. Following release of the specifications, the Forum anticipates expanding its membership to qualified new entrants. Other topics are being planned among the existing members with publication timeframes to be communicated shortly.

“The 5G Future Forum was set up to unlock the full potential of 5G and MEC applications and solutions around the globe,” said Rima Qureshi, chief strategy officer, Verizon. “5G is a key enabler of the next industrial revolution, where technology should transform how we live and work through applications including machine learning, autonomous industrial equipment, smart cars and cities, Internet of Things (IoT) and augmented and virtual reality. The release of these first specifications marks a major step forward in helping companies around the world create a seamless global experience for their customers.”

Thursday, July 9, 2020

New H.266/VVC codec improves efficiency by 50%

After several years of research and standardization, Fraunhofer HHI, together with partners including Apple, Ericsson, Intel, Huawei, Microsoft, Qualcomm, and Sony, announced the release of H.266/Versatile Video Coding (VVC).

The new H.266/VVC global video coding standard reduces bandwidth requirements by around 50% relative to the previous standard H.265/High Efficiency Video Coding (HEVC) without compromising visual quality.

H.266/VVC provides efficient transmission and storage of all video resolutions from SD to HD up to 4K and 8K, while supporting high dynamic range video and omnidirectional 360° video.

Fraunhofer HHI said H.266/VVC represents the pinnacle of (at least) four generations of international standards for video coding. The previous standards H.264/Advanced Video Coding (AVC) and H.265/HEVC, which were produced with substantial contributions from Fraunhofer HHI, remain active in more than 10 billion end devices, processing over 90% of the total global volume of video bits.

As an example of improved efficiency, the previous standard H.265/HEVC requires approximately 10 gigabytes of data to transmit a 90-minute UHD video. H.266/VVC requires only 5 gigabytes of data to achieve the same quality. Because H.266/VVC was developed with ultra-high-resolution video content in mind, the new standard is particularly beneficial when streaming 4K or 8K videos.
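
As a sanity check on those figures, a back-of-the-envelope calculation (assuming decimal gigabytes and a constant average bitrate over the 90 minutes) gives the implied average bitrates:

    # Average video bitrate implied by the example above: a 90-minute UHD title
    # at roughly 10 GB with H.265/HEVC versus roughly 5 GB with H.266/VVC.
    def avg_bitrate_mbps(gigabytes: float, minutes: float) -> float:
        bits = gigabytes * 1e9 * 8          # decimal GB -> bits
        return bits / (minutes * 60) / 1e6  # bits per second -> Mbps

    print(f"H.265/HEVC: ~{avg_bitrate_mbps(10, 90):.1f} Mbps")  # ~14.8 Mbps
    print(f"H.266/VVC:  ~{avg_bitrate_mbps(5, 90):.1f} Mbps")   # ~7.4 Mbps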

“After dedicating almost three years toward this standard, we are proud to have been instrumental in developing H.266/VVC,” said Benjamin Bross, head of the Video Coding Systems group at Fraunhofer HHI and editor of the 500-plus-page standard specification of H.266/VVC. “Because of the quantum leap in coding efficiency offered by H.266/VVC, the use of video will increase further worldwide. Moreover, the increased versatility of H.266/VVC makes its use more attractive for a broader range of applications related to the transmission and storage of video.”

"If you consider that Fraunhofer HHI already played a key role in the development of the previous video coding standards H.264/AVC and H.265/HEVC, then we are happy with the fact that more than 50% of the bits on the Internet are generated by a Fraunhofer HHI technology,” adds Dr. Detlev Marpe, head of the Video Coding and Analytics department at Fraunhofer HHI.

A uniform and transparent licensing model based on the FRAND principle (i.e., fair, reasonable, and non-discriminatory) is planned to be established for the use of standard essential patents related to H.266/VVC.

Wednesday, July 1, 2020

O-RAN delivers its Bronze release with 23 new or updated specs

The O-RAN Alliance has launched its second open source software release, alongside 23 new or updated specifications.

The O-RAN "Bronze" release also includes an initial set of O-RAN use cases and cloud native deployment support options.


Highlights of O-RAN Bronze:

  • an initial release of an A1 policy manager and an A1 controller that implements the Non-Real-Time RAN Intelligent Controller (Non-RT RIC) architecture (an illustrative A1 policy sketch follows this list).
  • the Near-Real-Time RIC, updated to the current O-RAN E2 and A1 specifications, with five sample xApps.
  • initial O-CU and O-DU Low/High code contributions that support a FAPI framework and integration between the O-DU and RIC with E2 functionality and subscription support.
  • a Traffic Steering and Quality Prediction use case leveraging an E2 interface data ingest pipeline to demonstrate the functionality of RAN traffic steering with an E2 interface KPI monitoring capability.
  • OAM use cases that exercise Health Check call flows including the Near-RT RIC and its O1 and A1 interfaces.
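
The first bullet above mentions an A1 policy manager. Purely as an illustration, an A1 policy instance for traffic steering might be pushed to a Non-RT RIC along the following lines; the JSON fields and REST path layout are assumptions, not the normative O-RAN schemas, which are defined in the A1 specifications and the Bronze release code.

    # Hypothetical A1 policy instance for a traffic-steering use case.
    # Field names and the REST path layout are illustrative assumptions only;
    # the normative schemas live in the O-RAN A1 specifications and Bronze code.
    import json
    from urllib import request

    policy_instance = {
        "scope": {"ueId": "ue-1234", "cellId": "cell-567"},
        "statement": {"preferredCells": ["cell-567", "cell-568"],
                      "primaryPathWeight": 80},  # steer ~80% of traffic
    }

    def put_policy(ric_url: str, policy_type_id: str, policy_id: str, body: dict) -> int:
        """PUT a policy instance to a Non-RT RIC A1 endpoint (assumed path layout)."""
        req = request.Request(
            url=f"{ric_url}/a1-p/policytypes/{policy_type_id}/policies/{policy_id}",
            data=json.dumps(body).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="PUT",
        )
        with request.urlopen(req) as resp:
            return resp.status

    # Example call (requires a reachable Non-RT RIC; identifiers are made up):
    # put_policy("http://nonrtric.example:8080", "20008", "ts-policy-1", policy_instance)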

“The new use cases, the Bronze software release, and the new O-RAN ALLIANCE members are indications that this global forum is working exactly as intended, reaching across borders to drive innovation and build consensus,” said Andre Fuetsch, Chairman of the O-RAN ALLIANCE and Chief Technology Officer – Network Services, at AT&T. “As this coalition evolves, we look forward to seeing how it continues to broaden access to 5G and other new access technologies.”

“Over the past 6 months, O-RAN working groups and the O-RAN Software Community have extensively engaged to achieve tight alignment between the specifications and the Bronze release open source code,” said Chih-Lin I, the Co-Chair of O-RAN Technical Steering Committee. “Specific progress related to both the Non-RT-RIC and the Near-RT-RIC frameworks and associated key interfaces deserves special mention for its importance in enabling AI/ML capabilities in RAN. The O-RAN virtual showcase further demonstrates the growing momentum towards global adoption and deployment of O-RAN solutions.”

“Ericsson is actively engaged in shaping the future of the O-RAN initiative by enabling Non-RT RIC (Non-Real-Time RAN Intelligent Controller) and A1 interface to support fine-grained intelligent steering of the RAN,” said Per Beming, Head of Standards and Industry Initiative at Ericsson. “During OSC Bronze release, Ericsson continued as the key contributor to Non-RT RIC project by improving support for intent based intelligent RAN optimization using A1 interface. This specific capability allows operators to leverage both RAN and non-RAN data to enrich end user experience.”

https://www.o-ran.org/software

Wednesday, March 11, 2020

QSFP-DD800 MSA releases initial 800G transceiver spec

The Quad Small Form Factor Pluggable Double Density 800 (QSFP-DD800) Multi Source Agreement (MSA) group has released a new hardware specification for the QSFP-DD800 transceiver form factor.

The new QSFP-DD800 1.0 specification is intended to be incremental to the existing QSFP-DD 5.0 specification. As signal integrity and thermal performance remain imperative, the transceiver pads have been optimized to improve signal integrity for 100 Gbps performance per lane without affecting backwards compatibility. The new specification additionally defines a novel 2x1 connector/cage, with cabled upper ports as an option to address signal loss issues associated with traditional PCBs. Looking ahead, QSFP-DD800 promoters will continue to work on new connector/cage variants, including 2x1 SMT versions that operate at 100 Gbps per lane.

The MSA group was formed to advance the development of high-speed, double-density QSFP modules which support 800 Gbps connectivity and includes the following promoters: Broadcom, Cisco, II-VI, Intel, Juniper Networks, Marvell, Molex and Samtec.

“In the short time our group has collaborated, we are thrilled to introduce this first specification for the next generation of the QSFP family of modules,” said Scott Sommers, co-chair of the QSFP-DD800 MSA. “As signal integrity and thermal management remain challenges for the optical communications industry, our MSA group is confident that its solutions will meet performance needs.”

“With their superior system integration and design flexibility, QSFP modules continue to be the cornerstone in building next generation networks and network equipment, especially as port speeds increase to 800G,” said Mark Nowell, co-chair of the QSFP-DD800 MSA. “Furthermore, their ability to increase switch and routing bandwidth density without sacrificing backwards compatibility with QSFP-DD, QSFP56 and QSFP28 modules provide network operators tremendous commercial and operational advantages.”

http://www.qsfp-dd800.com

Tuesday, March 10, 2020

Open Eye Consortium specification defines 53Gbps per lane PAM-4

The Open Eye Consortium (Open Eye MSA) published its 53 Gbps single-mode specification defining the requirements for analog PAM-4 solutions for 50G SFP, 100G DSFP, 200G QSFP, and 400G QSFP-DD and OSFP single-mode modules.

The Open Eye MSA aims to accelerate the adoption of PAM-4 optical interconnects scaling to 50Gbps, 100Gbps, 200Gbps, and 400Gbps by expanding upon existing standards to enable optical module implementations using less complex, lower cost, lower power, and optimized analog clock and data recovery (CDR) based architectures in addition to existing digital signal processing (DSP) architectures.
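
For intuition on the PAM-4 choice (simple arithmetic, not text from the MSA): PAM-4 carries two bits per symbol, so a 53 Gbps lane runs at roughly half the symbol rate that NRZ would require, which relaxes the analog bandwidth needed inside the module.

    # PAM-4 encodes log2(4) = 2 bits per symbol; NRZ encodes 1 bit per symbol.
    import math

    def symbol_rate_gbd(bit_rate_gbps: float, levels: int) -> float:
        return bit_rate_gbps / math.log2(levels)

    print(symbol_rate_gbd(53.0, 4))  # ~26.5 GBd with PAM-4
    print(symbol_rate_gbd(53.0, 2))  # 53.0 GBd with NRZ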

A whitepaper is available to view and download.

In addition, the Open Eye MSA announced the draft of its multi-mode specification available to its members for comments, with general availability targeted for release in Fall 2020.

http://www.openeye-msa.org

Tuesday, March 3, 2020

OIF's Coherent Common Management Interface Spec supports 400ZR

OIF has completed the Coherent Common Management Interface Specification (C-CMIS) Implementation Agreement (IA), which serves as an extension to the CMIS (QSFP-DD/OSFP/COBO) management specification, specifically targeting DCO modules.

“The C-CMIS IA is an important part of the developing 400ZR ecosystem,” said Ian Betty, Ciena and OIF Board Member. “It defines additional management registers, and monitors, together with new functionality, mechanisms, or behaviors, as needed.”

The C-CMIS IA provides register definition for coherent modules in pages and parameters that were previously reserved. Users that have previously implemented software to manage optical modules using CMIS will be able to quickly add support for these coherent pages and parameters. This release, which augments the existing CMIS specification which focused on addressing direct detect client optics, is targeted at the 400ZR application.

The technology and complexity of coherent modules requires additional monitoring parameters for use in field applications. This additional monitoring is primarily focused on Forward Error Correction (FEC) monitoring and optical/analog monitoring including items like Chromatic Dispersion, Differential Group Delay and Electrical Signal to Noise Ratio (eSNR). The C-CMIS IA provides specifications to monitor the standard parameters in a normative manner while taking advantage of the flexibility of the CMIS specification to monitor any additional proprietary parameters. 

“The current IA is focused on supporting the OIF 400ZR IA, which supports a single data path with eight-lane host electrical interface for a 400GBASE-R PCS signal and a single-lane 400G coherent media interface (with a new signal format called 400ZR),” explained Betty. “However, we expect future versions to include more complex Metro modules and may even extend these management features to other form factors.”

https://www.oiforum.com/

Sunday, January 26, 2020

CBRS Release 2 opens door to further innovation

The Wireless Innovation Forum (WInnForum) announced the approval of a new Release 2 specification defining enhancements to the baseline CBRS Operational and Functional Requirements. It defines optional features and functionality that can be incorporated at any time, with special focus on supporting specific vertical markets and their deployments.

“Our new release of the CBRS standards opens the way for substantial additional innovation in the CBRS band,” said Andrew Clegg, Chair of the WInnForum’s Spectrum Sharing Committee CBRS Functional and Operational Requirements Working Group. “Now anyone can rapidly add features to CBRS simply by contributing a suitable appendix to the Release 2 specification, and, pending committee approval and appropriate certification requirements, the feature is ready for adoption by CBRS users, equipment manufacturers, and Spectrum Access System (SAS) Administrators, on demand.”

Based on the Release 2 additions, exciting emerging technologies can be considered and implemented. Examples include:


  • Single Frequency Group - a set of CBSDs that require a common radio frequency assignment and reassignment when frequency reassignment is necessary or preferred; and,
  • 2D Antenna Patterns - requirements on how CBSD two-dimensional antenna patterns should be specified and used by the SAS to calculate CBSD antenna gain in a given direction, taking both the horizontal and vertical patterns into account (an illustrative gain calculation follows this list).
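
As a rough illustration of the 2D antenna pattern idea in the last bullet, gain toward a given azimuth/elevation can be approximated by adding the horizontal and vertical relative patterns in dB to the peak gain. This is a generic textbook-style approximation shown only for illustration; the Release 2 specification defines its own normative calculation, which may differ.

    # Illustrative only: approximate antenna gain toward (azimuth, elevation) by
    # combining horizontal and vertical pattern cuts in dB. Not the normative
    # WInnForum Release 2 method.
    def gain_dbi(peak_gain_dbi: float,
                 h_pattern_db: dict, v_pattern_db: dict,
                 azimuth_deg: int, elevation_deg: int) -> float:
        """h_pattern_db/v_pattern_db map angle (deg) -> relative loss in dB (<= 0)."""
        rel_h = h_pattern_db.get(azimuth_deg % 360, -30.0)  # assumed -30 dB floor
        rel_v = v_pattern_db.get(elevation_deg, -30.0)
        return peak_gain_dbi + rel_h + rel_v

    # Example: a 15 dBi sector antenna, 3 dB down at 30 degrees off boresight in
    # azimuth and 1 dB down at 5 degrees in elevation.
    h = {0: 0.0, 30: -3.0}
    v = {0: 0.0, 5: -1.0}
    print(gain_dbi(15.0, h, v, azimuth_deg=30, elevation_deg=5))  # 11.0 dBi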

Additional emerging technologies can be considered in subsequent releases. Development is already underway on additional features to be added to Release 2 very shortly. Other planned features include enhanced group handling, flexible grants and grant updates, indoor penetration loss measurements, refined propagation modeling, registration enhancements, and support for beamforming. The Release 2 specifications will include updates to the SAS to SAS and SAS to CBSD protocols to support these new features, and a Release 2 test specification allowing industry to self-certify against requirements that do not impact Part 96.

https://cbrs.wirelessinnovation.org/enhancements-to-baseline-specifications
http://www.WirelessInnovation.org

Wednesday, January 15, 2020

5G Future Forum aims for mobile-edge computing

A new 5G Future Forum is getting underway with the backing of America Movil, KT, Rogers, Telstra, Verizon and Vodafone to develop uniform interoperability specifications to improve speed to market for developers and multinational enterprises working on 5G-enabled solutions. In addition, Forum participants will develop public and private marketplaces to enhance developer and customer access to 5G, and will share global best practices in technology deployment.

"5G is a key enabler of the next global industrial revolution, where technology will transform how we live and work. It's critical that technology partners around the world unite to create the most seamless global experience for our customers," said Hans Vestberg, Chairman and CEO of Verizon. "We are proud to join with our fellow 5G leaders to unlock the full potential of applications and solutions that will transform with 5G's fast speeds, high reliability, improved security and single-digit latency."

Wednesday, January 8, 2020

Four new NFC specifications released

The NFC Forum released four new specifications aimed at improving the robustness and communication speed of Near Field Communication (NFC) devices, such as smartphones.

  • Digital Protocol Technical Specification Version 2.2: This specification addresses the digital protocol for NFC-enabled device communication, providing an implementation specification on top of the ISO/IEC 18092 and ISO/IEC 14443 standards. The new specification adds error recovery for Type 2 and Type 5 Tags communication. This update improves the user experience by ensuring reliable NFC communication in difficult environments where NFC communications might be disturbed.
  • Type 2 Tag to Type 5 Tag Technical Specifications Version 1.1: These specifications define how NFC-enabled devices in Reader/Writer Mode detect, read, and write an NFC Data Exchange Format (NDEF) message on NFC Forum Tags. They were updated to support time-optimized implementations that improve the performance of reading NFC Forum tags in support of the new TNEP protocol.
  • Activity Technical Specification Version 2.1 and Profiles Technical Specification Version 1.0: These specifications were created by splitting Activity 2.0 into two documents to ease future maintenance. The Profiles section of Activity 2.0 is now described in the Profiles 1.0 Technical Specification, which adds a new Profile for discovering all services that may be offered over different technologies. The Activity specification explains how the NFC Digital Protocol Specification can be used to set up the communication protocol with another NFC device or NFC Forum tag.

“We are constantly improving on the global specifications to improve the overall user experience for NFC users. NFC Forum members take this responsibility very seriously as their decisions impact the majority of smartphone users and many businesses,” said Mike McCamon, executive director, NFC Forum. “The specifications we are announcing today enhance the quality of NFC communications and allow users to exchange more data, faster in support of the rapid increase we are seeing in the use of TNEP.”


Tuesday, September 3, 2019

HiWire aims for standard Active Electrical Cables at 400G and up

A new HiWire Consortium has been established to pursue the standardization and certification of a new category of Active Electrical Cables (AEC). The group is dedicated to the establishment and ongoing development of an AEC standard that defines a specific implementation of the many industry MSAs and a formal certification process. This will enable an ecosystem of trusted Plug and Play AECs, available from multiple sources, for the hyperscale data center, telecom and enterprise markets.

HiWire AECs provide a full solution for layer 1 and 2 interconnect to deliver persistent and deterministic connectivity necessary for the next generation of data centers as the industry moves to 400G and beyond.

The founding companies of the HiWire Consortium are: Accton Technology, Alpha Networks, Arrcus, Bizlink, Cameo Communications, Zhejiang Canaan Technology, Centec Networks, Chelsio Communications, Credo, Dell EMC, Delta Electronics, Edom Technology, Cheng Uei Precision Industry Co.(Foxlink), Innovium, Barefoot Networks (an Intel Company), Inventec, Juniper Networks, Keysight Technologies, Quanta, Senao, Spirent Communications, Steligent Information Technologies Co., Wistron, Wistron NeWeb, and Wywinn Corporation.

“We are delighted and humbled by the widespread support for Credo and the HiWire Consortium,” said Bill Brennan, CEO of Credo. “The founding members all share a desire for thinner, longer and more reliable interconnect solutions. The consortium provides the framework to deliver a robust supply of interoperable solutions for 400G and beyond.”

“As new high-performance software workloads hit the network, the availability of reliable, low cost 400G interconnect is crucial,” said Ed Doe, vice president in Intel’s Connectivity Group and General Manager of the Barefoot Division. “The HiWire Consortium will go a step beyond MSAs and deliver to the datacenter what they have demanded from the start – truly interoperable interconnect solutions from a broad vendor base which will accelerate the adoption of our 12.8 Tbps P4-programmable Barefoot Ethernet switch series.”

“400G Ethernet represents a very challenging transition for the networking industry,” said Bob Wheeler, Principal Analyst at The Linley Group. “Creating a specific implementation and formal certification program around the many industry MSAs and standards is key to enabling trusted cables from multiple sources.”

https://hiwire.org



USB4 delivers 40Gbps bi-directionally

The new USB4 spec was officially published by the USB Implementers Forum (USB-IF).

The USB4 architecture is based on the Thunderbolt protocol specification recently contributed by Intel Corporation to the USB Promoter Group. It doubles the maximum aggregate bandwidth of USB and enables multiple simultaneous data and display protocols. Compatibility with existing USB 3.2, USB 2.0 and Thunderbolt 3 hosts and devices is also maintained; the resulting connection scales to the best mutual capability of the devices being connected.

Key characteristics of the USB4 solution include:

  • Two-lane operation using existing USB Type-C cables and up to 40Gbps operation over 40Gbps certified cables
  • Multiple data and display protocols that efficiently share the maximum aggregate bandwidth
  • Backward compatibility with USB 3.2, USB 2.0 and Thunderbolt 3

Over 50 companies are actively participating in the final stages of review of the draft specification.

http://www.usb.org

Wednesday, August 21, 2019

MEF publishes SD-WAN standard

MEF officially published SD-WAN Service Attributes and Services (MEF 70) -- the industry’s first global standard defining an SD-WAN service and its service attributes. The standard was officially approved by MEF members and ratified by the MEF Board of Directors at the organization’s recent Annual Members Meeting.

The SD-WAN standard describes requirements for an application-aware, over-the-top WAN connectivity service that uses policies to determine how application flows are directed over multiple underlay networks irrespective of the underlay technologies or service providers who deliver them.

MEF 70, among other things, defines:

  • Service attributes that describe the externally visible behavior of an SD-WAN service as experienced by the subscriber.
  • Rules associated with how traffic is handled (an illustrative policy sketch follows this list).
  • Key technical concepts and definitions like an SD-WAN UNI, the SD-WAN Edge, SD-WAN Tunnel Virtual Connections, SD-WAN Virtual Connection End Points, and Underlay Connectivity Services.
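
As a purely illustrative sketch of the policy-to-application-flow idea the standard formalizes, a policy can be thought of as selecting an underlay for a given application flow. The class and field names below are invented for the example and are not MEF 70 service attributes.

    # Hypothetical sketch of an SD-WAN policy decision: map an application flow
    # to an underlay connectivity service based on a policy. The class and field
    # names are invented for this example and are not MEF 70 service attributes.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Policy:
        name: str
        encrypted: bool        # must the underlay be private/encrypted?
        max_latency_ms: float  # performance target for the application flow

    @dataclass
    class Underlay:
        name: str
        private: bool
        measured_latency_ms: float

    def select_underlay(policy: Policy, underlays: List[Underlay]) -> Underlay:
        """Pick the first underlay that satisfies the policy (simplified logic)."""
        for u in underlays:
            if policy.encrypted and not u.private:
                continue
            if u.measured_latency_ms <= policy.max_latency_ms:
                return u
        raise RuntimeError("no underlay satisfies policy " + policy.name)

    voip = Policy("voip", encrypted=True, max_latency_ms=50)
    paths = [Underlay("broadband-internet", private=False, measured_latency_ms=20),
             Underlay("mpls-vpn", private=True, measured_latency_ms=35)]
    print(select_underlay(voip, paths).name)  # mpls-vpn
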
SD-WAN standardization offers numerous benefits that will help accelerate SD-WAN market growth while improving overall customer experience with hybrid networking solutions. Key benefits include:

  • Enabling a wide range of ecosystem stakeholders to use the same terminology when buying, selling, assessing, deploying, and delivering SD-WAN services.
  • Making it easier to interface policy with intelligent underlay connectivity services to provide a better end-to-end application experience with guaranteed service resiliency.
  • Facilitating inclusion of SD-WAN services in standardized LSO architectures, thereby advancing efforts to orchestrate MEF 3.0 SD-WAN services across automated networks.
  • Paving the way for creation and implementation of certified MEF 3.0 SD-WAN services, which will give users confidence that a service meets a fundamental set of requirements.

“We want to thank the SD-WAN team for the incredible job they have done in bringing this industry-first standard to market in a timely manner,” said Nan Chen, President, MEF. “Combining standardized SD-WAN services with dynamic high-speed underlay connectivity services – including Carrier Ethernet, Optical Transport, and IP – enables service providers to deliver powerful MEF 3.0 hybrid networking solutions with unprecedented user- and application-directed control over network resources and service capabilities.”

MEF already has begun work on the next phase of SD-WAN standardization – MEF 70.1 – that will be of high interest to many enterprises. This work includes defining:
  • Service attributes for application flow performance and business importance.
  • SD-WAN service topology and connectivity.
  • Underlay connectivity service parameters.

MEF also is progressing related standards work focused on:
  • Application security for SD-WAN services.
  • Intent-based networking for SD-WAN that will simplify the subscriber-to-service provider interface.
  • Information and data modeling standards that will accelerate LSO API development for SD-WAN services.

“We’re seeing a significant change in how customers are using SD-WAN now versus two years ago, and that evolution is what makes service standards from MEF so critical. Today, and moving forward, SD-WAN is about delivering application performance. As the underlying networks — Optical Transport, Carrier Ethernet, and IP — are under greater pressure to be more ubiquitous, easy to provision, on-demand and elastic, that is where the MEF 3.0 construct comes into play. MEF’s role is creating a standards-based, intelligent network across multiple carriers that will eliminate friction as we work with each other to deliver application performance at the level of efficiency our customers are seeking,” stated Roman Pacewicz, Chief Product Officer, AT&T Business.

“Verizon is pleased to support MEF’s industry-leading SD-WAN standardization work. SD-WAN is the way to interface policy with an intelligent software defined network, and standardization makes it easier for integration to work across multiple types of underlying transport services. What that means for our end customers is it lets them get a better overall experience relative to their applications, with support for a broader range of use cases, guaranteed service resiliency, and improved service capabilities in an always on, always connected world,” stated Shawn Hakl, Senior Vice President Business Products, Verizon.

In addition, MEF remains on track to launch its MEF 3.0 SD-WAN Certification pilot program in 4Q 2019. This certification will test a set of service attributes and their behaviors defined in MEF 70 and described in detail in the upcoming MEF 3.0 SD-WAN Certification Test Requirements (MEF 90) document.

https://wiki.mef.net/pages/viewpage.action?pageId=89003131



Tuesday, June 18, 2019

PCIe 6.0 to leverage 56G PAM-4 to hit 64 GT/s transfer rate

PCI Express (PCIe) 6.0 technology will double the data rate to 64 GT/s while maintaining backwards compatibility with previous generations. PCI-SIG, the consortium that owns and manages the PCI specifications, said PCIe 6.0 is on target for release in 2021.

PCIe 6.0 Specification Features

  • Delivers 64 GT/s raw bit rate and up to 256 GB/s via x16 configuration (see the arithmetic sketch after this list)
  • Utilizes PAM-4 (Pulse Amplitude Modulation with 4 levels) encoding and leverages existing 56G PAM-4 in the industry
  • Includes low-latency Forward Error Correction (FEC) with additional mechanisms to improve bandwidth efficiency
  • Maintains backwards compatibility with all previous generations of PCIe technology
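
The headline x16 number in the first bullet follows from simple arithmetic, assuming the figure counts both directions of the link and ignores encoding and protocol overhead, which is how PCI-SIG typically quotes aggregate bandwidth:

    # Back-of-the-envelope check of "up to 256 GB/s via x16 configuration".
    # Assumes the figure is bidirectional and ignores encoding/protocol overhead.
    raw_rate_gt_per_s = 64   # transfers per second per lane, 1 bit per transfer
    lanes = 16
    directions = 2           # PCIe links are full duplex

    gigabytes_per_s = raw_rate_gt_per_s * lanes * directions / 8  # bits -> bytes
    print(gigabytes_per_s)   # 256.0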

“PCI Express technology has established itself as a pervasive I/O technology by sustaining bandwidth improvements for five generations over two decades,” Dennis Martin, an analyst at Principled Technologies, said.

“Continuing the trend we set with the PCIe 5.0 specification, the PCIe 6.0 specification is on a fast timeline,” Al Yanes, PCI-SIG Chairman and President, said. “Due to the continued commitment of our member companies, we are on pace to double the bandwidth yet again in a time frame that will meet industry demand for throughput.”

http://www.pcisig.com


Tuesday, May 7, 2019

Open Eye MSA Consortium targets 50/100/200/400G modules

The Open Eye Consortium has established a Multi-Source Agreement (MSA) aimed at standardizing advanced specifications for lower latency, more power efficient and lower cost 50 Gbps, 100 Gbps, 200 Gbps, and up to 400 Gbps optical modules for datacenter interconnects over single-mode and multimode fiber.

The MSA aims to accelerate the adoption of PAM-4 optical interconnects scaling to 50 Gbps, 100 Gbps, 200 Gbps, and 400 Gbps by expanding upon existing standards to enable optical module implementations using less complex, lower cost, lower power, and optimized clock and data recovery (CDR) based architectures in addition to existing digital signal processing (DSP) architectures. The idea is to minimize the use of digital signal processing in optical modules.

The Open Eye industry consortium said it is committed to developing an industry-standard optical interconnect that leverages seamless component interoperability among a broad group of industry-leading technology providers, including providers of electronics, lasers and optical components.

 The initial Open Eye MSA specification will focus on 53 Gbps per lane PAM-4 solutions for 50G SFP, 100G DSFP, 100G SFP-DD, 200G QSFP, and 400G QSFP-DD and OSFP single-mode modules. Subsequent specifications will aim to address multimode and 100Gbps per lane applications. The initial specification release is planned for Fall 2019, with product availability to follow later in the year.

MACOM and Semtech Corporation initiated the formation of the Open Eye MSA with 19 current members in Promoter and Contributing membership classes.

Promoters include Applied Optoelectronics Inc., Cambridge Industries (CIG), Color Chip, Juniper Networks, Luxshare-ICT, MACOM, Mellanox, Molex and Semtech Corporation.

Contributors include: Accelink, Cloud Light Technology, InnoLight, Keysight Technologies, Maxim Integrated, O-Net, Optomind, Source Photonics and Sumitomo Electric.

“Through its participation in the Open Eye MSA, AOI is leveraging our laser and optical module technology to deliver benefits of low cost, high-speed connectivity to next generation data centers.” David (Chan Chih) Chen, AVP, Strategic Marketing for Transceiver, AOI

“MACOM continues to drive the industry’s technical requirements towards meeting the demands of Cloud Service Providers. Leveraging our proven leadership in 25G, 50G and 100G analog chipsets and optical components, we co-founded the Open Eye MSA to accelerate the adoption of 200G and 400G PAM optical interconnects. At the same time we are working in parallel to advance the DSP technologies necessary for faster connectivity speeds for future applications.” Preet Virk, Senior Vice President and General Manager, Networks, MACOM.

Companies interested in joining the Open Eye MSA can contact: admin@openeye-msa.org.

http://www.openeye-msa.org

Sunday, March 24, 2019

ETSI Multi-access Edge Computing phase 2 specs released

The ETSI Multi-access Edge Computing group (MEC ISG) released the first set of its Phase 2 specifications, including ETSI GS MEC 002, which contains the new requirements for Phase 2; ETSI GS MEC 003, covering architecture and framework; and ETSI GS MEC 009, giving general principles for service APIs.

The updated specs focus on NFV integration. They also describe example use cases and their technical benefits, for the purpose of deriving requirements. In addition, the release includes a report on MEC support for vehicle-to-infrastructure and vehicle-to-vehicle use cases.

“With this Release, the group continues to strengthen the leadership role that ETSI has played in edge computing since day one. I am proud of the quality of the work this team keeps delivering, making sure that the MEC marketplace evolves to an efficient, interoperable and open environment” says Alex Reznik, MEC ISG Chair. 

https://www.etsi.org/newsroom/press-releases/1567-2019-03-etsi-multi-access-edge-computing-releases-phase-2-specifications

Tuesday, February 19, 2019

Low latency spec for 50GbE tweaks forward error correction

The 25 Gigabit Ethernet Consortium has completed a low-latency forward error correction (FEC) specification for 50 Gbps, 100 Gbps and 200 Gbps Ethernet networks.

The new spec cuts FEC latency approximately in half by using a shortened codeword FEC variant, RS(272, 257+1, 7, 10), that replaces the IEEE 802.3cd and 802.3bs standard FEC. The shortened codeword contains 272 x 10-bit symbols rather than the 544 x 10-bit symbols originally specified. Nothing else changes in the symbol distribution process from the output of the encoder to the FEC lanes in the new FEC, but that process is implemented more quickly due to the shortened codeword.
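
To see why halving the codeword roughly halves the FEC latency, consider the time needed just to accumulate one codeword before decoding can start. This is an illustrative calculation that assumes a single 50 Gbps stream feeds the decoder; the real standards distribute symbols across FEC lanes, but the 2:1 ratio is the same.

    # Time to buffer one full FEC codeword before decoding, assuming one 50 Gbps
    # stream (a simplification; actual symbol distribution across lanes differs).
    def codeword_time_ns(symbols: int, bits_per_symbol: int, rate_gbps: float) -> float:
        return symbols * bits_per_symbol / rate_gbps  # bits / (Gbit/s) = ns

    print(codeword_time_ns(544, 10, 50.0))  # ~108.8 ns, original 544-symbol codeword
    print(codeword_time_ns(272, 10, 50.0))  # ~54.4 ns, shortened 272-symbol codeword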

This will have a significant impact on overall physical layer latency, in particular for hyperscale datacenter networks comprised of a large number of nodes, with multiple hops between servers.

“Five years ago, only HPC developers cared about low latency, but today latency sensitivity has come to many more mainstream applications,” said Rob Stone, technical working group chair of the 25G Ethernet Consortium. “With this new specification, the consortium is improving the single largest source of packet processing latency, which improves the performance that high-speed Ethernet brings to these applications.”

The specification is available at https://25gethernet.org/ll-fec-specification

Tuesday, September 25, 2018

ECOC 2018: DSFP form factor doubles data rate and density of SFP

A rev 1.0 hardware specification has been released for new DSFP (Dual Small Form-Factor Pluggable) modules -- doubling the data rate and port density of SFP modules in the same footprint.

Whereas SFP has a single electrical lane pair operating at bit rates up to 28 Gbps using NRZ and 56 Gbps using PAM4, the new DSFP has two electrical lane pairs, each operating at bit rates up to 28 Gbps using NRZ and 56 Gbps using PAM4, supporting aggregate data rates up to 56 Gbps and 112 Gbps, respectively. DSFP will potentially scale to a per-lane bit rate of 112 Gbps using PAM4, supporting aggregate data rates up to 224 Gbps. SFP modules can be plugged into DSFP ports for backwards compatibility.

The spec was developed by the DSFP MSA (Multi-Source Agreement) Group, whose founding members are Amphenol, Finisar, Huawei, Lumentum, Molex, NEC, TE Connectivity, and Yamaichi.

The DSFP Hardware Specification Rev. 1.0 includes complete electrical, mechanical and thermal specifications for module and host card, including connector, cage, power, and hardware I/O. Also included are operating parameters, data rates, protocols, and supported applications.

Work is now underway on the DSFP MIS (Management Interface Specification), which is an abridged version of the CMIS (Common MIS) being developed by the QSFP-DD, OSFP and COBO Advisors Group.

“We are very excited about the introduction of a highly competitive new form factor by the DSFP MSA, which will double interface bandwidth and port density while maintaining compatibility with the existing SFP family of optics,” said Zhoujian Li, President of Research and Development, Wireless Networks, Huawei. “The DSFP form factor is low cost, has excellent high-speed signal integrity, reduces PCB area and is easy to design and manufacture.  It is a great platform that enables 5G deployment and evolution, while fully protecting our customers’ investment.”

“Publication of the DSFP Hardware Specification is part of an industry trend of quickly developing solutions optimized for specific applications. Stringent cost, power and size constraints in demanding market segments, like Mobile infrastructure, leads to solutions focused strictly on required functionality,” commented Chris Cole, Chair of the DSFP MSA Group, and Vice President of Advanced Development, Finisar.

http://www.dsfpmsa.org

Wednesday, September 19, 2018

IEEE 802.11aq enables wireless service delivery

The IEEE Standards Association (IEEE-SA) approved and published IEEE 802.11aq, an amendment to IEEE 802.11™ that addresses discovery of available services in Wireless Local Area Networks (WLANs).

The IEEE 802.11aq amendment specifies parameters for pre-association queries between wireless networks and devices. By facilitating a rich exchange of information between the wireless access point and the user’s device, users can swiftly and effortlessly discover what types of services are supported before making the decision to connect. Simplifying the service discovery process streamlines the network selection process, thereby elevating the end user experience.

IEEE said storing and caching available services with access points permits operators to differentiate their service offerings from those of market competitors in the same locality, opening the door to potential revenue generation opportunities.

“Connecting to a WLAN without first being able to easily discover whether a given service is supported by that network is often a source of frustration for end users. The IEEE 802.11aq amendment mitigates these situations by permitting users to quickly determine what services are available prior to actually connecting their devices,” said Stephen McCann, chair IEEE 802.11aq task group. “IEEE 802.11aq also delivers a critical competitive advantage through service differentiation in crowded market environments.”

Monday, July 2, 2018

CableLabs sets Point-to-Point Coherent Optics spec

CableLabs completed and published its first Point-to-Point Coherent Optics specifications for fiber access networks. The specifications enable the development of interoperable transceivers using coherent optical technology over point-to-point links.

The new specs, which support 100 Gbps per wavelength (a 10X increase over the previous 10 Gbps rate), include:

  • P2P Coherent Optics Architecture Specification 
  • P2P Coherent Optics Physical Layer v1.0 Specification 

Coherent Optics technology uses amplitude, phase, and polarization to enable much higher fiber capacities which can improve streaming, video conferencing, file uploads and downloads and future usage needs for technologies such as virtual and augmented reality.

“CableLabs Point-to-Point Coherent Optics takes the existing fiber access network to hyper speed, boosting fiber capacity to meet the growing demand of broadband customers,” said Phil McKinney, president and chief executive officer of CableLabs. “Over half a billion people rely on CableLabs technology every day, and this breakthrough not only increases the capacity of the existing fiber system by an order of magnitude, it opens up wavelength resources to improve network quality and reliability, enabling advancements in cellular and wireless services.”

This announcement closely follows the launch of the CableLabs’ Full Duplex DOCSIS® specification in 2017, reflecting the company’s ongoing commitment to the broadband consumer community, cable and fiber providers and other key industry stakeholders.

https://apps.cablelabs.com/specification/P2PCO-SP-PHYv1.0
https://www.cablelabs.com/point-to-point-coherent-optics-specifications/

Sunday, May 13, 2018

Interview - Disaggregating and Virtualizing the RAN

The xRAN Forum is a carrier-led initiative aiming to apply the principles of virtualization, openness and standardization to one area of networking that has remained stubbornly closed and proprietary -- the radio access network (RAN) and, in particular, the critical segment that connects a base station unit to the antennas. Recently, I sat down with Dr. Sachin Katti, Professor in the Electrical Engineering and Computer Science departments at Stanford University and Director of the xRAN Forum, to find out what this is all about.

Jim Carroll, OND: Welcome Professor Katti. So let's talk about xRAN. It's a new initiative. Could you introduce it for us?

Dr. Sachin Katti, Director of xRAN Forum: Sure. xRAN is a little less than two years old. It was founded in late 2016 by me along with AT&T, Deutsche Telekom and SK Telecom -- and it's grown significantly since then. We are now up to around ten operators and at least 20 vendor companies, so it's been growing quite a bit over the last year and a half.

JC: So why did xRAN come about?

SK: Some history about how all of this happened... I was actually here at Stanford, in my role as faculty, collaborating with both AT&T and Deutsche Telekom on something we called SoftRAN, which stood for software-defined radio access network. The research really was around how do you take radio access networks, which historically have been very tightly integrated and coupled with hardware, and make them more virtualized -- to disaggregate the infrastructure so that you have more modular components, and also define interfaces between the different common components. I think we all realized at that point that to really have an impact, we needed to take this out of the research lab and get the industry and the cross-industry ecosystem to join forces and make this happen in reality.

That's the context behind how xRAN was born. The focus is on how do we define a disaggregated architecture for the RAN. Specifically, how do you take what's called the eNodeB base station and deconstruct the software that's running on it such that you have modular components with open interfaces between them that allow for interoperability, so that you could truly have a multi-vendor deployment. And two, it also has a lot more programmability, so that an operator could customize it for their own needs, enabling new applications and new services much more easily without having to go through a vendor every single time. I think it was really meant to enable all of those aspects, and that's how it got started.

JC: Okay. Is there a short mission statement?  

SK: Sure. The mission statement for xRAN is to build an open, virtualized, disaggregated radio access network architecture with standardized interfaces between all of these components, and to be able to build all of these components in a virtualized fashion on commodity hardware wherever possible.

JC:  In terms of the use cases, why would carriers need to virtualize their RAN, especially when they have other network slicing paradigms under development?

SK: It's great that you bring up network slicing, actually. Network slicing is one of the driving use cases, and the way to think about this is that, in the future, everyone expects to have network slices with very different connectivity needs for enabling different kinds of applications. So you might have a slice for cars that has very different bandwidth and latency characteristics compared to a slice for IoT traffic, which is a bit more delay tolerant, for example.

JC: And those are slices in a virtual EPC? Is that right?

SK: Those are slices that need to be end-to-end. It can't just be the EPC, because ultimately the SLAs you can give for the kind of connectivity you deliver are going to be dictated by what happens on the access. So, eventually, a slice has to be end-to-end, and the challenge was: if an operator, for example, wants to define new slices, then how do they program the radio access network to deliver that SLA, to deliver the connectivity that that slice needs.

In the EPC there was a lot of progress on what those interfaces are to enable such slicing, but similar progress did not happen in the RAN. How do you program the base station, and how do you program the access network itself, to deliver such slicing capability? So that's actually one of the driving use cases that's been there since the start of xRAN. Another big use case, and I'm not sure whether we should call it a use case, but just a need, is around having a multi-vendor deployment. Historically, if you look at radio access network deployments, they're single vendor. So, if you take a U.S. operator, for example, they literally divide up their markets into an Ericsson market or a Nokia market or whatever. And the understanding is everything in that market, from the base station to the antenna to the backhaul, everything comes from one vendor. They really cannot mix and match components from different vendors because there haven't been many interoperable interfaces, so the other big need or requirement coming out of all this is interoperability in a multi-vendor environment.

JC: How about infrastructure sharing? I mean we see that the tower companies are now growing by leaps and bounds and many carriers thinking that maybe it's no longer strategically important to own the tower and so share that tower, and they might share the backhaul as well. 

SK: It will actually help. It will enable that kind of sharing at an even deeper level, because if you have infrastructure that is virtualized and running on commodity hardware, then it becomes easier for a tower company to set up the compute substrate and the underlying backhaul substrate and then provide virtual infrastructure slices to each operator to operate on top of. Right now they are basically renting space at the top of the tower, but if instead you could take the same underlying compute substrate, the same backhaul infrastructure as well as the fronthaul infrastructure, and virtually slice it and run multiple networks on top, it actually makes it possible to share the infrastructure even more. So virtualization is almost a prerequisite to any of this infrastructure sharing.

JC: Tell us about the newly released, xRAN fronthaul specification version 1.0. What is the body of work it builds on?

SK: Sure, let me step back and just talk about all the standardization efforts, and then I'll answer the question. xRAN actually has three big working groups. One is around fronthaul, which refers to the link between the radio head and the baseband unit. This is the transport that's actually carrying the data between the baseband unit and the radio for transmission and, in the reverse direction, when you receive something from the mobile unit. So that's one aspect. The second one is around the control plane and user plane separation in the base station. Historically, the control plane and the user plane are tightly coupled. A significant working group effort in xRAN right now is how do you decouple those and define standardized interfaces between a control plane and a user plane. And the last working group is trying to define what the interfaces are between the control plane of the radio access network and orchestration systems like ONAP. So those are the three main focus areas.

Our first specification, which describes the fronthaul interfaces, was released this month. So, what went on there? The problem that we solved concerns closed interfaces. Today if you buy a base station you also have to buy the antenna from the same vendor. That's it. For example, if you buy an Ericsson base station you have to buy an antenna from Ericsson as well. There are very few compatible antenna systems, but with 5G, and even with 4G, there's been a lot of innovation on the antenna side. There are innovators developing massive MIMO systems. These have lots of antennas and can significantly increase the capacity of the RAN. Many start-ups are trying to do this, but they're struggling to get any traction because they cannot sell their antennas and connect them to an existing vendor's baseband unit. So, a critical requirement that operators were pushing was: how do we make it such that this fronthaul specification is truly interoperable, making it possible to mix and match. You could take a small vendor's radio head and antenna and connect it with an existing well-established vendor's baseband unit -- that was the underlying requirement. What the new fronthaul work is truly trying to accomplish is to make sure that this interface is very clearly specified such that you do not need tight integration between the baseband unit and the radio head unit.

This fronthaul work came about initially with Verizon, AT&T and Deutsche Telekom driving it. Over the past year, we have had multiple operators joining the initiative, including NTT DoCoMo, and several vendors they brought along, including Nokia, Samsung, Mavenir, and a bunch of other companies, all coming together to write the specification and contribute IP towards it.

JC: Interesting, so you have support from those existing vendors who would seem to have a lot to lose if this disaggregation worked against them.

SK: Yes, we do. Current xRAN members include all of the bigger vendors, such as Nokia and Samsung, especially on the radio side. Cisco is a member, which is more on the orchestration side, and there are several other big vendors that are part of this effort. And yeah, they have been quite supportive.

The xRAN Forum is an operator-driven body. The way we set up a new working group or project is that operators come in and tell us what their needs are, what their use cases are, and if we see enough consistency, when multiple operators share the same need or share the same use case, that leads to the start of the new working group. The operators often end up bringing their vendors along by saying we need this, "we are gonna drive it through the xRAN consortium and we need you to come and participate, otherwise you'll be left out." That's typically how vendors are forced to open up.

JC: Okay, interesting, so let's talk a little bit about the timelines and how this could play out. You talked about plugging into an existing baseband unit or base station unit so I guess there is a backward compatibility aspect?

SK: No, we are not expecting operators to build entirely new networks. The first fronthaul specification is meant both for 4G and 5G. The fronthaul is actually independent of the underlying air interface so it can work under 4G networks. On the baseband side, it does require a software update. It does require these systems to adhere to the spec in terms of how to talk to the radio head, and if they do, then the expectation is that someone should be able to plug in a new radio head and be able to make that system work. That being said, where we are at right now, is we have released a public specification. We believe it's interoperable but the next stage is to do interoperability testing. We expect that to happen later this year. Once interoperability testing happens, we will know what set of systems are compatible. Then we will have, if you will, a certificate saying that these are compliant.

JC: And would that certification be just for the fronthaul component or would that be for the control plane and data plane separation as well?

SK: Our working groups are progressing at different cadences. The fronthaul specification is already out, and we expect to do the interoperability testing later this year, and that will be only for the fronthaul. As and when we release the first specification for the control plane and user plane separation, we will have a corresponding timeline. But I think one thing to realize is that these are not all coupled. You could use the fronthaul specification on its own without having the rest of the architecture. You could take existing infrastructure, implement just the fronthaul specification, and realize the benefits of the interoperability without necessarily having a control plane that's decoupled from the user plane. So the thing is structured such that each of those working groups can act independently. We didn't want to couple them because that would mean it'll take a long time before anything happens.

JC: Wouldn't some of the xRAN work naturally have fit into 3GPP or ETSI's carrier virtualization efforts? Why have a new forum?

SK: Definitely. 3GPP is a big intersection point. I think the way we look at it is that we are trying to work on areas that 3GPP elected not to. So if it has anything to do with the air interface, for example, how the infrastructure should talk to the phone itself -- we are not trying to work in that space. If it's got anything to do with how the base station talks to the core network, we are not trying to specify that interface. But there are things that 3GPP elected not to work on for whatever reason, and that could be where vendor incentives come into play. Perhaps these vendors discouraged 3GPP from working on interoperable fronthaul interfaces; we don't know the reason why 3GPP chose this path. You can see that this is also operator driven. Operators want certain things to happen but have not been successful in getting 3GPP to do it. So xRAN is a venue for them to come in, specify what they want to do and what they want to accomplish, and get appropriately incentivized vendors to actually come together. So it is complementary in terms of the work effort, but I could see a scenario where the fronthaul specification that we come out with, this one and the next one, eventually forms the basis for a 3GPP standardized specification -- that's not necessarily a conflict -- that actually might be how things eventually get fully standardized.

JC: There are other virtualization ideas that have sprung up from this same lab and in the Bay Area. How does this work in collaboration with CORD and M-CORD?

SK: Historically, I think virtualization has infected, if you will, the rest of the networking domain but has struggled to make headway in the RAN. If you look at the rest of the network, there's been a lot of success with virtualization. The RAN has traditionally been quite hard to do. I think there are multiple reasons for that. One is that the workloads -- the things that you want to do in the RAN -- are much more stressful and demanding than the rest of the network in terms of processing. I think the hardware is now catching up to the point where you can take off-the-shelf hardware and run virtualized instances of the RAN on top. I think that's been one factor.

Second, the RAN is also a little bit harder to disaggregate because many of the control plane decisions are occurring at a very fast timescale. There are things, for example, like how should I schedule a particular user’s traffic to be sent over the air. That's a decision that the base station is making every millisecond and, at that timescale, it's really hard to run it at a deeper level. So, having a separate piece of logic making that decision, and then communicating that decision to the data plane if you will, and then the data plane implementing that decision, which would be classically how we think about SDN, that's not going to work because if you have a round-trip latency of one millisecond that you can tolerate, it's too stringent. I think we need to figure out how to deconstruct the problem, take out the right amount of control logic but still leave the very latency sensitive pieces in the underlying data plane of the infrastructure itself. I think that's still work in progress. We still know there are hard technical challenges there.

JC: Okay, talking about inspiration -- one last thing -- is there an application that you have in mind that inspires this work?

SK: Sure. I think a pretty compelling example is network slicing. As you look at these very demanding applications -- if you think about virtual reality and augmented reality applications, or self-driving cars -- there are very strict requirements on how that traffic should be handled in the network. If I think about a self-driving car, and it wants to offload some of its mapping and sensing capabilities to the edge cloud, that loop, that interaction loop between the car and the edge cloud, has very strict requirements. And you want that application to be able to come to the network and say, this is the kind of connectivity I need for my traffic, and for the network to be programmable enough that the operator can program the underlying infrastructure to deliver that kind of connectivity to the self-driving car application.

I think those two classes of applications are characterized by latency sensitivity and bandwidth intensity. You don't get any leeway on either dimension. Right now, the people developing those applications do not trust the network. If you think about current prototypes of self-driving cars, the developers cannot assume that the network will be there. So they currently must build very complex systems to make the vehicle completely autonomous. If we truly want to build things where the cloud can actually play a role in controlling some systems, then we need this programmable network to enable such a world.

JC: Excellent, well thank you very much and good luck!