Tuesday, May 15, 2018

Verizon designates AWS as its Preferred Public Cloud Provider

Verizon Communications has designated Amazon Web Services (AWS) as its preferred public cloud provider as part of a corporate-wide initiative to increase agility and reduce costs through the use of cloud computing.

Verizon is migrating over 1,000 business-critical applications and database backend systems to AWS, several of which also include the migration of production databases to Amazon Aurora—AWS’s relational database engine. Verizon is also building AWS-specific training facilities, called “dojos,” where its employees can quickly ramp up on AWS technologies and learn how to innovate with speed and at scale.

The companies noted that Verizon first started working with AWS in 2015. This latest wave of migrations to AWS accelerates Verizon's digital transformation.

"We are making the public cloud a core part of our digital transformation, upgrading our database management approach to replace our proprietary solutions with Amazon Aurora," said Mahmoud El-Assir, Senior Vice President of Global Technology Services at Verizon. “The agility we’ve gained by moving to the world’s leading public cloud has helped us better serve our customers. Working with AWS complements our focus on efficiency, speed, and innovation within our engineering culture, and has enabled us to quickly deliver the best, most efficient customer experiences."

“We look forward to continuing our work with Verizon as their preferred public cloud provider, helping them to continually transform their business and innovate on behalf of their customers. The combination of Verizon’s team of builders with AWS’s extensive portfolio of cloud services and expertise means that Verizon’s options for delighting their customers are virtually unlimited,” stated Mike Clayville, Vice President, Worldwide Commercial Sales at AWS.

NEC and Google test subsea modulation using probabilistic shaping

NEC and Google have tested probabilistic shaping techniques to adjust the modulation of optical transmission across the 11,000-km FASTER subsea cable linking the U.S. and Japan.

The companies have demonstrated that the FASTER open subsea cable can be upgraded to a spectral efficiency of 6 bits per second per hertz (b/s/Hz) in an 11,000 km segment -- representing a capacity of more than 26 Tbps in the C-band, which is over 2.5X the capacity originally planned for the cable, for no additional wet plant capital expenditure. The achievement represents a spectral efficiency-distance product record of 66,102 b/s/Hz·km.
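The headline numbers are internally consistent, as a quick back-of-the-envelope check shows. Note the assumptions: the ~4.4 THz of usable C-band spectrum and the 11,017 km segment length are inferred for illustration, not stated in the announcement.

```python
# Sanity check of the FASTER upgrade figures.
spectral_efficiency = 6.0   # b/s/Hz, as demonstrated
distance_km = 11_017        # assumed: segment length implied by the 66,102 b/s/Hz*km record
c_band_hz = 4.4e12          # assumed: a typical figure for usable C-band spectrum

capacity_tbps = spectral_efficiency * c_band_hz / 1e12
se_distance_product = spectral_efficiency * distance_km

print(f"Implied C-band capacity: {capacity_tbps:.1f} Tbps")         # ~26 Tbps
print(f"SE-distance product: {se_distance_product:,.0f} b/s/Hz*km")  # 66,102
```

Under these assumptions, 6 b/s/Hz across the C-band yields roughly the 26+ Tbps quoted, and 6 b/s/Hz over ~11,000 km gives the stated record product.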

The field trial was performed with live traffic on neighboring channels.

The companies said their test used near-Shannon probabilistic shaping with 64QAM modulation, and for the first time on a live cable, artificial intelligence (AI) was used to analyze data for the purpose of nonlinearity compensation (NLC). NEC developed an NLC algorithm based on data-driven deep neural networks (DNN) to accurately and efficiently estimate the signal nonlinearity.

"Other approaches to NLC have attempted to solve the nonlinear Schrödinger equation, which requires the use of very complex algorithms," said NEC's Mr. Toru Kawauchi, General Manager, Submarine Network Division. "This approach sets aside those deterministic models of nonlinear propagation, in favor of a low-complexity black-box model of the fiber, generated by machine learning algorithms. The results demonstrate both an improvement in transmission performance and a reduction in implementation complexity. Furthermore, since the black-box model is built up from live transmission data, it does not require advance knowledge of the cable parameters. This allows the model to be used on any cable without prior modeling or characterization, which shows the potential application of AI technology to open subsea cable systems, on which terminal equipment from multiple vendors may be readily installed."
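The idea of a data-driven black-box compensator can be illustrated with a toy sketch. This is not NEC's algorithm: the Kerr-like cubic channel model, the network size, and the training scheme below are all invented for the example. A small neural network learns the distortion directly from transmitted/received sample pairs, and the receiver subtracts the learned distortion, with no physical model of the fiber required.

```python
import numpy as np

# Toy "black-box" nonlinearity estimation from data (illustrative only).
rng = np.random.default_rng(0)

# Synthetic channel: a mild cubic (Kerr-like) distortion plus noise.
x = rng.uniform(-1, 1, (2000, 1))                      # transmitted amplitudes
y = x + 0.2 * x**3 + 0.01 * rng.normal(size=x.shape)   # received amplitudes

# One-hidden-layer network trained to predict the distortion (y - x) from y,
# so the compensator can subtract it at the receiver.
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
target = y - x
lr = 0.05
for _ in range(4000):
    h = np.tanh(y @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - target
    # Plain full-batch backpropagation, mean-squared-error loss.
    gW2 = h.T @ err / len(y); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = y.T @ dh / len(y); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

# Apply the learned compensator at the receiver.
compensated = y - (np.tanh(y @ W1 + b1) @ W2 + b2)
mse_before = float(np.mean((y - x)**2))
mse_after = float(np.mean((compensated - x)**2))
print("MSE before NLC:", mse_before)
print("MSE after NLC: ", mse_after)
```

The learned model plays the role of the "black-box model of the fiber" in the quote: it needs only observed data, not the cable parameters, which is what makes the approach portable across cables and vendors.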

Transpacific FASTER Cable Enters Service with 60 Tbps Capacity

The world's highest capacity undersea cable system has entered commercial service -- six fiber pairs capable of delivering 60 Terabits per second (Tbps) of bandwidth across the Pacific.

FASTER is a 9,000km trans-Pacific cable connecting Oregon and two landing sites in Japan (Chiba and Mie prefectures). The system has extended connections to major hubs on the West Coast of the U.S. covering Los Angeles, the San Francisco Bay Area, Portland and Seattle. The design features extremely low-loss fiber, without a dispersion compensation section, and the latest digital signal processing technology.

Google will have sole access to a dedicated fiber pair. This enables Google to carry 10 Tbps of traffic (100 wavelengths at 100 Gbps). In addition to greater capacity, the FASTER Cable System brings much needed diversity to East Asia, writes Alan Chin-Lun Cheung, Google Submarine Networking Infrastructure.
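The per-pair and system figures line up, under the simplifying assumption that each of the six fiber pairs carries the same 100 x 100 Gbps load as Google's dedicated pair:

```python
# Consistency check of the FASTER capacity figures (assumes uniform loading).
wavelengths_per_pair = 100
rate_per_wavelength_gbps = 100
fiber_pairs = 6

per_pair_tbps = wavelengths_per_pair * rate_per_wavelength_gbps / 1000
system_tbps = per_pair_tbps * fiber_pairs
print(per_pair_tbps, system_tbps)  # 10.0 Tbps per pair, 60.0 Tbps system
```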

Construction of the system was announced in August 2014 by the FASTER consortium, consisting of China Mobile International, China Telecom Global, Global Transit, Google, KDDI and Singtel.

ADVA intros virtualized encryption for the cloud

ADVA Optical Networking introduced its ConnectGuard Cloud technology for delivering virtualized encryption in hybrid and multi-cloud environments. The software is positioned as an alternative to costly and inflexible IPSec-focused appliances.

ADVA's ConnectGuard provides military-grade encryption and can be deployed on any COTS server or in a public cloud infrastructure. ConnectGuard Cloud is powered by Senetas' transport-independent encryption engine that supports dynamic software encryption at multiple layers, enabling customers to apply encryption based on the needs of the application and the type of networking available at remote sites.

The zero-touch provisioning capabilities of ADVA's Ensemble Connector enable the rollout of secure cloud connectivity to thousands of endpoints within minutes. The company said ConnectGuard Cloud is currently in multiple trials with enterprises and service providers across the globe.

"The security of our customers' data is something we've focused on for over two decades. Our team is intent on making sure that their data is safe wherever it is in the network," said Christoph Glingener, CTO, COO, ADVA. "That's why today marks a breakthrough. We've expanded our ConnectGuard(TM) security platform from protecting optical transport and Ethernet traffic to now safeguarding the cloud. With our ConnectGuard(TM) suite, we're securing data across Layers 1, 2, 3 and 4. This is something that no one else in the industry can offer. More than this, when customers use ConnectGuard(TM) Cloud, they benefit from all the unique capabilities of Ensemble Connector. With this solution, we can help customers safely migrate their applications to the cloud and we can even support a multi-cloud deployment model. This is a major step forward."

HPE to acquire Plexxi for hyperconverged switching

HPE agreed to acquire Plexxi, a developer of software-defined data fabric networking technology. Financial terms were not disclosed.

HPE said it intends to integrate Plexxi's technology into its own hyperconverged solutions, building on its acquisition of SimpliVity last year.

Plexxi, which was founded in 2010 and is based in Nashua, NH, is a provider of Hyperconverged Networking (HCN) solutions. The company said its strength is in combining software-defined topologies and intent-based automation with workload and infrastructure awareness. Its portfolio includes:

  • PLEXXI CONNECT -- an event-based workflow automation platform for data centers that drives the Plexxi fabric.
  • PLEXXI CONTROL -- workload-driven network fabric orchestration software for enterprise data centers, offering full network management and visualization.
  • PLEXXI SWITCHES -- a line of high-density 10/25/40/50/100GbE access switches offering up to 3.2 Tbps of switching capacity in a 1RU box.


Plexxi is headed by Rich Napolitano (CEO), who previously was head of EMC’s Unified Storage Division. Prior to EMC, Rich led both sales and engineering, and acted as both founder and venture capitalist across companies like Sun Microsystems, Pirus Networks, and Alchemy Angels. Plexxi was founded by Dave Husak (CTO) and co-founded by Mat Mathews (VP of Product Management).


In January 2016, Plexxi disclosed a strategic investment from GV (formerly Google Ventures). Other Plexxi investors have included Lightspeed Venture Partners, Matrix Partners and North Bridge Venture Partners.


HPE to Acquire SimpliVity for Hyperconverged Infrastructure Products

Hewlett Packard Enterprise agreed to acquire SimpliVity, a start-up offering software-defined, hyperconverged infrastructure, for $650 million in cash.

SimpliVity, which is privately held, was founded in 2009 and is headquartered in Westborough, MA. The company’s software-defined, hyperconverged infrastructure is designed from the ground up to meet the needs of enterprise customers who require on-premises technology infrastructure with enterprise-class performance, data protection, and resiliency, at cloud economics.

HPE said the SimpliVity portfolio enables it to offer a rich set of enterprise data services across hyperconverged, 3PAR storage, composable infrastructure and multi-cloud offerings.

“This transaction expands HPE’s software-defined capability and fits squarely within our strategy to make Hybrid IT simple for customers,” said Meg Whitman, President and CEO, Hewlett Packard Enterprise.  “More and more customers are looking for solutions that bring them secure, highly resilient, on-premises infrastructure at cloud economics.  That’s exactly where we’re focused.”

Ciena supplies GeoMesh Extreme submarine solution for Black Sea cable

Caucasus Online, a major telecommunications carrier formed in 2006 through the merger of four Georgian ISPs (Caucasus Network, Georgia Online, SaNet and Telenet), is deploying Ciena’s GeoMesh Extreme submarine solution with high-density 100G transponders to enhance its Black Sea fibre optic network. Caucasus Online owns a 1,200 kilometer submarine fibre-optic cable across the Black Sea constructed by Tyco Electronics (today known as TE SubCom).

The upgrade will boost the data transfer capacity of the subsea network up to 5 Tbps while integrating multi-protocol and low-latency capabilities. Ciena's integration partner on the project is NUTS:iX.

As the sole owner of the Black Sea submarine cable, Caucasus Online provides critical services in regional communications, acting as a major gateway for internet traffic from Europe to South Caucasus and the Caspian region.

Ciena said its network management system enables Caucasus Online to manage mission-critical functions of its network such as inventory management and wavelength provisioning that span across access, metro, and core domains, with visibility through protocol layers such as WDM, OTN and packet services.

“As operators across the globe continue to see increasing demand for high-bandwidth applications, programmable infrastructure such as Ciena’s 6500 Packet-Optical platform and WaveLogic coherent optics enable our customers to create better cost structures and agility, laying the foundation for more adaptive networks that can truly address required changes to their network,” stated Daniel Prokop, Director of Central Europe, Channels, Ciena.

GlobeNet implements Ciena's GeoMesh with 6500 Packet-Optical Platform

GlobeNet, a wholesale provider of telecom infrastructure in the Americas, is implementing Ciena’s GeoMesh Extreme solution, including the 6500 Packet-Optical Platform powered by WaveLogic Ai coherent optics, to deliver 200G wavelengths along its 23,500 km fiber optic subsea cable system across the Americas.

Ciena’s solution is configured with a Packet/OTN architecture, ensuring multi-site connectivity with 200G wavelengths to handle data growth. Ciena said this added resiliency against fault scenarios provides service assurance for GlobeNet to protect the large amount of capacity carried by WaveLogic Ai. Additionally, Ciena’s Blue Planet Manage, Control, Plan (MCP) software improves network visibility through real-time software control.

“This deployment is a testament to our experience interconnecting the world with open submarine networks through a combination of flexible network solutions and support team. This resilient, cost-effective solution will provide GlobeNet with the highest capacity available, along with the ability to scale and create a more adaptive network,” stated Ian Clarke, Vice President, Global Submarine Systems, Ciena.

Sigfox presents a multisensory IoT solution

Sigfox launched an end-to-end commercial IoT solution combining a multisensor device, a prebuilt application, and Sigfox connectivity. The device comes with 6 different sensors (thermometer, hygrometer, light meter, accelerometer, magnetometer, reed switch) and a central button for multiple use cases.

The device works in any of the 45 countries where Sigfox has coverage and can be configured to communicate in multiple regions without the need for any local network. Battery life can last up to one year, depending on use and the frequency of messages.

Sigfox is a French company, founded in 2009, that builds wireless networks to connect low-power objects such as electricity meters and industrial sensors. Sigfox uses a technology designed for extreme energy efficiency in the remote sensor that remains compatible with Bluetooth, GPS, 2G/3G/4G, and WiFi. Over the years, Sigfox has expanded its network to over 45 countries. It now claims to serve around 803 million people, with the ambition of extending the network across 60 countries and regions and reaching 1 billion people in 2018.

At Mobile World Congress, Telxius and Sigfox announced a deal to expand the Sigfox network in Germany to cover more than 80 percent of the country. This entails the deployment of Sigfox equipment on a number of the 2,350 telecom towers that Telxius acquired from Telefónica Germany in early 2016. In addition, Sigfox can use further selected antenna locations of Telefónica Germany to expand its network. Complete network coverage across Germany requires only about 2,500 Sigfox base stations. Previously, Sigfox Germany acquired masts and roofs for its base stations directly. Working with Telxius enables Sigfox to accelerate the rollout of its network, as it no longer has to negotiate directly with property owners. If the Sigfox partnership in Germany is successful, Telxius could well offer telecom masts in all of its other markets to support a global IoT network.

Monday, May 14, 2018

IDC sees global telecom and pay TV spending on the rise

Worldwide spending on telecommunications services and pay TV services reached $1,662 billion in 2017, an increase of 1.4% year over year (in constant dollar terms), according to the International Data Corporation's (IDC) Worldwide Telecom Services Database, and will accelerate to a 1.6% growth rate in 2018, bringing worldwide spending on telecom and pay TV services to $1,689 billion. IDC predicts the market will continue its positive growth through the end of the five-year forecast period (2018-2022), growing at a compound annual growth rate (CAGR) of 1.1%.
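The 2018 figure follows directly from applying the forecast growth rate to the 2017 base, and the CAGR can be projected the same way (the choice of base year for the CAGR is an assumption here; IDC does not spell it out):

```python
# Reproducing IDC's 2018 spending figure from the stated growth rate.
spend_2017 = 1662          # $ billions
growth_2018 = 0.016        # 1.6% forecast growth
spend_2018 = spend_2017 * (1 + growth_2018)
print(round(spend_2018))   # 1689, matching IDC's figure

# CAGR applied forward (assumption: compounding from the 2018 level).
cagr = 0.011
spend_2022 = spend_2018 * (1 + cagr) ** 4
print(f"Implied 2022 spending: ${spend_2022:,.0f}B")
```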

On a geographic basis, the Americas will remain the largest services market until the end of the forecast period in 2022. However, due to somewhat slower growth compared to other regions, its share of total worldwide spending will decline from 38% in 2017 to 36% in 2022. In contrast, Asia/Pacific will see its share increase from 32% to 34%.

"The Asia/Pacific market is growing faster than other regions due to the thirst for data services – spending on fixed data services is set to grow by 6% over the forecast period, which is significantly higher than other regions and this, coupled with mobile data growth, is driving the overall market growth," said Eric Owen, group vice president, EMEA Telecommunications & Networking at IDC.

"The global telecoms market will maintain steady growth of 2% over the forecast timeframe of 2018-2022. Communications service providers are in transition, facing a flat voice market, but steady growth in fixed and mobile data services. Fixed data services will grow by 4% due to strong demand for broadband, Ethernet, and high-speed fiber connectivity. While mobile voice revenues are declining, this sector will be sustained by strong growth in data and other services," said Courtney Munroe, group vice president, Worldwide Telecommunications Research at IDC.

Trump defends pivot on ZTE

In a follow-up tweet regarding ZTE, President Trump defended his decision to intervene in the case with the Department of Commerce, citing on-going trade negotiations and his personal relationship with President Xi.

Meanwhile, Wilbur Ross, Secretary of Commerce, said ZTE did "inappropriate things" referring to its violation of economic sanctions against Iran, but that his department would now consider "other remedies" instead of the current ban on exports of U.S. products to ZTE. Media sources also speculated that China was using the delayed approval process for Qualcomm's acquisition of NXP Semiconductors as its own bargaining chip in the ongoing bilateral trade negotiations.

Tele2 and Telia set 2025 date for deactivation of 3G in Sweden

Tele2 and Telia have agreed to deactivate the 3G network operated by their joint company, Swedish UMTS Net AB (Sunab), by the end of 2025.

Swedish UMTS Net AB (Sunab) is responsible for building, owning, and operating Tele2’s and Telia’s common 3G network. Today, the 3G network comprises over 6,000 base stations, which will gradually be phased out or reused in other network expansion, yielding cost and energy efficiencies for Tele2.

The companies expect significant cost and energy efficiency gains. They said 3G deactivation is a natural part of future network evolution.

"This initiative is further proof of Tele2's challenger spirit. The transition from 3G to 4G is completely in line with our network strategy to move away from legacy networks, and move towards next generation networks. As Sweden’s most energy efficient network provider, we are extremely proud to accelerate our development and become even more economically and environmentally efficient," says Samuel Skott, CEO of Tele2 Sweden.

Charter's Spectrum Enterprise to invest $1 billion in national fiber network

Spectrum Enterprise, which is a part of Charter Communications, will invest more than one billion dollars in new fiber infrastructure this year to increase the density of its national fiber network. The carrier will also invest in the new tools, training and resources required to provide a differentiated client experience.

The company said its one billion dollar investment will primarily fund increased client access to the existing Spectrum Enterprise national fiber network, adding to the network's nearly 200,000 fiber-lit buildings.  The majority of the new fiber will be constructed within the existing Spectrum Enterprise national footprint. 

Last year, Charter also invested in excess of $1 billion exclusively in Spectrum Enterprise.

"As fiber connectivity has become fundamental to economic growth, we are focused on making our fiber infrastructure more accessible to clients, and reshaping their experience to align with the evolving realities of today's modern enterprise," said Phil Meeks, Executive Vice President and President, Spectrum Enterprise. "Advanced video and virtual reality solutions, cloud, IoT and the future of 5G all depend on a reliable and highly-dense fiber network. Our commitment is to ensure that our clients have the most robust fiber network and solutions to grow today and take advantage of future technologies that have immense demands on bandwidth."

OFS cleared in patent claim involving optical fiber coatings

OFS Fitel prevailed over DSM Desotech, a Dutch company, in a patent infringement case heard by the United States International Trade Commission (ITC).

The final determination cleared OFS, a manufacturer of optical fiber products, of all allegations that OFS optical fiber and the coating used on that fiber violated Section 337 of the Tariff Act of 1930, as amended.  The Final Determination found that all claims asserted by DSM at the ITC are invalid.

Dr. Timothy F. Murray, CEO and Chairman of OFS said: "We were surprised to be charged with patent infringement based on the use of coatings to make optical fiber by DSM, a foreign coatings manufacturer who supplies cable coating products to OFS and has been a supplier of fiber coatings in the past.  It is a measure of the strength of our US system of commercial and intellectual property law that foreign companies come to this country to seek just treatment. We appreciate the diligence and care taken by the ITC to hear and understand the arguments of both parties in delivering this ruling. The invalidation of the claims in the patents asserted by DSM is a cautionary note to those who attempt to use unsupportable positions to threaten action."

Molex acquires BittWare for FPGA-based platforms

Molex has acquired BittWare, a provider of computing systems featuring field-programmable gate arrays (FPGAs) deployed in data center compute and network packet processing applications. Financial terms were not disclosed.

BittWare, which is based in Concord, NH, provides solutions based on FPGA technology from Intel (formerly Altera) and Xilinx. BittWare FPGA solutions are used in compute and data center, military and aerospace, government, instrumentation and test, financial services, broadcast, and video applications. BittWare serves original equipment manufacturer (OEM) customers.

“FPGA-based platforms have become a strategically important driver of machine learning, artificial intelligence, cybersecurity, network acceleration, IoT, and other megatrends. As a Molex subsidiary, now working with Nallatech, I believe we will have the critical mass to bring new resources, better processes, and economies of scale to our valued customers and this rapidly growing industry as a whole,” said Jeff Milrod, president and CEO of BittWare.

Switch hits revenue of $98 million, up 10% yoy

Las Vegas-based colocation provider Switch Inc. reported Q1 2018 revenue of $97.7 million, compared to $89.2 million for the same quarter in 2017, an increase of 10%. Net income was $4.0 million, compared to $20.3 million for the same quarter in 2017. Net income in the first quarter of 2018 includes $12.4 million in equity-based compensation expense, compared with $2.3 million in the same quarter of 2017.

"We are pleased with our progress in growing our ecosystem and positioning Switch as a partner of choice for global enterprises," said Thomas Morton, president and general counsel of Switch.  "Our highly differentiated and strategically located campus ecosystems continue to attract primary deployments, while our unique telecom capabilities enable hybrid cloud environments and hyperscale cloud deployments with AWS Direct Connect, Microsoft Express Route, and Google Cloud Interconnect."

Switch completed its IPO in October 2017.

Telefónica Certifies NETSCOUT for UNICA SDN/NFV

Telefónica has certified NETSCOUT's virtualized solutions vSCOUT and vSTREAM for deployment with its UNICA Lab architecture, which supports future networks based on network function virtualization and software-defined networking (NFV/SDN) technologies.

NETSCOUT was able to demonstrate pervasive visibility across physical, virtual and cloud networks as well as interoperability with other virtual network functions (VNFs) on the UNICA platform.

NETSCOUT is one of the first network and application management vendors certified by Telefónica's UNICA Lab.

“Telefónica has one of the industry’s most ambitious and forward-looking visions for future networks based on SDN/NFV technologies. Their UNICA architecture provides a roadmap to the future, and NETSCOUT has aggressively taken the necessary steps to be there with them, delivering unmatched visibility into the hybrid cloud environment,” said Bruce Kelley, senior vice president, chief technology officer, Service Provider, NETSCOUT. “NETSCOUT’s robust smart data technology delivers a pervasive troubleshooting and performance platform that Telefónica can use across all its new NFV domains, as well as future systems, such as 5G.”

ADTRAN heads Broadband Forum’s Application-Level Traffic Generation Testing

ADTRAN is leading an initiative within the Broadband Forum that will leverage application-level traffic generation for advanced broadband testing.

The idea is to develop standardized testing of dynamically reconfigured virtualized services, which can challenge traditional methods of performance characterization.

The first project in the Application-level Traffic Generation for Advanced Broadband Testing Project Stream is expected to be completed in late 2018, with follow-on projects in early 2019. It will deliver on the following principles:

  • Define and specify a model for the generation of test traffic at the application level that emulates real-time domain behavior of multiple applications from multiple subscribers.
  • Create a reference implementation for use in test cases in Open Broadband Labs and in other test environments as developed by the Broadband Forum and other industry stakeholders.
  • Primarily address high-speed internet access for residential use. In a later phase, business applications could also be addressed.


“This initiative will benefit service providers, vendors and testing labs by fundamentally allowing them to manage the increasingly complex subscriber traffic on the network as application and service parameters change and evolve,” Broadband Forum CEO Robin Mersh said. “The project stream will have far-reaching positive effects on the entire industry and is why the Broadband Forum is championing it. ADTRAN’s domain experience in helping the industry transition to a more flexible, scalable and secure software-defined access network is certainly appreciated.”

“As carriers invest in upgrading the access infrastructure and look to leverage the service creation capabilities now available, it is critical that testing evolves to keep pace and ensure that the promises of software-defined access are realized,” ADTRAN Senior Staff Scientist and Project Stream Leader Ken Ko said. “ADTRAN has played an instrumental role in bringing this work forward and we look forward to collaborating with our peers within the Broadband Forum to advance this important work.”

Sunday, May 13, 2018

Trump instructs Department of Commerce to save ZTE

In a tweet on Sunday morning, President Trump said he has instructed the Department of Commerce to find a way to get ZTE back into business fast because "too many jobs in China" would be lost. Trump's tweet also references President Xi of China.

https://twitter.com/realDonaldTrump/status/995680316458262533

FWD: The death of ZTE



Zhongxing Telecommunication Equipment Corporation (ZTE), one of the world's largest suppliers of network infrastructure products, informed the Hong Kong Stock Exchange that "the major operating activities of the Company have ceased".  If the notice means what we think it means, then ZTE is dead. It took only three weeks from the day that the U.S. Commerce Department's Bureau of Industry and Security (BIS) issued its order prohibiting companies...


ZTE: Major operating activities have ceased



ZTE stated that "the major operating activities of the Company have ceased" due to the export ban imposed on it by the U.S. Commerce Department's Bureau of Industry and Security (BIS). The announcement was made in a regulatory filing with the Hong Kong Stock Exchange. Trading of the company's shares has been suspended since April 16th. ZTE also said that it is actively communicating with the U.S. government in order to secure a reversal of the...


Three weeks in, ZTE appeals to U.S. Commerce Dept as shares remain suspended



ZTE has appealed to the U.S. Commerce Department’s Bureau of Industry and Security (BIS) to lift the ban on the export of U.S. products to the company, according to a regulatory filing made by ZTE to the Hong Kong exchange. There is no word on whether the appeal will be heard or acted upon by BIS. Meanwhile, ZTE's shares on the Hong Kong market have remained suspended since April 16th. ZTE posted a Q1 growth rate of 12% prior to the export ban...

GSMA: Central America falls behind in mobile development

Mobile broadband development in Central America is lagging behind the rest of Latin America, putting the region’s future economic development at risk, according to the new report ‘Assessing the impact of market structure on innovation and quality: Driving mobile broadband in Central America’ released by the GSMA.

The 52-page report examines the development of mobile broadband in six countries (Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua and Panama) and finds that while 4G networks are available to 35 per cent of the population in Central America, the technology still only accounts for around 5 per cent of all mobile connections in the region, a sixth of that seen in South America.

“Closing the gap in 4G adoption in Central America requires urgent policy reform,” said Sebastián Cabello, Head of Latin America, GSMA. “This report underscores the need for governments and regulators to act quickly in reforming policies that will encourage investment and innovation and enable operators to deliver high-quality mobile broadband services to consumers and businesses across the region.”

The GSMA report can be downloaded here



Interview - Disaggregating and Virtualizing the RAN

The xRAN Forum is a carrier-led initiative aiming to apply the principles of virtualization, openness and standardization to one area of networking that has remained stubbornly closed and proprietary -- the radio access network (RAN) and, in particular, the critical segment that connects a base station unit to the antennas. Recently, I sat down with Dr. Sachin Katti, Professor in the Electrical Engineering and Computer Science departments at Stanford University and Director of the xRAN Forum, to find out what this is all about.

Jim Carroll, OND: Welcome Professor Katti. So let's talk about xRAN. It's a new initiative. Could you introduce it for us?

Dr. Sachin Katti, Director of xRAN Forum: Sure. xRAN is a little less than two years old. It was founded in late 2016 by me along with AT&T, Deutsche Telekom and SK Telecom -- and it's grown significantly since then.  We now are up to around ten operators and at least 20 vendor companies, so it's been growing quite a bit the last year and a half.

JC: So why did xRAN come about?

SK:  Some history about how all of this happened... I was actually at Stanford, in my role as a faculty member here, collaborating with both AT&T and Deutsche Telekom on something we called soft-RAN, which stood for software-defined radio access network. The research really was around how do you take radio access networks, which historically have been very tightly integrated and coupled with hardware, and make them more virtualized - to disaggregate the infrastructure so that you have more modular components, and also defined interfaces between the different common components. I think we all realized at that point that to really have an impact, we needed to take this out of the research lab and get the industry and the cross-industry ecosystem to join forces and make this happen in reality.

That's the context behind how xRAN was born. The focus is on how do we define a disaggregated architecture for the RAN. Specifically, how do you take what's called the eNodeB base station and deconstruct the software that's running on it such that you have modular components with open interfaces between them, allowing for interoperability, so that you could truly have a multi-vendor deployment. And two, it also has a lot more programmability, so that an operator could customize it for their own needs, enabling new applications and new services much more easily without having to go through a vendor every single time. I think it was really meant so that you can pursue all of those aspects, and that's how it got started.

JC: Okay. Is there a short mission statement?  

SK: Sure. The mission statement for xRAN is to build an open, virtualized, disaggregated radio access network architecture with standardized interfaces between all of these components, and to build all of these components in a virtualized fashion on commodity hardware wherever possible.

JC:  In terms of the use cases, why would carriers need to virtualize their RAN, especially when they have other network slicing paradigms under development?

SK: It's great that you bring up network slicing, actually. Network slicing is one of the driving use cases, and the way to think about this is that, in the future, everyone expects to have network slices with very different connectivity needs enabling different kinds of applications. So you might have a slice for cars with very different bandwidth and latency characteristics compared to a slice for IoT traffic, which is a bit more delay-tolerant, for example.

JC: And those are slices in a virtual EPC? Is that right?

SK: Those are slices that need to be end-to-end. It can't just be the EPC, because the SLAs you can give for the kind of connectivity you deliver are ultimately going to be dictated by what happens on the access network. So, eventually, a slice has to be end-to-end, and the challenge was: if an operator wants to define new slices, how do they program the radio access network to deliver the SLA and the connectivity that that slice needs?

In the EPC there was a lot of progress on what those interfaces should be to enable such slicing, but no similar progress happened in the RAN. How do you program the base station, and how do you program the access network itself, to deliver such a slicing capability? That has been one of the driving use cases since the start of xRAN. Another big use case -- and I'm not sure whether we should call it a use case so much as a need -- is around multi-vendor deployment. Historically, if you look at radio access network deployments, they're single vendor. If you take a U.S. operator, for example, they literally divide up their markets into an Ericsson market or a Nokia market or whatever, and the understanding is that everything in that market, from the base station to the antenna to the backhaul, comes from one vendor. They really cannot mix and match components from different vendors because there haven't been many interoperable interfaces. So the other big requirement coming out of all this is interoperability in a multi-vendor environment.
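The per-slice connectivity requirements described above can be sketched as a simple data structure. This is purely illustrative -- the slice names and numeric targets below are hypothetical, not taken from any xRAN specification:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SliceProfile:
    """Hypothetical per-slice connectivity targets (illustrative values)."""
    name: str
    max_latency_ms: float      # end-to-end latency budget for the slice
    min_bandwidth_mbps: float  # guaranteed throughput for the slice

# Two slices with very different needs, as in the interview:
# a latency-sensitive automotive slice vs. a delay-tolerant IoT slice.
automotive = SliceProfile("automotive", max_latency_ms=10.0, min_bandwidth_mbps=50.0)
iot = SliceProfile("iot-metering", max_latency_ms=5000.0, min_bandwidth_mbps=0.1)

def stricter_latency(a: SliceProfile, b: SliceProfile) -> SliceProfile:
    """Return whichever slice carries the tighter latency budget."""
    return a if a.max_latency_ms <= b.max_latency_ms else b
```

An end-to-end slice would need every segment, including the RAN, to honor targets like these; the point of the sketch is only that the two slices differ by orders of magnitude on both dimensions.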

JC: How about infrastructure sharing? We see that the tower companies are now growing by leaps and bounds, and many carriers are thinking that maybe it's no longer strategically important to own the tower, so they share the tower, and they might share the backhaul as well.

SK: It will actually help. It will enable that kind of sharing at an even deeper level, because if you have infrastructure that is virtualized and running on commodity hardware, it becomes easier for a tower company to set up the compute substrate and the underlying backhaul substrate and then provide virtual infrastructure slices for each operator to operate on top of. Right now they are basically renting physical space at the top of the tower. If instead you could take the same underlying compute substrate, and the same backhaul and fronthaul infrastructure, virtually slice it, and run multiple networks on top, it makes it possible to share the infrastructure even more. So virtualization is almost a prerequisite to any of this infrastructure sharing.

JC: Tell us about the newly released, xRAN fronthaul specification version 1.0. What is the body of work it builds on?

SK: Sure, let me step back and talk about all the standardization efforts, and then I'll answer the question. xRAN actually has three big working groups. One is around fronthaul, which refers to the link between the radio head and the baseband unit. This is the transport that carries the data between the baseband unit and the radio for transmission and, in the reverse direction, when you receive something from the mobile unit. So that's one aspect. The second one is around control plane and user plane separation in the base station. Historically, the control plane and the user plane are tightly coupled, and a significant working group effort in xRAN right now is how do you decouple those and define standardized interfaces between a control plane and a user plane. And the last working group is trying to define the interfaces between the control plane of the radio access network and orchestration systems like ONAP. So those are the three main focus areas.

Our first specification, which describes the fronthaul interfaces, was released this month. So, what went on there? The problem we solved concerns closed interfaces. Today, if you buy a base station, you also have to buy the antenna from the same vendor. If you bought an Ericsson base station, for example, you have to buy the antenna from Ericsson as well. There are very few compatible antenna systems, but with 5G, and even with 4G, there's been a lot of innovation on the antenna side. There are innovators developing massive MIMO systems, which have lots of antennas and can significantly increase the capacity of the RAN. Many start-ups are trying to do this, but they're struggling to get traction because they cannot sell their antennas and connect them to an existing vendor's baseband unit. So a critical requirement that operators were pushing was: how do we make this fronthaul specification truly interoperable, making it possible to mix and match? You could take a small vendor's radio head and antenna and connect it to an established vendor's baseband unit -- that was the underlying requirement. What the new fronthaul work is trying to accomplish is to make sure that this interface is specified clearly enough that you do not need tight integration between the baseband unit and the radio head.

This fronthaul work came about initially with Verizon, AT&T and Deutsche Telekom driving it. Over the past year, we have had multiple operators join the initiative, including NTT DoCoMo, along with several vendors they brought along -- Nokia, Samsung, Mavenir, and a number of other companies -- all coming together to write the specification and contribute IP towards it.

JC: Interesting, so you have support from those existing vendors who would seem to have a lot to lose if this disaggregation played out unfavorably for them.

SK: Yes, we do. Current xRAN members include all of the bigger vendors, such as Nokia and Samsung, especially on the radio side. Cisco is a member, more on the orchestration side, and there are several other big vendors that are part of this effort. And yes, they have been quite supportive.

The xRAN Forum is an operator-driven body. The way we set up a new working group or project is that operators come in and tell us what their needs and use cases are, and if we see enough consistency -- when multiple operators share the same need or the same use case -- that leads to the start of a new working group. The operators often end up bringing their vendors along by saying, "We need this, we are going to drive it through the xRAN consortium, and we need you to come and participate, otherwise you'll be left out." That's typically how vendors are forced to open up.

JC: Okay, interesting. Let's talk a little bit about the timelines and how this could play out. You talked about plugging into an existing baseband unit or base station, so I guess there is a backward-compatibility aspect?

SK: Right, we are not expecting operators to build entirely new networks. The first fronthaul specification is meant for both 4G and 5G. The fronthaul is actually independent of the underlying air interface, so it can work in 4G networks. On the baseband side, it does require a software update: these systems must adhere to the spec in terms of how to talk to the radio head, and if they do, the expectation is that someone should be able to plug in a new radio head and make that system work. That being said, where we are right now is that we have released a public specification. We believe it's interoperable, but the next stage is interoperability testing, which we expect to happen later this year. Once interoperability testing happens, we will know which systems are compatible, and then we will have, if you will, a certificate saying that these are compliant.

JC: And would that certification be just for the fronthaul component or would that be for the control plane and data plane separation as well?

SK: Our working groups are progressing at different cadences. The fronthaul specification is already out, and we expect to do the interoperability testing later this year; that will be only for the fronthaul. As and when we release the first specification for control plane and user plane separation, we will have a corresponding timeline. But one thing to realize is that these are not all coupled. You could use the fronthaul specification on its own without having the rest of the architecture -- you could take existing infrastructure, implement just the fronthaul specification, and realize the benefits of the interoperability without necessarily having a control plane that's decoupled from the user plane. The effort is structured so that each of the working groups can act independently. We didn't want to couple them, because that would mean it would take a long time before anything happens.

JC: Wouldn't some of the xRAN work naturally have fit into 3GPP or ETSI's carrier virtualization efforts? Why have a new forum?

SK: Definitely. 3GPP is a big intersection point. The way we look at it is that we are trying to work on areas that 3GPP elected not to. If it has anything to do with the air interface -- for example, how the infrastructure should talk to the phone itself -- we are not trying to work in that space. If it has anything to do with how the base station talks to the core network, we are not trying to specify that interface. But there are things that 3GPP elected not to work on for whatever reason, and vendor incentives could come into play there; perhaps vendors discouraged 3GPP from working on interoperable fronthaul interfaces. We don't know why 3GPP chose this path. You can see that this is also operator driven: operators want certain things to happen but have not been successful in getting 3GPP to do them, so xRAN is a venue for them to come in, specify what they want to accomplish, and get appropriately incentivized vendors to come together. So it is complementary in terms of the work effort, but I could see a scenario where the fronthaul specification we come out with -- this one and the next one -- eventually forms the basis for a 3GPP standardized specification. That's not necessarily a conflict; that might actually be how things eventually get fully standardized.

JC: There are other virtualization ideas that have sprung up from this same lab and in the Bay Area. How does this work in collaboration with CORD and M-CORD?

SK: Historically, virtualization has infected, if you will, the rest of the networking domain, but has struggled to make headway in the RAN. If you look at the rest of the network, there's been a lot of success with virtualization; the RAN has traditionally been quite hard to virtualize. I think there are multiple reasons for that. One is that the workloads -- the things you want to do in the RAN -- are much more demanding than in the rest of the network in terms of processing. The hardware is now catching up to the point where you can take off-the-shelf hardware and run virtualized instances of the RAN on top. I think that's been one factor.

Second, the RAN is also a little harder to disaggregate, because many of the control plane decisions occur on a very fast timescale. Take, for example, the decision of how to schedule a particular user's traffic to be sent over the air. The base station makes that decision every millisecond, and at that timescale it's really hard to run the logic remotely. So, having a separate piece of logic make that decision, communicate it to the data plane, and have the data plane implement it -- which is classically how we think about SDN -- is not going to work, because a tolerable round-trip latency of one millisecond is too stringent a budget. We need to figure out how to deconstruct the problem: pull out the right amount of control logic, but still leave the very latency-sensitive pieces in the underlying data plane of the infrastructure itself. That's still a work in progress, and we know there are hard technical challenges there.
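The two-timescale split Katti describes can be sketched in a few lines. This is a toy model under stated assumptions -- the class names, the 100 ms policy refresh interval, and the per-user weights are all hypothetical, not part of any xRAN interface:

```python
import random

class RemoteController:
    """Slow loop: a controller that updates scheduling policy infrequently
    (say, every 100 ms), so its round-trip latency is off the critical path."""
    def policy(self) -> dict:
        # Hypothetical per-user scheduling weights.
        return {"user_a": 0.7, "user_b": 0.3}

class LocalScheduler:
    """Fast loop: the per-millisecond (per-TTI) scheduling decision stays
    inside the base station, using a locally cached copy of the policy."""
    def __init__(self, policy: dict):
        self.policy = policy

    def schedule_tti(self) -> str:
        # Weighted pick of which user to serve this 1 ms interval;
        # no call to the remote controller is needed here.
        users, weights = zip(*self.policy.items())
        return random.choices(users, weights=weights)[0]

controller = RemoteController()
scheduler = LocalScheduler(controller.policy())

# Roughly 100 TTIs (~100 ms) elapse between policy refreshes.
grants = [scheduler.schedule_tti() for _ in range(100)]
scheduler.policy = controller.policy()  # slow-loop refresh
```

The design point is the one from the interview: the latency-sensitive decision stays in the data plane, while the remote control logic only sets the parameters that shape it.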

JC: Okay, talking about inspiration -- one last thing -- is there an application that you have in mind that inspires this work?

SK: Sure. I think a pretty compelling example is network slicing. Look at these very demanding applications -- virtual reality and augmented reality, or self-driving cars -- which have very strict requirements on how their traffic should be handled in the network. If I think about a self-driving car that wants to offload some of its mapping and sensing capabilities to the edge cloud, that interaction loop between the car and the edge cloud has very strict requirements. You want the application to be able to come to the network and say, this is the kind of connectivity I need for my traffic, and you want the network to be programmable enough that the operator can program the underlying infrastructure to deliver that kind of connectivity to the self-driving car application.

Those two classes of applications are characterized by latency sensitivity and bandwidth intensity -- you don't get any leeway on either dimension. Right now, the people developing those applications do not trust the network. If you think about current prototypes of self-driving cars, the developers cannot assume the network will be there, so they must build very complex systems to make the vehicle completely autonomous. If we truly want to build things where the cloud can play a role in controlling such systems, then we need this programmable network to enable that world.
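The "application asks the network for connectivity" interaction could look something like the following sketch. The API name, the admission rule, and the 1000 Mbps capacity figure are all hypothetical -- xRAN does not define this interface; the sketch only shows the shape of a slice-request/admission exchange:

```python
class NetworkProgrammingAPI:
    """Hypothetical northbound API through which an application requests
    a slice with explicit latency and bandwidth targets."""
    def __init__(self, cell_capacity_mbps: float = 1000.0):
        self.cell_capacity_mbps = cell_capacity_mbps  # toy capacity model
        self.slices: dict[str, dict] = {}

    def request_slice(self, app: str, max_latency_ms: float,
                      min_bandwidth_mbps: float) -> bool:
        """Admit the request only if committed bandwidth stays within capacity."""
        committed = sum(s["min_bandwidth_mbps"] for s in self.slices.values())
        if committed + min_bandwidth_mbps > self.cell_capacity_mbps:
            return False  # reject rather than silently degrade existing SLAs
        self.slices[app] = {"max_latency_ms": max_latency_ms,
                            "min_bandwidth_mbps": min_bandwidth_mbps}
        return True

net = NetworkProgrammingAPI()
ok = net.request_slice("self-driving-car", max_latency_ms=10.0,
                       min_bandwidth_mbps=100.0)
```

The explicit admit-or-reject answer is what would let an application developer start trusting the network instead of engineering around it.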

JC: Excellent. Well, thank you very much, and good luck!


SpaceX launches Bangabandhu-1 satellite for Bangladesh

SpaceX successfully launched Bangabandhu-1, the first Bangladeshi communications satellite, into geostationary transfer orbit aboard a Falcon 9 Block 5 rocket.

Bangabandhu-1, which was built by Thales Alenia Space, is fitted with 26 Ku-band and 14 C-band transponders. It offers Ku-band capacity over Bangladesh and its territorial waters in the Bay of Bengal, as well as India, Nepal, Bhutan, Sri Lanka, the Philippines and Indonesia; it also provides C-band capacity over the whole region.

SpaceX landed the first stage approximately 11 minutes after liftoff.