Tuesday, September 4, 2018

SiTime teams with Bosch on MEMS timing for 5G, IoT and automotive

SiTime, which supplies MEMS timing solutions, and Bosch, a global supplier of technology and services, announced a strategic technology partnership to accelerate innovation in micro-electro-mechanical systems (MEMS) timing.

Specifically, SiTime will work with Bosch to develop processes for next-generation MEMS resonator products. These MEMS resonators are the heartbeat of 5G, IoT and automotive electronics, and will enable the higher speeds of 5G, long battery life of IoT devices and increased reliability of driver assistance systems in automotive. Bosch will utilize its expertise in MEMS manufacturing to produce these resonators for SiTime and ensure availability of high-volume capacity.

SiTime has shipped over a billion units of its MEMS timing solutions into all electronics markets, has over 90 percent share of the MEMS timing market, and has partnered with industry leaders, such as Intel, to drive timing innovation in 5G.

Bosch has been both a pioneer and a global market leader in the MEMS sensor segment since 1995 and has sold more than 9.5 billion MEMS sensors. The company developed the manufacturing process behind MEMS technology nearly 25 years ago. More than half of all smartphones worldwide use a Bosch MEMS sensor.

“Since 2009 SiTime has counted on Bosch to manufacture more than a billion MEMS resonators,” said Rajesh Vashist, CEO of SiTime. “Over the next decade, the 5G, IoT, and automotive markets will drive the growth of the timing industry by creating a 200 billion unit opportunity. Automation, communications, and computing applications in these markets will require more features, higher accuracy and reliability from timing components.”

“Stable, reliable MEMS timing devices are needed for successful operation of new, high-bandwidth 5G, IoT and driver assistance systems,” said Jens Fabrowsky, executive vice president, Automotive Electronics at Robert Bosch GmbH. “Without ultra-precise timing, the benefits and opportunities for next generation systems will not be achieved. With Bosch’s MEMS leadership and manufacturing excellence, and SiTime’s groundbreaking MEMS timing technology, this partnership will make possible unique new features and mission-critical services in 5G, IoT, and automotive applications.”

https://www.sitime.com/

Intel and SiTime collaborate on MEMS timing for 5G

SiTime, which specializes in MEMS timing solutions, and Intel agreed to collaborate on integrating timing solutions for Intel’s 5G multi-mode radio modems, with additional applicability to Intel LTE, millimeter-wave wireless, Wi-Fi, Bluetooth, and GNSS solutions.

SiTime said its MEMS timing technology helps meet the high-performance requirements of emerging 5G radio modem platforms, especially in the presence of stressors such as vibration, high temperature, and rapid thermal transients. Such stressors can disrupt the timing signal and result in network reliability issues, lower data throughput, and even connectivity drops.

“Our collaboration with SiTime on MEMS-based silicon timing solutions will help our customers build leading 5G platforms to best take advantage of the increased performance and capacity that the 5G NR standard brings,” said Dr. Cormac Conroy, corporate vice president and general manager of the Communication and Devices Group at Intel Corporation. “Intel’s modem technology and our collaboration with SiTime is helping to enable new mobile and consumer experiences, and enterprise and industrial use cases.”

Merger talks of China Telecom + China Unicom

Top Chinese government officials are reviewing a proposal to merge China Telecom and China Unicom, the nation's number 2 and 3 mobile operators, according to reports published by Bloomberg and others on Tuesday.

So far, there has been no official confirmation of the story, although share prices of both companies have risen on the Hong Kong exchange.

Combined, the two carriers have 590 million mobile subscribers, compared to 905 million for China Mobile. A merger would enable a faster rollout of 5G but reduce the competitive landscape for mobile services to two players. Both carriers have reduced CAPEX in the first half of 2018 following completion of most 4G upgrades. Both reported surging mobile data traffic and an impact from increased competition and the elimination of provincial roaming charges.

Exactly one year ago, the Chinese government arranged for top Chinese tech companies, including Alibaba and Tencent, to inject RMB 78 billion (US$11.7 billion) into China Unicom in an effort to accelerate the transformation of its network. The consolidation could play to the favor of these investors.

As state-owned enterprises, both China Telecom (an estimated 287,000 employees) and China Unicom (an estimated 252,000 employees) have large numbers of workers and retirees. While network integration and automation may reduce the need for so many employees from a technical perspective, from a social point of view, large-scale reductions may not be possible.

The big synergy in the merger presumably would be to reduce the rollout cost of a nationwide 5G network, which both China Telecom and China Unicom were anticipating in 2020. Both carriers have 5G pilots underway and limited trial services are expected in 2019. Full-scale nationwide rollout for each would involve upgrades to millions of base stations, and improvements to front-haul and backhaul infrastructure. China Unicom recently disclosed that it now has 910,000 4G base stations in operation. China Telecom has stated that it had 1.2 million 4G base stations in operation.

Many 5G small cells and in-building networks are also required for the 5G upgrade plan. A merged entity presumably could build this at a much lower cost -- perhaps even approaching 50-60% of what otherwise would be spent. For network equipment vendors, especially Huawei and ZTE, this could be bad news. For China Tower, which recently completed an IPO, this could mean only two potential clients for its telecom masts (China Mobile and the merged entity).

With only two mobile operators, perhaps the really intense mobile price competition in China would ease. China Mobile must make do with ARPU of RMB 58.10 (US$8.43) -- about one-sixth the monthly billing per subscriber of U.S. operators. China Telecom's and China Unicom's mobile ARPU is lower still, at RMB 47.9. This leaves very little profit potential per subscriber, making the business case for a deep, nationwide 5G rollout more difficult for two carriers than for one.


Ericsson to acquire CENX for service assurance

Ericsson agreed to acquire CENX, a privately-held company that offers a hyper-scale service assurance platform across virtual and hybrid networks. Financial terms were not disclosed.

CENX, founded in 2009, is headquartered in Jersey City, New Jersey. The company achieved significant year-over-year revenue growth in the fiscal year that ended December 31, 2017. CENX employs 185 people.  Ericsson has held a minority stake in CENX since 2012.

Ericsson said the acquisition will boost its Operations Support Systems (OSS) portfolio with vendor-agnostic service assurance and closed-loop automation capability, including in NFV and orchestration.

Mats Karlsson, Head of Solution Area OSS, Ericsson, says: “Dynamic orchestration is crucial in 5G-ready virtualized networks. By bringing CENX into Ericsson, we can continue to build upon the strong competitive advantage we have started as partners. I look forward to meeting and welcoming our new colleagues into Ericsson.”

Closed-loop automation ensures Ericsson can offer its service provider customers an orchestration solution that is optimised for 5G use cases like network slicing, taking full advantage of Ericsson’s distributed cloud offering. Ericsson’s global sales and delivery presence – along with its strong R&D – will also create economies of scale in the CENX portfolio and help Ericsson to offer in-house solutions for OSS automation and assurance.

Ed Kennedy, CEO of CENX, says: “Ericsson has been a great partner – and for us to take the step to fully join Ericsson gives us the best possible worldwide platform to realize CENX’s ultimate goal – autonomous networking for all. Our closed-loop service assurance automation capability complements Ericsson’s existing portfolio very well. We look forward to seeing our joint capability add great value to the transformation of both Ericsson and its customers.”

  • CENX was co-founded in 2009 as a Carrier Ethernet Neutral Exchange by Mr. Nan Chen, who is also the founder and president of MEF.
  • Previous investors in CENX have included BDC Capital, Mistral Venture Partners, VMware, Highland Capital Partners, Mesirow Financial Private Equity Inc., Verizon Ventures, a subsidiary of Verizon Communications, Ericsson, DCM Ventures, and Cross Creek Advisors.

CENX lands Tier 1 European contract for service assurance

CENX announced a contract to provide its hyper-scale service assurance platform to a globally recognized European Tier 1 operator.  CENX's hyper-scale service assurance platform enables closed-loop assurance automation across virtual and hybrid networks.

Under the contract, CENX will support the launch of new digital services and business models across fixed, wireless and data center infrastructure. CENX will enable the operator to assure and monitor its physical and cloud network assets within a single pane of glass while enabling closed-loop automation to better manage increasing complexity.



CommScope builds a C-RAN small cell for in-building 4G/5G

CommScope announced a "OneCell" C-RAN small cell solution for in-building 5G performance.

The enhanced OneCell portfolio includes a new radio point platform, the multi-carrier RP5000 Series, which features software-programmable radios that can flexibly support new air interfaces to enable LTE-to-5G migration. It supports cell virtualization, distributed MIMO (multiple input/multiple output), and granular location awareness to support smarter services, including emergency services. It uses off-the-shelf Ethernet LAN fronthaul infrastructure components to reduce cost and complexity for enterprise deployments.

CommScope describes OneCell as a 5G-ready in-building LTE solution that combines carrier-grade performance and reliability with deployment simplicity for single- and multi-operator environments. With OneCell, wireless operators and neutral hosts can fully participate in 5G-enabled services while preserving their investments in LTE.

The company notes that many 5G target use cases – such as ultra-high definition video, industrial automation and “smart building” applications – will be deployed inside buildings.

“In-building delivery of 5G service will be a major opportunity for operators to own the user experience, differentiate from over-the-top service providers and monetize service offerings,” said Matt Melester, senior vice president, CommScope. “But the old ways of delivering cellular service indoors simply cannot achieve 5G performance levels. CommScope’s enhanced OneCell small cell solution is uniquely positioned to make indoor 5G an enabler of enterprise business opportunities.”


Intel announces AI collaboration with Baidu Cloud

Baidu and Intel outlined new artificial intelligence (AI) collaborations showcasing applications ranging from financial services and shipping to video content detection.

Specifically, Baidu Cloud is leveraging Intel Xeon Scalable processors and the Intel Math Kernel Library-Deep Neural Network (Intel® MKL-DNN) as part of a new financial services solution for leading China banks; the Intel OpenVINO toolkit in new AI edge distribution and video solutions; and Intel Optane™ technology and Intel QLC NAND SSD technology for enhanced object storage.

“Intel is collaborating with Baidu Cloud to deliver end-to-end AI solutions. Adopting a new chip or optimizing a single framework is not enough to meet the demands of new AI workloads. What’s required is systems-level integration with software optimization, and Intel is enabling this through our expertise and extensive portfolio of AI technologies – all in the name of helping our customers achieve their AI goals,” stated Raejeanne Skillern, Intel vice president, Data Center Group, and general manager, Cloud Service Provider Platform Group.

Semtech samples PAM4 clock and data recovery platform

Semtech announced sampling of its quad Tri-Edge clock and data recovery (CDR) with an integrated vertical-cavity surface-emitting laser (VCSEL) driver and its quad Tri-Edge CDR with an integrated transimpedance amplifier (TIA).

The bundle is optimized for low power and low cost PAM4 short-reach, 200G/400G QSFP28 SR4/8 modules for data center and active optical cable (AOC) applications.

“With this Tri-Edge PAM4 CDR bundle, Semtech further demonstrates its innovative and disruptive solutions to alternatives available in the market today. We expect this to enable the next-gen deployment for data centers to allow higher bandwidth growth while supporting an aggressive cost structure,” said Dr. Timothy Vang, Vice President of Marketing and Applications for Semtech’s Signal Integrity Products Group.

Semtech also announced:
  • the full production of its ClearEdge CDR platform IC bundle targeting high-performance data center and wireless applications. The quad ClearEdge CDR with integrated DML laser driver and the quad ClearEdge CDR with integrated transimpedance amplifier (TIA) provide an optimized chipset for 100G PSM4 and CWDM4 module solutions. The quad ClearEdge CDR with integrated DML laser driver also supports module designs based on both chip-on-board optics and passive TOSAs.
  • initial production of its bi-directional ClearEdge CDR with integrated DML laser driver.
  • mass production of a fully integrated quad 28G ClearEdge CDR with single-ended electro-absorption modulated laser (EML) driver, consuming only 790 mW at maximum with a 1.5 Vppse swing, in a 6mm x 5mm package with integrated bias-T passive components. This addresses the challenge of shrinking real estate in QSFP28 designs.

Lattice Semiconductor adds former Xilinx exec to its team

Lattice Semiconductor announced the appointment of Steve Douglass as Corporate Vice President, R&D.

Douglass previously served as the Corporate Vice President, Customer Technology Deployment at Xilinx.

Jim Anderson, President and Chief Executive Officer, said, “We are excited to have Steve Douglass join Lattice. His proven ability to lead global FPGA development teams and drive customer-focused innovation in targeted applications makes him the perfect fit. His technical skills, market knowledge and leadership capabilities will help further strengthen Lattice as we drive sustained growth and profitability by accelerating the worldwide adoption of our ground-breaking hardware and software solutions.”

Lattice Semiconductor appoints AMD exec as its new CEO

Lattice Semiconductor appointed Jim Anderson as its new President and Chief Executive Officer, and to the company’s Board of Directors. He most recently served at Advanced Micro Devices (AMD) as Senior Vice President and General Manager of the Computing and Graphics Business Group.

Jeff Richardson, Chairman of the Board, said, “On behalf of the Board, we are pleased to announce the appointment of Jim Anderson as Lattice’s new President and Chief Executive Officer. Jim brings a strong combination of business and technical leadership with a deep understanding of our target end markets and customers. The transformation he drove of AMD’s Computing and Graphics business over the past few years is just a recent example of his long track record of creating significant shareholder value.”

President Trump blocks sale of Lattice Semi citing National Security

President Trump signed an order blocking the sale of Lattice Semiconductor to Canyon Bridge Capital Partners on national security grounds. The issue was referred to the President by the Committee on Foreign Investment in the United States (CFIUS) due to concerns regarding China Venture Capital Fund Corporation Limited and its interest in Canyon Bridge Capital Partners.

Monday, September 3, 2018

Idea Cellular and Vodafone India complete merger -- 408M mobile users

Idea Cellular and Vodafone India completed their merger, creating India's largest telecom service provider with over 408 million mobile subscribers, 340,000 sites, 1.7 million retail outlets and 15,000 branded stores.


The new company, Vodafone Idea Limited, is now operational and ranks as the No.2 operator worldwide by subscriber count, behind China Mobile. Its mobile network covers approximately 92% of India's population.

Vodafone Idea is structured as a partnership between Aditya Birla Group and the Vodafone Group. Following completion of a capital injection process, Vodafone will own a 45.2% stake in Vodafone Idea and Aditya Birla Group will own a 26.0% stake, both on a fully diluted basis. Vodafone will also separately hold a 29.4% stake in the combined entity resulting from the merger between Bharti Infratel and Indus Towers.

Vodafone Idea claims a #1 market share position in 9 of India's telecom circles, and a 32% overall market share by revenue for all of India. In terms of spectrum, the company holds spectrum totaling 1,850 MHz and an "adequate number" of broadband carriers. It also controls about 235,000 kilometers of fiber. Both the Vodafone and Idea brands will continue to operate. Mr. Balesh Sharma has been appointed CEO of the business.

"Today, we have created India’s leading telecom operator.  It is truly a historic moment.   And this is much more than just about creating a large business.  It is about our Vision of empowering and enabling a New India and meeting the aspirations of the youth of our country.  The “Digital India”, as our Honourable Prime Minister describes it, is a monumental nation- building opportunity," stated Mr. Kumar Mangalam Birla, Chairman Aditya Birla Group and Vodafone Idea Limited.

Some highlights:

  • During the twelve months to 30 June 2018, Vodafone India and Idea generated revenue of INR585bn (€7.1bn) and EBITDA of INR107bn (€1.4bn). Vodafone Idea is expected to generate INR140bn (€1.7bn) run-rate cost and capex synergies, equivalent to a net present value of approximately INR700bn (€8.5bn)
  • The merger is expected to generate Rs. 140 billion annual synergy, including opex synergies of Rs. 84 billion, equivalent to a net present value of approximately Rs. 700 billion.
  • The equity infusion of Rs. 67.5 billion at Idea and Rs. 86 billion at Vodafone coupled with monetization of standalone towers of both companies for an enterprise value of Rs. 78.5 billion, provides the company a cash balance of over Rs. 193 billion post payout of Rs. 39 billion to the DoT.
  • Additionally, the company has an option to monetise an 11.15% stake in Indus, which would equate to a cash consideration of Rs. 51 billion.
  • As of 30 June 2018, net debt was INR 1092 billion.

https://www.vodafoneidea.com/





Vodafone sells its mobile towers in India to American Tower

Vodafone India completed the sale of its standalone tower business in India to ATC Telecom Infrastructure Private Limited (a unit of American Tower) for an enterprise value of INR 38.5 billion (EUR 478 million).

Vodafone India is merging with Idea. Both parties announced their intention to sell their individual standalone tower businesses to strengthen the combined financial position of the merged entity. Completion of Idea’s sale of its standalone tower business to ATC is also expected in the first half of this calendar year.

The Vodafone+Idea merger is expected to complete in the first half of the current calendar year.

  • In June, Idea Cellular Ltd. received approval from India's Department of Telecom to increase the Foreign Direct Investment (FDI) limit in the company to 100%. Previously, it faced a 67.5% limit.

DOCOMO tests edge computing for video processing

NTT DOCOMO has commenced a proof-of-concept (PoC) of a video IoT solution that uses edge computing to interpret and analyze video data sourced from surveillance cameras. The edge computing will supplement processing performed in the cloud. As a first step, the PoC will test and evaluate the sourcing of data from surveillance cameras, aiming to develop a solution that uses existing cameras, requires no wired connectivity and does not involve the transmission of large quantities of data.

DOCOMO also confirmed a strategic investment in Cloudian, a Silicon Valley-based leader in enterprise object storage systems and developer of the Cloudian AI Box, a compact, high-speed AI data processing device equipped with camera connectivity and LTE / Wi-Fi capabilities, facilitating edge AI computing with both indoor and outdoor communications.

DOCOMO said the transfer and processing of large volumes of video data to the cloud have been a lengthy process involving significant delays and placing a considerable burden on cloud infrastructure and communication networks. Edge computing could help deal with these shortcomings and herald a new era of high-speed image recognition.



Cloudian raises $94 million for hyperscale data fabric

Cloudian, a start-up offering a hyperscale data fabric for enterprises, raised $94 million in a Series E funding, bringing the company’s total funding to $173 million.

“Cloudian redefines enterprise storage with a global data fabric that integrates both private and public clouds — spanning across sites and around the globe — at an unprecedented scale that creates new opportunities for businesses to derive value from data,” said Cloudian CEO Michael Tso. “Cloudian’s unique architecture offers the limitless scalability, simplicity, and cloud integration needed to enable the next generation of computing driven by advances such as IoT and machine learning technologies.”

The funding round included participation from investors Digital Alpha, Eight Roads Ventures, Goldman Sachs, INCJ, JPIC (Japan Post Investment Corporation), NTT DOCOMO Ventures, Inc. and WS (Wilson Sonsini) Investments.

“Computing now operates without physical boundaries, and customers need storage solutions that also span from the data center to the edge,” said Takayuki Inagawa, president & CEO of NTT DOCOMO Ventures. “Cloudian’s geo-distributed architecture creates a global fabric of storage assets that support the next generation of connected devices.”

Cloudian brings its S3 API interface to Azure Blob Storage

Cloudian, a start-up based in San Mateo, California, is extending its hybrid cloud object storage system into Microsoft Azure.

Cloudian HyperCloud for Microsoft Azure leverages the company's S3 API interface to Azure Blob Storage. Cloudian said the world's largest Industrial Internet enterprise is using Cloudian HyperCloud for Azure to connect its Industrial Internet of Things solution to Azure Blob Storage.

"Cloudian HyperCloud for Azure is a game-changer for public cloud storage, enabling true bi-modal data storage across multiple cloud environments," said Michael Tso, Cloudian CEO and co-founder. "For the first time, customers have a fully supported, enterprise-ready solution to access their choice of cloud platforms from their S3-compliant applications. Customers can be up and running in minutes by launching HyperCloud from the Microsoft Azure Marketplace."

NXP acquires OmniPHY for automotive Ethernet

NXP Semiconductors has acquired OmniPHY, a provider of automotive Ethernet subsystem technology. Financial terms were not disclosed.

NXP said OmniPHY's interface IP and communication technology along with NXP’s own automotive portfolio will form a “one-stop shop” for automotive Ethernet. The companies’ technology synergies will center on 1.25-28Gbps PHY designs and 10-, 100- and 1000BASE-T1 Ethernet in advanced processes.

“Our heritage in vehicle networks is rich and with our leadership positions in CAN, LIN, and FlexRay, we hold a unique viewpoint on automotive networks,” said Alexander E. Tan, vice president and general manager of Automotive Ethernet Solutions, NXP. “The team and technology from OmniPHY give us the missing piece in an extensive high-bandwidth networking portfolio.”

"We are very excited to join NXP – a leader in automotive electronics, for a front-row seat to the autonomous driving revolution, one that will deliver profound change to the way people live,” said Ritesh Saraf, CEO of OmniPHY. “The combination of our teams and technology will accelerate and advance the delivery of automotive Ethernet solutions providing our customers with high quality and world-class automotive Ethernet innovation."

Vodafone tests Huawei's cloud-based Broadband Network Gateway

Vodafone recently completed the second phase test of Huawei's cloud-based Broadband Network Gateway (BNG) solution in a fixed broadband scenario.

The phase I testing was performed in December 2017 and focused on 52 functional tests of the solution, while the phase II testing, completed in May 2018, focused on the Vodafone Portugal service architecture including internet access and VPN services. Both phase I and II have been completed successfully.

Phase II testing covered access, authentication and accounting for home broadband users in various scenarios. It also included performance, reliability and security testing of cloud-based BNG systems. Vodafone and Huawei verified functionality of the cloud-based BNG solution using virtual network functions (VNFs) as the control plane and also using physical network functions (PNFs) as the user plane.

Huawei says its BNG solution features a Control & User Plane Separation (CUPS) architecture, which decouples the control and user planes of traditional BNG architectures. The control plane integrates the user management functions of multiple BNGs and shifts their resources to the cloud. In addition to automated service provisioning and network O&M, the solution deployment in the cloud also enables global resource sharing, elastic capacity scaling, flexible architecture adjustment and network capability exposure.

Jeffrey Gao, President of Huawei's Router & Carrier Ethernet Product line, stated: "Cloud-based BNG is an innovative implementation of Huawei's Intent-Driven Network in the context of network service cloudification. The Intent-Driven Network decouples traditional networks into an elastic, reliable bearer layer and an agile service layer. This creates a simple architecture enabling the rapid and flexible adjustment of resources. This solution helps operators improve the efficiency of their network operations, reduce O&M costs and smoothly evolve network services to the cloud."

Sunday, September 2, 2018

MACOM readies chipset for 200G and 400G optical modules

MACOM Technology Solutions has begun sampling a chipset solution for 200G and 400G CWDM optical module providers servicing Cloud Data Center applications. The company plans live demos at the upcoming China International Optoelectronic Exposition (CIOE) and European Conference on Optical Communication (ECOC) tradeshows. The chipset enables 200G modules at under 4.5W and 400G modules at under 9W total power consumption.

MACOM said its full transmit and receive solution operates at up to 53 Gbps PAM-4 data rates per lane and is optimized for use in 200G QSFP56 and 400G QSFP-DD and OSFP module applications.

For the 200G demonstration, the solution comprises the MAOM-38051 four-channel transmit CDR and modulator driver and the MAOT-025402 TOSA with an embedded MAOP-L284CN CWDM L-PIC (silicon photonic integrated circuit with integrated CW lasers) transmitter; on the receive side, it features the MAOR-053401 ROSA with embedded demultiplexer and BSP56B photodetectors, the MATA-03819 quad TIA and the MASC-38040 four-channel receive CDR. The combined, high-performance MACOM solution enables a low bit error rate (BER), better than 1E-8 pre-forward error correction (pre-FEC).

“MACOM is committed to leading the evolution of Data Center interconnects from 100G to 200G and 400G, as evidenced by our unique ability to deliver a complete 200G chipset and TOSA/ROSA subassembly solution with market-leading performance and power efficiency,” said Gary Shah, Vice President, High-Performance Analog Business Line, MACOM. “With this solution, optical module providers are expected to benefit from seamless component interoperability and a unified support team, reducing design complexity and costs while accelerating their time to market.”

Commercial shipments are targeted for early 2019.

https://www.macom.com/


Liberty Global picks Ericsson to consolidate NOCs

Liberty Global has selected Ericsson for the consolidation of Network Operations Center service delivery in six European locations: the UK, Ireland, the Netherlands, Hungary, Poland and Germany.

Under the contract, Ericsson has successfully undertaken the consolidation of operations and the transfer of service functions of the various NOCs. This builds on the existing Managed Services contract between Ericsson and Liberty Global for mobile networks and fixed field services in Poland, Hungary and Austria.

Jeanie York, Managing Director Core Network Planning, Engineering, and Operations, Liberty Global, says: “Our partnership with Ericsson is part of Liberty Global’s strategy to continually improve the quality of our services while creating operational efficiencies throughout the region. Ericsson’s leadership in Managed Services was an ideal fit for us as we innovate to improve the customer experience.”

Saturday, September 1, 2018

Intel and Ericsson complete 5G-NR call over 39 GHz

Intel and Ericsson completed the first 5G-NR compliant live data call over the 39 GHz band using Intel’s RF mm-Wave chip with Ericsson Radio System commercial equipment, including the 5G NR radio AIR 5331, baseband and Intel 5G Mobile Trial Platform.

The 5G trial was completed in labs in Kista, Sweden, and Santa Clara, California.

“This live 5G demonstration on the 39 GHz band signifies how close 5G commercial services are to reality in North America. Using the Intel 5G Mobile Trial Platform configured with a 39 GHz RF chip/antenna, we successfully demonstrated a 3GPP-compliant data call performed connecting to an Ericsson commercial 5G gNB base station, an important step in ensuring our commercial platforms are field ready for deployment in 2019,” stated Asha Keddy, vice president Next Generation and Standards at Intel.

“Completing this end-to-end data call on 39 GHz with Intel shows our commitment to realizing 5G in different spectrum bands,” said Fredrik Jejdling, executive vice president and head of Business Area Networks at Ericsson. “In July we did it on 3.5 GHz and now on 39 GHz, which will smoothen the path to 5G for our customers. Using commercial 5G radios for this multivendor interoperability milestone shows our progress towards making 5G a commercial reality.”

Colt extends SD-WAN internationally

Colt Technology Services announced the expansion of its SD-WAN service across Asia Pacific and North America,  enabling customers to benefit from application-based traffic steering, real-time service changes via an interactive customer portal, virtual routing and firewall services enabled via Network Function Virtualisation (NFV).

The SD-WAN services are delivered via universal CPEs, which are now also available on a self-install basis for faster customer delivery. Colt provides a range of network access options including delivery over Colt’s owned fibre network, third party internet and 3G/4G radio access at remote sites, with customers also being able to prioritise traffic using advanced routing techniques.

Colt has also just launched its On Demand offering in Singapore. The service was launched in Europe in 2017 and Japan in 2018.

“These two launches demonstrate that Colt is continuing to invest in advanced SDN and NFV capabilities on a global scale,” explains Peter Coppens, Vice President Product Portfolio, Colt Technology Services.

“Through Colt’s SD WAN and On Demand services, organisations can now take full control over their agile, high bandwidth network in the way that best suits their business needs. It is this technology, Colt believes, that truly allows organisations to undertake the digital transformations required to thrive in today’s business environment.”

Colt activates U.S. network

Colt has connected 13 major cities in North America, including New York, San Francisco and Chicago, to its dense Asian and European metro networks, which are made up of more than 870 data centers and 26,000 fiber-connected buildings.

Colt services available in the US include: enterprise bandwidth services up to 100Gbps, delivered over entire wavelengths and Ethernet, with private network options, and a number of wholesale services.

Colt’s On Demand bandwidth provisioning is available to businesses in Europe and Asia, with the service launching in Q4 in the US.

“Colt has been disrupting the market for more than 25 years, from our beginning as the only challenger to the local incumbents in the City of London to today, where we are a global network challenger that thinks and acts differently in a rapidly consolidating US market,” said Carl Grivner, Chief Executive Officer of Colt. “We know from our experience that business agility and the need for real-time response to customers is vital for large enterprises and financial firms. Colt is able to deliver on both counts. We’re privately held, affiliated with Fidelity Investments, and have the freedom to act extremely rapidly in a market characterized by unique, on-demand requirements.”

Friday, August 31, 2018

ColorChip to showcase 100G-400G PAM4 optical interconnects

ColorChip will showcase a family of PAM4 optical interconnects ranging from 100G to 400G, with reaches up to 40km, at the CIOE 2018 exhibition in Shenzhen, China.

ColorChip's 100G CWDM4 2km and 4WDM-10 10km QSFP28 solutions leverage its proprietary "SystemOnGlass" technology.

"To support the massive use of fiber in fronthaul and backhaul networks, the evolving 5G infrastructure will require unparalleled volumes of high speed optical modules," commented Yigal Ezra, ColorChip's CEO. "ColorChip is well positioned to leverage existing 100G QSFP28 CWDM4 production lines, already proven and scaled for massive mega datacenter demand, to support the growing needs of the 5G market, with capacity ramping up to millions of units per year."

https://www.color-chip.com

Thursday, August 30, 2018

OpenStack's "Rocky" release enhances bare metal provisioning

OpenStack, which now powers more than 75 public cloud data centers and thousands of private clouds at a scale of more than 10 million compute cores, has now advanced to its 18th major release.

OpenStack "Rocky" has dozens of dozens of enhancements, the significant being refinements to Ironic (the bare metal provisioning service) and fast forward upgrades. There are also several emerging projects and features designed to meet new user requirements for hardware accelerators, high availability configurations, serverless capabilities, and edge and internet of things (IoT) use cases.

OpenStack bare metal clouds, powered by Ironic, enable both VMs and containers to support emerging use cases like edge computing, network functions virtualization (NFV) and artificial intelligence (AI)/machine learning.

New Ironic features in Rocky include:

  • User-managed BIOS settings—BIOS (basic input output system) performs hardware initialization and has many configuration options that support a variety of use cases when customized. Options can help users gain performance, configure power management options, or enable technologies like SR-IOV or DPDK. Ironic now lets users manage BIOS settings, supporting use cases like NFV and giving users more flexibility.
  • Conductor groups—In Ironic, the “conductor” is what uses drivers to execute operations on the hardware. Ironic has introduced the “conductor_group” property, which can be used to restrict which nodes a particular conductor (or conductors) has control over. This allows users to isolate nodes based on physical location, reducing network hops for increased security and performance (a hedged API sketch follows this list).
  • RAM Disk deployment interface—A new interface in Ironic for diskless deployments. This is seen in large-scale and high performance computing (HPC) use cases when operators desire fully ephemeral instances for rapidly standing up a large-scale environment.
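To make the conductor group feature concrete, here is a minimal, hedged sketch of pinning a node to a conductor group through the Bare Metal (Ironic) REST API using a JSON Patch request. The endpoint, node UUID and token are placeholders, and the example assumes the API microversion (1.46) that introduced conductor_group; consult the Rocky release notes and the Bare Metal API reference for authoritative usage.

```python
# Illustrative sketch only: assign a bare metal node to a conductor group
# via the Ironic REST API (JSON Patch). Endpoint, UUID and token are
# placeholders; this is not an official example from the Rocky release.
import requests

IRONIC_URL = "http://ironic.example.com:6385"        # placeholder endpoint
NODE_UUID = "1be26c0b-03f2-4d2e-ae87-c02d7f33c123"   # placeholder node
HEADERS = {
    "X-Auth-Token": "TOKEN",                          # placeholder token
    "Content-Type": "application/json",
    # conductor_group is assumed to require API microversion 1.46 (Rocky).
    "X-OpenStack-Ironic-API-Version": "1.46",
}

# JSON Patch document setting the node's conductor_group field.
patch = [{"op": "replace", "path": "/conductor_group", "value": "rack-a"}]
resp = requests.patch(f"{IRONIC_URL}/v1/nodes/{NODE_UUID}",
                      json=patch, headers=HEADERS)
resp.raise_for_status()
print(resp.json().get("conductor_group"))
```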

“OpenStack Ironic provides bare metal cloud services, bringing the automation and speed of provisioning normally associated with virtual machines to physical servers,” said Julia Kreger, principal software engineer at Red Hat and OpenStack Ironic project team lead. “This powerful foundation lets you run VMs and containers in one infrastructure platform, and that’s what operators are looking for.”

"At Oath, OpenStack manages hundreds of thousands of bare metal compute resources in our data centers. We have made significant changes to our supply chain process using OpenStack, fulfilling common bare metal quota requests within minutes,” said James Penick, IaaS Architect at Oath.

Database for the Instant Experience -- a profile of Redis Labs

The user experience is the ultimate test of network performance. For many applications, this often comes down to the lag after clicking and before the screen refreshes. We can trace the packets back from the user's handset, through the RAN, mobile core, metro transport, and perhaps long-haul optical backbone to a cloud data center. However, even if this path traverses the very latest generation infrastructure, if it ends up triggering a search in an archaic database, the delayed response time will be more harmful to the user experience than the network latency. Some databases are optimized for performance. Redis, an open source, in-memory, high-performance database, claims to be the fastest -- a database for the Instant Experience. I recently sat down with Ofer Bengal to discuss Redis, Redis Labs and the implications for networking and hyperscale clouds.



Jim Carroll:  The database market has been dominated by a few large players for a very long time. When did this space start to open up, and what inspired Redis Labs to jump into this business?

Ofer Bengal: The database segment of the software market had been on a stable trajectory for decades. If you had asked me ten years ago if it made sense to create a new database company, I would have said that it would be insane to try. But cracks started to open when large Internet companies such as Amazon and Facebook, which generated huge amounts of data and had very stringent performance requirements, realized that the relational databases provided by market leaders like Oracle, were not good enough for their modern use cases. With a relational database, when the amount of data grows beyond the size of a single server it is very complex to cluster and performance goes down dramatically.

About fifteen years ago, a number of Internet companies started to develop internal solutions to these problems. Later on, the open source community stepped in to address these challenges and a new breed of databases was born, which today is broadly categorized under “unstructured" or "NoSQL" databases.

Redis Labs was started in a bit of an unusual way, and not as a database company. The original idea was to improve application performance, because we, the founders, came from that space. We always knew that databases were the main bottleneck in app performance and looked for ways to improve that. So, we started with database caching. At that time, Memcached was a very popular open source caching system for accelerating database performance. We decided to improve it and make it more robust and enterprise-ready. And that's how we started the company.

In 2011, when we started to develop the product, we discovered a fairly new open source project by the name "Redis" (which stands for "Remote Dictionary Server"), which was started by Salvatore Sanfilippo, an Italian developer who lives in Sicily to this very day. He essentially created his own in-memory database for a certain project he worked on and released it as open source. We decided to adopt it as the engine under the hood for what we were doing. However, shortly thereafter we started to see the amazing adoption of this open source database. After a while, it was clear we were in the wrong business, and so we decided to focus on Redis as our main product and became a Redis company. Salvatore Sanfilippo later joined the company and continues to lead the development of the open source project, with a group of developers. A much larger R&D team develops Redis Enterprise, our commercial offering.

Jim Carroll: To be clear, there is an open source Redis community and there's a company called Redis Labs, right?

Ofer Bengal:  Yes. Both the open source Redis and Redis Enterprise are developed by Redis Labs, but by two separate development teams. This is because a different mindset is required for developing open source code and an end-to-end solution suitable for enterprise deployment.
 
Jim Carroll: Tell us more about Redis Labs, the company.

Ofer Bengal: We have a monumental number of open source Redis downloads. Its adoption has spread so widely that today you find it in most companies in the world. Our mission, at Redis Labs, is to help our customers unlock answers from their data. As a result, we invest equally into both open source Redis and enterprise-grade Redis, Redis Enterprise, and deliver disruptive capabilities that will help our customers find answers to their challenges and help them deliver the best application and service for their customers. We are passionate about our customers, community, people and our product. We're seeing a noticeable trend where enterprises that adopt OSS Redis are maturing their implementation with Redis Enterprise, to better handle scale, high availability, durability and data persistence. We have customers from all industry verticals, including six of the top Fortune 10 companies and about 40% of the Fortune 100 companies. To give you a few examples of some of our customers, we have AMEX, Walmart, DreamWorks, Intuit, Vodafone, Microsoft, TD Bank, C.H. Robinson, Home Depot, Kohl's, Atlassian, eHarmony – I could go on.

Redis Labs has now over 220 employees across our Mountain View CA HQ, R&D center in Israel, London sales office and other locations around the world.  We’ve completed a few investment rounds, totaling $80 million from Bain Capital Ventures, Goldman Sachs, Viola Ventures (Israel) and Dell Technologies Capital.

Jim Carroll: So, how can you grow and profit in an open source market as a software company?

Ofer Bengal:  The market for databases has changed a lot. Twenty years ago, if a company adopted Oracle, for example, any software development project carried out in that company had to be built with this database. This is not the case anymore. Digital transformation and cloud adoption are disrupting this very traditional model and driving the modernization of applications. New-age developers now have the flexibility to select their preferred solutions and tools for their specific problem at hand or use cases. They are looking for the best-of-breed database to meet each use case of their application. With the evolution of microservices, which is the modern way of building apps, this is even more evident. Each microservice may use a different database, so you end up with multiple databases for the same application. A simple smartphone application, for instance, may use four, five or even six different databases. These technological evolutions opened the market to database innovations.

In the past, most databases were relational, where the data is modeled in tables, and tables are associated with one another. This structure, while still relevant for some use cases, does not satisfy the needs of today’s modern applications.

Today, there are many flavors of unstructured NoSQL databases, starting with simple key value databases like DynamoDB, document-based databases like MongoDB, column-based databases like Cassandra, graph databases like Neo4j, and others.  Each one is good for certain use cases. There is also a new trend called multi-model databases, which means that a single database can support different data modeling techniques, such as relational, document, graph, etc.  The current race in the database world is about becoming the optimal multi-model database.

Tying it all together, how do we expect to grow as an organization and profit in an open source market? We have never settled for the status quo. We looked at today’s environments and the challenges that come with them and have figured out a way to deliver Redis as a multi-model database. We continually strive to lead and disrupt this market. With the introduction of modules, customers can now use Redis Enterprise as a key-value store, document store, graph database, and for search and so much more. As a result, Redis Enterprise is the best-of-breed database suited to cater to the needs of modern-day applications. In addition to that, Redis Enterprise delivers the simplicity, ease of scale and high availability large enterprises desire. This has helped us become a well-loved database and a profitable business.

Jim Carroll: What makes Redis different from the others?

Ofer Bengal: Redis is by far the fastest and most powerful database. It was built from day one for optimal performance: besides processing entirely in RAM (or any of the new memory technologies), everything is written in C, a low-level programming language. All the commands, data types, etc., are optimized for performance. All this makes Redis super-fast. For example, from a single, average size, cloud instance on Amazon, you can easily generate 1.5 million transactions per second at sub-millisecond latency. Can you imagine that? This means that the average latency of those 1.5 million transactions will be less than one millisecond. There is no database that comes even close to this performance. You may ask, what is the importance of this?  Well, the speed of the database is by far the major factor influencing application performance and Redis can guarantee instant application response.
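As a rough illustration of what sub-millisecond access looks like from an application's point of view, here is a small timing sketch using the open source redis-py client against a local Redis server. It assumes a Redis instance is running on localhost:6379; the absolute numbers will vary with hardware and network, and this is not a substitute for a proper benchmark tool such as redis-benchmark.

```python
# Rough latency sketch with redis-py against a local Redis instance.
# Not a rigorous benchmark; use redis-benchmark for real measurements.
import time
import redis

r = redis.Redis(host="localhost", port=6379)  # assumes a local server

N = 10_000
start = time.perf_counter()
for i in range(N):
    r.set(f"key:{i}", i)   # simple string SET
    r.get(f"key:{i}")      # followed by a GET
elapsed = time.perf_counter() - start

ops = 2 * N
print(f"{ops} operations in {elapsed:.2f}s "
      f"-> {elapsed / ops * 1e3:.3f} ms average per operation")
```

On typical hardware this loop reports per-operation times well under a millisecond, which is the property Bengal is describing, even though a single synchronous client cannot reach the quoted throughput figures on its own.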

Jim Carroll: How are you tracking the popularity of Redis?

Ofer Bengal: If you look at DockerHub, which is the marketplace for Docker containers, you can see the stats on how many containers of each type were launched there. The last time I checked, over 882 million Redis containers have been launched on DockerHub. This compares to about 418 million MySQL containers and 642 million MongoDB containers. So, Redis is way more popular than both MongoDB and MySQL. And we have many other similar data points confirming the popularity of Redis.

Jim Carroll: If Redis puts everything in RAM, how do you scale? RAM is an expensive resource, and aren’t you limited by the amount that you can fit in one system?

Ofer Bengal: We developed very advanced clustering technology which enables Redis Enterprise to scale infinitely. We have customers that have tens of terabytes of data in RAM. The notion that RAM is tiny and used only for very special purposes is no longer true, and as I said, we see many customers with extremely large datasets in RAM. Furthermore, we developed a technology for running Redis on Flash, with near-RAM performance at 20% of the server cost. The intelligent data tiering that Redis on Flash delivers allows our customers to keep their most used data in RAM while moving the less utilized data onto cheaper flash storage. This has organizations such as Whitepages saving over 80% of their infrastructure costs, with little compromise to performance.

In addition to that, we’re working very closely with Intel on their Optane™ DC persistent memory based on 3D Xpoint™. As this technology becomes mainstream, the majority of the database market will have to move to being in-memory.


Jim Carroll: What about the resiliency challenge? How does Redis deal with outages?

Ofer Bengal: Normally with memory-based systems, if something goes wrong with a node or a cluster, there is a risk of losing data. This is not the case with Redis Enterprise, because it is redundant and persistent. You can write everything to disk without slowing down database operations. This is important to note because persisting to disk is a major technological challenge due to the bottleneck of writing to disk. We developed a persistence technology that preserves Redis' super-fast performance, while still writing everything to disk. In case of memory failures, you can read everything from disk. On top of that, the entire dataset is replicated in memory. Each database can have multiple such replicas, so if one node fails, we instantly fail over to a replica. With this and some other provisions, we provide several layers of resiliency.

We have been running our database-as-a-service for five years now, with thousands of customers, and never lost a customer's data, even when cloud nodes failed.

Jim Carroll: So how is the market for in-memory databases developing? Can you give some examples of applications that run best in memory?

Ofer Bengal: Any customer-facing application today needs to be fast. The new generation of end users expects an instant experience from all their apps and is not tolerant of slow responses, whether caused by the application or by the network.

You may ask "how is 'instant experience' defined?" Let’s take an everyday example to illustrate what ‘instant’ really means. When browsing on your mobile device, how long are you willing to wait before your information is served to you? What we have found is that the expected time from tapping your smartphone or clicking on your laptop until you get the response should not be more than 100 milliseconds. As end consumers, we are all dissatisfied with waiting and we expect information to be served instantly. What really happens behind the scenes, however, is that once you tap your phone, a query goes over the Internet to a remote application server, which processes the request and may generate several database queries. The response is then transmitted back over the Internet to your phone.

Now, the round trip over the Internet (on a "good" Internet day) is at least 50 milliseconds, and the app server needs at least 50 milliseconds to process your request. This means that at the database layer, the response time should be sub-millisecond or you’re pretty much exceeding what is considered the acceptable standard wait time of 100 milliseconds. At a time of increasing digitization, consumers expect instant access to the service, and anything less will directly impact the bottom line. And, as I already mentioned, Redis is the only database that can respond in less than one millisecond, under almost any load of transactions.

Let me give you some use case examples. Companies in the finance industry (banks, financial institutions) have been using relational databases for years. Any change, such as replacing an Oracle database, is analogous to open-heart surgery. But when it comes to new customer-facing banking applications, such as checking your account status or transferring funds, they want to deliver an instant experience. Many banks are now moving these types of applications to other databases, and Redis is often chosen for its blazing-fast performance.

As I mentioned earlier, the world is moving to microservices. Redis Enterprise fits the needs of this architecture quite nicely as a multi-model database. In addition, Redis is very popular for messaging, queuing and time series capabilities. It is also strong when you need fast data ingest, for example, when massive amounts of data are coming in from IoT devices, or in other cases where you have huge amounts of data that need to be ingested into your system. What started off as a solution for caching has, over the course of the last few years, evolved into an enterprise data platform.
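The queuing pattern mentioned here can be sketched with nothing more than Redis lists: a producer pushes jobs with LPUSH and a worker blocks on BRPOP. The snippet below is a minimal illustration using the open source redis-py client; the queue name and payload are arbitrary, it assumes Redis on localhost:6379, and it is not an excerpt from Redis Enterprise.

```python
# Minimal work-queue sketch on top of Redis lists (LPUSH + BRPOP).
# Queue name and payload are arbitrary; assumes Redis on localhost:6379.
import json
import redis

r = redis.Redis(host="localhost", port=6379)

# Producer: enqueue a job as a JSON string.
r.lpush("jobs", json.dumps({"device_id": 42, "action": "ingest"}))

# Worker: block for up to 5 seconds waiting for the next job.
item = r.brpop("jobs", timeout=5)
if item:
    queue_name, payload = item          # bytes tuple: (key, value)
    job = json.loads(payload)
    print(f"processing job from {queue_name.decode()}: {job}")
```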

Jim Carroll: You mentioned microservices, and that word is almost becoming synonymous with containers. And when you mention containers, everybody wants to talk about Kubernetes, and managing clusters of containers in the cloud. How does this align with Redis?

Ofer Bengal: Redis Enterprise maintains a unified deployment across all Kubernetes environments, such as RedHat OpenShift, Pivotal Container Services (PKS), Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), Amazon Elastic Container Service for Kubernetes (EKS) and vanilla Kubernetes. It guarantees that each Redis Enterprise node (with one or more open source servers) resides on a pod that is hosted on a different VM or physical server. And by using the latest Kubernetes primitives, Redis Enterprise can now be run as a stateful service across these environments.

We use a layered architecture that splits responsibilities between tasks that Kubernetes does efficiently, such as node auto-healing and node scaling, tasks that Redis Enterprise cluster is good at, such as failover, shard level scaling, configuration and Redis monitoring functions, and tasks that both can orchestrate together, such as service discovery and rolling upgrades with zero downtime.

Jim Carroll: How are the public cloud providers supporting Redis?

Ofer Bengal:  Most cloud providers, such as AWS, Azure and Google, have launched their own versions of Redis database-as-a-service, based on open source Redis, although they hardly contribute to it.

Redis Labs, the major contributor to open source Redis, has launched services on all those clouds, based on Redis Enterprise.  There is a very big difference between open source Redis and Redis Enterprise, especially if you need enterprise-level robustness.

Jim Carroll: So what is the secret sauce that you add on top of open source Redis?

Ofer Bengal: Redis Enterprise brings many additional capabilities to open source Redis. For example, as I mentioned earlier, sometimes an installation requires terabytes of RAM, which can get quite expensive. We have built-in capabilities in Redis Enterprise that allow our customers to run Redis on SSDs with almost the same performance as RAM. This is great for reducing the customer's total cost of ownership. By providing this capability, we can cut the underlying infrastructure costs by up to 80%. For the past few years, we’ve been working with most vendors of advanced memory technologies such as NVMe and Intel’s 3D Xpoint. We will be one of the first database vendors to take advantage of these new memory technologies as they become more and more popular. Databases like Oracle, which were designed to write to disk, will have to undergo a major facelift in order to take advantage of these new memory technologies.

Another big advantage Redis Enterprise delivers is high availability. With Redis Enterprise, you can create multiple replicas in the same data center, across data centers, across regions, and across clouds.  You can also replicate between cloud and on-premise servers. Our single digit seconds failover mechanism guarantees service continuity.

Another differentiator is our active-active global distribution capability. If you would like to deploy an application in both the U.S. and Europe, for example, you will have application servers in a European data center and in a US data center. But what about the database? Would it be a single database for those two locations? While this helps avoid data inconsistency, it’s terrible when it comes to performance, for at least one of these two data centers. If you have a separate database in each data center, performance may improve, but at the risk of consistency. Let’s assume that you and your wife share the same bank account, and that you are in the U.S. and she is traveling in Europe. What if both of you withdraw funds at an ATM at about the same time? If the app servers in the US and Europe are linked to the same database, there is no problem, but if the bank's app uses two databases (one in the US and one in Europe), how would they prevent an overdraft? Having a globally distributed database with full sync is a major challenge. If you try to do conflict resolution over the Internet between Europe and the U.S., database operation will slow down dramatically, which is a no-go for the instant experience end users demand. So, we developed a unique technology for Redis Enterprise based on the mathematically proven CRDT concept, developed in universities. Today, with Redis Enterprise, our customers can deploy a global database in multiple data centers around the world while assuring local latency and strong eventual consistency. Each one works as if it is fully independent, but behind the scenes we ensure they are all in sync.
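Redis Enterprise's active-active implementation is proprietary, but the underlying CRDT idea can be illustrated with a textbook example: a grow-only counter in which each replica only increments its own slot and merging takes the element-wise maximum, so replicas converge regardless of the order in which updates arrive. The sketch below is purely conceptual and is not how Redis Enterprise is implemented internally.

```python
# Conceptual grow-only counter (G-Counter) CRDT, illustrating how
# geo-distributed replicas can converge without cross-site locking.
# Textbook sketch only, not Redis Enterprise's internal design.
class GCounter:
    def __init__(self, replica_id, replicas):
        self.replica_id = replica_id
        self.counts = {r: 0 for r in replicas}  # one slot per replica

    def increment(self, n=1):
        # Each replica only ever increments its own slot.
        self.counts[self.replica_id] += n

    def merge(self, other):
        # Element-wise max is commutative, associative and idempotent,
        # so merges converge in any order (strong eventual consistency).
        for r, c in other.counts.items():
            self.counts[r] = max(self.counts.get(r, 0), c)

    def value(self):
        return sum(self.counts.values())

us = GCounter("us-east", ["us-east", "eu-west"])
eu = GCounter("eu-west", ["us-east", "eu-west"])
us.increment(3)   # updates recorded in the U.S. data center
eu.increment(2)   # updates recorded in the European data center
us.merge(eu)
eu.merge(us)
assert us.value() == eu.value() == 5
print(us.value())
```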

Jim Carroll: What is the ultimate ambition of this company?

Ofer Bengal: We have the opportunity to build a very big software company. I’m not a kid anymore and I do not live on fantasies. Look at the database market – it’s huge! It is projected to grow to $50–$60 billion (depending on which analyst firm you ask) in sales in 2020. It is the largest segment in the software business, twice the size of the security/cyber market. The crack in the database market that opened up with NoSQL will represent 10% of this market in the near term. However, the boundary between SQL and NoSQL is blurring, as companies such as Oracle add NoSQL capabilities and NoSQL vendors add SQL capabilities. I think that over time, it will become a single large market. Redis Labs provides a true multi-model database. We support key-value with multiple data structures, graph, search, JSON (document based), all with built-in functionality, not just APIs. We constantly increase the use case coverage of our database, and that is ultimately the name of the game in this business. Couple all that with Redis' blazing-fast performance, the massive adoption of open source Redis and the fact that it is the "most loved database" (according to StackOverflow), and you would agree that we have a once-in-a-lifetime opportunity!





Ciena posts strong quarter as revenue rises to $818.8m

Ciena reported revenue of $818.8 million for its fiscal third quarter 2018,  as compared to $728.7 million for the fiscal third quarter 2017.

Ciena's GAAP net income for the fiscal third quarter 2018 was $50.8 million, or $0.34 per diluted common share, which compares to a GAAP net income of $60.0 million, or $0.39 per diluted common share, for the fiscal third quarter 2017.

Ciena's adjusted (non-GAAP) net income for the fiscal third quarter 2018 was $74.3 million, or $0.48 per diluted common share, which compares to an adjusted (non-GAAP) net income of $56.4 million, or $0.35 per diluted common share, for the fiscal third quarter 2017.

"The combination of continued execution against our strategy and robust, broad-based customer demand resulted in outstanding fiscal third quarter performance," said Gary B. Smith, president and CEO of Ciena. "With our diversification, global scale and innovation leadership, we remain confident in our business model and our ability to achieve our three-year financial targets.”

Some highlights:

  • U.S. customers contributed 57.3% of total revenue
  • Three customers accounted for greater than 10% of revenue and represented 33% of total revenue
  • 37% of revenue comes from non-telco customers; In Q3, three of the top ten revenue accounts were webscale customers, including one that exceeded 10% of total quarterly sales – a first for Ciena.
  • Secured wins with tier one global service providers – many of whom are new to Ciena – including Deutsche Telekom in support of its international wholesale business entity. The project includes a Europe-wide network deployment leveraging Ciena's WaveLogic technology.
  • APAC sales were up nearly 50%, with India once again contributing greater than 10% of global revenue. India grew 100% year-over-year, and Japan doubled in the same period. Australia also remained a strong contributor to quarterly results.
  • The subsea segment was up 23% year-over-year, largely driven by webscale company demand. Ciena noted several new and significant wins in Q3, including four new logos, and Ciena was selected as the preferred vendor for two large consortia cables.
  • The Networking Platforms business was up more than 14% year-over-year.
  • Adjusted gross margin was 43.4%
  • Headcount totaled 5,889
https://investor.ciena.com/events-and-presentations/default.aspx




At this