Tuesday, September 4, 2018

Semtech samples PAM4 clock and data recovery platform

Semtech announced sampling of its quad Tri-Edge clock and data recovery (CDR) with an integrated vertical-cavity surface-emitting laser (VCSEL) driver and its quad Tri-Edge CDR with an integrated transimpedance amplifier (TIA).

The bundle is optimized for low-power, low-cost, short-reach PAM4 200G/400G QSFP28 SR4/8 modules for data center and active optical cable (AOC) applications.

“With this Tri-Edge PAM4 CDR bundle, Semtech further demonstrates its innovative and disruptive solutions to alternatives available in the market today. We expect this to enable the next-gen deployment for data centers to allow higher bandwidth growth while supporting an aggressive cost structure,” said Dr. Timothy Vang, Vice President of Marketing and Applications for Semtech’s Signal Integrity Products Group.

Semtech also announced:
  • the full production of its ClearEdge CDR platform IC bundle targeting high-performance data center and wireless applications. The quad ClearEdge CDR with integrated DML laser driver and the quad ClearEdge CDR with integrated transimpedance amplifier (TIA) provide an optimized chipset for 100G PSM4 and CWDM4 module solutions. The quad ClearEdge CDR with integrated DML laser driver also supports module designs based on both chip-on-board optics and passive TOSAs.
  • initial production of its bi-directional ClearEdge CDR with integrated DML laser driver.
  • mass production of a fully integrated quad 28G ClearEdge CDR with a single-ended electro-absorption modulated laser (EML) driver, consuming only 790 mW at a maximum 1.5 Vppse swing, in a 6mm x 5mm package with integrated bias-T passive components. This addresses the challenge of shrinking real estate in QSFP28 designs.

Lattice Semiconductor adds former Xilinx exec to its team

Lattice Semiconductor announced the appointment of Steve Douglass as Corporate Vice President, R&D.

Douglass previously served as the Corporate Vice President, Customer Technology Deployment at Xilinx.

Jim Anderson, President and Chief Executive Officer, said, “We are excited to have Steve Douglass join Lattice. His proven ability to lead global FPGA development teams and drive customer-focused innovation in targeted applications makes him the perfect fit. His technical skills, market knowledge and leadership capabilities will help further strengthen Lattice as we drive sustained growth and profitability by accelerating the worldwide adoption of our ground-breaking hardware and software solutions.”

Lattice Semiconductor appoints AMD exec as its new CEO

Lattice Semiconductor appointed Jim Anderson as its new President and Chief Executive Officer, and to the company’s Board of Directors. He most recently served at Advanced Micro Devices (AMD) as Senior Vice President and General Manager of the Computing and Graphics Business Group.

Jeff Richardson, Chairman of the Board, said, “On behalf of the Board, we are pleased to announce the appointment of Jim Anderson as Lattice’s new President and Chief Executive Officer. Jim brings a strong combination of business and technical leadership with a deep understanding of our target end markets and customers. The transformation he drove of AMD’s Computing and Graphics business over the past few years is just a recent example of his long track record of creating significant shareholder value.”

President Trump blocks sale of Lattice Semi citing National Security

President Trump signed an order blocking the sale of Lattice Semiconductor to Canyon Bridge Capital Partners on national security grounds. The issue was referred to the President by the Committee on Foreign Investment in the United States (CFIUS) due to concerns regarding China Venture Capital Fund Corporation Limited and its interest in Canyon Bridge Capital Partners.

Monday, September 3, 2018

Idea Cellular and Vodafone India complete merger -- 408M mobile users

Idea Cellular and Vodafone India completed their merger, creating India's largest telecom service provider with over 408 million mobile subscribers, 340,000 sites, 1.7 million retail outlets and 15,000 branded stores.


The new company, Vodafone Idea Limited, is now operational and ranks as the No.2 operator worldwide by subscriber count, behind China Mobile. Its mobile network covers approximately 92% of India's population.

Vodafone Idea is structured as a partnership between Aditya Birla Group and the Vodafone Group. Following completion of a capital injection process, Vodafone will own a 45.2% stake in Vodafone Idea and Aditya Birla Group will own a 26.0% stake, both on a fully diluted basis. Vodafone will also separately hold a 29.4% stake in the combined entity resulting from the merger between Bharti Infratel and Indus Towers.

Vodafone Idea claims the #1 market share position in 9 of India's telecom circles and a 32% overall market share by revenue across India. In terms of spectrum, the company holds 1,850 MHz across bands and an "adequate number" of broadband carriers. It also controls about 235,000 kilometers of fiber. Both the Vodafone and Idea brands will continue to operate. Mr. Balesh Sharma has been appointed CEO of the business.

"Today, we have created India’s leading telecom operator. It is truly a historic moment. And this is much more than just about creating a large business. It is about our Vision of empowering and enabling a New India and meeting the aspirations of the youth of our country. The “Digital India”, as our Honourable Prime Minister describes it, is a monumental nation-building opportunity," stated Mr. Kumar Mangalam Birla, Chairman of Aditya Birla Group and Vodafone Idea Limited.

Some highlights:

  • During the twelve months to 30 June 2018, Vodafone India and Idea generated revenue of INR585bn (€7.1bn) and EBITDA of INR107bn (€1.4bn). Vodafone Idea is expected to generate INR140bn (€1.7bn) run-rate cost and capex synergies, equivalent to a net present value of approximately INR700bn (€8.5bn)
  • The merger is expected to generate Rs. 140 billion annual synergy, including opex synergies of Rs. 84 billion, equivalent to a net present value of approximately Rs. 700 billion.
  • The equity infusion of Rs. 67.5 billion at Idea and Rs. 86 billion at Vodafone coupled with monetization of standalone towers of both companies for an enterprise value of Rs. 78.5 billion, provides the company a cash balance of over Rs. 193 billion post payout of Rs. 39 billion to the DoT.
  • Additionally, the company has an option to monetise an 11.15% stake in Indus, which would equate to a cash consideration of Rs. 51 billion.
  • As of 30 June 2018, net debt was INR 1092 billion.

https://www.vodafoneidea.com/





Vodafone sells its mobile towers in India to American Tower

Vodafone India completed the sale of its standalone tower business in India to ATC Telecom Infrastructure Private Limited (a unit of American Tower) for an enterprise value of INR 38.5 billion (EUR 478 million).

Vodafone India is merging with Idea. Both parties announced their intention to sell their individual standalone tower businesses to strengthen the combined financial position of the merged entity. Completion of Idea’s sale of its standalone tower business to ATC is also expected in the first half of this calendar year.

The Vodafone-Idea merger is expected to be completed in the first half of the current calendar year.

  • In June, Idea Cellular Ltd. received approval from India's Department of Telecom to increase the Foreign Direct Investment (FDI) limit in the company to 100%. Previously, it faced a 67.5% limit.

DOCOMO tests edge computing for video processing

NTT DOCOMO has commenced a proof-of-concept (PoC) of a video IoT solution that uses edge computing to interpret and analyze video data sourced from surveillance cameras. The edge computing will supplement processing performed in the cloud. As a first step, the PoC will test and evaluate the sourcing of data from surveillance cameras, aiming to develop a solution that uses existing cameras, requires no wired connectivity and does not involve the transmission of large quantities of data.

DOCOMO also confirmed a strategic investment in Cloudian, a Silicon Valley-based leader in enterprise object storage systems and developer of the Cloudian AI Box, a compact, high-speed AI data processing device equipped with camera connectivity and LTE / Wi-Fi capabilities, facilitating edge AI computing with both indoor and outdoor communications.

DOCOMO said the transfer and processing of large volumes of video data to the cloud have been a lengthy process involving significant delays and placing a considerable burden on cloud infrastructure and communication networks. Edge computing could help deal with these shortcomings and herald a new era of high-speed image recognition.



Cloudian raises $94 million for hyperscale data fabric

Cloudian, a start-up offering a hyperscale data fabric for enterprises, raised $94 million in a Series E funding, bringing the company’s total funding to $173 million.

“Cloudian redefines enterprise storage with a global data fabric that integrates both private and public clouds — spanning across sites and around the globe — at an unprecedented scale that creates new opportunities for businesses to derive value from data,” said Cloudian CEO Michael Tso. “Cloudian’s unique architecture offers the limitless scalability, simplicity, and cloud integration needed to enable the next generation of computing driven by advances such as IoT and machine learning technologies.”

The funding round included participation from investors Digital Alpha, Eight Roads Ventures, Goldman Sachs, INCJ, JPIC (Japan Post Investment Corporation), NTT DOCOMO Ventures, Inc. and WS (Wilson Sonsini) Investments.

“Computing now operates without physical boundaries, and customers need storage solutions that also span from the data center to the edge,” said Takayuki Inagawa, president & CEO of NTT DOCOMO Ventures. “Cloudian’s geo-distributed architecture creates a global fabric of storage assets that support the next generation of connected devices.”

Cloudian brings its S3 API interface to Azure Blob Storage

Cloudian, a start-up based in San Mateo, California, is extending its hybrid cloud object storage system into Microsoft Azure.

Cloudian HyperCloud for Microsoft Azure leverages the company's S3 API interface to Azure Blob Storage. Cloudian said the world's largest Industrial Internet enterprise is using Cloudian HyperCloud for Azure to connect its Industrial Internet of Things solution to Azure Blob Storage.

"Cloudian HyperCloud for Azure is a game-changer for public cloud storage, enabling true bi-modal data storage across multiple cloud environments," said Michael Tso, Cloudian CEO and co-founder. "For the first time, customers have a fully supported, enterprise-ready solution to access their choice of cloud platforms from their S3-compliant applications. Customers can be up and running in minutes by launching HyperCloud from the Microsoft Azure Marketplace."

NXP acquires OmniPHY for automotive Ethernet

NXP Semiconductors has acquired OmniPHY, a provider of automotive Ethernet subsystem technology. Financial terms were not disclosed.

NXP said OmniPHY's interface IP and communication technology along with NXP’s own automotive portfolio will form a “one-stop shop” for automotive Ethernet. The companies’ technology synergies will center on 1.25-28Gbps PHY designs and 10-, 100- and 1000BASE-T1 Ethernet in advanced processes.

“Our heritage in vehicle networks is rich and with our leadership positions in CAN, LIN, and FlexRay, we hold a unique viewpoint on automotive networks,” said Alexander E. Tan, vice president and general manager of Automotive Ethernet Solutions, NXP. “The team and technology from OmniPHY give us the missing piece in an extensive high-bandwidth networking portfolio.”

"We are very excited to join NXP – a leader in automotive electronics, for a front-row seat to the autonomous driving revolution, one that will deliver profound change to the way people live,” said Ritesh Saraf, CEO of OmniPHY. “The combination of our teams and technology will accelerate and advance the delivery of automotive Ethernet solutions providing our customers with high quality and world-class automotive Ethernet innovation."

Vodafone tests Huawei's cloud-based Broadband Network Gateway

Vodafone recently completed the second phase test of Huawei's cloud-based Broadband Network Gateway (BNG) solution in a fixed broadband scenario.

The phase I testing was performed in December 2017 and focused on 52 functional tests of the solution, while the phase II testing, completed in May 2018, focused on the Vodafone Portugal service architecture including internet access and VPN services. Both phase I and II have been completed successfully.

Phase II testing covered access, authentication and accounting for home broadband users in various scenarios. It also included performance, reliability and security testing of cloud-based BNG systems. Vodafone and Huawei verified functionality of the cloud-based BNG solution using virtual network functions (VNFs) as the control plane and also using physical network functions (PNFs) as the user plane.

Huawei says its BNG solution features a Control & User Plane Separation (CUPS) architecture, which decouples the control and user planes of traditional BNG architectures. The control plane integrates the user management functions of multiple BNGs and shifts their resources to the cloud. In addition to automated service provisioning and network O&M, the solution deployment in the cloud also enables global resource sharing, elastic capacity scaling, flexible architecture adjustment and network capability exposure.

Jeffrey Gao, President of Huawei's Router & Carrier Ethernet Product line, stated: "Cloud-based BNG is an innovative implementation of Huawei's Intent-Driven Network in the context of network service cloudification. The Intent-Driven Network decouples traditional networks into an elastic, reliable bearer layer and an agile service layer. This creates a simple architecture enabling the rapid and flexible adjustment of resources. This solution helps operators improve the efficiency of their network operations, reduce O&M costs and smoothly evolve network services to the cloud."

Sunday, September 2, 2018

MACOM readies chipset for 200G and 400G optical modules

MACOM Technology Solutions has begun sampling a chipset solution for 200G and 400G CWDM optical module providers servicing Cloud Data Center applications. The company plans live demos at the upcoming China International Optoelectronic Exposition (CIOE) and European Conference on Optical Communication (ECOC) tradeshows. The chipset enables 200G modules at under 4.5W and 400G modules at under 9W total power consumption.

MACOM said its full transmit and receive solution operates at up to 53 Gbps PAM-4 data rates per lane and is optimized for use in 200G QSFP56 and 400G QSFP-DD and OSFP module applications.

For the 200G demonstration, the solution comprises the MAOM-38051 four-channel transmit CDR and modulator driver and the MAOT-025402 TOSA with embedded MAOP-L284CN CWDM L-PIC (silicon photonic integrated circuit with integrated CW lasers) transmitter; on the receive side it features the MAOR-053401 ROSA with embedded demultiplexer, BSP56B photodetectors, MATA-03819 quad TIA and the MASC-38040 four-channel receive CDR. The combined, high-performance MACOM solution enables a low bit error rate (BER), better than 1E-8 pre-forward error correction (pre-FEC).

“MACOM is committed to leading the evolution of Data Center interconnects from 100G to 200G and 400G, as evidenced by our unique ability to deliver a complete 200G chipset and TOSA/ROSA subassembly solution with market-leading performance and power efficiency,” said Gary Shah, Vice President, High-Performance Analog Business Line, MACOM. “With this solution, optical module providers are expected to benefit from seamless component interoperability and a unified support team, reducing design complexity and costs while accelerating their time to market.”

Commercial shipments are targeted for early 2019.

https://www.macom.com/


Liberty Global picks Ericsson to consolidate NOCs

Liberty Global has selected Ericsson for the consolidation of Network Operations Center service delivery in six European locations: the UK, Ireland, the Netherlands, Hungary, Poland and Germany.

Under the contract, Ericsson has successfully undertaken the consolidation of operations and the transfer of service functions across the various NOCs. This builds on the existing Managed Services contract between Ericsson and Liberty Global for mobile networks and fixed field services in Poland, Hungary and Austria.

Jeanie York, Managing Director Core Network Planning, Engineering, and Operations, Liberty Global, says: “Our partnership with Ericsson is part of Liberty Global’s strategy to continually improve the quality of our services while creating operational efficiencies throughout the region. Ericsson’s leadership in Managed Services was an ideal fit for us as we innovate to improve the customer experience.”

Saturday, September 1, 2018

Intel and Ericsson complete 5G-NR call over 39 GHz

Intel and Ericsson completed the first 5G-NR compliant live data call over the 39 GHz band using Intel’s RF mm-Wave chip with Ericsson Radio System commercial equipment, including the 5G NR radio AIR 5331, baseband and Intel 5G Mobile Trial Platform.

The 5G trial was completed in labs in Kista, Sweden, and Santa Clara, California.

“This live 5G demonstration on the 39 GHz band signifies how close 5G commercial services are to reality in North America. Using the Intel 5G Mobile Trial Platform configured with a 39 GHz RF chip/antenna, we successfully demonstrated a 3GPP-compliant data call performed connecting to an Ericsson commercial 5G g-NB base station, an important step in ensuring our commercial platforms are field ready for deployment in 2019,” stated Asha Keddy, vice president Next Generation and Standards at Intel.

“Completing this end-to-end data call on 39 GHz with Intel shows our commitment to realizing 5G in different spectrum bands,” said Fredrik Jejdling, executive vice president and head of Business Area Networks at Ericsson. “In July we did it on 3.5 GHz and now on 39 GHz, which will smoothen the path to 5G for our customers. Using commercial 5G radios for this multivendor interoperability milestone shows our progress towards making 5G a commercial reality.”

Colt extends SD-WAN internationally

Colt Technology Services announced the expansion of its SD-WAN service across Asia Pacific and North America,  enabling customers to benefit from application-based traffic steering, real-time service changes via an interactive customer portal, virtual routing and firewall services enabled via Network Function Virtualisation (NFV).

The SD-WAN services are delivered via universal CPEs, which are now also available on a self-install basis for faster customer delivery. Colt provides a range of network access options including delivery over Colt’s owned fibre network, third party internet and 3G/4G radio access at remote sites, with customers also being able to prioritise traffic using advanced routing techniques.

Colt has also just launched its On Demand offering in Singapore. The service was launched in Europe in 2017 and Japan in 2018.

“These two launches demonstrate that Colt is continuing to invest in advanced SDN and NFV capabilities on a global scale,” explains Peter Coppens, Vice President Product Portfolio, Colt Technology Services.

“Through Colt’s SD WAN and On Demand services, organisations can now take full control over their agile, high bandwidth network in the way that best suits their business needs. It’s such technology, that Colt believes, truly allows organisations to undertake the digital transformations required to thrive in the business environment of today.”

Colt activates U.S. network

Colt has connected 13 major cities in North America, including New York, San Francisco and Chicago, to its dense Asian and European metro networks, which together are made up of more than 870 data centers and 26,000 fiber-connected buildings.

Colt services available in the US include enterprise bandwidth services up to 100Gbps, delivered over entire wavelengths and Ethernet, with private network options, and a number of wholesale services.

Colt’s On Demand bandwidth provisioning is available to businesses in Europe and Asia, with the service launching in Q4 in the US.

“Colt has been disrupting the market for more than 25 years, from our beginning as the only challenger to the local incumbents in the City of London to today, where we are a global network challenger that thinks and acts differently in a rapidly consolidating US market,” said Carl Grivner, Chief Executive Officer of Colt. “We know from our experience that business agility and the need for real-time response to customers is vital for large enterprises and financial firms. Colt is able to deliver on both counts. We’re privately held, affiliated with Fidelity Investments, and have the freedom to act extremely rapidly in a market characterized by unique, on-demand requirements.”

Friday, August 31, 2018

ColorChip to showcase 100G-400G PAM4 optical interconnects

ColorChip will showcase a family of PAM4 optical interconnects ranging from 100G to 400G, with reaches up to 40km, at the CIOE 2018 exhibition in Shenzhen, China.

ColorChip's 100G CWDM4 2km and 4WDM-10 10km QSFP28 solutions leverage its proprietary "SystemOnGlass" technology.

"To support the massive use of fiber in fronthaul and backhaul networks, the evolving 5G infrastructure will require unparalleled volumes of high speed optical modules," commented Yigal Ezra, ColorChip's CEO. "ColorChip is well positioned to leverage existing 100G QSFP28 CWDM4 production lines, already proven and scaled for massive mega datacenter demand, to support the growing needs of the 5G market, with capacity ramping up to millions of units per year."

https://www.color-chip.com

Thursday, August 30, 2018

OpenStack's "Rocky" release enhances bare metal provisioning

OpenStack, which now powers more than 75 public cloud data centers and thousands of private clouds at a scale of more than 10 million compute cores, has advanced to its 18th major release.

OpenStack "Rocky" has dozens of dozens of enhancements, the significant being refinements to Ironic (the bare metal provisioning service) and fast forward upgrades. There are also several emerging projects and features designed to meet new user requirements for hardware accelerators, high availability configurations, serverless capabilities, and edge and internet of things (IoT) use cases.

OpenStack bare metal clouds, powered by Ironic, enable both VMs and containers to support emerging use cases like edge computing, network functions virtualization (NFV) and artificial intelligence (AI) /machine learning.

New Ironic features in Rocky include:

  • User-managed BIOS settings—BIOS (basic input output system) performs hardware initialization and has many configuration options that support a variety of use cases when customized. These settings can help users gain performance, configure power management, or enable technologies like SR-IOV or DPDK. Ironic now lets users manage BIOS settings, supporting use cases like NFV and giving users more flexibility.
  • Conductor groups—In Ironic, the “conductor” is what uses drivers to execute operations on the hardware. Ironic has introduced the “conductor_group” property, which can be used to restrict which nodes a particular conductor (or conductors) has control over. This allows users to isolate nodes based on physical location, reducing network hops for increased security and performance (an illustrative API sketch follows this list).
  • RAM Disk deployment interface—A new interface in Ironic for diskless deployments. This is seen in large-scale and high performance computing (HPC) use cases when operators desire fully ephemeral instances for rapidly standing up a large-scale environment.
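
To make the conductor group concept concrete, here is a minimal, illustrative sketch of pinning a node to a conductor group through Ironic's REST API. The endpoint URL, authentication token, node UUID and API microversion value are assumptions for illustration, not details from the Rocky announcement.

    # Illustrative sketch: assign a bare metal node to a conductor group via
    # the Ironic REST API. Endpoint, token, node UUID and microversion value
    # are assumptions, not taken from the release.
    import requests

    IRONIC = "http://ironic.example.com:6385"      # assumed Ironic API endpoint
    TOKEN = "gAAAAA-example-token"                 # assumed Keystone token
    NODE = "3f6c8a2e-0000-0000-0000-000000000000"  # assumed node UUID

    headers = {
        "X-Auth-Token": TOKEN,
        # conductor_group is only exposed by newer (Rocky-era) API
        # microversions; the exact value here is an assumption.
        "X-OpenStack-Ironic-API-Version": "1.46",
        "Content-Type": "application/json",
    }

    # Ironic node updates use JSON Patch semantics.
    patch = [{"op": "replace", "path": "/conductor_group", "value": "dc1-rack4"}]

    resp = requests.patch(f"{IRONIC}/v1/nodes/{NODE}", json=patch, headers=headers)
    resp.raise_for_status()
    print(resp.json().get("conductor_group"))

Conductors configured with a matching conductor group would then be the only ones managing that node.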

“OpenStack Ironic provides bare metal cloud services, bringing the automation and speed of provisioning normally associated with virtual machines to physical servers,” said Julia Kreger, principal software engineer at Red Hat and OpenStack Ironic project team lead. “This powerful foundation lets you run VMs and containers in one infrastructure platform, and that’s what operators are looking for.”

"At Oath, OpenStack manages hundreds of thousands of bare metal compute resources in our data centers. We have made significant changes to our supply chain process using OpenStack, fulfilling common bare metal quota requests within minutes,” said James Penick, IaaS Architect at Oath.

Database for the Instant Experience -- a profile of Redis Labs

The user experience is the ultimate test of network performance. For many applications, this often comes down to the lag after clicking and before the screen refreshes. We can trace the packets back from the user's handset, through the RAN, mobile core, metro transport, and perhaps a long-haul optical backbone to a cloud data center. However, even if this path traverses the very latest generation of infrastructure, if it ends up triggering a search in an archaic database, the delayed response will be more harmful to the user experience than the network latency. Some databases are optimized for performance. Redis, an open source, in-memory, high-performance database, claims to be the fastest -- a database for the Instant Experience. I recently sat down with Ofer Bengal, co-founder and CEO of Redis Labs, to discuss Redis, Redis Labs and the implications for networking and hyperscale clouds.



Jim Carroll:  The database market has been dominated by a few large players for a very long time. When did this space start to open up, and what inspired Redis Labs to jump into this business?

Ofer Bengal: The database segment of the software market had been on a stable trajectory for decades. If you had asked me ten years ago if it made sense to create a new database company, I would have said that it would be insane to try. But cracks started to open when large Internet companies such as Amazon and Facebook, which generated huge amounts of data and had very stringent performance requirements, realized that the relational databases provided by market leaders like Oracle were not good enough for their modern use cases. With a relational database, when the amount of data grows beyond the size of a single server, it is very complex to cluster and performance goes down dramatically.

About fifteen years ago, a number of Internet companies started to develop internal solutions to these problems. Later on, the open source community stepped in to address these challenges and a new breed of databases was born, which today is broadly categorized under “unstructured" or "NoSQL" databases.

Redis Labs was started in a bit of an unusual way, and not as a database company. The original idea was to improve application performance, because we, the founders, came from that space. We always knew that databases were the main bottleneck in app performance and looked for ways to improve that. So, we started with database caching. At that time, Memcached was a very popular open source caching system for accelerating database performance. We decided to improve it and make it more robust and enterprise-ready. And that's how we started the company.

In 2011, when we started to develop the product, we discovered a fairly new open source project by the name "Redis" (which stands for "Remote Dictionary Server"), which was started by Salvatore Sanfilippo, an Italian developer, who lives in Sicily to this very day. He essentially created his own in-memory database for a certain project he worked on and released it as open source. We decided to adopt it as the engine under the hood for what we were doing. However, shortly thereafter we started to see the amazing adoption of this open source database. After a while, it was clear we were in the wrong business, and so we decided to focus on Redis as our main product and became a Redis company. Salvatore Sanfilippo later joined the company and continues to lead the development of the open source project, with a group of developers. A much larger R&D team develops Redis Enterprise, our commercial offering.

Jim Carroll: To be clear, there is an open source Redis community and there's a company called Redis Labs, right?

Ofer Bengal:  Yes. Both the open source Redis and Redis Enterprise are developed by Redis Labs, but by two separate development teams. This is because a different mindset is required for developing open source code and an end-to-end solution suitable for enterprise deployment.
 
Jim Carroll: Tell us more about Redis Labs, the company.

Ofer Bengal: We have a monumental number of open source Redis downloads. Its adoption has spread so widely that today you find it in most companies in the world. Our mission, at Redis Labs, is to help our customers unlock answers from their data. As a result, we invest equally in both open source Redis and enterprise-grade Redis, Redis Enterprise, and deliver disruptive capabilities that will help our customers find answers to their challenges and help them deliver the best application and service for their customers. We are passionate about our customers, community, people and our product. We're seeing a noticeable trend where enterprises that adopt OSS Redis are maturing their implementation with Redis Enterprise, to better handle scale, high availability, durability and data persistence. We have customers from all industry verticals, including six of the Fortune 10 companies and about 40% of the Fortune 100 companies. To give you a few examples of some of our customers, we have AMEX, Walmart, DreamWorks, Intuit, Vodafone, Microsoft, TD Bank, C.H. Robinson, Home Depot, Kohl's, Atlassian, eHarmony – I could go on.

Redis Labs has now over 220 employees across our Mountain View CA HQ, R&D center in Israel, London sales office and other locations around the world.  We’ve completed a few investment rounds, totaling $80 million from Bain Capital Ventures, Goldman Sachs, Viola Ventures (Israel) and Dell Technologies Capital.

Jim Carroll: So, how can you grow and profit in an open source market as a software company?

Ofer Bengal:  The market for databases has changed a lot. Twenty years ago, if a company adopted Oracle, for example, any software development project carried out in that company had to be built with this database. This is not the case anymore. Digital transformation and cloud adoption are disrupting this very traditional model and driving the modernization of applications. New-age developers now have the flexibility to select their preferred solutions and tools for their specific problem at hand or use cases. They are looking for the best-of-breed database to meet each use case of their application. With the evolution of microservices, which is the modern way of building apps, this is even more evident. Each microservice may use a different database, so you end up with multiple databases for the same application. A simple smartphone application, for instance, may use four, five or even six different databases. These technological evolutions opened the market to database innovations.

In the past, most databases were relational, where the data is modeled in tables, and tables are associated with one another. This structure, while still relevant for some use cases, does not satisfy the needs of today’s modern applications.

Today, there are many flavors of unstructured NoSQL databases, starting with simple key value databases like DynamoDB, document-based databases like MongoDB, column-based databases like Cassandra, graph databases like Neo4j, and others.  Each one is good for certain use cases. There is also a new trend called multi-model databases, which means that a single database can support different data modeling techniques, such as relational, document, graph, etc.  The current race in the database world is about becoming the optimal multi-model database.

Tying it all together, how do we expect to grow as an organization and profit in an open source market? We have never settled for the status quo. We looked at today’s environments and the challenges that come with them and have figured out a way to deliver Redis as a multi-model database. We continually strive to lead and disrupt this market. With the introduction of modules, customers can now use Redis Enterprise as a key-value store, document store, graph database, and for search and so much more. As a result, Redis Enterprise is the best-of-breed database suited to cater to the needs of modern-day applications. In addition to that, Redis Enterprise delivers the simplicity, ease of scale and high availability large enterprises desire. This has helped us become a well-loved database and a profitable business.

Jim Carroll: What makes Redis different from the others?

Ofer Bengal: Redis is by far the fastest and most powerful database. It was built from day one for optimal performance: besides processing entirely in RAM (or any of the new memory technologies), everything is written in C, a low-level programming language. All the commands, data types, etc., are optimized for performance. All this makes Redis super-fast. For example, from a single, average-size cloud instance on Amazon, you can easily generate 1.5 million transactions per second at sub-millisecond latency. Can you imagine that? This means that the average latency of those 1.5 million transactions will be less than one millisecond. There is no database that comes even close to this performance. You may ask, what is the importance of this? Well, the speed of the database is by far the major factor influencing application performance and Redis can guarantee instant application response.
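
As a rough illustration of what per-command latency looks like from a client, here is a small sketch using the redis-py client against open source Redis. The host, port and key names are assumptions, and a simple unpipelined loop like this will not reproduce the 1.5 million transactions per second figure, which depends on pipelining, instance size and network conditions.

    # Illustrative sketch: time individual SET commands against a Redis
    # instance with redis-py. Host/port and iteration count are assumptions;
    # this does not reproduce the benchmark cited above.
    import time
    import redis

    r = redis.Redis(host="localhost", port=6379)   # assumed local instance

    N = 10_000
    start = time.perf_counter()
    for i in range(N):
        r.set(f"key:{i}", "value")                 # one SET round trip per loop
    elapsed = time.perf_counter() - start

    print(f"avg latency: {elapsed / N * 1000:.3f} ms per SET")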

Jim Carroll: How are you tracking the popularity of Redis?

Ofer Bengal: If you look at DockerHub, which is the marketplace for Docker containers, you can see the stats on how many containers of each type were launched there. The last time I checked, over 882 million Redis containers had been launched on DockerHub. This compares to about 418 million for MySQL and 642 million for MongoDB. So, Redis is way more popular than both MongoDB and MySQL. And we have many other similar data points confirming the popularity of Redis.

Jim Carroll: If Redis puts everything in RAM, how do you scale? RAM is an expensive resource, and aren’t you limited by the amount that you can fit in one system?

Ofer Bengal: We developed very advanced clustering technology which enables Redis Enterprise to scale infinitely. We have customers that have tens of terabytes of data in RAM. The notion that RAM is tiny and used only for very special purposes is no longer true, and as I said, we see many customers with extremely large datasets in RAM. Furthermore, we developed a technology for running Redis on Flash, with near-RAM performance at 20% of the server cost. The intelligent data tiering that Redis on Flash delivers allows our customers to keep their most used data in RAM while moving the less utilized data onto cheaper flash storage. This has organizations such as Whitepages saving over 80% of their infrastructure costs, with little compromise to performance.

In addition to that, we’re working very closely with Intel on their Optane™ DC persistent memory based on 3D Xpoint™. As this technology becomes mainstream, the majority of the database market will have to move to being in-memory.


Jim Carroll: What about the resiliency challenge? How does Redis deal with outages?

Ofer Bengal: Normally with memory-based systems, if something goes wrong with a node or a cluster, there is a risk of losing data. This is not the case with Redis Enterprise, because it is redundant and persistent.  You can write everything to disk without slowing down database operations. This is important to note because persisting to disk is a major technological challenge due to the bottleneck of writing to disk. We developed a persistence technology that preserves Redis' super-fast performance, while still writing everything to disk. In case of memory failures, you can read everything from disk. On top of that, the entire dataset is replicated in memory.  Each database can have multiple such replicas, so if one node fails, we instantly fail-over to a replica. With this and some other provisions, we provide several layers of resiliency.

We have been running our database-as-a-service for five years now, with thousands of customers, and never lost a customer's data, even when cloud nodes failed.
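
For readers who want to see the open source analogue of the persistence Bengal describes, here is a short sketch that uses redis-py to enable the append-only file (AOF) on an OSS Redis instance. Redis Enterprise's replication and persistence layer is proprietary and more involved; the host, port and settings below are assumptions for illustration only.

    # Illustrative sketch: enable append-only-file persistence on open source
    # Redis via redis-py. Host/port are assumptions.
    import redis

    r = redis.Redis(host="localhost", port=6379)

    # Log every write to the AOF and fsync roughly once per second, trading a
    # bounded window of potential loss for write throughput.
    r.config_set("appendonly", "yes")
    r.config_set("appendfsync", "everysec")

    print(r.info("persistence")["aof_enabled"])   # 1 once AOF is active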

Jim Carroll: So how is the market for in-memory databases developing? Can you give some examples of applications that run best in memory?

Ofer Bengal: Any customer-facing application today needs to be fast. The new generation of end users expects an instant experience from all their apps and is not tolerant of slow responses, whether caused by the application or by the network.

You may ask "how is 'instant' defined?" Let’s take an everyday example to illustrate what ‘instant’ really means. When browsing on your mobile device, how long are you willing to wait before your information is served to you? What we have found is that the expected time from tapping your smartphone or clicking on your laptop until you get the response should not be more than 100 milliseconds. As end consumers, we are all dissatisfied with waiting and we expect information to be served instantly. What really happens behind the scenes, however, is that once you tap your phone, a query goes over the Internet to a remote application server, which processes the request and may generate several database queries. The response is then transmitted back over the Internet to your phone.

Now, the round trip over the Internet (on a "good" Internet day) is at least 50 milliseconds, and the app server needs at least 50 milliseconds to process your request. This means that at the database layer, the response time should be sub-millisecond, or you’re pretty much exceeding what is considered the acceptable standard wait time of 100 milliseconds. At a time of increasing digitization, consumers expect instant access to the service, and anything less will directly impact the bottom line. And, as I already mentioned, Redis is the only database that can respond in less than one millisecond under almost any load of transactions.
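
A worked version of that budget makes the point explicit. The 50 ms round trip and 50 ms of application processing come from the interview; the number of database queries per request and the per-query latencies are assumptions for illustration.

    # Worked latency budget: RTT and app-server time are from the interview;
    # query count and per-query latencies are illustrative assumptions.
    INTERNET_RTT_MS = 50
    APP_SERVER_MS = 50
    QUERIES_PER_REQUEST = 4

    for db_ms in (0.5, 1.0, 5.0):
        total = INTERNET_RTT_MS + APP_SERVER_MS + QUERIES_PER_REQUEST * db_ms
        print(f"db at {db_ms} ms/query -> user sees ~{total:.0f} ms")
    # Only sub-millisecond queries keep the total close to the ~100 ms
    # "instant" threshold; anything slower blows the budget quickly.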

Let me give you some use case examples. Companies in the finance industry (banks, financial institutions) have been using relational databases for years. Any change, such as replacing an Oracle database, is analogous to open heart surgery. But when it comes to new customer-facing banking applications, such as checking your account status or transferring funds, they would like to offer an instant experience. Many banks are now moving these types of applications to other databases, and Redis is often chosen for its blazing-fast performance.

As I mentioned earlier, the world is moving to microservices. Redis Enterprise fits the needs of this architecture quite nicely as a multi-model database. In addition, Redis is very popular for messaging, queuing and time series capabilities. It is also strong when you need fast data ingest, for example, when massive amounts of data are coming in from IoT devices, or in other cases where you have huge amounts of data that need to be ingested into your system. What started off as a solution for caching has, over the course of the last few years, evolved into an enterprise data platform.
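
As one concrete example of the queuing and fast-ingest patterns mentioned above, here is a brief sketch using a plain Redis list as a work queue through redis-py. The host, port, key name and payload are assumptions for illustration rather than anything from the interview.

    # Illustrative sketch: a Redis list used as a simple ingest/work queue.
    # Host/port, key name and payload are assumptions.
    import json
    import redis

    r = redis.Redis(host="localhost", port=6379)

    # Producer side, e.g. an IoT gateway pushing readings as they arrive.
    r.lpush("sensor:queue", json.dumps({"device": "sensor-42", "temp": 21.7}))

    # Consumer side: block for up to 5 seconds waiting for the next item.
    item = r.brpop("sensor:queue", timeout=5)
    if item is not None:
        _key, payload = item
        print(json.loads(payload))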

Jim Carroll: You mentioned microservices, and that word is almost becoming synonymous with containers. And when you mention containers, everybody wants to talk about Kubernetes, and managing clusters of containers in the cloud. How does this align with Redis?

Ofer Bengal: Redis Enterprise maintains a unified deployment across all Kubernetes environments, such as Red Hat OpenShift, Pivotal Container Service (PKS), Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), Amazon Elastic Container Service for Kubernetes (EKS) and vanilla Kubernetes. It guarantees that each Redis Enterprise node (with one or more open source servers) resides on a pod that is hosted on a different VM or physical server. Using the latest Kubernetes primitives, Redis Enterprise can now be run as a stateful service across these environments.

We use a layered architecture that splits responsibilities between tasks that Kubernetes does efficiently, such as node auto-healing and node scaling, tasks that Redis Enterprise cluster is good at, such as failover, shard level scaling, configuration and Redis monitoring functions, and tasks that both can orchestrate together, such as service discovery and rolling upgrades with zero downtime.

Jim Carroll: How are the public cloud providers supporting Redis?

Ofer Bengal:  Most cloud providers, such as AWS, Azure and Google, have launched their own versions of Redis database-as-a-service, based on open source Redis, although they hardly contribute to it.

Redis Labs, the major contributor to open source Redis, has launched services on all those clouds, based on Redis Enterprise.  There is a very big difference between open source Redis and Redis Enterprise, especially if you need enterprise-level robustness.

Jim Carroll: So what is the secret sauce that you add on top of open source Redis?

Ofer Bengal: Redis Enterprise brings many additional capabilities to open source Redis. For example, as I mentioned earlier, sometimes an installation requires terabytes of RAM, which can get quite expensive. We have built capabilities into Redis Enterprise that allow our customers to run Redis on SSDs with almost the same performance as RAM. This is great for reducing the customer's total cost of ownership. By providing this capability, we can cut the underlying infrastructure costs by up to 80%. For the past few years, we’ve been working with most vendors of advanced memory technologies such as NVMe and Intel’s 3D Xpoint. We will be one of the first database vendors to take advantage of these new memory technologies as they become more and more popular. Databases like Oracle, which were designed to write to disk, will have to undergo a major facelift in order to take advantage of these new memory technologies.

Another big advantage Redis Enterprise delivers is high availability. With Redis Enterprise, you can create multiple replicas in the same data center, across data centers, across regions, and across clouds.  You can also replicate between cloud and on-premise servers. Our single digit seconds failover mechanism guarantees service continuity.

Another differentiator is our active-active global distribution capability. If you would like to deploy an application in both the U.S. and Europe, for example, you will have application servers in a European data center and in a US data center. But what about the database? Would it be a single database for those two locations? While this helps avoid data inconsistency, it’s terrible when it comes to performance for at least one of these two data centers. If you have a separate database in each data center, performance may improve, but at the risk of consistency. Let’s assume that you and your wife share the same bank account, and that you are in the U.S. and she is traveling in Europe. What if both of you withdraw funds at an ATM at about the same time? If the app servers in the US and Europe are linked to the same database, there is no problem, but if the bank's app uses two databases (one in the US and one in Europe), how would they prevent an overdraft? Having a globally distributed database with full sync is a major challenge. If you try to do conflict resolution over the Internet between Europe and the U.S., database operation will slow down dramatically, which is a no-go for the instant experience end users demand. So, we developed a unique technology for Redis Enterprise based on the mathematically proven CRDT concept, developed in universities. Today, with Redis Enterprise, our customers can deploy a global database in multiple data centers around the world while assuring local latency and strong eventual consistency. Each one works as if it is fully independent, but behind the scenes we ensure they are all in sync.
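
To illustrate why CRDT-based replication can converge without cross-ocean conflict resolution, here is a toy, conceptual sketch of a grow-only counter CRDT in Python. Redis Enterprise's CRDT implementation is proprietary and far more general; everything below, including the replica names, is an assumption for illustration.

    # Conceptual sketch of a grow-only counter CRDT; not Redis Enterprise code.
    class GCounter:
        def __init__(self, replica_id):
            self.replica_id = replica_id
            self.counts = {}                      # per-replica increment totals

        def incr(self, n=1):
            self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

        def merge(self, other):
            # Element-wise max is commutative, associative and idempotent, so
            # replicas converge regardless of the order in which they sync.
            for rid, c in other.counts.items():
                self.counts[rid] = max(self.counts.get(rid, 0), c)

        def value(self):
            return sum(self.counts.values())

    us, eu = GCounter("us-east"), GCounter("eu-west")
    us.incr(3)       # writes land locally in each region at local latency
    eu.incr(2)
    us.merge(eu)     # background sync; both sides converge
    eu.merge(us)
    assert us.value() == eu.value() == 5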

Jim Carroll: What is the ultimate ambition of this company?

Ofer Bengal: We have the opportunity to build a very big software company. I’m not a kid anymore and I do not live on fantasies. Look at the database market – it’s huge! It is projected to grow to $50–$60 billion (depending on which analyst firm you ask) in sales in 2020. It is the largest segment in the software business, twice the size of the security/cyber market. The crack in the database market that opened up with NoSQL will represent 10% of this market in the near term. However, the line between SQL and NoSQL is blurring, as companies such as Oracle add NoSQL capabilities and NoSQL vendors add SQL capabilities. I think that over time, it will become a single large market. Redis Labs provides a true multi-model database. We support key-value with multiple data structures, graph, search, JSON (document-based), all with built-in functionality, not just APIs. We constantly increase the use case coverage of our database, and that is ultimately the name of the game in this business. Couple all that with Redis' blazing-fast performance, the massive adoption of open source Redis and the fact that it is the "most loved database" (according to StackOverflow), and you would agree that we have a once-in-a-lifetime opportunity!





Ciena posts strong quarter as revenue rises to $818.8m

Ciena reported revenue of $818.8 million for its fiscal third quarter 2018,  as compared to $728.7 million for the fiscal third quarter 2017.

Ciena's GAAP net income for the fiscal third quarter 2018 was $50.8 million, or $0.34 per diluted common share, which compares to a GAAP net income of $60.0 million, or $0.39 per diluted common share, for the fiscal third quarter 2017.

Ciena's adjusted (non-GAAP) net income for the fiscal third quarter 2018 was $74.3 million, or $0.48 per diluted common share, which compares to an adjusted (non-GAAP) net income of $56.4 million, or $0.35 per diluted common share, for the fiscal third quarter 2017.

"The combination of continued execution against our strategy and robust, broad-based customer demand resulted in outstanding fiscal third quarter performance," said Gary B. Smith, president and CEO of Ciena. "With our diversification, global scale and innovation leadership, we remain confident in our business model and our ability to achieve our three-year financial targets.”

Some highlights:

  • U.S. customers contributed 57.3% of total revenue
  • Three customers accounted for greater than 10% of revenue and represented 33% of total revenue
  • 37% of revenue came from non-telco customers; in Q3, three of the top ten revenue accounts were webscale customers, including one that exceeded 10% of total quarterly sales – a first for Ciena.
  • Secured wins with tier one global service providers – many of whom are new to Ciena – including Deutsche Telekom in support of its international wholesale business entity. The project includes a Europe-wide network deployment leveraging our WaveLogic technology.
  • APAC sales were up nearly 50%, with India once again contributing greater than 10% of global revenue. India grew 100% year-over-year, and Japan doubled in the same period. Australia also remained a strong contributor to quarterly results.
  • The subsea segment was up 23% year-over-year, largely driven by webscale company demand. Ciena noted several new and significant wins in Q3, including four new logos, and Ciena was selected as the preferred vendor for two large consortia cables.
  • The Networking Platforms business was up more than 14% year-over-year.
  • Adjusted gross margin was 43.4%
  • Headcount totaled 5,889
https://investor.ciena.com/events-and-presentations/default.aspx





ZTE counts its losses for 1H18, renews focus on 5G

ZTE Corporation reported revenue of RMB 39.434 billion for the first six months of 2018, down 27% from RMB 54.010 billion for the same period in 2017.

For the six months, ZTE's net profit attributable to holders of ordinary shares of the listed company amounted to RMB -7.824 billion, representing a year-on-year decline of 441.24%. Basic earnings per share amounted to RMB -1.87, which mainly reflected the company’s payment of the US$1 billion penalty to the U.S. government.

ZTE's operating revenue from the domestic market amounted to RMB25.746 billion, accounting for 65.29% of the Group’s overall operating revenue, while international sales amounted to RMB13.688 billion, accounting for 34.71% of the total.

ZTE's operating revenue for carriers’ networks, government and corporate business and consumer business amounted to RMB23.507 billion, RMB4.433 billion and RMB11.494 billion, respectively.

Management's commentary included the following:  "Looking to the second half of 2018, the Group will welcome new opportunities for development, given rapid growth in the volume of data flow over the network and the official announcement of the complete fully-functional 5G standards of the first stage. Specifically, such opportunities will be represented by: the acceleration of 5G commercialisation with the actual implementation of trial 5G deployment backed by ongoing upgrades of network infrastructure facilities; robust demand for smart terminals; as well as an onrush of new technologies and models with AI, IOT and smart home, among others, providing new growth niches. "

"In the second half of 2018, the Group will step up with technological innovation and enhance cooperation with customers and partners in the industry with an ongoing focus on high-worth customers and core products. In the meantime time, we will improve our internal management by enhancing human resources, compliance and internal control to ensure our Group’s prudent and sustainable development."



QSFP-DD MSA Group completes mechanical plugfest

The Quad Small Form Factor Pluggable Double Density (QSFP-DD) Multi Source Agreement (MSA) group completed a mechanical plugfest to validate the compatibility and interoperation between members' designs.

The MSA said the event confirmed that the maturity of design experience resulted in a highly successful outcome. A key value proposition of the QSFP-DD form factor is its backward compatibility with the widely adopted QSFP28.

The areas of focus for this event included testing the electrical, latching and mechanical designs all of which address the industry need for a high-density, high-speed networking solution.

In total, 15 companies participated in the private plugfest, which was hosted by Cisco at its headquarters in San Jose, California.

http://www.qsfp-dd.com/


Nutanix says software sales growing at 49% annual clip

Nutanix reported revenue of $303.7 million for its fourth quarter ended July 31, 2018, up from $252.5 million a year earlier, reflecting the elimination of approximately $95 million in pass-through hardware revenue in the quarter as the company continues to execute its shift toward increasing software revenue.

Software and support revenue amounted to $267.9 million in the quarter, growing 49% year-over-year from $179.6 million in the fourth quarter of fiscal 2017.

GAAP net loss was $87.4 million, compared to a GAAP net loss of $66.1 million in the fourth quarter of fiscal 2017. Non-GAAP net loss was $19.0 million, compared to a non-GAAP net loss of $26.0 million in the fourth quarter of fiscal 2017.

“We ended the year on a high note with a record quarter on many fronts, positioning us extremely well for the future. We will continue to invest in talent and hybrid cloud technology while incubating strategic multi-cloud investments such as Netsil, Beam, and now Frame,” said Dheeraj Pandey, Chairman, Founder and CEO of Nutanix. “Frame increases our addressable market, brings another service to our growing platform, and adds employees with insurgent mindsets who will help us continue to challenge the status quo.”

“The company’s strong achievement of 78 percent non-GAAP gross margin, the best in our history, is the direct result of our successful execution toward a software-defined business model,” said Duston Williams, CFO of Nutanix. “We’re also tracking above our target performance we set using the ‘Rule of 40’ framework, demonstrating our ability to balance growth and cash flow.”


Dell'Oro: Sales of 25 Gbps NICs take off

Sales of 25 Gbps controller and adapter ports are forecast to grow at a 45 percent compound annual growth rate over the next five years, according to a new report from Dell'Oro Group, as 25 Gbps advances to become the mainstream speed in cloud and enterprise servers.

“25 Gbps has seen a strong initial ramp-up and is now expected to be the dominant speed over the next five years. We have seen Amazon and Facebook as early adopters of 25 Gbps technology, but more end users are transitioning as product availability increases," said Baron Fung, Senior Business Analysis Manager at Dell'Oro Group. "There's been a steady wave of 10 Gbps to 25 Gbps migration as other cloud service providers and high-end enterprises renew and upgrade their servers. Shipment of 25 Gbps ports is expected to peak in 2021, when 50 and 100 Gbps products based on 56 Gbps serial lanes start to ramp-up," said Fung.
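
For context on what that growth rate implies, a quick worked example follows; the 45 percent figure is from the report summary above, while treating it as uniform over five years is a simplifying assumption.

    # Worked example: cumulative effect of a 45% CAGR over five years.
    cagr = 0.45
    years = 5
    multiple = (1 + cagr) ** years
    print(f"{multiple:.1f}x port shipments after {years} years")   # ~6.4x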

Additional highlights from the Server 5-Year Forecast Report:
  • The total controller and adapter market is forecasted to grow at a four percent compound annual rate, with 25 Gbps sales driving most of the growth.
  • Smart NICs could offer adapter vendors an opportunity to introduce innovative new products at higher price points, which could lower the total cost of ownership in the data center.

Dell'Oro: Server landscape shifts toward white box cloud servers

The server market is on track to surge $10 billion higher in 2018 before growth rates taper, according to a new report from Dell'Oro Group. The vendor landscape is trending toward lower-cost white box cloud servers.

“Although we forecast a five-year compounded annual growth rate of only two percent, the growth of the server market in 2018 will be at an unprecedented level,” said Baron Fung, Senior Business Analysis Manager at Dell’Oro Group. “However, the cloud segment, which consists of a high proportion of lower-cost custom designed servers, will continue to gain unit share over the Enterprise, putting long-term revenue growth under pressure.  Furthermore, the vendor landscape will continue to shift from OEM to white box Servers as the market is shifting towards the cloud,” added Fung.

Additional highlights from the Server 5-Year Forecast Report:


  • The 2018 growth is primarily attributed to rising average selling prices, resulting from vendors passing on higher commodity prices and end-users purchasing higher-end server configurations.
  • We estimate half of all servers shipping this year go to the cloud, and foresee this share growing to two-thirds by 2022.

A10 announces preliminary revenue of $60.7 million

A10 Networks announced preliminary revenue of $60.7 million for the quarter ended June 30, 2018, up 12% year-over-year. GAAP net loss was $4.5 million, or $0.06 per share, and non-GAAP net income was $1.6 million, or $0.02 per share.

“We have made steady progress across our key initiatives including strengthening our team, increasing our pace of innovation, and targeting our R&D investments in cloud, security and 5G. While our first quarter was impacted by our sales transformation, we were pleased to see improved momentum in the second quarter,” said Lee Chen, president and chief executive officer of A10 Networks. “There are a number of trends in the market that play to A10's strengths that we believe present many opportunities for growth over the long-term. We are focused as a management team and believe we are on the right path to continue to improve our execution and drive growth.”