Wednesday, May 30, 2018

ExteNet Systems to acquire Hudson Fiber Network

ExteNet Systems, a private developer, owner and operator of distributed networks across the United States, agreed to acquire Hudson Fiber Network (HFN). Financial terms were not disclosed.

Hudson Fiber Network (HFN) is a data transport provider with a significant metro fiber network in the greater New York City area; it also operates a national wide-area network with key international points of presence.


"We are pleased to announce our intention to acquire Hudson Fiber Network to accelerate growth of ExteNet’s Optical Network Solutions business,” said Ross Manire, President and CEO of ExteNet Systems. “We have served the northeast region, including New York City, for many years with our fiber, small cell and indoor network solutions. We plan to leverage the core competencies of both companies to offer our customers an expanded portfolio of carrier and enterprise solution offerings and rapidly expand into other major markets by leveraging ExteNet’s extensive fiber plant.”

OFS expands fiber portfolio

OFS has expanded its AccuTube+ Rollable Ribbon Cable family to include cables with 432, 576 and 864 fiber counts featuring rollable ribbon technology in a ribbon-in-loose-tube cable design.

This expanded product line of 100% gel-free cables will offer both single jacket/all-dielectric and light armor constructions.

OFS said rollable ribbon fiber optic cables can help users achieve significant time and cost savings using mass fusion splicing while also doubling their fiber density in a given duct size compared to traditional flat ribbon cable designs.

Each OFS rollable ribbon features 12 individual optical fibers that are partially bonded to each other at predetermined points. These ribbons can be "rolled" into a flexible and compact bundle that offers the added benefit of improved fiber routing and handling in closure preparation.

The AccuTube+ Rollable Ribbon Cable product line also features cables with 1728 fibers in both single jacket and light armor designs and 3456 fibers in a single jacket construction. All of these cables meet or exceed the requirements of Telcordia GR-20 issue 4.

http://www.ofsoptics.com

Masergy expands global bandwidth-on-demand to SD-WAN

Masergy announced the extension of its Intelligent Service Control with Global Bandwidth on Demand for Managed SD-WAN.

The Global Bandwidth on Demand feature is built into Masergy’s Intelligent Service Control (ISC) customer portal enabling customers to instantly ramp up or reduce Managed SD-WAN bandwidth by location. Enterprise IT managers typically use this feature to accommodate data back-up, multi-site video conferences, disaster recovery measures or other business requirements that use atypical bandwidth at high speeds. As with the private network, Masergy Global Bandwidth on Demand for public links can also be calendarized, so users can pre-select times throughout the week to increase bandwidth and ensure uptime for scheduled analytics projects or data backups. The customer is billed incrementally only for the specific spike of bandwidth usage.
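As a rough sketch of what "calendarized" bandwidth on demand could look like in practice, the model below represents pre-selected weekly boost windows with incremental billing for only the scheduled boost. This is purely illustrative; Masergy's actual portal API and data model are not described in the announcement.

```python
# Hypothetical sketch of "calendarized" bandwidth-on-demand: pre-selected
# weekly windows during which a site's bandwidth is raised above baseline,
# with incremental billing only for the boost actually scheduled.
# (Illustrative only; not Masergy's actual API or data model.)
from dataclasses import dataclass

@dataclass
class BandwidthWindow:
    day: str          # e.g. "Sat"
    start_hour: int   # 0-23
    end_hour: int     # exclusive
    boost_mbps: int   # added on top of the baseline

BASELINE_MBPS = 100

# Weekly schedule for one site: weekend backups and a Monday video bridge.
schedule = [
    BandwidthWindow("Sat", 1, 5, 400),   # data backup window
    BandwidthWindow("Mon", 9, 11, 200),  # multi-site video conference
]

def bandwidth_at(day: str, hour: int) -> int:
    """Effective bandwidth for a given day/hour under the schedule."""
    for w in schedule:
        if w.day == day and w.start_hour <= hour < w.end_hour:
            return BASELINE_MBPS + w.boost_mbps
    return BASELINE_MBPS

def weekly_boost_cost(rate_per_mbps_hour: float) -> float:
    """Incremental charge: only scheduled boost Mbps-hours are billed."""
    return sum(
        w.boost_mbps * (w.end_hour - w.start_hour) for w in schedule
    ) * rate_per_mbps_hour

print(bandwidth_at("Sat", 2))    # 500
print(bandwidth_at("Tue", 2))    # 100
print(weekly_boost_cost(1.0))    # 2000.0
```

The key billing property the article describes is captured in `weekly_boost_cost`: the customer pays the baseline as usual and is charged incrementally only for the scheduled spikes.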

“The one certainty today in enterprise information technology is rapid change,” said Chris MacFarland, CEO, Masergy. “As the complexity of the enterprise application environment increases, IT professionals are turning to software-defined hybrid networks to deliver superior user application experiences. This enhancement gives IT professionals complete control of their global hybrid networks, regardless of the access methodology, by extending our patented service control capabilities to our fully integrated Managed SD-WAN solution.”

“Enterprises are increasingly turning to service providers who deliver the flexibility of hybrid WAN architectures that leverage both public internet and private MPLS links," said Mike Sapien, VP and Chief Analyst at Ovum. “Masergy designs global network solutions based on their customer's users, application needs and each location's risk tolerance. With its latest innovation, the Masergy Global Bandwidth on Demand solution provides the ability to not only increase public network bandwidth dynamically in real time or at predetermined times, but also provides customers reliable business continuity if either private or public networks fail.”

Tuesday, May 29, 2018

AT&T NetBond brings direct connect to Google Cloud Platform

AT&T and Google Cloud announced two areas of collaboration.

First, business customers will be able to use AT&T NetBond for Cloud to connect in a highly secure manner to Google Cloud Platform. Google's Partner Interconnect offers organizations private connectivity to GCP and allows data centers geographically distant from a Google Cloud region or point of presence to connect at up to 10 Gbps. Google has joined more than 20 leading cloud providers in the NetBond® for Cloud ecosystem, which gives access to more than 130 different cloud solutions.

Second, G Suite, which is Google's cloud-based productivity suite for business including Gmail, Docs and Drive, is now available through AT&T Collaborate, a hosted voice and collaboration solution for businesses.

"We're committed to helping businesses transform through our edge-to-edge capabilities. This collaboration with Google Cloud gives businesses access to a full suite of productivity tools and a highly secure, private network connection to the Google Cloud Platform," said Roman Pacewicz, chief product officer, AT&T Business. "Together, Google Cloud and AT&T are helping businesses streamline productivity and connectivity in a simple, efficient way."

"AT&T provides organizations globally with secure, smart solutions, and our work to bring Google Cloud's portfolio of products, services and tools to every layer of its customers' business helps serve this mission," said Paul Ferrand, President Global Customer Operations, Google Cloud. "Our alliance allows businesses to seamlessly communicate and collaborate from virtually anywhere and connect their networks to our highly-scalable and reliable infrastructure."

Semtech announces PAM4 clock and data recovery platform

Semtech announced a PAM4 clock and data recovery (CDR) platform optimized for low power and low-cost PAM4 optical interconnects used in data center and active optical cable (AOC) applications.

Semtech's Tri-Edge is a new CDR platform technology being developed for the PAM4 communication protocol. It builds on the success of Semtech’s ClearEdge NRZ-based CDR platform technology and extends it to PAM4 signaling.

The company says its Tri-Edge CDR platform will be applicable for 100G, 200G and 400G requirements.

“The rapidly growing demand for bandwidth in the data center market requires a disruptive solution to meet the power, density and cost requirements. By combining leading-edge technologies with a focused application, we can enable a disruptive solution that we believe will meet the needs of the data centers in both the near-term and long-term,” said Imran Sherazi, Vice President of Marketing and Applications for Semtech’s Signal Integrity Products Group.

Semtech notes that its ClearEdge CDRs are the world’s most widely selected optical transceiver CDRs for use in 10G and 100G data center applications.


Oclaro and Acacia collaborate on 100/200G CFP2-DCO

Acacia Communications and Oclaro are collaborating on a multi-vendor environment of fully interoperable CFP2-DCO modules based on Acacia’s Meru DSP.

Specifically, Oclaro plans to launch a new CFP2-DCO module that will feature plug-and-play compatibility with the Acacia CFP2-DCO, providing customers with two proven coherent optics suppliers for the 100/200G CFP2-DCO form factor. 

CFP2-DCOs integrate the coherent DSP into the pluggable module. The digital host interface enables simpler integration between module and system resulting in faster service activation and a pay-as-you-grow deployment model for telecommunication providers whereby the cost of additional ports can be deferred until additional services are needed.

The CFP2-DCO pluggable form factor, which is being introduced by multiple network equipment manufacturers (NEMs) in switch, router, and transport platforms, supports four times higher density than current-generation 100G CFP-DCO solutions: the CFP2 module occupies half the faceplate space of a CFP while doubling the data rate.

The companies said their CFP2-DCO pluggable coherent modules support transmission speeds of 100G and 200G for use in access, metro and data center interconnect markets.  In addition to proprietary operating modes, both companies intend to support the requirements of the Open ROADM MSA for interoperability at 100G.

“Network operators and our system partners have been excited about the ramp of our CFP2-DCO module,” said Benny Mikkelsen, Chief Technology Officer of Acacia Communications.  “By partnering with Oclaro to ensure interoperability with their Meru-based CFP2-DCO module, we believe we will be better positioned to address the DCO market as industry trends shift favorably toward the CFP2 form factor.  We are excited about our relationship with Oclaro and believe that broader adoption of 200G CFP2-DCO modules will be mutually beneficial to our two companies and the customers we serve.”

“Our 43Gbaud Coherent Transmitter Receiver Optical Sub-Assembly (TROSA) is at the heart of our CFP2-DCO. The TROSA leverages proven Indium Phosphide PIC technology from Oclaro’s highly successful CFP2-ACO to achieve industry-leading optical performance in a small form factor,” said Beck Mason, President of the Integrated Photonics Business at Oclaro. “By establishing a fully interoperable solution with Acacia, our customers will have two sources of supply for these critical components, enabling them to efficiently upgrade their networks to higher speeds.”

NYU develops AR learning tool using Verizon's 5G testbed

NYU’s Future Reality Lab is using Verizon’s pre-commercial 5G technology at Alley, a co-working space and site of Verizon’s 5G incubator in New York City, to develop ChalkTalk, an open source AR learning tool that renders multimedia objects in 3D.

The idea is to use AR on mobile devices to create more effective learning tools that are able to update and respond in real time as the instructor makes his or her point.

“We’ve been able to test and experiment with the 5G technology,” said NYU's Dr. Ken Perlin. “We’re looking at simple use cases now, but will be looking at more involved, more interesting applications as time goes on.”

http://www.verizon.com/about/news/chalktalk--using-5g-and-ar-enhance-learning-experience

Samsung hails the rapid pace in 5G standardization

Two years after hosting the Third Generation Partnership Project (3GPP) meeting in Busan, Korea that kicked off the 5G standardization process, Samsung Electronics this month hosted another 3GPP meeting to wrap up the first phase of the effort.

Based on this latest meeting in Busan, the 3GPP is expected to make the final announcement of 5G phase-1 standards at a general meeting to be held in the U.S. in June. The 5G standardization process that started in April 2016 will end next month after a 27-month journey, significantly faster than the LTE standards development process.

In a blog posting, Samsung recounts its contributions to the hectic 5G development process.

“Samsung Electronics has been working on ultra-high frequency three years faster than other companies,” said Younsun Kim, Principal Engineer of Standards Research Team at Samsung Research and Vice Chairman of RAN1 working group in 3GPP. “When the world started to discuss the setting of standards, Samsung had already developed the related technologies. We had strong aspirations to bring the standardization for 5G commercialization faster than any other company in the world.”

Some notes from Samsung:

  1. Within 3GPP, Samsung has been in charge of four positions, including Chair of the Service & System TSG and Chair of the RAN4 working group, which oversees the frequency and performance aspects key to 5G; in 2018 it added one more Chair position, for the SA6 working group on mission-critical applications.
  2. Samsung has registered 1,254 patents with ETSI as essential to 5G. 

https://news.samsung.com/global/pioneer-in-5g-standards-part-2-a-hectic-27-month-journey-to-achieve-standardization

Supermicro unveils 2 PetaFLOP SuperServer based on New NVIDIA HGX-2

Super Micro Computer is using the new NVIDIA HGX-2 cloud server platform to develop a 2 PetaFLOP "SuperServer" aimed at artificial intelligence (AI) and high-performance computing (HPC) applications.

"To help address the rapidly expanding size of AI models that sometimes require weeks to train, Supermicro is developing cloud servers based on the HGX-2 platform that will deliver more than double the performance," said Charles Liang, president and CEO of Supermicro. "The HGX-2 system will enable efficient training of complex models. It combines 16 Tesla V100 32GB SXM3 GPUs connected via NVLink and NVSwitch to work as a unified 2 PetaFlop accelerator with half a terabyte of aggregate memory to deliver unmatched compute power."

The design packs over 80,000 CUDA cores (16 Tesla V100 GPUs × 5,120 cores each = 81,920).

Mellanox intros Hyper-scalable Enterprise Framework

Mellanox Technologies introduced its Hyper-scalable Enterprise Framework for private cloud and enterprise data centers.

The five key elements of the ‘Mellanox Hyper-scalable Enterprise Framework’ are:
  • High Performance Networks – Mellanox’s end-to-end suite of 25G, 50G, and 100G adapters, cables, and switches is proven within hyperscale data centers, which have adopted these solutions for the simple reason that an intelligent, high-performance network delivers total infrastructure efficiency
  • Open Networking – an open and fully disaggregated networking platform is key to scalability and flexibility as well as achieving operational efficiency
  • Converged Networks on an Ethernet Storage Fabric – a fully converged network supporting compute, communications, and storage on a single integrated fabric
  • Software Defined Everything and Virtual Network Acceleration – enables enterprises to enjoy the benefits realized by hyperscalers that have embraced software-defined networking, storage, and virtualization – or software-defined everything (SDX)
  • Cloud Software Integration – networking solutions that are fully integrated with the most popular cloud platforms such as OpenStack, vSphere, and Azure Stack and support for advanced software-defined storage solutions such as Ceph, Gluster, Storage Spaces Direct, and VSAN

“With the advent of open platforms and open networking it is now possible for even modestly sized organizations to build data centers like the hyperscalers do,” said Kevin Deierling, vice president of marketing at Mellanox Technologies. “We are confident and excited to release the Mellanox Hyper-scalable Enterprise Framework to the industry – and to provide an open, intelligent, high performance, accelerated and fully converged network to enable enterprise and private cloud architects to build a world-class data center.”

Samsung hits mass production of 10nm-Class 32GB DDR4

Samsung Electronics Co. started mass producing the industry’s first 10nm-class 32-gigabyte (GB) double data rate 4 (DDR4) memory module.

The small outline dual in-line memory modules (SoDIMMs) are used in gaming laptops.

Samsung said that compared to its 16GB SoDIMM based on 20nm-class 8-gigabit (Gb) DDR4, which was introduced in 2014, the new 32GB module doubles the capacity while being 11 percent faster and approximately 39 percent more energy efficient. A 64GB laptop configured with two 32GB DDR4 modules consumes less than 4.6 watts (W) in active mode and less than 1.4W when idle.

Salesforce is now on a $12 billion per year run rate

Salesforce reported first quarter revenue of $3.01 billion, an increase of 25% year-over-year, and 22% in constant currency. Subscription and support revenues were $2.81 billion, an increase of 27% year-over-year. Professional services and other revenues were $196 million, an increase of 4% year-over-year. First quarter GAAP diluted earnings per share was $0.46, and non-GAAP diluted earnings per share was $0.74. The company also reported unearned revenue (deferred revenue) of $6.20 billion, an increase of 25% year-over-year, and 23% in constant currency.

"Salesforce delivered more than $3 billion in revenue in the first quarter, surpassing a $12 billion annual revenue run rate," said Marc Benioff, chairman and CEO, Salesforce. "Our relentless focus on customer success is yielding incredible results, including delivering nearly two billion AI predictions per day with Einstein."

KKR to acquire BMC for its enterprise software

KKR, a leading global investment firm, agreed to acquire BMC for an undisclosed sum. BMC is currently owned by a private investor group led by Bain Capital Private Equity and Golden Gate Capital together with GIC, Insight Venture Partners and Elliott Management.

Founded in 1980, BMC is a leading systems software provider that helps enterprise organizations manage and optimize information technology across cloud, hybrid, on-premise, and mainframe environments. The company claims more than 10,000 customers worldwide, including 92% of the Forbes® Global 100.

"With the support and partnership of our Investor Group, BMC significantly accelerated its innovation of new technologies and new go-to-market capabilities over the past five years," said Peter Leav, President and Chief Executive Officer of BMC. "Our growth outlook remains strong as BMC is competitively advantaged to continue to invest and win in the marketplace. Our customers can expect the BMC team to remain focused on providing innovative solutions and services with our expanding ecosystem of partners to help them succeed across changing enterprise environments. We are excited to embark on our next chapter with KKR as our partner."

"In an ever-changing IT environment that is only becoming more complex, companies that help simplify and manage this essential infrastructure for their enterprise customers play an increasingly important role," said Herald Chen, KKR Member and Head of the firm's Technology, Media & Telecom (TMT) industry team, and John Park, KKR Member. "With more than 10,000 customers and 6,000 employees, BMC is a global leader in managing digital and IT infrastructure with a broad portfolio of software solutions.  We are thrilled to partner with the talented BMC team to accelerate growth—including via M&A—building on BMC's deep technology expertise and long-standing customer relationships."

Toshiba debuts portable SSDs based on 64-layer 3D Flash

Toshiba Memory America introduced its XS700 Series of portable solid state drives (SSDs) offering capacity of up to 240GB.

The new drives use Toshiba's in-house 3D flash memory, 64-layer BiCS FLASH technology. The XS700 includes USB 3.1 Gen 2 support, and features the latest USB Type-C™ connector.


Monday, May 28, 2018

Start-up profile: Rancher Labs, building container orchestration on Kubernetes

Rancher Labs is a start-up based in Cupertino, California that offers a container management platform that has racked up over four million downloads. The company recently released a major update for its container management system. Recently, I sat down with company co-founders Sheng Liang (CEO) and Shannon Williams (VP of Sales) to talk about Kubernetes, the open source container orchestration system that was originally developed by Google. Kubernetes was initially released in 2014, about the time that Rancher Labs was getting underway.

Jim Carroll, OND: So where does Kubernetes stand today?

Sheng Liang, Rancher Labs: Kubernetes has come a long way. When we started three years ago, Kubernetes was also just getting started. It had a lot of promise, but people were talking about orchestration wars and stuff. Kubernetes had not yet won but, more importantly, it wasn't really useful. In the early days, we couldn't even bring ourselves to say that we were going to focus exclusively on Kubernetes. It was not that we did not believe in Kubernetes, but it just didn't work for a lot of users. Kubernetes was almost seen as an end unto itself. Even standing up Kubernetes was such a challenge back then that just getting it to run became an end goal. A lot of people in those days were experimenting with it, and the goal was simply to prove, hey, you've got a Kubernetes cluster. Success was getting a few simple apps running. And it's come a long way in three years.


A lot of things have changed. First, Kubernetes is now really established as the de facto container orchestration platform. We used to support Mesosphere, we used to support Swarm, and we used to build our own container orchestration platform, which we called Cattle. We stopped doing all of that to focus entirely on Kubernetes. Luckily, the way we developed Cattle was closely modeled on Kubernetes, sort of an easy-to-use version of Kubernetes, so we were able to bring a lot of our experience to run on top of Kubernetes. And now it turns out that we don't have to support all of those other frameworks. Kubernetes has settled that. It is now a common tool that everyone can use.

JC: The Big Three cloud companies are now fully behind Kubernetes, right?

Sheng Liang: Right. I think that for the longest time a lot of vendors were looking for opportunities to install and run Kubernetes. That kept us alive for a while. Some of the early Kubernetes deals that we closed were about installing Kubernetes.  These projects then turned to operation contracts because people thought they were going to need to help with upgrading or just maintaining the health of the cluster. This got blown out of the water last year when all of the big cloud providers started to offer Kubernetes as a service.

If you are on the cloud already, there is really no reason to stand up your own Kubernetes cluster.

Well, we're really not quite there yet. Even though Amazon announced EKS in November, it is not even GA yet; it is still in closed beta status. But later this year Kubernetes as a service should become a commercial reality. And there are other benefits too.

I'm not sure about Amazon, but both Google and Microsoft have decided not to charge for the management plane, so whatever resources you use to run the database and the control plane nodes, you don't really pay for. I suspect they have a very efficient way of running it on some shared infrastructure, which allows them to amortize that cost into what they charge for the worker nodes.

The way people set up Kubernetes clusters in the early days was actually very wasteful. You would use three nodes for etcd, two nodes for the control plane, and then when setting it up people would throw in two more nodes for workers. So they were using five nodes to manage two nodes, while paying for seven.

With cloud services, you don't have to do that. I think this makes Kubernetes table stakes. It is not just limited to the cloud.  I think it's really wherever you can get infrastructure. Enterprise customers, for instance, are still getting infrastructure from VMware. Or they get it from Nutanix.

All of the cloud companies have announced, or will shortly announce, support for Kubernetes out of the box. Kubernetes then will equate to infrastructure, just like virtual machines or virtual SANs.
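Sheng Liang's node-count arithmetic can be made concrete with a quick sketch, using the illustrative numbers from the interview:

```python
# The self-managed HA setup described in the interview: three etcd nodes,
# two control-plane nodes, and two workers -- five nodes managing two,
# while paying for seven. A managed service absorbs the first five.
ETCD_NODES = 3
CONTROL_PLANE_NODES = 2
WORKER_NODES = 2

management_nodes = ETCD_NODES + CONTROL_PLANE_NODES
self_managed_billed = management_nodes + WORKER_NODES  # all 7 nodes paid for
managed_service_billed = WORKER_NODES                  # control plane not charged

print(self_managed_billed)               # 7
print(management_nodes / WORKER_NODES)   # 2.5 management nodes per worker
```

This is why a hosted control plane changes the economics for small clusters: the billed footprint shrinks from seven nodes to the two that run actual workloads.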

JC: So, how is Kubernetes actually being used now? Is it a one-way bridge or a two-way bridge for moving workloads? Are people actually moving workloads on a consistent basis, or is it basically a one-time move to a new server or cloud?

Shannon Williams, Rancher Labs: Portability is actually less important than other features. It may be the sexy part of Kubernetes to say that you can move clusters of containers. The reality is that Kubernetes is just a really good way to run containers reliably.

The vast majority of people who are running containers are not using Kubernetes for the purpose of moving containers between clouds. The vast majority of people running Kubernetes are doing so because it is more reliable than running containers directly on VMs. It is easier to use Kubernetes from an operational perspective. It is easier from a development perspective. It is easier from a testing perspective. So if you think of the value prop that Kubernetes represents, it comes down to faster development cycles and better operations. The portability is kind of the cherry on top of the sundae.

It is interesting that people are excited about the portability enabled by Kubernetes, and I think it will become really important over the long term, but it is just as important that I can run it on my laptop as that I can run it on one Kubernetes cluster versus another.

Sheng Liang: I think that is a very important point. The vast majority of the accounts we are familiar with run Kubernetes in just one place. That really tells you something about the power of Kubernetes. The fact that people are using it in just one place really tells you that portability is not the primary motivator. The primary benefit is that Kubernetes is really a rock-solid way to run containers.

JC: What is the reason that Kubernetes is not being used so much for portability today? Is the use case weak for container transport? I would guess that a lot of companies would want to move jobs up to the cloud and back again.

Sheng Liang:  I just don't think that portability is the No.1 requirement for companies using containers today. Procurement teams are excited about this capability but operations people just don't need it right now.

Shannon Williams: From the procurement side, knowing that your containers could be moved to another cloud gives you the assurance that you won't be locked in.

But portability in itself is a complex problem. Even Kubernetes does not solve all the issues of porting an application from one system to another. For instance, I may be running Kubernetes on AWS but I may also be running an Amazon Relational Database (RDS) service as well.  Kubernetes is not going to magically support both of these in migrating to another cloud. There is going to be work required. I think we are still a ways away from ubiquitous computing but we are heading into a world where Kubernetes is how you run containers and containers are going to be the way that all microservices and next-gen applications are built. It may even be how I run my legacy applications. So, having Kubernetes everywhere means that the engineers can quickly understand all of these different infrastructure platforms without having to go through a heavy learning curve. With Kubernetes they will have already learned how to run containers reliably wherever it happens to be running.

JC: So how are people using Kubernetes? Where are the big use cases?

Shannon Williams: I think with Kubernetes we are seeing the same adoption pattern as with Amazon. The initial consumers of Kubernetes were people who were building early containerized applications, predominantly microservices, cloud-native Web apps, mobile apps, gaming, etc. One of the first good use cases was Pokemon Go. It needed massively-scalable systems and ran on Google Cloud. It needed to have systems that could handle rapid upgrades and changes. The adoption of Kubernetes moved from there to more traditional Web applications, to the more traditional applications.

Every business is trying to adopt an innovative stance with their IT department.  We have a bunch of insurance companies as customers. We have media companies as customers. We have many government agencies as customers, such as the USDA -- they run containers to be able to deliver websites. They have lots of constituencies that they need to build durable web services for.  These have to run consistently. Kubernetes and containers give them a lot of HA (high availability).

A year or so ago we were in Phase 0 with this movement. Now I would say we are entering Phase 1 with many new use cases. Any organization that is forward-looking in their IT strategy is probably adopting containers and Kubernetes. This is the best architecture for building applications.

JC: Is there a physical limit to how far you can scale with Kubernetes?

Shannon Williams:  It is pretty darn big. You're talking about spanning maybe 5,000 servers.

Sheng Liang: I don't think there is a theoretical limit to how big you can go, but in practice the database will eventually bottleneck. That might be the limiting factor.

I think some deployments have hit 5,000 nodes, and each node these days could actually be a one-terabyte machine. So that is actually a lot of resources. I think it could be made bigger, but so far that seems to be enough.

Shannon Williams: The pressure to hit that maximum size of 5,000 nodes or more in a cluster really is not applicable to the vast majority of the market.

Sheng Liang: And you could always manage multiple clusters with load balancing. It is probably not a good practice anyway to put everything in one superbig cluster.

Generally, we are not seeing people create huge clusters across multiple data centers or multiple regions.

Shannon Williams: In fact, I would say that we are seeing the trend move in the opposite direction, which is that the number of clusters in an organization is increasing faster than the size of any one cluster. What we see is that any application that is running probably has at least two clusters available -- one for testing and one for production. There are often many divisions inside a company that push this requirement forward. For instance, a large media company has more than 150 Kubernetes clusters -- all deployed by different employees in different regions and often running different versions of their software. They even have multiple cloud providers. I think we are heading in that direction, rather than one massive Kubernetes cluster to rule them all.

Sheng Liang: This is not what some of the web companies initially envisioned for Kubernetes. When Google originally developed Kubernetes, they were used to the model where you have a very big pool of resources with bare metal servers. Their challenge was how to schedule all the workloads inside of that pool. When enterprises started adopting Kubernetes, one thing that immediately changed was that they really don't have the operational maturity to put all their eggs in one basket and make that really resilient. Second, all of them were using some form of virtualization, either VMware or a cloud, so essentially the cost of making small clusters came down. There is not a lot of overhead. You can have a lot of clusters without having to dedicate whole servers to these clusters.

JC: Is there an opportunity then for the infrastructure provider, or the cloud provider, to add their own special sauce on top of Kubernetes?

Sheng Liang: The cloud guys are all starting to do that. Over time, I think they will do more. It is still early today. Amazon, for instance, has not yet commercially launched the service to the public, and Digital Ocean just announced it. But Google has been offering Kubernetes as a service for three years, and Microsoft has been doing it for probably over a year. Google's Kubernetes service, which is probably the most advanced, now includes more management dashboards and UIs, but nothing really fancy yet.

What I would expect them to do -- and this would be really great from my perspective -- is to bring their entire service suite, including their databases, AI and ML capabilities, and make them available inside of Kubernetes.

Shannon Williams: Yeah, they will want to integrate their entire cloud ecosystems. That's one of the appealing things about cloud providers offering Kubernetes -- there will be some level of standardization but they will have the opportunity to differentiate for local requirements and flavors.

That kind of leads to the challenge we are addressing.

There are three big things that most organizations face. (1) You want to be able to run Kubernetes on-prem. Some teams may run it on VMware, some may wish to run it on bare metal. They would like to be able to run it on-prem in a way that is reliable, consistent and supported. For IT groups, there is a growing requirement to offer Kubernetes as a service in the same way they offer VMs. To do so, they must standardize Kubernetes. (2) There is a desire to manage all of these clusters in a way that complies with your organization's policies. There will be questions like "how do I manage multiple clusters in a centralized way even if some are on-prem and some are in the cloud?" This is a distro-level problem for Kubernetes. (3) There is a compliance and security concern: how do I configure Kubernetes to enforce all of my access control policies, security policies, monitoring policies, and so on? Those are the challenges that we are taking on with Rancher 2.0.
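The centralized multi-cluster question above can be pictured with contexts, the same mechanism Kubernetes itself uses to let one client address many clusters through a single kubeconfig. A minimal Python sketch of the idea (the cluster names and server URLs are hypothetical illustrations, not real endpoints):

```python
# Sketch: how multiple Kubernetes clusters (on-prem and cloud) appear as
# named contexts in a single kubeconfig-style structure, so one tool can
# manage them all. Cluster names and server URLs are hypothetical.

KUBECONFIG = {
    "clusters": {
        "onprem-vmware": {"server": "https://10.0.0.10:6443"},
        "gke-us-east": {"server": "https://35.0.0.1"},
    },
    "contexts": {
        "onprem": {"cluster": "onprem-vmware", "user": "ops-team"},
        "cloud": {"cluster": "gke-us-east", "user": "ops-team"},
    },
    "current-context": "onprem",
}

def server_for(config, context_name=None):
    """Resolve the API server URL for a context (default: current-context)."""
    name = context_name or config["current-context"]
    cluster = config["contexts"][name]["cluster"]
    return config["clusters"][cluster]["server"]

print(server_for(KUBECONFIG))           # on-prem API server
print(server_for(KUBECONFIG, "cloud"))  # cloud API server
```

A management layer like the one described here works against many such contexts at once, applying one set of policies regardless of where each cluster runs.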

Jim Carroll, OND: Where does Rancher Labs fit in?

Shannon Williams, Rancher Labs: The challenge we are taking on is how to manage multiple Kubernetes clusters, including how to manage users and policies across multiple clusters in an organization.

Kubernetes is now available as a supported, enterprise-grade service for anybody in your company. At this scale, Kubernetes really becomes appealing to organizations as a standardization approach, not just so that workloads can easily move between places but so that workloads can be deployed to lots of places.  For instance, I might want some workloads to run on Alibaba Cloud for a project we are doing in China, or I might want to run some workloads on T-Systems's cloud for a project in Germany, where I have to comply with the new data privacy laws. I can now do those things with Kubernetes without having to understand the specific cloud parameters, benefits or limitations of any specific cloud. Kubernetes normalizes this experience. Rancher Labs makes it happen in a consistent way. That is a large part of what we are working on at Rancher Labs -- consistent distribution and consistent management of any cluster. We will manage the lifecycle of Amazon Kubernetes or Google Kubernetes, our Kubernetes, or new Kubernetes coming out of a dev lab.

JC: So the goal is to have the Rancher Labs experience running both on-prem and in the public cloud?

Shannon Williams, Rancher Labs: Exactly. So think about it like this. We have a distro of Kubernetes and we can use it to implement Kubernetes for you on bare metal, or on VMware, or in the cloud, if you prefer, so you can build exactly the version of Kubernetes that suits you. That is the first piece of value -- we'll give you Kubernetes wherever you need it. The second piece is that we will manage all of the Kubernetes clusters for you, including those you requested from Amazon or Google. You have the option of consuming from the cloud as you wish or staying on-prem. There is one other piece that we are working on. It is one thing to provide this normalized service. The additional layer is about engaging users.

What you are seeing with Kubernetes is similar to the cloud. Early adopters move in quickly and have no hesitancy in consuming it -- but they represent maybe 1% or 2% of the users. The challenge for the IT department is to make this the preferred way to deliver resources. At this point, you want to encourage adoption, and that means developing a positive experience.

JC: Is your goal to have all app developers aware of the Kubernetes layer? Or is Kubernetes management really the responsibility of the IT managers who thus far are also responsible for running the network, running the storage, running the firewalls..?

Shannon Williams, Rancher Labs: Great question, because Kubernetes is actually part of the infrastructure, but it is also part of the application resiliency layer. It deals with how an application handles a physical infrastructure failure, for example. Do I spin up another container? Do I wait to let a user decide what to do? How do I connect these parts of an application and how do I manage the secrets that are deployed around it? How do I perform system monitoring and alerting of application status? Kubernetes is blurring the line.
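The resiliency questions above ("do I spin up another container?") are what Kubernetes answers with its reconciliation pattern: controllers continuously compare desired state against observed state and act on the difference. A toy Python sketch of that control loop, not the real controller code:

```python
# Toy reconciliation loop: the control pattern behind Kubernetes controllers.
# Desired state says how many replicas should be running; observed state is
# what is actually running; the loop emits the actions needed to close the
# gap. Names are illustrative, not real Kubernetes API objects.

def reconcile(desired_replicas, running_pods):
    """Return the actions needed to move the observed state toward the
    desired replica count."""
    actions = []
    shortfall = desired_replicas - len(running_pods)
    if shortfall > 0:
        actions.extend(["start-replica"] * shortfall)    # a pod died: replace it
    elif shortfall < 0:
        actions.extend(["stop-replica"] * (-shortfall))  # scaled down: trim
    return actions

# A node failure takes two of three replicas down; the loop restores them.
print(reconcile(3, ["pod-a"]))  # ['start-replica', 'start-replica']
```

The point of the "blurring" Williams describes is that this logic used to live in application or operations code; with Kubernetes it is infrastructure that application architects still have to reason about.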

Sheng Liang, Rancher Labs: It is not really something the coders will be interested in. The interest in Kubernetes starts with DevOps and stops just before you get to storage and networking infrastructure management.

Shannon Williams, Rancher Labs: Kubernetes is becoming of interest to system architects -- the people who are designing how an application is going to be delivered. They are very aware that the app is going to be containerized and running in the cloud. The cloud-native architecture is pulling in developers. So I think it is a little more blurred than whether or not coders get to this level.

Sheng Liang, Rancher Labs: For instance, the Netflix guys used to talk a lot about how they developed applications. Most developers don't spend a lot of time worrying about how their applications are running; they spend most of their time worrying about the outcome. But they are highly aware of the architecture. Kubernetes is widely regarded as the best way to develop such applications. Scalable, resilient, secure -- those are the qualities driving the acceptance of Kubernetes.

Shannon Williams, Rancher Labs:  I would add one more to the list -- quick to improve. There is a continuous pace of improvement with Kubernetes. I saw a great quote about containerization from a CIO, who said "I don't care about Docker or any other containers or Kubernetes. All I care about is continuous delivery. I care that we can improve our application continuously and it so happens that containers give us the best way to do that." The point is -- get more applications to your users in a safe, secure, and scalable process.

The Cloud-Native Computing Foundation (CNCF) aims to build next-generation systems that are more reliable, more secure and more scalable, and Kubernetes is a big part of this effort. That's why I've said the value of workload portability is often exaggerated.

Jim Carroll, OND:  Tell me about the Rancher Labs value proposition.

Shannon Williams, Rancher Labs: Our value proposition is centered on the idea that Kubernetes will become the common platform for cloud-native architecture. It is going to be really important for organizations to deliver that as a service reliably. It is going to be really important for them to understand how to secure it and how to enforce company policies. Mostly, it will enable people to run their applications in a standardized way. That's our focus.

As an open source software company that means we build the tooling that thousands of companies are going to use to adopt Kubernetes. Rancher has 10,000 organizations using our platform today with our version 1.0 product. I expect our version 2.0 product to be even more popular because it is built around this exploding market for Kubernetes.

JC:  What is the customer profile? When does it make sense to go from Kubernetes to Kubernetes plus Rancher?

Shannon Williams, Rancher Labs: Anywhere that Kubernetes and containers are being adopted, really. Our customers talk about the D-K-R stack: Docker-Kubernetes-Rancher.

JC: Is there a particular threshold or requirement that drives the need for Rancher?

Shannon Williams, Rancher Labs: Rancher is often something that users discover early in their exploration of Docker or Kubernetes. Once they have a cluster deployed, they start to wonder how they are going to manage it on an ongoing basis. This often occurs right at the beginning of a container deployment program - day 1, day 2 or day 3.

Like any other open source software company, we let users download our software for free. The point when a Rancher user becomes a Rancher customer usually comes when the deployment has moved to a mission-critical level. When their business actually runs on the Kubernetes cluster, that's when we are asked to step in and provide support. We end up establishing a business relationship to support them with everything we build.

JC: And how does the business model work in a world of open source, container management? 

Shannon Williams, Rancher Labs: Customers purchase support subscriptions on an annual basis.

JC: Are you charging based on the number of clusters or nodes? 

Shannon Williams, Rancher Labs: Yes, based on the number of clusters and hosts. A team that is running its critical business systems on Kubernetes gets a lot of benefit in knowing that everything from the lowest level up -- the container runtime, the Kubernetes engine, the management platform, logging, monitoring -- is covered by our unified support.

JC: Does support mean that you actually run the clusters on behalf of the clients? 

Shannon Williams, Rancher Labs: Well, no, they're running it on their systems or in the cloud. Like other open source software developers, we can provide incident response for issues like "why is this running differently in Amazon than on-prem?" We also provide training for their teams and collaboration on the technology evolution.

JC: What about the company itself? What are the big milestones for Rancher Labs?

Shannon Williams, Rancher Labs: We're growing really fast and now have about 85 employees around the world. We have offices in Australia, Japan, the UK and elsewhere, and are expanding. We have about 170 customer accounts worldwide, over 10,000 organizations using the product and over 4 million downloads to date. The big goals are rolling out version 2.0, which is now in commercial release, and driving adoption of Kubernetes across the board. We're hoping to get lots of feedback as version 2.0 gets rolled out. So much of the opportunity now concerns the workload management layer. How do we make it easier for customers to deploy containerized applications? How can we smooth the rollout of containerized databases in a Kubernetes world? How do we solve the storage portability challenge? There are enormous opportunities to innovate in these areas. It is really exciting.

JC: What is needed to scale your company to the next level?

Shannon Williams, Rancher Labs: Right now we are in a good spot. We benefit from the magic of open source. We were able to grow this fast on just our Series B funding round because thousands of people downloaded our software and loved it. This has given us inroads with companies that are often the biggest in their industries. Lots of the Fortune 500 are now using Rancher to run critical business functions for their teams. We get to work with the most innovative parts of most organizations.

Sheng Liang, Rancher Labs: There is a lot of excitement. We just have to make sure that we keep our quality high and that we make our customers successful. I feel the market is still in its early days. There is a lot more work to make Kubernetes really the next big thing.

Shannon Williams, Rancher Labs: We're still a tiny minority inside of IT. It will be a ten-year journey but the pieces are coming together.


Telefónica to bundle Netflix in Europe and Latin America

Telefónica has agreed to offer Netflix service in Europe and Latin America.

Market launch in Latin American countries is expected in the next few weeks. The launch in Spain is planned for the end of this year.

“This agreement is a big step forward in Telefónica’s bet on open innovation and collaboration with leading companies around the world”, said José María Álvarez-Pallete, Executive Chairman of Telefónica. “We want to offer our customers the most compelling video offering possible, whether it’s our own content or third party providers. The partnership with Netflix will significantly enhance our existing multichannel video platforms.”

“Over the next several years, our partnership with Telefónica will benefit millions of consumers who will be able to easily access their favorite Netflix shows, documentaries, stand-ups, kids content and movies across a range of Telefonica platforms", said Reed Hastings, Netflix co-founder and Chief Executive Officer. "Making Netflix available on Telefónica’s familiar, easy-to-use TV and video platforms enables consumers to watch all the content they love in one place.”

Netflix is based in Los Gatos, California.

Telefónica to resell AWS cloud services

Telefónica Business Solutions has agreed to sell Amazon Web Services in its cloud offering portfolio.

Telefónica will train and certify specialists in AWS services and best practices. AWS has agreed to dedicate resources to supporting Telefónica and its customers.

Hugo de los Santos, Director Global B2B Products and Services at Telefónica commented, “Our customers are asking for advice and support in their Cloud adoption processes. AWS, with its depth and breadth of services as well as global presence, is a piece that fits perfectly in our Cloud portfolio. Telefónica’s cloud offering thus empowers our customers to run their infrastructure, applications and workloads on the most suitable Cloud service possible.”

Trump announces deal to lift export ban on ZTE

President Trump announced a deal to save ZTE by lifting the current export ban on U.S. products to the company. In exchange, ZTE is to pay a $1.3 billion fine, make changes to its management, and hire U.S. compliance officers.

As of Monday, there has not been an official statement or posted order by the U.S. Department of Commerce lifting the ban.

The deal continues to face opposition in Congress, including from Senator Marco Rubio, a Republican from Florida, who is threatening legislative action to block the deal.

Trade negotiations between the U.S. and China are expected to resume in early June.



Sunday, May 27, 2018

Rostelecom tests 5G at State Hermitage Museum in St. Petersburg

Rostelecom has deployed a 5G trial zone on the premises of the State Hermitage Museum in St. Petersburg, Russia – one of the largest art museums in the world. Ericsson supplied a full range of 5G technical solutions and expertise during the implementation and integration phases.

Several use cases were demonstrated, including the use of robotic equipment for art restoration projects and a remote learning application that transmits 4K video streams to VR glasses. The demonstrations were carried out on Rostelecom’s 5G test network deployed in the 3500 MHz frequency band.

Mikhail Piotrovski, Director of the Hermitage Museum: “The Hermitage is a leader in our field. New technologies match our style and spirit and we enjoy using modern achievements and experimenting with them. This is why we are the first museum in the world to launch a 5G demo zone. It’s also critical to test technology in the cultural sector, understand the human impact, and make sure it fits the unique needs of museums like ours.”
