
Thursday, June 14, 2018

Azure Kubernetes Service enters general availability

Microsoft announced that its Azure Kubernetes Service (AKS) is now generally available in ten regions across three continents. Microsoft expects to add ten more regions in the coming months.

The new Kubernetes service features an Azure-hosted control plane, automated upgrades, self-healing, easy scaling, and a simple user experience for both developers and cluster operators. Users are able to control access to their Kubernetes cluster with Azure Active Directory accounts and user groups. A key attribute of AKS is operational visibility into the managed Kubernetes environment. Control plane telemetry, log aggregation, and container health are monitored via the Azure portal.
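
For readers who want to poke at AKS programmatically, below is a minimal sketch of enumerating managed clusters with the Azure SDK for Python. The subscription ID is a placeholder, and the exact SDK surface may vary by package version:

```python
# Sketch: list AKS clusters in a subscription with the Azure SDK for Python.
# Assumes the azure-identity and azure-mgmt-containerservice packages and a
# logged-in Azure credential; SUBSCRIPTION_ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"

cs_client = ContainerServiceClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
for cluster in cs_client.managed_clusters.list():
    # Each entry is a managed AKS cluster; the control plane is Azure-hosted,
    # so only the agent (worker) nodes show up among your own resources.
    print(cluster.name, cluster.location, cluster.kubernetes_version)
```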

Microsoft also announced five new regions: Australia East, UK South, West US, West US 2, and North Europe.

MapR adds Amazon Elastic Container Service for Kubernetes

The MapR Data Platform now supports Amazon Elastic Container Service for Kubernetes (Amazon EKS), making it easier for organizations to adopt and manage their data seamlessly on-premises and on AWS.

MapR previously announced persistent storage for containers to enable the deployment of stateful containerized applications.

Amazon EKS automatically manages the availability, scalability, and scheduling of containers. With MapR, organizations can scale compute independently of storage, without having to worry about oversubscription. MapR also secures containers against data-access vulnerabilities through wire-level encryption and a full end-to-end set of access, authorization, and authentication features.

"Data agility is essential for next-gen analytics and advanced applications,” said Jack Norris, senior vice president, data and applications at MapR. “The robustness of MapR combined with the agility of Amazon EKS enables enterprises to quickly build a flexible and secure production environment for large scale AI and machine learning."

Monday, June 11, 2018

A10 brings container-native load balancing and analytics for Kubernetes

A10 Networks is introducing an automated way to integrate enterprise-grade load balancing with application visibility and analytics.

The new A10 Ingress Controller for Kubernetes, which integrates with A10’s container-native load balancing and application delivery solution, can automatically provision application delivery configuration and policies. It ties directly into the container lifecycle to automatically update application delivery configuration with the dynamism of a Kubernetes environment. As application services scale up and down, the A10 load balancer is dynamically updated. A10's "Lightning" containerized load balancer also scales up and down automatically with the scale of a Kubernetes cluster.
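
For context, an ingress controller acts on standard Kubernetes Ingress resources like the one sketched below with the official Kubernetes Python client. The hostname and service names are hypothetical, the A10-specific provisioning would typically be driven by vendor annotations not shown here, and this uses the current networking.k8s.io/v1 API rather than the extensions/v1beta1 API of the era:

```python
# Sketch: declare a Kubernetes Ingress with the official Python client.
# An ingress controller (such as A10's) watches these objects and provisions
# load-balancing configuration from them. All names here are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # uses your current kubeconfig context

ingress = client.V1Ingress(
    api_version="networking.k8s.io/v1",
    kind="Ingress",
    metadata=client.V1ObjectMeta(name="web-ingress"),
    spec=client.V1IngressSpec(rules=[
        client.V1IngressRule(
            host="app.example.com",
            http=client.V1HTTPIngressRuleValue(paths=[
                client.V1HTTPIngressPath(
                    path="/", path_type="Prefix",
                    backend=client.V1IngressBackend(
                        service=client.V1IngressServiceBackend(
                            name="web",
                            port=client.V1ServiceBackendPort(number=80))))]))]))

client.NetworkingV1Api().create_namespaced_ingress("default", ingress)
```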

The A10 Ingress Controller can run anywhere Kubernetes is deployed, including public clouds such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), and private clouds running on VMware or bare-metal infrastructure.

A10 said its solution provides comprehensive application analytics by collecting hundreds of application metrics, thus enabling operations teams to troubleshoot faster, manage capacity planning and also detect performance and security anomalies. The analytics data is available via dashboards on the A10 Harmony portal or via APIs.

“As application teams adopt container and microservice architectures, Kubernetes has become the de facto standard for container orchestration,” said Kamal Anand, Vice President of Cloud, A10 Networks. “A10’s Kubernetes solution provides enterprise application teams with container-native, enterprise-grade application delivery for their mission-critical applications. With bundled monitoring, traffic analytics and application security, it reduces their operational burden and allows them to focus on core application value.”

“The transition to software containers, micro-segmented application architectures, and DevOps practices is underway, making it imperative that ADCs can be easily included in these applications and orchestrated along with containers by container management software such as Kubernetes,” said Cliff Grossner, Ph.D., senior research director and advisor of cloud and data center research practice for IHS Markit, a global business information provider. “For 2017 we estimated revenue from commercial license of container software at $350 million, with revenue over $1.2 billion forecast for 2022, signaling a strong need for application delivery ecosystems to support containers. A10’s focus on integrating its ADC software with Kubernetes container management software answers an important market requirement.”

“IDC finds that enterprises are increasingly adopting cloud-native containers and microservices. A challenge for those enterprises, though, is ensuring that the right application-delivery infrastructure is deployed to facilitate the agility, elasticity, flexibility, security, and scale that production environments require. At the edge of a Kubernetes cluster, the ingress controller provides important functionality – applying rules to Layer 7 routing to allow inbound connections to reach cluster services – and its integration with enterprise-grade application-delivery infrastructure, such as A10’s containerized load balancer and controller, makes considerable sense,” said Brad Casemore, Research VP, Datacenter Networks, IDC.



Tuesday, June 5, 2018

AWS debuts Elastic Container Service for Kubernetes

Amazon Web Services (AWS) announced the general availability of Amazon Elastic Container Service for Kubernetes (Amazon EKS), a fully managed service for deploying, managing, and scaling containerized applications using Kubernetes on AWS.

AWS said its customers are running hundreds of millions of containers every week, many using Amazon Elastic Container Service (Amazon ECS), which is a container orchestration service that supports Docker containers and is integrated with many familiar AWS features like AWS Identity and Access Management (IAM), security groups, and Elastic Load Balancing.

Amazon EKS takes on cluster operations and administration tasks, ensuring that deployments are properly provisioned, secure, highly available, backed up, and updated. Amazon EKS is certified Kubernetes-conformant.
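
A minimal sketch of inspecting EKS clusters with boto3 (the AWS SDK for Python) follows; the region is a placeholder and cluster names come from your own account:

```python
# Sketch: inspect Amazon EKS clusters with boto3 (AWS SDK for Python).
# Assumes AWS credentials are configured; the region is a placeholder.
import boto3

eks = boto3.client("eks", region_name="us-west-2")

# EKS manages the Kubernetes control plane; these calls only read metadata.
for name in eks.list_clusters()["clusters"]:
    cluster = eks.describe_cluster(name=name)["cluster"]
    print(name, cluster["status"], cluster["version"], cluster["endpoint"])
```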

“More customers run containers on AWS and Kubernetes on AWS than anywhere else,” said Deepak Singh, Director of AWS Compute Services. “Prior to Amazon EKS, customers either had to do considerable work to architect a highly fault-tolerant way to run Kubernetes, or just accept a lack of resiliency. With the launch of Amazon EKS, customers no longer have to live with either of those trade-offs, and they get a highly available, fault-tolerant, managed Kubernetes service. It’s no wonder so many of our customers are excited.”

https://aws.amazon.com/eks


Monday, May 28, 2018

Start-up profile: Rancher Labs, building container orchestration on Kubernetes

Rancher Labs is a start-up based in Cupertino, California that offers a container management platform that has racked up over four million downloads. The company recently released a major update for its container management system. Recently, I sat down with company co-founders Sheng Liang (CEO) and Shannon Williams (VP of Sales) to talk about Kubernetes, the open source container orchestration system that was originally developed by Google. Kubernetes was initially released in 2014, about the time that Rancher Labs was getting underway.

Jim Carroll, OND: So where does Kubernetes stand today?

Sheng Liang, Rancher Labs: Kubernetes has come a long way. When we started three years ago, Kubernetes was also just getting started. It had a lot of promise, but people were talking about orchestration wars and stuff. Kubernetes had not yet won, but more importantly, it wasn't really useful. In the early days, we couldn't even bring ourselves to say that we were going to focus exclusively on Kubernetes. It was not that we did not believe in Kubernetes, but it just didn't work for a lot of users. Kubernetes was almost seen as an end unto itself. Even standing up Kubernetes was such a challenge back then that just getting it to run became an end goal. A lot of people in those days were experimenting with it, and the goal was simply to prove -- hey -- you've got a Kubernetes cluster. Success was getting a few simple apps running. And it's come a long way in three years.


A lot of things have changed. First, Kubernetes is now really established as the de facto container orchestration platform. We used to support Mesosphere, we used to support Swarm, and we used to build our own container orchestration platform, which we called Cattle. We stopped doing all of that to focus entirely on Kubernetes. Luckily, the way we developed Cattle was closely modeled on Kubernetes -- sort of an easy-to-use version of Kubernetes -- so we were able to bring a lot of our experience to run on top of Kubernetes. And now it turns out that we don't have to support all of those other frameworks. Kubernetes has settled that. It is now a common tool that everyone can use.

JC: The Big Three cloud companies are now fully behind Kubernetes, right?

Sheng Liang: Right. I think that for the longest time a lot of vendors were looking for opportunities to install and run Kubernetes. That kept us alive for a while. Some of the early Kubernetes deals that we closed were about installing Kubernetes. These projects then turned into operations contracts because people thought they were going to need help with upgrading or just maintaining the health of the cluster. This got blown out of the water last year when all of the big cloud providers started to offer Kubernetes as a service.

If you are on the cloud already, there is really no reason to stand up your own Kubernetes cluster.

Well, we're really not quite there yet. Even though Amazon announced EKS in November, it is not even GA yet. It is still in closed beta status, but later this year Kubernetes as a service should become a commercial reality. And there are other benefits too.

I'm not sure about Amazon, but both Google and Microsoft have decided not to charge for the management plane, so whatever resources are used to run the database and the control plane nodes, you don't really pay for. I suspect they have a very efficient way of running it on some shared infrastructure, which allows them to amortize that cost across what they charge for the worker nodes.

The way people set up Kubernetes clusters in the early days was actually very wasteful. You would use three nodes for etcd and two nodes for the control plane, and then when setting it up people would throw in two more nodes for workers. So they were using five nodes to manage two nodes, while paying for seven.
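
The overhead arithmetic is easy to make concrete; a back-of-the-envelope sketch:

```python
# Back-of-the-envelope arithmetic for the self-managed setup described above:
# three etcd nodes and two control-plane nodes supporting two worker nodes.
etcd_nodes = 3
control_plane_nodes = 2
worker_nodes = 2

total_paid = etcd_nodes + control_plane_nodes + worker_nodes   # 7 nodes billed
overhead = (etcd_nodes + control_plane_nodes) / total_paid

print(f"{overhead:.0%} of billed nodes are management overhead")  # 71%
```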

With cloud services, you don't have to do that. I think this makes Kubernetes table stakes. It is not just limited to the cloud.  I think it's really wherever you can get infrastructure. Enterprise customers, for instance, are still getting infrastructure from VMware. Or they get it from Nutanix.

All of the cloud companies have announced, or will shortly announce, support for Kubernetes out of the box. Kubernetes then will equate to infrastructure, just like virtual machines, or virtual SANs.

JC: So, how is Kubernetes actually being used now? Is it a one-way bridge or a two-way bridge for moving workloads? Are people actually moving workloads on a consistent basis, or is it basically a one-time move to a new server or cloud?

Shannon Williams, Rancher Labs: Portability is actually less important than other features. It may be the sexy part of Kubernetes to say that you can move clusters of containers. The reality is that Kubernetes is just a really good way to run containers reliably.

The vast majority of people who are running containers are not using Kubernetes for the purpose of moving containers between clouds. The vast majority of people running Kubernetes are doing so because it is more reliable than running containers directly on VMs. It is easier to use Kubernetes from an operational perspective. It is easier from a development perspective. It is easier from a testing perspective. So if you think of the value prop that Kubernetes represents, it comes down to faster development cycles and better operations. The portability is kind of the cherry on top of the sundae.

It is interesting that people are excited about the portability enabled by Kubernetes, and I think it will become really important over the long term, but it is just as important that I can run it on my laptop as that I can run it on one Kubernetes cluster versus another.

Sheng Liang: I think that is a very important point. The vast majority of the accounts we are familiar with run Kubernetes in just one place. That really tells you something about the power of Kubernetes. The fact that people are using it in just one place tells you that portability is not the primary motivator. The primary benefit is that Kubernetes is really a rock-solid way to run containers.

JC: What is the reason that Kubernetes is not being used so much for portability today? Is the use case weak for container transport? I would guess that a lot of companies would want to move jobs up to the cloud and back again.

Sheng Liang:  I just don't think that portability is the No.1 requirement for companies using containers today. Procurement teams are excited about this capability but operations people just don't need it right now.

Shannon Williams: From the procurement side, knowing that your containers could be moved to another cloud gives you the assurance that you won't be locked in.

But portability in itself is a complex problem. Even Kubernetes does not solve all the issues of porting an application from one system to another. For instance, I may be running Kubernetes on AWS but I may also be running an Amazon Relational Database (RDS) service as well.  Kubernetes is not going to magically support both of these in migrating to another cloud. There is going to be work required. I think we are still a ways away from ubiquitous computing but we are heading into a world where Kubernetes is how you run containers and containers are going to be the way that all microservices and next-gen applications are built. It may even be how I run my legacy applications. So, having Kubernetes everywhere means that the engineers can quickly understand all of these different infrastructure platforms without having to go through a heavy learning curve. With Kubernetes they will have already learned how to run containers reliably wherever it happens to be running.

JC: So how are people using Kubernetes? Where are the big use cases?

Shannon Williams: I think with Kubernetes we are seeing the same adoption pattern as with Amazon. The initial consumers of Kubernetes were people who were building early containerized applications, predominantly microservices, cloud-native Web apps, mobile apps, gaming, etc. One of the first good use cases was Pokémon GO. It needed massively scalable systems and ran on Google Cloud. It needed to have systems that could handle rapid upgrades and changes. The adoption of Kubernetes moved from there to more traditional Web applications, and on to the more traditional applications.

Every business is trying to adopt an innovative stance with their IT department.  We have a bunch of insurance companies as customers. We have media companies as customers. We have many government agencies as customers, such as the USDA -- they run containers to be able to deliver websites. They have lots of constituencies that they need to build durable web services for.  These have to run consistently. Kubernetes and containers give them a lot of HA (high availability).

A year or so ago we were in Phase 0 with this movement. Now I would say we are entering Phase 1 with many new use cases. Any organization that is forward-looking in their IT strategy is probably adopting containers and Kubernetes. This is the best architecture for building applications.

JC: Is there a physical limit to how far you can scale with Kubernetes?

Shannon Williams:  It is pretty darn big. You're talking about spanning maybe 5,000 servers.

Sheng Liang: I don't think there is a theoretical limit to how big you can go, but in practice the database eventually becomes a bottleneck. That might be the limiting factor.

I think some deployments have hit 5,000 nodes, and each node these days could actually be a one-terabyte machine. So that is actually a lot of resources. I think it could be made bigger, but so far that seems to be enough.
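
For operators curious where their own cluster sits relative to that ceiling, a quick sketch with the official Kubernetes Python client:

```python
# Sketch: count nodes and report per-node allocatable capacity in the current
# cluster, to see how far it sits from the ~5,000-node ceiling discussed above.
from kubernetes import client, config

config.load_kube_config()
nodes = client.CoreV1Api().list_node().items

print(f"{len(nodes)} nodes (practical ceiling discussed above: ~5,000)")
for node in nodes:
    alloc = node.status.allocatable  # e.g. {'cpu': '8', 'memory': '32Gi', ...}
    print(node.metadata.name, alloc["cpu"], "CPUs,", alloc["memory"])
```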

Shannon Williams: The pressure to hit that maximum size of 5,000 nodes or more in a cluster really is not applicable to the vast majority of the market.

Sheng Liang: And you could always manage multiple clusters with load balancing. It is probably not good practice anyway to put everything in one super-big cluster.

Generally, we are not seeing people create huge clusters across multiple data centers or multiple regions.

Shannon Williams: In fact, I would say that we are seeing the trend move in the opposite direction, which is that the number of clusters in an organization is increasing faster than the size of any one cluster. What we see is that any application that is running probably has at least two clusters available -- one for testing and one for production. There are often many divisions inside a company that push this requirement forward. For instance, a large media company has more than 150 Kubernetes clusters -- all deployed by different employees in different regions and often running different versions of their software. They even have multiple cloud providers. I think we are heading in that direction, rather than one massive Kubernetes cluster to rule them all.
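
Managing that many clusters usually means juggling kubeconfig contexts, one per cluster. A minimal sketch of walking all of them with the official Python client; context names come from your own kubeconfig, none are assumed here:

```python
# Sketch: iterate over every kubeconfig context (one per cluster, e.g. test
# and prod) and report how many pods each cluster is running.
from kubernetes import client, config

contexts, _active = config.list_kube_config_contexts()
for ctx in contexts:
    name = ctx["name"]
    config.load_kube_config(context=name)  # point the client at this cluster
    pods = client.CoreV1Api().list_pod_for_all_namespaces()
    print(f"{name}: {len(pods.items)} pods")
```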

Sheng Liang: This is not what some of the web companies initially envisioned for Kubernetes. When Google originally developed Kubernetes, they were used to a model where you have a very big pool of resources on bare metal servers, and the challenge was how to schedule all the workloads inside of that pool. When enterprises started adopting Kubernetes, one thing that immediately changed was that they really don't have the operational maturity to put all their eggs in one basket and make that really resilient. Second, all of them were using some form of virtualization -- either VMware or a cloud -- so the cost of creating small clusters came down. There is not a lot of overhead. You can have a lot of clusters without having to dedicate whole servers to these clusters.

JC: Is there an opportunity then for the infrastructure provider, or the cloud provider, to add their own special sauce on top of Kubernetes?

Sheng Liang: The cloud guys are all starting to do that. Over time, I think they will do more. It is still early today. Amazon, for instance, has not yet commercially launched the service to the public, and DigitalOcean just announced it. But Google has been offering Kubernetes as a service for three years, and Microsoft has been doing it for probably over a year. Google's Kubernetes service, which is probably the most advanced, now includes more management dashboards and UIs, but nothing really fancy yet.

What I would expect them to do -- and this would be really great from my perspective -- is to bring their entire service suite, including their databases, AI and ML capabilities, and make them available inside of Kubernetes.

Shannon Williams: Yeah, they will want to integrate their entire cloud ecosystems. That's one of the appealing things about cloud providers offering Kubernetes -- there will be some level of standardization but they will have the opportunity to differentiate for local requirements and flavors.

That kind of leads to the challenge we are addressing.

There are three big things that most organizations face. (1) You want to be able to run Kubernetes on-prem. Some teams may run it on VMware, some may wish to run it on bare metal. They would like to be able to run it on-prem in a way that is reliable, consistent, and supported. For IT groups, there is a growing requirement to offer Kubernetes as a service in the same way they offer VMs. To do so, they must standardize Kubernetes. (2) There is another desire to manage all of these clusters in a way that complies with your organization's policies. There will be questions like "how do I manage multiple clusters in a centralized way even if some are on-prem and some are in the cloud?" This is a distro-level problem for Kubernetes. (3) Then there is a compliance and security concern: how do I configure Kubernetes to enforce all of my access control policies, security policies, monitoring policies, and so on? Those are the challenges that we are taking on with Rancher 2.0.
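
Point (3) usually comes down to Kubernetes RBAC objects. A minimal sketch of a namespaced read-only Role with the official Python client; the role and namespace names are hypothetical, and a RoleBinding (omitted for brevity) is still needed to grant it to users or groups:

```python
# Sketch: a read-only Role enforcing an access-control policy in one
# namespace, created with the official Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()

read_only = client.V1Role(
    api_version="rbac.authorization.k8s.io/v1",
    kind="Role",
    metadata=client.V1ObjectMeta(name="pod-reader", namespace="team-a"),
    rules=[client.V1PolicyRule(
        api_groups=[""],                    # "" = the core API group
        resources=["pods", "services"],
        verbs=["get", "list", "watch"])])   # read-only verbs

client.RbacAuthorizationV1Api().create_namespaced_role("team-a", read_only)
```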

Jim Carroll, OND: Where does Rancher Labs fit in?

Shannon Williams, Rancher Labs: The challenge we are taking on is how to manage multiple Kubernetes clusters, including how to manage users and policies across multiple clusters in an organization.

Kubernetes is now available as a supported, enterprise-grade service for anybody in your company. At this scale, Kubernetes really becomes appealing to organizations as a standardization approach, not just so that workloads can easily move between places but so that workloads can be deployed to lots of places.  For instance, I might want some workloads to run on Alibaba Cloud for a project we are doing in China, or I might want to run some workloads on T-Systems's cloud for a project in Germany, where I have to comply with the new data privacy laws. I can now do those things with Kubernetes without having to understand the specific cloud parameters, benefits or limitations of any specific cloud. Kubernetes normalizes this experience. Rancher Labs makes it happen in a consistent way. That is a large part of what we are working on at Rancher Labs -- consistent distribution and consistent management of any cluster. We will manage the lifecycle of Amazon Kubernetes or Google Kubernetes, our Kubernetes, or new Kubernetes coming out of a dev lab.

JC: So the goal is to have the Rancher Labs experience running both on-prem and in the public cloud?

Shannon Williams, Rancher Labs: Exactly. So think about it like this. We have a distro of Kubernetes and we can use it to implement Kubernetes for you on bare metal, or on VMware, or in the cloud, if you prefer, so you can build exactly the version of Kubernetes that suits you. That is the first piece of value -- we'll give you Kubernetes wherever you need it. The second piece is that we will manage all of the Kubernetes clusters for you, including where you requested Kubernetes from Amazon or Google. You have the option of consuming from the cloud as you wish or staying on-prem. There is one other piece that we are working on. It is one thing to provide this normalized service. The additional layer is about engaging users.

What you are seeing with Kubernetes is similar to the cloud. Early adopters move in quickly and have no hesitancy in consuming it -- but they represent maybe 1% or 2% of the users. The challenge for the IT department is to make this the preferred way to deliver resources. At this point, you want to encourage adoption, and that means developing a positive experience.

JC: Is your goal to have all app developers aware of the Kubernetes layer? Or is Kubernetes management really the responsibility of the IT managers who thus far are also responsible for running the network, running the storage, running the firewalls..?

Shannon Williams, Rancher Labs: Great question, because Kubernetes is actually part of the infrastructure, but it is also part of the application resiliency layer. It deals with how an application handles a physical infrastructure failure, for example. Do I spin up another container? Do I wait to let a user decide what to do? How do I connect these parts of an application and how do I manage the secrets that are deployed around it? How do I perform system monitoring and alerting of application status? Kubernetes is blurring the line.
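
To make that concrete: the resiliency decisions Williams describes are expressed declaratively in Kubernetes. A minimal sketch with the official Python client, where the replica count and liveness probe answer the "do I spin up another container?" question automatically; the image name and health endpoint are hypothetical:

```python
# Sketch: a Deployment encoding the resiliency decisions discussed above.
# Kubernetes keeps 3 replicas running and restarts any container whose
# liveness probe fails. The image name and health endpoint are hypothetical.
from kubernetes import client, config

config.load_kube_config()

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes replaces failed pods to hold this count
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[client.V1Container(
                name="web",
                image="example/web:1.0",
                liveness_probe=client.V1Probe(
                    http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
                    initial_delay_seconds=5,
                    period_seconds=10))]))))

client.AppsV1Api().create_namespaced_deployment("default", deployment)
```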

Sheng Liang, Rancher Labs: It is not really something the coders will be interested in. The interest in Kubernetes starts with DevOps and stops just before you get to storage and networking infrastructure management.

Shannon Williams, Rancher Labs: Kubernetes is becoming of interest to system architects -- the people who are designing how an application is going to be delivered. They are very aware that the app is going to be containerized and running in the cloud. The cloud-native architecture is pulling in developers. So I think it is a little more blurred than whether or not coders get to this level.

Sheng Liang, Rancher Labs: For instance, the Netflix guys used to talk a lot about how they developed applications. Most developers don't spend a lot of time worrying about how their applications are running; they have to spend most of their time worrying about the outcome. But they are highly aware of the architecture. Kubernetes is well regarded as the best way to develop such applications. Scalable, resilient, secure -- those are the qualities driving the acceptance of Kubernetes.

Shannon Williams, Rancher Labs:  I would add one more to the list -- quick to improve. There is a continuous pace of improvement with Kubernetes. I saw a great quote about containerization from a CIO, who said "I don't care about Docker or any other containers or Kubernetes. All I care about is continuous delivery. I care that we can improve our application continuously and it so happens that containers give us the best way to do that." The point is -- get more applications to your users in a safe, secure, and scalable process.

The Cloud-Native Computing Foundation (CNCF) aims to build next-generation systems that are more reliable, more secure, more scalable and Kubernetes is a big part of this effort.  That's why I've said the value of workload portability is often exaggerated.

Jim Carroll, OND:  Tell me about the Rancher Labs value proposition.

Shannon Williams, Rancher Labs: Our value proposition is centered on the idea that Kubernetes will become the common platform for cloud-native architecture. It is going to be really important for organizations to deliver that as a service reliably. It is going to be really important for them to understand how to secure it and how to enforce company policies. Mostly, it will enable people to run their applications in a standardized way. That's our focus.

As an open source software company that means we build the tooling that thousands of companies are going to use to adopt Kubernetes. Rancher has 10,000 organizations using our platform today with our version 1.0 product. I expect our version 2.0 product to be even more popular because it is built around this exploding market for Kubernetes.

JC:  What is the customer profile? When does it make sense to go from Kubernetes to Kubernetes plus Rancher?

Shannon Williams, Rancher Labs: Anywhere Kubernetes and containers are being adopted, really. Our customers talk about the D-K-R stack: Docker-Kubernetes-Rancher.

JC: Is there a particular threshold or requirement that drives the need for Rancher?

Shannon Williams, Rancher Labs: Rancher is often something that users discover early in their exploration of Docker or Kubernetes. Once they have a cluster deployed, they start to wonder how they are going to manage it on an ongoing basis. This often occurs right at the beginning of a container deployment program -- day 1, day 2 or day 3.

As with any other open source software company, users can download our software for free. The point when a Rancher user becomes a Rancher customer usually comes when the deployment has moved to a mission-critical level. When their business actually runs on the Kubernetes cluster, that's when we are asked to step in to provide support. We end up establishing a business relationship to support them with everything we build.

JC: And how does the business model work in a world of open source, container management? 

Shannon Williams, Rancher Labs: Customers purchase support subscriptions on an annual basis.

JC: Are you charging based on the number of clusters or nodes? 

Shannon Williams, Rancher Labs: Yes, based on the number of clusters and hosts. A team that is running their critical business systems on Kubernetes gets a lot of benefit in knowing that everything from the lowest level up -- including the container runtime, the Kubernetes engine, the management platform, logging, and monitoring -- is covered by our unified support.

JC: Does support mean that you actually run the clusters on behalf of the clients? 

Shannon Williams, Rancher Labs: Well, no, they're running it on their systems or in the cloud. Like other open source software developers, we can provide incident response for issues like "why is this running differently in Amazon than on-prem?" We also provide training for their teams and collaboration on the technology evolution.

JC: What about the company itself? What are the big milestones for Rancher Labs?

Shannon Williams, Rancher Labs: We're growing really fast and now have about 85 employees around the world. We have offices around the world, including in Australia, Japan, and the UK, and are expanding. We have about 170 customer accounts worldwide. We have over 10,000 organizations using the product and over 4 million downloads to date. The big goals are rolling out Version 2.0, which is now in commercial release, and driving adoption of Kubernetes across the board. We're hoping to get lots of feedback as version 2.0 gets rolled out. So much of the opportunity now concerns the workload management layer. How do we make it easier for customers to deploy containerized applications? How can we smooth the rollout of containerized databases in a Kubernetes world? How do we solve the storage portability challenge? There are enormous opportunities to innovate in these areas. It is really exciting.

JC: What is needed to scale your company to the next level?

Shannon Williams, Rancher Labs: Right now we are in a good spot. We benefit from the magic of open source. We were able to grow this fast just on our Series B funding round because thousands of people downloaded our software and loved it. This has given us inroads with companies that are often the biggest in their industries. Lots of the Fortune 500 are now using Rancher to run critical business functions for their teams. We get to work with the most innovative parts of most organizations.

Sheng Liang, Rancher Labs: There is a lot of excitement. We just have to make sure that we keep our quality high and that we make our customers successful. I feel the market is still in its early days. There is a lot more work to make Kubernetes really the next big thing.

Shannon Williams, Rancher Labs: We're still a tiny minority inside of IT. It will be a ten-year journey but the pieces are coming together.


Monday, May 21, 2018

Google Cloud releases Kubernetes Engine

The Google Kubernetes Engine 1.10 has now entered commercial release.

In parallel with the GA of Kubernetes Engine 1.10, Google Cloud is introducing new features to support enterprise use cases:

  • Shared Virtual Private Cloud (VPC) for better control of network resources
  • Regional Persistent Disks and Regional Clusters for higher availability and stronger SLAs
  • Node Auto-Repair GA and Custom Horizontal Pod Autoscaler for greater automation (autoscaler sketched below)
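
As referenced in the list above, here is a minimal sketch of a Horizontal Pod Autoscaler created with the official Kubernetes Python client. It uses the generic autoscaling/v1 API rather than anything GKE-specific, and the target Deployment name is hypothetical:

```python
# Sketch: a Horizontal Pod Autoscaler scaling a Deployment between 2 and 10
# replicas based on CPU utilization. Uses the generic autoscaling/v1 API,
# not a GKE-specific one; the target Deployment name "web" is hypothetical.
from kubernetes import client, config

config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    api_version="autoscaling/v1",
    kind="HorizontalPodAutoscaler",
    metadata=client.V1ObjectMeta(name="web-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70))  # scale out above 70% CPU

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    "default", hpa)
```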

Google also outlined several upcoming features for its Kubernetes Engine, including the ability for teams within large organizations to share physical resources while maintaining logical separation of resources between departments. Workloads can be deployed in Google’s global Virtual Private Cloud (VPC) in a Shared VPC model.

Google's Kubernetes Engine will also gain Regional Persistent Disk (Regional PD) support. This will ensure that network-attached block storage has synchronous replication of data between two zones in a region.

https://cloudplatform.googleblog.com/

Progress on OpenStack-Kubernetes integration efforts

OpenStack and Kubernetes are being used in tandem to develop a new generation of cloud-native platforms, according to SIG-Kubernetes, an OpenStack special interest group focused on cross-community efforts with Kubernetes.

Development efforts combining OpenStack and Kubernetes can be found at AT&T, CERN, SK Telecom, and Superfluidity, which is a European Research project (Horizon 2020) trying to build the basic infrastructure blocks for 5G networks by leveraging and extending well known open source projects.

In conjunction with OpenStack Summit in Vancouver, SIG-Kubernetes published a whitepaper highlighting progress on OpenStack-Kubernetes integration efforts, including:

  • The OpenStack Cloud Provider, an external cloud controller manager for running Kubernetes in an OpenStack cluster, has a permanent new home. Cloud Provider OpenStack gives Kubernetes direct access to OpenStack resources such as Nova compute instance information, Cinder block storage, and Neutron and Octavia load balancing.
  • The latest release of the CNCF dashboard features OpenStack as one of the target public clouds. This CI system runs nightly test jobs against CNCF projects. It uses a cross-cloud deployment tool to build a multi-node, highly available Kubernetes cluster. It runs Kubernetes end-to-end tests against the installation and also tests other cloud-native applications like Helm and Prometheus on the OpenStack-hosted Kubernetes test cluster.
  • Cinder now offers one integration point for over 80 different storage options through a single Cinder API with a choice of Flex or Container Storage Interface (CSI) drivers.
  • The community has documented how to integrate Keystone authentication and authorization with Kubernetes role-based access control (RBAC). This approach allows Kubernetes to use OpenStack Keystone as an identity server.
The whitepaper is here: https://www.openstack.org/containers/whitepaper



Sunday, May 20, 2018

Project Airship aims for fully containerized clouds - OpenStack on Kubernetes

AT&T is working with SKT, Intel and the OpenStack Foundation to launch Project Airship, a new open infrastructure project that will offer a unified, declarative, fully containerized, and cloud-native platform. The idea is to let cloud operators manage sites at every stage from creation through minor and major updates, including configuration changes and OpenStack upgrades.

AT&T said the project builds on the foundation laid by the OpenStack-Helm project launched in 2017. In a blog posting, Amy Wheelus, vice president of Cloud and Domain 2.0 Platform Integration, says the initial focus is "to introduce OpenStack on Kubernetes (OOK) and the lifecycle management of the resulting cloud, with the scale, speed, resiliency, flexibility, and operational predictability demanded of network clouds."

She states that AT&T will use Airship as the foundation of its network cloud running over its 5G core, which will support the launch of 5G services in 12 cities later this year.  Airship will also be used by Akraino Edge Stack, which is a new Linux Foundation project for creating an open source software stack supporting high-availability cloud services optimized for edge computing systems and applications.

"We are pleased to bring continued innovation with Airship, extending the work we started in 2016 with the OpenStack and Kubernetes communities to create a continuum for modern and open infrastructure. Airship will bring new network edge capabilities to these stacks and Intel is committed to working with this project and the many other upstream projects to continue our focus of upstream first development and accelerating the industry," stated Imad Sousou, corporate vice president and general manager of the Open Source Technology Center at Intel.

http://airshipit.org

AT&T seeds Akraino project for carrier-scale edge computing

The Linux Foundation will host a new Akraino project to create an open source software stack supporting high-availability cloud services optimized for edge computing systems and applications.

To seed the project, AT&T is contributing code designed for carrier-scale edge computing applications running in virtual machines and containers.

“This project will bring the extensive work AT&T has already done to create low-latency, carrier-grade technology for the edge that address latency and reliability needs,” said Jim Zemlin, Executive Director of The Linux Foundation. “Akraino complements LF Networking projects like ONAP in automating services from edge to core. We’re pleased to welcome it to The Linux Foundation and invite the participation of others as we work together to form Akraino and establish its governance.”

“Akraino, coupled with ONAP and OpenStack, will help to accelerate progress towards development of next-generation, network-based edge services, fueling a new ecosystem of applications for 5G and IoT,” said Mazin Gilbert, Vice President of Advanced Technology at AT&T Labs.

Tuesday, January 30, 2018

Red Hat to acquire CoreOS for Kubernetes platform

Red Hat agreed to acquire CoreOS, a developer of Kubernetes and container-native solutions, for $250 million.

CoreOS, which was founded in 2013 and is based in San Francisco, offers a commercial Kubernetes platform that lets customers build "Google-style" infrastructure, where workloads and applications placed in containers can be moved rapidly across clouds. CoreOS Tectonic is an enterprise-ready Kubernetes platform that provides automated operations, enables portability across private and public cloud providers, and is based on open source software. The company also offers CoreOS Quay, an enterprise-ready container registry. CoreOS is also well known for being a leading contributor to Kubernetes; for Container Linux, a lightweight Linux distribution created and maintained by CoreOS that automates software updates and is streamlined for running containers; for etcd, the distributed data store for Kubernetes; and for rkt, an application container engine, donated to the Cloud Native Computing Foundation (CNCF), that helped drive the current Open Container Initiative (OCI) standard.

Red Hat said the deal furthers its vision of enabling customers to build any application and deploy it in any environment with the flexibility afforded by open source.

“The next era of technology is being driven by container-based applications that span multi- and hybrid cloud environments, including physical, virtual, private cloud and public cloud platforms. Kubernetes, containers and Linux are at the heart of this transformation, and, like Red Hat, CoreOS has been a leader in both the upstream open source communities that are fueling these innovations and its work to bring enterprise-grade Kubernetes to customers. We believe this acquisition cements Red Hat as a cornerstone of hybrid cloud and modern app deployments,” stated Paul Cormier, president, Products and Technologies, Red Hat.


  • In May 2016, CoreOS received $28 million in a Series B funding round led by GV (formerly Google Ventures). Intel Capital participated in the round, as well as existing investors Accel, Fuel Capital, Kleiner Perkins Caufield & Byers (KPCB), Y Combinator Continuity Fund and others, bringing the company’s funding to date to $48 million.

Thursday, October 12, 2017

IBM and Google collaborate on container security API

IBM is joining forces with Google to create and open-source the Grafeas project, an initiative to define a uniform way of auditing and governing the modern software supply chain.

Grafeas (“scribe” in Greek) provides a central source of truth for tracking and enforcing policies across an ever-growing set of software development teams and pipelines. The idea is to provide a central store that other applications and tools can query to retrieve metadata on software components of all kinds.

IBM is also working on Kritis, a component which allows organizations to set Kubernetes governance policies based on metadata stored in Grafeas.
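
A hedged sketch of what querying such a metadata store could look like over its REST API; the server URL and project ID are placeholders, and the endpoint path is an assumption based on the Grafeas API shape, so verify it against the spec:

```python
# Sketch: query a Grafeas-style metadata server for the "occurrences"
# (vulnerability findings, build details, etc.) recorded for a project.
# The server URL and project ID are placeholders, and the endpoint path is
# an assumption based on the Grafeas REST API shape; check the spec.
import requests

GRAFEAS_URL = "http://grafeas.example.com"  # placeholder server
PROJECT = "projects/demo"                   # placeholder project ID

resp = requests.get(f"{GRAFEAS_URL}/v1beta1/{PROJECT}/occurrences")
resp.raise_for_status()

for occurrence in resp.json().get("occurrences", []):
    # Each occurrence ties a piece of metadata (a "note") to a resource,
    # e.g. a container image with a known CVE.
    print(occurrence.get("kind"), occurrence.get("resource", {}).get("uri"))
```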

Monday, June 26, 2017

IBM launches new microservices tools

IBM announced Microservice Builder, a new addition to its portfolio of developer tools designed to make it easier for developers and organizations to create, deploy, and manage apps built with microservices. The offering is part of IBM's effort to simplify how developers manage their data and build applications.

IBM's Microservice Builder offering is designed to provide developers with the flexibility to deploy microservices onto on-premises systems or in any cloud environment.

IBM noted that microservices are being adopted because they allow developers to work on multiple parts of an app simultaneously without disrupting operations. The new set of capabilities offers developers an end-to-end solution so that they can quickly create these services and better integrate common functions for faster app deployment.

Microservice Builder helps developers with each stage of the development process, from writing and testing code to deploying and updating new features. It helps create and standardize common functions such as runtimes, resiliency testing, configuration, and security, so developers do not have to handle these tasks separately. Teams can also build with specific policies and protocols to ensure all services work together as a complete solution.

Microservice Builder works in conjunction with existing tools available via IBM Cloud that are designed to support microservices development and deployment. It uses a Kubernetes-based container management platform to simplify deployment, running and management of applications in public or hybrid cloud environments.

Microservice Builder also works with Istio, an open platform IBM has built in conjunction with Google and Lyft to connect, manage and secure microservices. IBM plans to extend the integration between Microservice Builder and Istio as the Istio fabric develops.


IBM Microservice Builder uses programming languages and protocols including the MicroProfile and Java EE programming models, Maven, Jenkins, and Docker, and offers functions including:

  • the MicroProfile programming model, which extends Java EE
  • an integrated DevOps pipeline
  • security features via OpenID Connect and JSON Web Token
  • a production-ready runtime environment for cloud or on-premises systems through WebSphere Liberty


Thursday, July 7, 2016

Latest Kubernetes Release Scales to 2,000-node Clusters

Newly released version 1.3 of Kubernetes brings support for 2,000-node clusters. The new release also improves end-to-end pod startup time and keeps API call latency within a one-second Service Level Objective (SLO).

One new feature is Kubemark, a performance testing tool that detects performance and scalability regressions.

http://blog.kubernetes.io/

Wednesday, April 20, 2016

Diamanti Raises $12.5 Million for Appliance Built to Fast-Track Linux Containers

Diamanti (previously Datawise.io), a start-up based in San Jose, California, emerged from stealth to unveil a converged appliance built to address the infrastructure problems that developers and operators face when deploying containers in production. The solution’s software-defined controls, which Diamanti has contributed to the Kubernetes open source project, let developers specify their network and storage resources and service levels. The appliance includes network and storage innovations that deliver guaranteed performance levels with 10X latency and throughput improvements. It also plugs seamlessly into existing infrastructure and is simple for operators to deploy, manage, and scale.

"Diamanti was founded to solve network and storage challenges in Linux containerized environments,” said Jeff Chou, Diamanti co-founder and CEO. “Diamanti’s guiding philosophy is that solving IO challenges demands converged networking and storage."

“Our vision is to enable enterprises to deploy containerized applications quickly, knowing with certainty how they will perform and that they will work off the shelf in an open ecosystem,” said Chou. “We fast track containers to production by automating their most challenging networking and storage operations.”

Diamanti has contributed its FlexVolume code to Kubernetes. This contribution automates IO configuration based on user-defined requirements. Diamanti's scheduler contribution enables the Kubernetes scheduler to factor in storage and networking requirements when placing workloads, leveraging a declarative model for developers and container administrators.
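
The declarative model referenced here is the same pattern Kubernetes exposes for compute resources. A minimal sketch of declaring per-container requirements with the official Python client; the image name is hypothetical, and Diamanti's storage and network extensions are not modeled:

```python
# Sketch: declaring per-container resource requirements, the declarative
# pattern the Kubernetes scheduler uses when placing workloads. Shows only
# the standard CPU/memory fields; Diamanti's storage and network extensions
# are not modeled here. The image name is hypothetical.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="db"),
    spec=client.V1PodSpec(containers=[client.V1Container(
        name="db",
        image="example/db:1.0",
        resources=client.V1ResourceRequirements(
            requests={"cpu": "500m", "memory": "1Gi"},   # scheduler input
            limits={"cpu": "1", "memory": "2Gi"}))]))    # runtime cap

client.CoreV1Api().create_namespaced_pod("default", pod)
```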

“Red Hat OpenShift Enterprise, built including Kubernetes and Docker technologies, accelerates containerized application development for our customers,” said Ashesh Badani, VP and General Manager, OpenShift Group, Red Hat. “Diamanti’s network and storage contributions to the community give users further choice in how they deploy performance applications at scale. We look forward to continued collaboration with Diamanti to drive industry adoption of containers.”

Diamanti also announced $12.5 million in funding. Backers include CRV, DFJ, GSR Ventures, and Goldman Sachs.

http://www.diamanti.com

Monday, November 9, 2015

Sysdig Debuts Monitoring Solution for Kubernetes

Sysdig, a start-up offering container-native visibility, announced support for Kubernetes, the open source container orchestration tool originally created by Google. The company said its Sysdig Cloud is the first and only monitoring solution to offer complete visibility into Kubernetes environments. The open source system exploration tool sysdig has also added native support for Kubernetes, further building on its Docker troubleshooting capabilities.

Kubernetes is rapidly becoming the most popular framework on which to deploy microservice-oriented applications in Docker containers. The idea is to simplify the deployment of Docker and clusters of microservices at scale. Kubernetes, like many orchestration tools, deploys applications by distributing interconnected Docker containers across a cluster of shared physical resources.

Sysdig Cloud’s container-native monitoring for Kubernetes includes key features such as:

  • Container-native monitoring with ContainerVision: Sysdig’s ContainerVision offers deep, application-level visibility into Kubernetes and Docker containers, while respecting the independence and isolation of each container.
  • Kubernetes metadata integration: Kubernetes translates microservices into “pods”, “services”, “replication controllers”, and “namespaces”. Sysdig Cloud now understands the full context of your Kubernetes system, and offers alerts and dashboards that correlate this metadata directly with all your system, network, application, and infrastructure data (see the sketch after this list).
  • Zero-configuration deployment: Just drop the Sysdig Cloud container into your Kubernetes environment. ContainerVision automatically detects every application running in every other container in your infrastructure and starts streaming back a stunning level of detail in real time.
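
To illustrate the kind of metadata referenced in the list above, a small sketch that walks pods and their namespace and label context with the official Kubernetes Python client:

```python
# Sketch: the Kubernetes metadata (namespaces, labels, pod phase) that a
# monitoring tool like Sysdig Cloud correlates with system metrics.
from kubernetes import client, config

config.load_kube_config()

for pod in client.CoreV1Api().list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace,   # "namespace" context
          pod.metadata.name,
          pod.metadata.labels,      # service / replication-controller labels
          pod.status.phase)         # e.g. Running, Pending
```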

The first KubeCon conference is being held this week in San Francisco.

https://sysdig.com/press-releases/sysdig-announces-first-comprehensive-monitoring-solution-for-kubernetes/

Tuesday, July 21, 2015

Kubernetes V1 Released on Github

Kubernetes, the open source container orchestration system, has reached the v1 milestone (GitHub), indicating that it is now ready for commercial use.

Google noted that Kubernetes was built by over 400 contributors with 14,000 commits. The set of features in this release includes:

App Services, Network, Storage 

  • Includes core functionality critical for deploying and managing workloads in production, including DNS, load balancing, scaling, application-level health checking, and service accounts
  • Stateful application support with a wide variety of local and network based volumes, such as Google Compute Engine persistent disk, AWS Elastic Block Store, and NFS
  • Deploy your containers in pods, a grouping of closely related containers, which allow for easy updates and rollback
  • Inspect and debug your application with command execution, port forwarding, log collection, and resource monitoring via CLI and UI.   
Cluster Management
  • Upgrade and dynamically scale a live cluster
  • Partition a cluster via namespaces for deeper control over resources. For example, you can segment a cluster into different applications, or test and production environments (sketched below).
Performance and Stability
  • Fast API responses, with containers scheduled < 5s on average
  • Scale tested to 1000s of containers per cluster, and 100s of nodes
  • A stable API with a formal deprecation policy
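
As referenced under Cluster Management above, namespace partitioning is a one-liner per namespace with the official Kubernetes Python client; the namespace names here are hypothetical:

```python
# Sketch: partitioning a cluster into namespaces, e.g. separate test and
# production environments. The namespace names are hypothetical.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

for env in ("test", "production"):
    core.create_namespace(client.V1Namespace(
        api_version="v1",
        kind="Namespace",
        metadata=client.V1ObjectMeta(name=env)))
```
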
http://googlecloudplatform.blogspot.com/2015/07/Kubernetes-V1-Released.html
