
Thursday, July 29, 2021

Red Hat and Nutanix enter strategic partnership

Red Hat and Nutanix announced a strategic partnership focused on building, scaling and managing cloud-native applications on-premises and in hybrid clouds. The collaboration brings together Red Hat OpenShift and Red Hat Enterprise Linux with Nutanix Cloud Platform, including Nutanix AOS and AHV.

Key elements of the partnership include:

  • Red Hat OpenShift as the preferred choice for enterprise full stack Kubernetes on Nutanix Cloud Platform. Customers looking to run Red Hat Enterprise Linux and Red Hat OpenShift on hyperconverged infrastructure (HCI) will be able to use an industry-leading cloud platform from Nutanix, which includes both Nutanix AOS and AHV.
  • Nutanix Cloud Platform is now a preferred choice for HCI for Red Hat Enterprise Linux and Red Hat OpenShift. This will enable customers to deploy virtualized and containerized workloads on a hyperconverged infrastructure, building on the combined benefits of Red Hat’s open hybrid cloud technologies and Nutanix’s hyperconverged offerings.
  • Nutanix AHV is now a Red Hat certified hypervisor, enabling full support for Red Hat Enterprise Linux and OpenShift on Nutanix Cloud Platform. The certification of AHV, Nutanix’s built-in hypervisor, offers enterprise customers a simplified full stack solution for their containerized and virtualized cloud-native applications, and gives Red Hat customers additional choice in hypervisor deployments as many organizations explore innovative, modern virtualization technologies.
  • Joint engineering roadmap providing robust interoperability. Red Hat and Nutanix will focus on delivering continuous testing of Red Hat Enterprise Linux and Red Hat OpenShift with Nutanix AHV to provide robust interoperability. The companies will also collaborate to deliver more timely support by aligning product roadmaps.

https://www.nutanix.com/blog/red-hat-and-nutanix-partner-to-deliver-big-on-hybrid-cloud

Tuesday, July 13, 2021

Red Hat's Kubernetes integrates with Ansible automation

Red Hat announced a new version of its enterprise-grade Kubernetes management offering with new capabilities for managing and scaling hybrid and multicloud environments in a unified and automated way. 

Red Hat Advanced Cluster Management for Kubernetes 2.3 integrates with Red Hat Ansible Automation Platform for a more modern, hybrid cloud-ready environment.

Red Hat Advanced Cluster Management can now automatically trigger Ansible Playbooks before or after key lifecycle actions such as application and cluster creation, making it easier to automate tasks like configuring networking, connecting applications to databases, constructing load balancers and firewalls, and updating IT service management (ITSM) ticketing systems. A Resource Operator for Red Hat Advanced Cluster Management, built on the Kubernetes Operator foundation of Red Hat OpenShift, encapsulates this operational knowledge in code and lets Advanced Cluster Management call on Ansible Automation Platform to execute tasks outside the Kubernetes cluster. The result is a single, automated workflow for customers to operationalize Red Hat OpenShift environments alongside traditional IT systems.
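
As a rough illustration of the mechanism, the sketch below creates an AnsibleJob custom resource from Python using the official Kubernetes client. This is a minimal sketch, not Red Hat's documented integration: the CRD group, version, and spec fields are assumptions based on the Ansible resource operator, and the secret and job template names are placeholders.

    from kubernetes import client, config

    config.load_kube_config()  # connect to the hub cluster via ~/.kube/config
    api = client.CustomObjectsApi()

    # Hypothetical AnsibleJob resource; the group/version and spec fields are
    # assumptions based on the Ansible resource operator, not confirmed here.
    ansible_job = {
        "apiVersion": "tower.ansible.com/v1alpha1",
        "kind": "AnsibleJob",
        "metadata": {"generateName": "posthook-"},
        "spec": {
            "tower_auth_secret": "ansible-credentials",   # placeholder secret
            "job_template_name": "configure-networking",  # placeholder template
        },
    }

    api.create_namespaced_custom_object(
        group="tower.ansible.com",
        version="v1alpha1",
        namespace="default",
        plural="ansiblejobs",
        body=ansible_job,
    )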

Red Hat Advanced Cluster Management 2.3 also adds support for importing managed Kubernetes clusters from Red Hat OpenShift Service on AWS (ROSA), as well as OpenShift clusters on IBM Power Systems. This builds on existing support for managed Kubernetes clusters on Red Hat OpenShift on IBM Cloud (ROKS), Microsoft Azure Red Hat OpenShift (ARO), Red Hat OpenShift Dedicated (OSD) and IBM Z. It also supports provisioning on-premises Red Hat OpenShift clusters on Red Hat OpenStack directly from Red Hat Advanced Cluster Management.

“Cloud-native applications and services are not an island. We need to meet organizations where they are to bridge the divide between traditional IT infrastructure and cloud-native development, so that IT teams can focus on innovation rather than trying to get disparate technologies to work together. Red Hat is uniquely positioned to bring these capabilities together through a GitOps-based approach, helping to accelerate and scale modernization. Now, customers can automate the full stack from start to finish, from the cluster to the policy and governance to the application deployment, helping to eliminate silos and further organization-wide hybrid cloud strategies,” states Dave Lindquist, general manager and vice president, Software Engineering, Advanced Kubernetes Management, Red Hat.


Wednesday, March 3, 2021

Mavenir teams up with Platform9 for Kubernetes at the 5G edge

Mavenir will leverage Platform9’s Kubernetes solution to deliver a robust web-scale platform that runs containerized cloud-native network functions. The companies say their strategic partnership will accelerate the rollout of 5G services because Kubernetes is ideally suited for building scalable 5G networks at the edge by running on vendor-neutral hardware and providing open-source orchestration.

Mavenir has integrated the Telco PaaS (Platform as a Service) it contributed to the open source XGVela project (https://xgvela.org/) on top of the Platform9 Managed Kubernetes (PMK) solution to meet the requirements of OpenRAN and other telco workloads.

Platform9’s Kubernetes solution is also an option for Mavenir’s Private Networks deployments. The open-source and cloud-native capabilities of the platform provide the scale, velocity and agility sought by service providers as they roll out their next generation 5G networks.

“With the advent of 5G, there’s a cloud touch to the service provider’s network and ecosystem. It represents an evolution of a cloud-native software platform that offers competitive and differentiated services for many customers,” said Mavenir Chief Strategy Officer, Bejoy Pankajakshan.

“We are delighted that Platform9’s Kubernetes solution has been included to power Mavenir’s industry leading vision for cloud-native 5G infrastructure,” said Sirish Raghuram, CEO of Platform9. “Our automation technology for Kubernetes can accelerate new site deployments, reduce ongoing operational complexity and provide cloud elasticity for Mavenir’s solutions.”


Tuesday, March 2, 2021

Microsoft Azure Arc promises single control plane for Kubernetes

Microsoft introduced a set of technologies that extends Azure management and services to any infrastructure running Kubernetes.

Azure Arc offers a single control plane for any Kubernetes environment, whether on-premises, in multiple clouds, or at the edge.

Azure Arc is built to work with any Cloud Native Computing Foundation (CNCF) conformant Kubernetes distribution. Microsoft has collaborated with popular Kubernetes distributions, including VMware Tanzu and Nutanix Karbon, which join Red Hat OpenShift, Canonical’s Charmed Kubernetes, and Rancher Kubernetes Engine (RKE) in testing and validating their implementations with Azure Arc.
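
For flavor, onboarding an existing conformant cluster to Azure Arc is done with the Azure CLI's connectedk8s extension; the minimal sketch below simply shells out to it from Python. The cluster and resource group names are placeholders.

    import subprocess

    # Onboard an existing CNCF-conformant cluster to Azure Arc using the
    # "az connectedk8s" CLI extension (names below are placeholders).
    subprocess.run(
        [
            "az", "connectedk8s", "connect",
            "--name", "my-onprem-cluster",
            "--resource-group", "arc-demo-rg",
        ],
        check=True,
    )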

Azure Arc enabled services will include Azure Machine Learning, which is an enterprise-grade service that enables data scientists and developers to build, deploy, and manage machine learning models. By using Azure Arc to extend machine learning (ML) capabilities to hybrid and multicloud environments, customers can train ML models directly where the data lives using their existing infrastructure investments. This reduces data movement while meeting security and compliance requirements.

Customers can sign up for Azure Arc enabled Machine Learning today and deploy to any Kubernetes cluster. With a single click, data scientists can use familiar tools to build machine learning models consistently and reliably, and deploy them anywhere.

Microsoft released Azure Arc enabled Kubernetes in preview last fall and is now making it generally available.

https://azure.microsoft.com/en-us/blog/innovate-across-hybrid-and-multicloud-with-new-azure-arc-capabilities/

Thursday, October 22, 2020

Verizon Business offers managed Kubernetes for edge and multi-cloud

Verizon Business introduced VNS Application Edge service for managing Kubernetes clusters and containerized app deployment. The offering was developed in collaboration with Rafay Systems.

Verizon's VNS Application Edge is a Platform as a Service (PaaS) offering that provides a turnkey automation framework for Kubernetes. Verizon will now deliver a unified experience for both network and containerized application lifecycle management, using a single orchestrated platform and end-to-end service management even in complex, multi-cloud and multi-cluster environments.

Potential use cases for VNS Application Edge include:

  • Computer vision at the edge: apply models to instrumentation and telemetry data in the field for near-real-time anomaly detection and mitigation.
  • In-store customer experience: deploy microservices in retail and enterprise locations to automate inventory management, order handling, and more.
  • Predictive maintenance: improve assembly output quality and reduce downtime and maintenance costs in manufacturing with the latest IoT technology, leveraging AI/ML innovations right at the edge.

“VNS Application Edge is key to enterprises evolving to deliver a new set of experiences and functionalities,” said Aamir Hussain, SVP Chief Product Officer, Verizon Business. “With enterprises able to easily and rapidly deploy applications anywhere across multi-cloud and edge environments, enterprises can quickly adapt to meet market needs and further enhance the customer experience.”

“Verizon has a track record of launching successful, enterprise-focused offerings, and we are excited to partner with them on the VNS Application Edge solution,” said Haseeb Budhani, co-founder and CEO of Rafay Systems. “With a majority of enterprises modernizing their applications to meet market needs, VNS Application Edge is the right offering at the right time to help enterprises accelerate their application modernization journeys.”

Monday, August 17, 2020

Red Hat builds edge use cases with new Kubernetes capabilities

Red Hat unveiled new products and capabilities aimed at helping enterprises launch edge computing strategies built on an open hybrid cloud backbone, including new features in Red Hat OpenShift and Red Hat Advanced Cluster Management for Kubernetes.

Red Hat said it believes that Kubernetes and its supporting technologies provide a perfect blend of power, reliability and innovation for edge computing, highlighted by the latest enhancements to Red Hat OpenShift and the newly-launched Red Hat Advanced Cluster Management for Kubernetes.

Red Hat’s new capabilities intended for edge use cases include:

  • 3-node cluster support within Red Hat OpenShift 4.5, bringing the full capabilities of enterprise Kubernetes to bear at the network’s edge in a smaller footprint. Combining supervisor and worker nodes, 3-node clusters scale down the size of a Kubernetes deployment without compromising on capabilities, making them ideal for edge sites that are space-constrained but still need the breadth of Kubernetes features (a configuration sketch follows this list).
  • Management of thousands of edge sites, along with core sites, through Red Hat Advanced Cluster Management for Kubernetes, providing a single consistent view across the hybrid cloud and making highly scaled-out edge architectures as manageable, consistent, compliant and secure as standard datacenter deployments.
  • Evolving the operating system to meet the demands of the edge with the continued leadership and innovation of Red Hat Enterprise Linux, backed by the platform’s long history of running remote workloads.
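
Picking up the 3-node item above, here is a minimal sketch of what the compact topology looks like, assuming the OpenShift install-config schema: setting compute replicas to zero folds the worker role onto the three supervisor (control plane) nodes.

    import yaml  # requires the PyYAML package

    # Compact 3-node cluster: zero dedicated workers, three schedulable
    # supervisor/control-plane nodes. Field names follow the OpenShift
    # install-config schema; the cluster name is a placeholder.
    install_config = {
        "apiVersion": "v1",
        "metadata": {"name": "edge-site-1"},
        "controlPlane": {"name": "master", "replicas": 3},
        "compute": [{"name": "worker", "replicas": 0}],
    }

    print(yaml.safe_dump(install_config))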


Tuesday, June 2, 2020

Rancher releases cloud-native container storage solution

Rancher Labs, which offers a widely used Kubernetes management platform, announced the general availability (GA) of Longhorn, an enterprise-grade, cloud-native container storage solution.

The company says Longhorn directly answers the need for an enterprise-grade, vendor-neutral persistent storage solution that supports the easy development of stateful applications within Kubernetes. It reduces the resources required to manage data and operate environments, enabling teams to focus on shipping code faster and delivering better applications.

Longhorn is 100% open source, distributed block storage built using microservices. Since the product was released in beta in 2019, thousands of users have battle-hardened Longhorn by stress-testing it as a Cloud Native Computing Foundation (CNCF) Sandbox project.

The GA version of Longhorn delivers a rich set of enterprise storage features, including:

  • Thin-provisioning, snapshots, backup, and restore
  • Non-disruptive volume expansion
  • Cross-cluster disaster recovery volume with defined RTO and RPO
  • Live upgrade of Longhorn software without impacting running volumes
  • Full-featured Kubernetes CLI integration and standalone UI

Users can leverage Longhorn to create distributed block storage mirrored across local disks. Longhorn also serves as a bridge to integrate enterprise-grade storage with Kubernetes, enabling users to deploy Longhorn on existing NFS, iSCSI, and Fibre Channel storage arrays and on cloud storage systems like AWS EBS, while adding features such as application-aware snapshots, backups, and remote replication.
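
As a quick illustration of consuming Longhorn from Kubernetes, the sketch below requests a volume through a PersistentVolumeClaim using the Kubernetes Python client. The storage class name "longhorn" is the project's default; the claim name, namespace, and size are placeholders.

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Request a 2 GiB Longhorn-backed volume (claim name/size are placeholders).
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="demo-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="longhorn",
            resources=client.V1ResourceRequirements(
                requests={"storage": "2Gi"}
            ),
        ),
    )
    v1.create_namespaced_persistent_volume_claim("default", pvc)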

“As enterprises deploy more production applications in containers, the need for persistent container storage continues to grow rapidly,” said Sheng Liang, CEO at Rancher Labs. “Longhorn fills the need for a 100% open source and easy-to-deploy enterprise-grade Kubernetes storage solution.”

Monday, November 18, 2019

HPE intros Kubernetes for Bare-Metal and Edge to Cloud Deployments

HPE introduced an enterprise-grade Kubernetes-based container platform designed for both cloud-native applications and monolithic applications with persistent storage.

The new HPE Container Platform can run on bare-metal or virtualized infrastructure, on any public cloud, and at the edge. It leverages technology from HPE’s acquisitions of BlueData and MapR, together with open source Kubernetes.

HPE said it designed the new platform to address large-scale enterprise Kubernetes deployments across a wide range of use cases, from machine learning and edge analytics to CI/CD pipelines and application modernization.

“Application development is migrating to containers, and Kubernetes is the de facto standard for container orchestration,” said Kumar Sreekanti, senior vice president and chief technology officer of Hybrid IT at HPE. “We’re combining our expertise and intellectual property from recent acquisitions together with open source Kubernetes to deliver an unmatched enterprise-class container platform. Our container-first approach will provide enterprises with a faster and lower cost path to application modernization, optimized for bare-metal and extensible to any infrastructure from edge to cloud.”

Sunday, August 25, 2019

Blueprint: Kubernetes is the End Game for NFVI

by Martin Taylor, Chief Technical Officer, Metaswitch

In October 2012, when a group of 13 network operators launched their white paper describing Network Functions Virtualization, the world of cloud computing technology looked very different than it does today.  As cloud computing has evolved, and as telcos have developed a deeper understanding of it, so the vision for NFV has evolved and changed out of all recognition.

The early vision of NFV focused on moving away from proprietary hardware to software running on commercial off-the-shelf servers.  This was described in terms of “software appliances”.  And in describing the compute environment in which those software appliances would run, the NFV pioneers took their inspiration from enterprise IT practices of that era, which focused on consolidating servers with the aid of hypervisors that essentially virtualized the physical host environment.

Meanwhile, hyperscale Web players such as Netflix and Facebook were developing cloud-based system architectures that support massive scalability with a high degree of resilience, which can be evolved very rapidly through incremental software enhancements, and which can be operated very cost-effectively with the aid of a high degree of operations automation.  The set of practices developed by these players has come to be known as “cloud-native”, which can be summarized as dynamically orchestratable micro-services architectures, often based on stateless processing elements working with separate state storage micro-services, all deployed in Linux containers.

It’s been clear to most network operators for at least a couple of years that cloud-native is the right way to do NFV, for the following reasons:

  • Microservices-based architectures promote rapid evolution of software capabilities to enable enhancement of services and operations, unlike legacy monolithic software architectures with their 9-18 month upgrade cycles and their costly and complicated roll-out procedures.
  • Microservices-based architectures enable independent and dynamic scaling of different functional elements of the system with active-active N+k redundancy, which minimizes the hardware resources required to deliver any given service (see the sketch after this list).
  • Software packaged in containers is inherently more portable than VMs and does much to eliminate the problem of complex dependencies between VMs and the underlying infrastructure which has been a major issue for NFV deployments to date.
  • The cloud-native ecosystem includes some outstandingly useful open source projects, foremost among which is Kubernetes – of which more later.  Other key open source projects in the cloud-native ecosystem include Helm, a Kubernetes application deployment manager, service meshes such as Istio and Linkerd, and telemetry/logging solutions including Prometheus, Fluentd and Grafana.  All of these combine to simplify, accelerate and lower the cost of developing, deploying and operating cloud-native network functions.
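
To make the scaling point above concrete, here is a minimal sketch using the Kubernetes Python client to attach a HorizontalPodAutoscaler to a deployment. The "demo-cnf" name and the thresholds are placeholders; min_replicas plays the role of the baseline N, with max_replicas providing the +k headroom.

    from kubernetes import client, config

    config.load_kube_config()
    autoscaling = client.AutoscalingV1Api()

    # Scale the (placeholder) "demo-cnf" Deployment between 2 and 10 replicas,
    # targeting 60% average CPU utilization across the pods.
    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="demo-cnf-hpa"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="demo-cnf"
            ),
            min_replicas=2,
            max_replicas=10,
            target_cpu_utilization_percentage=60,
        ),
    )
    autoscaling.create_namespaced_horizontal_pod_autoscaler("default", hpa)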

5G is the first new generation of mobile technology since the advent of the NFV era, and as such it represents a great opportunity to do NFV right – that is, the cloud-native way.  The 3GPP standards for 5G are designed to promote a cloud-native approach to the 5G core – but they don’t actually guarantee that 5G core products will be recognisably cloud-native.  It’s perfectly possible to build a standards-compliant 5G core that is resolutely legacy in its software architecture, and we believe that some vendors will go down that path.  But some, at least, are stepping up to the plate and building genuinely cloud native solutions for the 5G core.

Cloud-native today is almost synonymous with containers orchestrated by Kubernetes.  It wasn’t always thus: when we started developing our cloud-native IMS solution in 2012, these technologies were not around.  It’s perfectly possible to build something that is cloud-native in all respects other than running in containers – i.e. dynamically orchestratable stateless microservices running in VMs – and production deployments of our cloud native IMS have demonstrated many of the benefits that cloud-native brings, particularly with regard to simple, rapid scaling of the system and the automation of lifecycle management operations such as software upgrade.  But there’s no question that building cloud-native systems with containers is far better, not least because you can then take advantage of Kubernetes, and the rich orchestration and management ecosystem around it.

The rise to prominence of Kubernetes is almost unprecedented among open source projects.  Originally released by Google as recently as July 2015, Kubernetes became the seed project of the Cloud Native Computing Foundation (CNCF), and rapidly eclipsed all the other container orchestration solutions that were out there at the time.  It is now available in multiple mature distros including Red Hat OpenShift and Pivotal Container Service, and is also offered as a service by all the major public cloud operators.  It’s the only game in town when it comes to deploying and managing cloud native applications.  And, for the first time, we have a genuinely common platform for running cloud applications across both private and public clouds.  This is hugely helpful to telcos who are starting to explore the possibility of hybrid clouds for NFV.

So what exactly is Kubernetes?  It’s a container orchestration system for automating application deployment, scaling and management.   For those who are familiar with the ETSI NFV architecture, it essentially covers the Virtual Infrastructure Manager (VIM) and VNF Manager (VNFM) roles.

In its VIM role, Kubernetes schedules container-based workloads and manages their network connectivity.  In OpenStack terms, those are covered by Nova and Neutron respectively.  Kubernetes includes a kind of Load Balancer as a Service, making it easy to deploy scale-out microservices.
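
The load-balancing point can be seen in a few lines. A minimal sketch with the Kubernetes Python client, exposing a scale-out microservice behind a Service of type LoadBalancer (names and ports are placeholders):

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Expose pods labelled app=demo-cnf behind a load-balanced virtual IP.
    svc = client.V1Service(
        metadata=client.V1ObjectMeta(name="demo-cnf-svc"),
        spec=client.V1ServiceSpec(
            type="LoadBalancer",
            selector={"app": "demo-cnf"},
            ports=[client.V1ServicePort(port=80, target_port=8080)],
        ),
    )
    v1.create_namespaced_service("default", svc)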

In its VNFM role, Kubernetes can monitor the health of each container instance and restart any failed instance.  It can also monitor the relative load on a set of container instances that are providing some specific micro-service and can scale out (or scale in) by spinning up new containers or spinning down existing ones.  In this sense, Kubernetes acts as a Generic VNFM.  For some types of workloads, especially stateful ones such as databases or state stores, Kubernetes native functionality for lifecycle management is not sufficient.  For those cases, Kubernetes has an extension called the Operator Framework which provides a means to encapsulate any application-specific lifecycle management logic.  In NFV terms, a standardized way of building Specific VNFMs.
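
The health-monitoring behaviour described here is driven by probes declared on the workload. A minimal sketch, with placeholder names and a stand-in image: the kubelet polls the liveness endpoint and restarts the container after repeated failures.

    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    # Container with a liveness probe: three consecutive failures of GET /
    # (checked every 10 seconds) trigger an automatic restart.
    container = client.V1Container(
        name="demo",
        image="nginx:1.21",  # stand-in image
        liveness_probe=client.V1Probe(
            http_get=client.V1HTTPGetAction(path="/", port=80),
            period_seconds=10,
            failure_threshold=3,
        ),
    )

    dep = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="demo-cnf"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "demo-cnf"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "demo-cnf"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )
    apps.create_namespaced_deployment("default", dep)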

But Kubernetes goes way beyond the simple application lifecycle management envisaged by the ETSI NFV effort.  Kubernetes itself, together with a growing ecosystem of open source projects that surround it, is at the heart of a movement towards a declarative, version-controlled approach to defining both software infrastructure and applications.  The vision here is for all aspects of a complex cloud native system, including cluster infrastructure and application configuration, to be described in a set of documents that are under version control, typically in a Git repository, which maintains a complete history of every change.  These documents describe the desired state of the system, and a set of software agents act so as to ensure that the actual state of the system is automatically aligned with the desired state.  With the aid of a service mesh such as Istio, changes to system configuration or software version can be automatically “canary” tested on a small proportion of traffic prior to being rolled out fully across the deployment.  If any issues are detected, the change can simply be rolled back.  The high degree of automation and control offered by this kind of approach has enabled Web-scale companies such as Netflix to reduce software release cycles from months to minutes.
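
A toy version of the declarative model, assuming a placeholder "demo-app" deployment: the desired state (here, a replica count that in a real GitOps agent would come from a Git checkout) is continuously compared with the actual state, and the agent converges the two.

    import time
    from kubernetes import client, config

    DESIRED_REPLICAS = 5  # in a real GitOps agent, read from version control

    config.load_kube_config()
    apps = client.AppsV1Api()

    while True:
        dep = apps.read_namespaced_deployment("demo-app", "default")
        if dep.spec.replicas != DESIRED_REPLICAS:
            # Converge actual state toward the declared desired state.
            apps.patch_namespaced_deployment(
                "demo-app", "default",
                {"spec": {"replicas": DESIRED_REPLICAS}},
            )
        time.sleep(30)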

Many of the network operators we talk to have a pretty good understanding of the benefits of cloud native NFV, and the technicalities of containers and Kubernetes.  But we’ve also detected a substantial level of concern about how we get there from here.  “Here” means today’s NFV infrastructure built on a hypervisor-based virtualization environment supporting VNFs deployed as virtual machines, where the VIM is either OpenStack or VMware.  The conventional wisdom seems to be that you run Kubernetes on top of your existing VIM.  And this is certainly possible: you just provision a number of VMs and treat these as hosts for the purposes of installing a Kubernetes cluster.  But then you end up with a two-tier environment in which you have to deploy and orchestrate services across some mix of cloud native network functions in containers and VM-based VNFs, where orchestration is driving some mix of Kubernetes, OpenStack or VMware APIs and where Kubernetes needs to coexist with proprietary VNFMs for life-cycle management.  It doesn’t sound very pretty, and indeed it isn’t.

In our work with cloud-native VNFs, containers and Kubernetes, we’ve seen just how much easier it is to deploy and manage large scale applications using this approach compared with traditional hypervisor-based approaches.  The difference is huge.  We firmly believe that adopting this approach is the key to unlocking the massive potential of NFV to simplify operations and accelerate the pace of innovation in services.  But at the same time, we understand why some network operators would baulk at introducing further complexity into what is already a very complex NFV infrastructure.

That’s why we think the right approach is to level everything up to Kubernetes.  And there’s an emerging open source project that makes that possible: KubeVirt.

KubeVirt provides a way to take an existing Virtual Machine and run it inside a container.  From the point of view of the VM, it thinks it’s running on a hypervisor.  From the point of view of Kubernetes, it sees just another container workload.  So with KubeVirt, you can deploy and manage applications that comprise any arbitrary mix of native container workloads and VM workloads using Kubernetes.
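
A hedged sketch of what this looks like in practice: a KubeVirt VirtualMachine custom resource created via the Kubernetes Python client. The field names follow KubeVirt's published schema but should be treated as assumptions here; the VM name and disk image are placeholders.

    from kubernetes import client, config

    config.load_kube_config()
    api = client.CustomObjectsApi()

    # A VM managed by Kubernetes like any other workload (placeholder names;
    # spec fields are assumptions based on KubeVirt's documented schema).
    vm = {
        "apiVersion": "kubevirt.io/v1",
        "kind": "VirtualMachine",
        "metadata": {"name": "legacy-vnf"},
        "spec": {
            "running": True,
            "template": {"spec": {
                "domain": {
                    "devices": {"disks": [{"name": "root", "disk": {}}]},
                    "resources": {"requests": {"memory": "2Gi"}},
                },
                "volumes": [{
                    "name": "root",
                    "containerDisk": {"image": "quay.io/containerdisks/fedora"},
                }],
            }},
        },
    }

    api.create_namespaced_custom_object(
        group="kubevirt.io", version="v1",
        namespace="default", plural="virtualmachines", body=vm,
    )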

In our view, KubeVirt could open the way to adopting Kubernetes as a “level playing field” and de facto standard environment across all types of cloud infrastructure, supporting highly automated deployment and management of true cloud native VNFs and legacy VM-based VNFs alike.  The underlying infrastructure can be OpenStack, VMware, bare metal – or any of the main public clouds including Azure, AWS or Google.  This grand unified vision of NFV seems to us to be truly compelling.  We think network operators should ratchet up the pressure on their vendors to deliver genuinely cloud native, container-based VNFs, and get serious about Kubernetes as an integral part of their NFV infrastructure.  Without any question, that is where the future lies.

Monday, July 22, 2019

Cloud Native - Kubernetes becomes the universal abstraction layer

Kubernetes, which is sometimes described as "the Linux of the cloud," is poised to play a major role in next-gen carrier infrastructures, says Dan Kohn, Executive Director, Cloud Native Computing Foundation, which is part of The Linux Foundation.

Over time, virtual network functions (VNFs) will evolve to become cloud-native network functions (CNFs). In this world, Kubernetes can become the universal abstraction layer.

A "2019 Next-Generation Central Office Report" from AvidThink is available for download. It covers the evolution in NGCO, recent trends, and key projects like CORD and VCO, as well as the role of the NGCO in new applications in edge computing and SD-WAN/vCPE.

https://nginfrastructure.com/

Wednesday, March 20, 2019

Portworx lands $27 million for cloud-native storage and management

Portworx, a start-up based in Los Altos, California, announced $27 million in Series C funding to support its cloud-native storage and data management solutions.

Portworx reduces storage, compute and infrastructure costs for running mission-critical multi-cloud applications while promising zero downtime and no data loss. Major customers include GE Digital, Lufthansa Systems, HPE, and thirty other members of the Fortune Global 2000 and federal agencies.

The oversubscribed funding round was co-led by Sapphire Ventures and the ventures arm of Mubadala Investment Company, with support from existing investors Mayfield Fund and GE Ventures, and new financing from Cisco Investments, HPE, and NetApp. The company has raised $55 million to date.

“Kubernetes alone is not sufficient to handle critical data services that power enterprise applications,” said Murli Thirumale, CEO and co-founder at Portworx. “Portworx cloud-native storage and data management solutions enable enterprises to run all their applications in containers in production. With this investment round the cloud-native industry recognizes Portworx and its incredible team as the container storage and data-management leader. Our customer-first strategy continues to pay off!”

Tuesday, February 5, 2019

Rancher Labs adds support for multi-cluster Kubernetes applications

Rancher Labs has added support for multi-cluster applications within Rancher, its open source Kubernetes management platform.

Multi-cluster Kubernetes application support extends the feature set of Helm, the Kubernetes package manager. Users simply select the application from the Rancher Application Catalog, add target clusters, provide information about each cluster, and deploy. Multi-cluster applications use Kubernetes controllers running in the Rancher management plane to fetch Helm charts and deploy the application to each target cluster. The use of Helm charts allows Rancher to leverage features like upgrades, rollbacks, and versioning of the applications.
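
Conceptually, this resembles applying one Helm chart to several clusters in turn. The rough sketch below loops over kubectl contexts from Python; the context, release, and chart names are placeholders, and Rancher's own controllers do this server-side rather than via the CLI.

    import subprocess

    TARGET_CONTEXTS = ["cluster-east", "cluster-west", "cluster-edge"]

    # Install or upgrade the same chart on each target cluster, with a
    # per-cluster value override (all names here are placeholders).
    for ctx in TARGET_CONTEXTS:
        subprocess.run(
            [
                "helm", "upgrade", "--install", "demo-app", "demo/chart",
                "--kube-context", ctx,
                "--set", f"cluster={ctx}",
            ],
            check=True,
        )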

“Rancher has made deployment and management of Kubernetes a breeze, and over the years we have added several new capabilities like RBAC, Projects and out-of-the-box monitoring to make it even simpler,” said Will Chan, co-founder and vice president of engineering at Rancher Labs. “As the number of users and clusters grow within an organization, for example in edge computing scenarios, the deployment, management and upgrade of applications and services that run across these clusters becomes impossible to manage. With Rancher’s support for multi-cluster apps, we’re thrilled to provide our customers with the exact tools needed to alleviate the complexity and meet enterprise requirements.”

Monday, February 4, 2019

Platform9 intros managed Kubernetes service on VMware

Platform9, a start-up based in Sunnyvale, California, announced a fully managed Kubernetes service on VMware vSphere with Platform9 Managed Kubernetes (PMK).

Platform9 says its solution eliminates the operational complexity of Kubernetes at scale by delivering it as a fully managed service, with all enterprise-grade capabilities included out of the box: zero-touch upgrades, multi-cluster operations, high availability, monitoring, and more, all handled automatically and backed by a 24x7x365 SLA. The service delivers centralized visibility and management across all Kubernetes environments - whether on-premises, in the public cloud, or at the Edge - with quota management and role-based access control.

"Kubernetes is the #1 enabler for cloud-native applications and is critical to the competitive advantage for software-driven organizations today. VMware was never designed to run containerized workloads, and integrated offerings in the market today are extremely clunky, hard to implement and even harder to manage," said Sirish Raghuram, Co-founder and CEO of Platform9. "We're proud to take the pain out of Kubernetes on VMware, delivering a pure open source-based, Kubernetes-as-a-Service solution that is fully managed, just works out of the box, and with an SLA guarantee in your own environment."

Wednesday, December 12, 2018

Tigera raises $30 million for Kubernetes security

Tigera, a start-up based in San Francisco, announced $30 million in funding for its security and compliance solutions for Kubernetes platforms.

Tigera says modern microservices architectures present a unique challenge for legacy security and compliance solutions since these new workloads are highly dynamic and ephemeral. This new architecture creates an explosion of internal, or east-west, traffic that must be evaluated and secured by the network and security operations teams.

Tigera Secure Enterprise Edition (TSEE) secures Kubernetes environments and ensures continuous compliance using a declarative model similar to Kubernetes. Under the hood, TSEE authenticates all service-to-service communication using multiple sources of identity, authorizes each service based on multi-factor rules, encrypts network traffic, and enforces security policies at the edge of the host, pod, and container within the infrastructure for a defense in depth security model. All connection details are logged in a compliance-ready format that is also used for incident management and security forensic analysis.
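
Tigera's enforcement model builds on, and extends, Kubernetes network policy. As a generic flavor of the idea, the sketch below uses the standard NetworkPolicy API via the Kubernetes Python client to restrict east-west traffic so that only frontend pods may reach payment pods. Labels and names are placeholders, and TSEE's own policy types are considerably richer than this.

    from kubernetes import client, config

    config.load_kube_config()
    net = client.NetworkingV1Api()

    # Allow ingress to app=payments pods only from app=frontend pods.
    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="allow-frontend-to-payments"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(match_labels={"app": "payments"}),
            ingress=[client.V1NetworkPolicyIngressRule(
                _from=[client.V1NetworkPolicyPeer(
                    pod_selector=client.V1LabelSelector(
                        match_labels={"app": "frontend"}
                    )
                )]
            )],
        ),
    )
    net.create_namespaced_network_policy("default", policy)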

The Series B funding was led by Insight Venture Partners, with participation from existing investors Madrona, NEA, and Wing.

Monday, October 8, 2018

Rancher's new release targets improved Kubernetes cluster ops

Rancher Labs released version 2.1 of its container management software, introducing next generation automatic cluster operations and application management, as well as a migration path for users moving from Rancher’s Cattle orchestrator to Rancher Kubernetes.

“Rancher continues to be the de facto choice for enterprises looking to run containers and Kubernetes in production,” said Sheng Liang, CEO and co-founder of Rancher Labs. “With Rancher 2.1, we’re providing key upgrades to the product that further enables any enterprise to embrace Kubernetes and accelerate development, reduce infrastructure costs and improve application reliability.”

Rancher 2.1 brings scalability improvements, as well as the ability to define and manage Kubernetes clusters as code with Rancher. Additionally, Rancher now enables users to snapshot and export the complete configuration of Kubernetes clusters managed by Rancher, and later restore clusters by importing the same configuration file.

http://www.rancher.com

Wednesday, August 29, 2018

Google hands over management of Kubernetes project to the community

Kubernetes, which is the container orchestration system introduced by Google in 2014, is taking the next step in its evolution.

Since the project’s launch, Google has provided the cloud resources that support its development—namely CI/CD testing infrastructure, container downloads, and other services like DNS, all running on Google Cloud Platform (GCP).

Since 2015, Kubernetes has been part of the Cloud Native Computing Foundation (CNCF) under the direction of the Linux Foundation.

Google said now that Kubernetes has become one of the world’s most popular open-source projects, it is time to hand over control. Google hosts the Kubernetes container registry and last month it served 129,537,369 container image downloads of core Kubernetes components. That’s over 4 million per day—and a lot of bandwidth!

Google will hand over all project operations of Kubernetes to the community (including many Googlers), who will take ownership of day-to-day operational tasks such as testing and builds, as well as maintaining and operating the image repository and download infrastructure.

Under the new plan, Google will make a $9 million grant of GCP credits to the CNCF, split over three years, to cover infrastructure costs. In addition to the world-wide network and storage capacity required to serve all those container downloads, a large part of this grant will be dedicated to funding scalability testing, which regularly runs 150,000 containers across 5,000 virtual machines.

Monday, July 16, 2018

Microsoft Azure Service Fabric Mesh enters beta

Microsoft announced the public preview release of Azure Service Fabric Mesh, built on Service Fabric, its distributed systems platform for managing scalable microservices and container-based applications for Windows and Linux.

Microsoft describes Service Fabric as a foundational technology for its core Azure infrastructure, as well as other Microsoft cloud services such as Skype for Business, Azure Cosmos DB, Azure SQL Database, Dynamics 365, and many more.

Azure Service Fabric Mesh will be a fully-managed service that enables developers to deploy and operate containerized applications without having to manage VMs, storage or networking configuration, while keeping the enterprise-grade reliability, scalability, and mission-critical performance of Service Fabric.

https://azure.microsoft.com/en-us/blog/azure-service-fabric-mesh-is-now-in-public-preview/


Thursday, June 14, 2018

Azure Kubernetes Service enters general availability

Microsoft announced that its Azure Kubernetes Service (AKS) is now generally available in ten regions across three continents. Microsoft expects to add ten more regions in the coming months.

The new Kubernetes service features an Azure-hosted control plane, automated upgrades, self-healing, easy scaling, and a simple user experience for both developers and cluster operators. Users are able to control access to their Kubernetes cluster with Azure Active Directory accounts and user groups. A key attribute of AKS is operational visibility into the managed Kubernetes environment. Control plane telemetry, log aggregation, and container health are monitored via the Azure portal.

Microsoft also announced five new regions including Australia East, UK South, West US, West US 2, and North Europe.

MapR adds Amazon Elastic Container Service for Kubernetes

The MapR Data Platform now supports Amazon Elastic Container Service for Kubernetes (Amazon EKS), making it easier for organizations to adopt and manage their data seamlessly on-premises and on AWS.

MapR previously announced persistent storage for containers to enable the deployment of stateful containerized applications.

Amazon EKS automatically manages the availability, scalability, and scheduling of containers. With MapR, organizations can scale compute independently of storage, without having to worry about oversubscription. MapR also secures containers against data access vulnerabilities through wire-level encryption and a full end-to-end set of access, authorization, and authentication features.

"Data agility is essential for next-gen analytics and advanced applications,” said Jack Norris, senior vice president, data and applications at MapR. “The robustness of MapR combined with the agility of Amazon EKS enables enterprises to quickly build a flexible and secure production environment for large scale AI and machine learning."

Monday, June 11, 2018

A10 brings container-native load balancing and analytics for Kubernetes

A10 Networks is introducing an automated way to integrate enterprise-grade load-balancing with application visibility and analytics.

The new A10 Ingress Controller for Kubernetes, which integrates with A10’s container-native load balancing and application delivery solution, can automatically provision application delivery configuration and policies. It ties directly into the container lifecycle to automatically update application delivery configuration with the dynamism of a Kubernetes environment. As application services scale up and down, the A10 load balancer is dynamically updated. A10's "Lightning" containerized load balancer also scales up and down automatically with the scale of a Kubernetes cluster.
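
An ingress controller like A10's consumes standard Kubernetes Ingress resources. The sketch below creates one with the Kubernetes Python client; the host, backend service, and ingress-class annotation value are placeholders rather than A10's documented configuration.

    from kubernetes import client, config

    config.load_kube_config()
    net = client.NetworkingV1Api()

    # Route demo.example.com/ to the (placeholder) demo-svc service; the
    # ingress.class annotation selects which controller handles the rules.
    ingress = client.V1Ingress(
        metadata=client.V1ObjectMeta(
            name="demo-ingress",
            annotations={"kubernetes.io/ingress.class": "a10-lightning"},
        ),
        spec=client.V1IngressSpec(
            rules=[client.V1IngressRule(
                host="demo.example.com",
                http=client.V1HTTPIngressRuleValue(paths=[
                    client.V1HTTPIngressPath(
                        path="/", path_type="Prefix",
                        backend=client.V1IngressBackend(
                            service=client.V1IngressServiceBackend(
                                name="demo-svc",
                                port=client.V1ServiceBackendPort(number=80),
                            )
                        ),
                    )
                ]),
            )],
        ),
    )
    net.create_namespaced_ingress("default", ingress)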

The A10 Ingress Controller can run anywhere Kubernetes is deployed, including public clouds such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), and private clouds running VMware or bare metal infrastructure.

A10 said its solution provides comprehensive application analytics by collecting hundreds of application metrics, thus enabling operations teams to troubleshoot faster, manage capacity planning and also detect performance and security anomalies. The analytics data is available via dashboards on the A10 Harmony portal or via APIs.

“As application teams adopt container and microservice architectures, Kubernetes has become the de-facto standard for container orchestration,” said Kamal Anand, Vice President of Cloud, A10 Networks. “A10’s Kubernetes solution provides enterprise applications teams with container-native enterprise grade application delivery for their mission-critical applications. With bundled monitoring, traffic analytics and application security, it reduces their operational burden and allows them to focus on core application value.”

“The transition to software containers, micro-segmented application architectures, and DevOps practices is underway, making it imperative that ADCs can be easily included in these applications and orchestrated along with containers by container management software such as Kubernetes,” said Cliff Grossner, Ph.D., senior research director and advisor of cloud and data center research practice for IHS Markit, a global business information provider. “For 2017 we estimated revenue from commercial license of container software at $350 million, with revenue over $1.2 billion forecast for 2022, signaling a strong need for application delivery ecosystems to support containers. A10’s focus on integrating its ADC software with Kubernetes container management software answers an important market requirement.”

“IDC finds that enterprises are increasingly adopting cloud-native containers and microservices. A challenge for those enterprises, though, is ensuring that the right application-delivery infrastructure is deployed to facilitate the agility, elasticity, flexibility, security, and scale that production environments require. At the edge of a Kubernetes cluster, the ingress controller provides important functionality – applying rules to Layer 7 routing to allow inbound connections to reach cluster services – and its integration with enterprise-grade application-delivery infrastructure, such as A10’s containerized load balancer and controller, makes considerable sense,” said Brad Casemore, Research VP, Datacenter Networks, IDC.