
Wednesday, March 20, 2019

Portworx lands $27 million for cloud-native storage and management

Portworx, a start-up based in Los Altos, California, announced $27 million in Series C funding to support its cloud-native storage and data management solutions.

Portworx says it reduces storage, compute and infrastructure costs for running mission-critical multi-cloud applications while promising zero downtime and no data loss. Major customers include GE Digital, Lufthansa Systems and HPE, along with thirty customers drawn from the Fortune Global 2000 and federal agencies.

The oversubscribed funding round was co-led by Sapphire Ventures and the ventures arm of Mubadala Investment Company, with support from existing investors Mayfield Fund and GE Ventures, and new financing from Cisco Investments, HPE, and NetApp. The company has raised $55 million to date.

“Kubernetes alone is not sufficient to handle critical data services that power enterprise applications,” said Murli Thirumale, CEO and co-founder at Portworx. “Portworx cloud-native storage and data management solutions enable enterprises to run all their applications in containers in production. With this investment round the cloud-native industry recognizes Portworx and its incredible team as the container storage and data-management leader. Our customer-first strategy continues to pay off!”

Thursday, December 20, 2018

2019 Network Predictions - 5G just can’t ‘contain’ itself

by John English, Director of Marketing, Service Provider Solutions, NETSCOUT

5G just can’t ‘contain’ itself 

In 2019, as virtualized network architectures are rapidly adopted to support 5G, we expect to see containers emerge as the de facto platform for running new applications and workloads.

The excitement around 5G is building as we hear more news about network deployments, trials and handsets. However, one 5G-related issue that hasn't yet crystallized is what form 5G software and innovations will take, and how these new services and applications will be deployed into the network. Unlike 4G/LTE network infrastructure, the architectures that support 5G are virtualized and cloud-based, so the smart money is on application developers, mobile operators and equipment vendors using microservices, and in particular containers, to drive 5G evolution.

It makes sense to use containers to support 5G: they give operators a flexible, easier-to-use platform for building, testing and deploying applications, and one that is steadily becoming more secure. This is vital for the development of 5G services at a time when the use cases for 5G are still being defined. Operators will need to be able to spin up services as and when needed to support different use cases; with containers, they can serve customers quickly and efficiently.

Another key aspect is the need to deliver services and applications closer to the end user by utilizing mobile edge computing. This is integral to ensuring the low latency and high bandwidth associated with 5G and will support use cases across a wide range of verticals including transport, manufacturing and healthcare. However, flexible architectures will be required to support this type of infrastructure throughout hybrid cloud and virtualized environments. As operators move network infrastructure to the edge, the use of containers will become pivotal to supporting 5G applications.

The use of microservices and containers will increase during 2019 as operators ramp up their 5G propositions. Despite their clear advantages, they will also add a new layer of complexity, and carriers will need clear visibility across their IT infrastructure if they are going to make a success of 5G.

5G will drive virtualization in 2019 

Momentum is building behind 5G. The US and South Korea are leading the charge with the rollout of the first commercial networks; trials are taking place in every major market worldwide; and Verizon and Samsung have just announced plans to launch a 5G handset in early 2019. Expectations for 5G are high – the next-generation mobile standard will underpin mission-critical processes and innovations, including telemedicine, remote surgery and even driverless cars. However, vast sums of money will need to be spent on network infrastructure before any of this can happen, and it's the mobile and fixed carriers who will be expected to foot the bill. This is compounded by the fact that many of the aforementioned 5G use cases have yet to be defined, so carriers are being asked to gamble on an uncertain future.

So, what will the 5G future look like and what will it take to get us there?

One thing is for certain - 5G will drive network virtualization. In 2019, we will see an increasing number of carriers committing to deploying virtualized network infrastructure to support 5G applications and services. Without virtualization, it will be ‘virtually’ impossible to deliver 5G. This is because 5G requires virtualization both at the network core, and critically at the network edge. Puns aside, the days of building networks to support a single use case, such as mobile voice and data, or home broadband, are behind us. If 5G is to become a reality, then the networks of the future will need to be smart and automated, with the ability to switch between different functions to support a range of use cases.

However, moving from the physical world to the virtual world is no mean feat. Carriers are now discovering that their already complex networks are becoming even more so, as they replicate existing functions and create new ones in a virtualized environment. Wholesale migrations aren’t possible either, so carriers are having to get to grips with managing their new virtual networks alongside earlier generations of mobile and fixed technologies. Despite these challenges, 5G will undoubtedly accelerate the virtualization process. Consequently, no one will want to be left behind, and we will see greater competition emerge between carriers as they commit funds and resources to building out their virtualized network infrastructures.

To justify this spend, and to tackle the challenges that lie ahead, carriers will require smart visibility into their constantly evolving network architectures. Virtual probes that produce smart data, supported by intelligent tools, offer much-needed visibility into the performance of these new networks and the services they support. The invaluable knowledge they provide will be absolutely critical for carriers as they accelerate their use of virtualized infrastructure to successfully deploy 5G.

Wednesday, December 12, 2018

Tigera raises $30 million for Kubernetes security

Tigera, a start-up based in San Francisco, announced $30 million in funding for its security and compliance solutions for Kubernetes platforms.

Tigera says modern microservices architectures present a unique challenge for legacy security and compliance solutions since these new workloads are highly dynamic and ephemeral. This new architecture creates an explosion of internal, or east-west, traffic that must be evaluated and secured by network and security operations teams.

Tigera Secure Enterprise Edition (TSEE) secures Kubernetes environments and ensures continuous compliance using a declarative model similar to Kubernetes. Under the hood, TSEE authenticates all service-to-service communication using multiple sources of identity, authorizes each service based on multi-factor rules, encrypts network traffic, and enforces security policies at the edge of the host, pod, and container within the infrastructure for a defense-in-depth security model. All connection details are logged in a compliance-ready format that is also used for incident management and security forensic analysis.
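
Tigera's own policy resources are not shown in the announcement, but the declarative model TSEE extends is visible in stock Kubernetes. As a minimal sketch, the following uses the official Kubernetes Python client to declare a NetworkPolicy restricting east-west traffic between two hypothetical services (the namespace, labels, and port are placeholders); TSEE layers identity-aware authorization, encryption, and flow logging on top of controls like this.

```python
# Minimal sketch: a declarative east-west traffic policy, applied with the
# official Kubernetes Python client. Namespace, labels, and port are
# hypothetical; Tigera's TSEE uses richer policy resources than this.
from kubernetes import client, config

config.load_kube_config()  # reads the local kubeconfig

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-frontend-to-api", namespace="prod"),
    spec=client.V1NetworkPolicySpec(
        # Protect the pods labeled app=api.
        pod_selector=client.V1LabelSelector(match_labels={"app": "api"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                # Only pods labeled app=frontend may connect, and only on TCP/8080.
                _from=[client.V1NetworkPolicyPeer(
                    pod_selector=client.V1LabelSelector(match_labels={"app": "frontend"})
                )],
                ports=[client.V1NetworkPolicyPort(protocol="TCP", port=8080)],
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy("prod", policy)
```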

The Series B funding was led by Insight Venture Partners, with participation from existing investors Madrona, NEA, and Wing.

Sunday, August 12, 2018

Portworx expands container data management options for AWS

Portworx, a start-up based in Los Altos, California, announced that its PX-Enterprise can now be integrated with Amazon Elastic Container Service (ECS), enabling mission-critical stateful workloads to run in Docker containers with dynamic provisioning, cross-Availability Zone high availability, application-consistent snapshots, auto-scaling and encryption functionality.

Portworx can also be integrated with Amazon Elastic Container Service for Kubernetes (EKS).

"Enterprise container adoption is skyrocketing as companies recognize the value that container technologies provide on the path to digital transformation," said Murli Thirumale, co-founder and CEO of Portworx. "Amazon Web Services integration with Portworx for both EKS and now ECS is evidence of a sea change happening in the industry: enterprises running on Amazon need flexible cloud native storage solutions that play well containers. By giving enterprises these two options for container data management, we're radically simplifying operations of containerized stateful services running on Amazon."

Key benefits of Amazon ECS with Portworx's cloud native storage include:

  • Multi-AZ EBS for Containers – run Docker containers within and across Availability Zones based on business needs. Portworx will not only replicate each container's volume data among ECS nodes and across Availability Zones, but also add EBS drives as capacity thresholds are reached (see the provisioning sketch after this list).
  • Daemon Scheduling on ECS – automatically run a daemon task on every one of a selected set of instances in an ECS cluster. This ensures that as ECS adds new nodes, every server can consume and access Portworx storage volumes.
  • Auto-scaling groups for stateful applications – dynamic creation of EBS volumes for an ASG, so if a pod is rescheduled after a host failure, the pre-existing EBS volume is reused, cutting failover time by up to 3x.
  • Hyperconverged compute and storage for ultra-high performance databases – ECS can reschedule the pod to another host in the cluster where Portworx has placed an up-to-date replica. This ensures hyperconvergence is maintained even across reschedules.
  • Application-aware snapshots – ECS administrators can define groups of volumes that constitute their application state and consistently snapshot them directly via Docker. These group snapshots can be backed up to S3 or moved directly to another Amazon region in case of a disaster.
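
To make the dynamic provisioning concrete, here is a minimal sketch that creates a replicated Portworx volume through the Docker SDK for Python and attaches it to a container. The "pxd" driver name and the "size"/"repl" options follow Portworx's Docker volume plugin conventions, but treat the exact option names, image, and paths as illustrative assumptions rather than a tested recipe.

```python
# Minimal sketch: provision a replicated Portworx volume on a container
# instance via the Docker SDK for Python, then attach it to a container.
# The "pxd" driver name and "size"/"repl" options are assumptions based on
# Portworx's volume plugin conventions; image and paths are placeholders.
import docker

client = docker.from_env()

# Ask the Portworx plugin for a 20 GiB volume kept in three replicas, so a
# task rescheduled into another Availability Zone can reattach immediately.
volume = client.volumes.create(
    name="orders-db-data",
    driver="pxd",
    driver_opts={"size": "20", "repl": "3"},
)

# Run a stateful container against the volume.
container = client.containers.run(
    "postgres:11",
    detach=True,
    volumes={volume.name: {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
print(container.short_id)
```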


Monday, May 28, 2018

Start-up profile: Rancher Labs, building container orchestration on Kubernetes

Rancher Labs is a start-up based in Cupertino, California, whose container management platform has racked up over four million downloads. The company recently released a major update to its container management system. I sat down with company co-founders Sheng Liang (CEO) and Shannon Williams (VP of Sales) to talk about Kubernetes, the open source container orchestration system originally developed by Google. Kubernetes was initially released in 2014, about the time that Rancher Labs was getting underway.

Jim Carroll, OND: So where does Kubernetes stand today?

Sheng Liang, Rancher Labs: Kubernetes has come a long way. When we started three years ago, Kubernetes was also just getting started. It had a lot of promise, but people were talking about orchestration wars and stuff. Kubernetes had not yet won, but more importantly, it wasn't really useful. In the early days, we couldn't even bring ourselves to say that we were going to focus exclusively on Kubernetes. It was not that we did not believe in Kubernetes; it just didn't work for a lot of users. Kubernetes was almost seen as an end unto itself. Even standing up Kubernetes was such a challenge back then that just getting it to run became an end goal. A lot of people in those days were experimenting with it, and the goal was simply to prove -- hey -- you've got a Kubernetes cluster. Success was getting a few simple apps running. And it's come a long way in three years.


A lot of things have changed. First, Kubernetes is now really established as the de facto container orchestration platform. We used to support Mesosphere, we used to support Swarm, and we used to build our own container orchestration platform, which we called Cattle. We stopped doing all of that to focus entirely on Kubernetes. Luckily, the way we developed Cattle was closely modeled on Kubernetes, sort of an easy-to-use version of Kubernetes, so we were able to bring a lot of our experience to run on top of Kubernetes. And now it turns out that we don't have to support all of those other frameworks. Kubernetes has settled that. It is now a common tool that everyone can use.

JC: The Big Three cloud companies are now fully behind Kubernetes, right?

Sheng Liang: Right. I think that for the longest time a lot of vendors were looking for opportunities to install and run Kubernetes. That kept us alive for a while. Some of the early Kubernetes deals that we closed were about installing Kubernetes.  These projects then turned to operation contracts because people thought they were going to need to help with upgrading or just maintaining the health of the cluster. This got blown out of the water last year when all of the big cloud providers started to offer Kubernetes as a service.

If you are on the cloud already, there is really no reason to stand up your own Kubernetes cluster.

Well, we're really not quite there yet. Even though Amazon announced EKS in November, it is not even GA yet. It is still in closed beta, but later this year Kubernetes as a service should become a commercial reality. And there are other benefits too.

I'm not sure about Amazon, but both Google and Microsoft have decided not to charge for the management plane, so you don't really pay for the resources used to run the database and the control plane nodes. I guess they must have a very efficient way of running it on some shared infrastructure; that's what I suspect. This allows them to amortize that cost into what they charge for the worker nodes.

The way people set up Kubernetes clusters in the early days was actually very wasteful. You would use three nodes for etcd, two nodes for the control plane, and then people would throw in two more nodes for workers. So they were using five nodes to manage two nodes, while paying for seven.

With cloud services, you don't have to do that. I think this makes Kubernetes table stakes. It is not just limited to the cloud.  I think it's really wherever you can get infrastructure. Enterprise customers, for instance, are still getting infrastructure from VMware. Or they get it from Nutanix.

All of the cloud companies have announced, or will shortly announce, support for Kubernetes out of the box. Kubernetes then will equate to infrastructure, just like virtual machines or virtual SANs.

JC: So, how is Kubernetes actually being used now? Is it a one-way bridge or a two-way bridge for moving workloads? Are people actually moving workloads on a consistent basis, or is it basically a one-time move to a new server or cloud?

Shannon Williams, Rancher Labs: Portability is actually less important than other features. It may be the sexy part of Kubernetes to say that you can move clusters of containers. The reality is that Kubernetes is just a really good way to run containers reliably.

The vast majority of people who are running containers are not using Kubernetes for the purpose of moving containers between clouds. The vast majority of people running Kubernetes are doing so because it is more reliable than running containers directly on VMs. It is easier to use Kubernetes from an operational perspective. It is easier from a development perspective. It is easier from a testing perspective. So if you think of the value prop that Kubernetes represents, it comes down to faster development cycles and better operations. The portability is kind of the cherry on top of the sundae.

It is interesting that people are excited about the portability enabled by Kubernetes, and I think it will become really important over the long term, but being able to run it on my laptop is just as important as being able to move between one Kubernetes cluster and another.

Sheng Liang: I think that is a very important point. The vast majority of the accounts we are familiar with run Kubernetes in just one place. That really tells you something about the power of Kubernetes. The fact that people are using it in just one place tells you that portability is not the primary motivator. The primary benefit is that Kubernetes is really a rock-solid way to run containers.

JC: What is the reason that Kubernetes is not being used so much for portability today? Is the use case weak for container transport? I would guess that a lot of companies would want to move jobs up to the cloud and back again.

Sheng Liang:  I just don't think that portability is the No.1 requirement for companies using containers today. Procurement teams are excited about this capability but operations people just don't need it right now.

Shannon Williams: From the procurement side, knowing that your containers could be moved to another cloud gives you the assurance that you won't be locked in.

But portability in itself is a complex problem. Even Kubernetes does not solve all the issues of porting an application from one system to another. For instance, I may be running Kubernetes on AWS, but I may also be running Amazon Relational Database Service (RDS) as well. Kubernetes is not going to magically support both of these in migrating to another cloud. There is going to be work required. I think we are still a ways away from ubiquitous computing, but we are heading into a world where Kubernetes is how you run containers, and containers are going to be the way that all microservices and next-gen applications are built. It may even be how I run my legacy applications. So, having Kubernetes everywhere means that engineers can quickly understand all of these different infrastructure platforms without having to go through a heavy learning curve. With Kubernetes they will have already learned how to run containers reliably wherever they happen to be running.

JC: So how are people using Kubernetes? Where are the big use cases?

Shannon Williams: I think with Kubernetes we are seeing the same adoption pattern as with Amazon. The initial consumers of Kubernetes were people building early containerized applications, predominantly microservices, cloud-native web apps, mobile apps, gaming, etc. One of the first good use cases was Pokemon Go. It needed massively scalable systems and ran on Google Cloud. It needed systems that could handle rapid upgrades and changes. From there, adoption of Kubernetes moved to more traditional web applications, and then on to more traditional applications.

Every business is trying to adopt an innovative stance with their IT department.  We have a bunch of insurance companies as customers. We have media companies as customers. We have many government agencies as customers, such as the USDA -- they run containers to be able to deliver websites. They have lots of constituencies that they need to build durable web services for.  These have to run consistently. Kubernetes and containers give them a lot of HA (high availability).

A year or so ago we were in Phase 0 with this movement. Now I would say we are entering Phase 1 with many new use cases. Any organization that is forward-looking in their IT strategy is probably adopting containers and Kubernetes. This is the best architecture for building applications.

JC: Is there a physical limit to how far you can scale with Kubernetes?

Shannon Williams:  It is pretty darn big. You're talking about spanning maybe 5,000 servers.

Sheng Liang: I don't think there is a theoretical limit to how big you can go, but in practice the backing database will eventually become a bottleneck. That might be the limiting factor.

I think some deployments have hit 5,000 nodes, and each node these days could actually be a one-terabyte machine. So that is actually a lot of resources. I think it could be made bigger, but so far that seems to be enough.

Shannon Williams: The pressure to hit that maximum size of 5,000 nodes or more in a cluster really is not applicable to the vast majority of the market.

Sheng Liang: And you could always manage multiple clusters with load balancing. It is probably not a good practice anyway to put everything in one superbig cluster.

Generally, we are not seeing people create huge clusters across multiple data centers or multiple regions.

Shannon Williams: In fact, I would say that we are seeing the trend move in the opposite direction: the number of clusters in an organization is increasing faster than the size of any one cluster. What we see is that any application that is running probably has at least two clusters available -- one for testing and one for production. There are often many divisions inside a company that push this requirement forward. For instance, a large media company has more than 150 Kubernetes clusters -- all deployed by different employees in different regions and often running different versions of their software. They even have multiple cloud providers. I think we are heading in that direction, rather than one massive Kubernetes cluster to rule them all.

Sheng Liang: This is not what some of the web companies initially envisioned for Kubernetes. When Google originally developed Kubernetes, they were used to the model where you have a very big pool of resources with bare metal servers, and the challenge was how to schedule all the workloads inside of that pool. When enterprises started adopting Kubernetes, one thing that immediately changed was that they really don't have the operational maturity to put all their eggs in one basket and make that basket resilient. Second, all of them were using some form of virtualization, either VMware or a cloud, so the cost of creating small clusters came down. There is not a lot of overhead; you can have a lot of clusters without having to dedicate whole servers to them.

JC: Is there an opportunity then for the infrastructure provider, or the cloud provider, to add their own special sauce on top of Kubernetes?

Sheng Liang: The cloud guys are all starting to do that. Over time, I think they will do more. It is still early days. Amazon, for instance, has not yet commercially launched the service to the public, and DigitalOcean just announced its service. But Google has been offering Kubernetes as a service for three years, and Microsoft has been doing it for probably over a year. Google's Kubernetes service, which is probably the most advanced, now includes more management dashboards and UIs, but nothing really fancy yet.

What I would expect them to do -- and this would be really great from my perspective -- is to bring their entire service suite, including their databases, AI and ML capabilities, and make them available inside of Kubernetes.

Shannon Williams: Yeah, they will want to integrate their entire cloud ecosystems. That's one of the appealing things about cloud providers offering Kubernetes -- there will be some level of standardization but they will have the opportunity to differentiate for local requirements and flavors.

That kind of leads to the challenge we are addressing.

There are three big things that most organizations face. (1) You want to be able to run Kubernetes on-prem. Some teams may run it on VMware, some may wish to run it on bare metal. They would like to be able to run it on-prem in a way that is reliable, consistent and supported. For IT groups, there is a growing requirement to offer Kubernetes as a service in the same way they offer VMs. To do so, they must standardize Kubernetes. (2) There is another desire to manage all of these clusters in a way that complies with your organization's policies. There will be questions like "how do I manage multiple clusters in a centralized way even if some are on-prem and some are in the cloud?" This is a distro-level problem for Kubernetes. (3) Then there is a compliance and security concern with how to configure Kubernetes to enforce all of my access control policies, security policies, monitoring policies, etc. Those are the challenges that we are taking on with Rancher 2.0.
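
As a rough illustration of the centralized-management problem in (2), the sketch below polls a Rancher 2.x server's REST API for every cluster it manages, wherever each one runs. The server URL, API token, and the "state" field are placeholders for illustration, not a documented recipe.

```python
# Minimal sketch: list every cluster a Rancher 2.x server manages via its
# /v3 REST API. The URL, API token, and "state" field are placeholders.
import requests

RANCHER_URL = "https://rancher.example.com"  # placeholder server
API_TOKEN = "token-xxxxx:secret"             # placeholder Rancher API key

resp = requests.get(
    f"{RANCHER_URL}/v3/clusters",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()

# One line per managed cluster, whether on-prem, EKS, GKE, or elsewhere.
for cluster in resp.json()["data"]:
    print(cluster["name"], cluster.get("state", "unknown"))
```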

Jim Carroll, OND: Where does Rancher Labs fit in?

Shannon Williams, Rancher Labs: The challenge we are taking on is how to manage multiple Kubernetes clusters, including how to manage users and policies across multiple clusters in an organization.

Kubernetes is now available as a supported, enterprise-grade service for anybody in your company. At this scale, Kubernetes really becomes appealing to organizations as a standardization approach, not just so that workloads can easily move between places but so that workloads can be deployed to lots of places.  For instance, I might want some workloads to run on Alibaba Cloud for a project we are doing in China, or I might want to run some workloads on T-Systems's cloud for a project in Germany, where I have to comply with the new data privacy laws. I can now do those things with Kubernetes without having to understand the specific cloud parameters, benefits or limitations of any specific cloud. Kubernetes normalizes this experience. Rancher Labs makes it happen in a consistent way. That is a large part of what we are working on at Rancher Labs -- consistent distribution and consistent management of any cluster. We will manage the lifecycle of Amazon Kubernetes or Google Kubernetes, our Kubernetes, or new Kubernetes coming out of a dev lab.

JC: So the goal is to have the Rancher Labs experience running both on-prem and in the public cloud?

Shannon Williams, Rancher Labs: Exactly. So think about it like this. We have a distro of Kubernetes, and we can use it to implement Kubernetes for you on bare metal, on VMware, or in the cloud, if you prefer, so you can build exactly the version of Kubernetes that suits you. That is the first piece of value -- we'll give you Kubernetes wherever you need it. The second piece is that we will manage all of the Kubernetes clusters for you, including where you requested Kubernetes from Amazon or Google. You have the option of consuming from the cloud as you wish or staying on-prem. There is one other piece that we are working on. It is one thing to provide this normalized service. The additional layer is about engaging users.

What you are seeing with Kubernetes is similar to the cloud. Early adopters move in quickly and have no hesitancy in consuming it, but they represent maybe 1% or 2% of the users. The challenge for the IT department is to make this the preferred way to deliver resources. At this point, you want to encourage adoption, and that means developing a positive experience.

JC: Is your goal to have all app developers aware of the Kubernetes layer? Or is Kubernetes management really the responsibility of the IT managers who thus far are also responsible for running the network, running the storage, running the firewalls..?

Shannon Williams, Rancher Labs: Great question, because Kubernetes is actually part of the infrastructure, but it is also part of the application resiliency layer. It deals with how an application handles a physical infrastructure failure, for example. Do I spin up another container? Do I wait to let a user decide what to do? How do I connect these parts of an application and how do I manage the secrets that are deployed around it? How do I perform system monitoring and alerting of application status? Kubernetes is blurring the line.
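
As a minimal sketch of that resiliency layer, the Deployment below (created with the official Kubernetes Python client) declares three replicas; the controller "spins up another container" whenever a failure drops the count. The image, labels, and namespace are hypothetical.

```python
# Minimal sketch: declared-state resiliency via a Kubernetes Deployment,
# created with the official Python client. Image/labels are placeholders.
from kubernetes import client, config

config.load_kube_config()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web", namespace="default"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # the declared state the controller keeps enforcing
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.15")]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment("default", deployment)
```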

Sheng Liang, Rancher Labs: It is not really something the coders will be interested in. The interest in Kubernetes starts with DevOps and stops just before you get to storage and networking infrastructure management.

Shannon Williams, Rancher Labs: Kubernetes is becoming of interest to system architects -- the people who are designing how an application is going to be delivered. They are very aware that the app is going to be containerized and running in the cloud. The cloud-native architecture is pulling in developers. So I think it is a little more blurred than whether or not coders get to this level.

Sheng Liang, Rancher Labs: For instance, the Netflix guys used to talk a lot about how they developed applications. Most developers don't spend a lot of time worrying about how their applications are running. They have to spend most of their time worrying about the outcome. But they are highly aware of the architecture. Kubernetes is well regarded as the best way to develop such applications. Scalable, Resilient, Secure -- those are what's driving the acceptance of Kubernetes.

Shannon Williams, Rancher Labs:  I would add one more to the list -- quick to improve. There is a continuous pace of improvement with Kubernetes. I saw a great quote about containerization from a CIO, who said "I don't care about Docker or any other containers or Kubernetes. All I care about is continuous delivery. I care that we can improve our application continuously and it so happens that containers give us the best way to do that." The point is -- get more applications to your users in a safe, secure, and scalable process.

The Cloud-Native Computing Foundation (CNCF) aims to build next-generation systems that are more reliable, more secure, more scalable and Kubernetes is a big part of this effort.  That's why I've said the value of workload portability is often exaggerated.

Jim Carroll, OND:  Tell me about the Rancher Labs value proposition.

Shannon Williams, Rancher Labs: Our value proposition is centered on the idea that Kubernetes will become the common platform for cloud-native architecture. It is going to be really important for organizations to deliver that as a service reliably. It is going to be really important for them to understand how to secure it and how to enforce company policies. Mostly, it will enable people to run their applications in a standardized way. That's our focus.

As an open source software company that means we build the tooling that thousands of companies are going to use to adopt Kubernetes. Rancher has 10,000 organizations using our platform today with our version 1.0 product. I expect our version 2.0 product to be even more popular because it is built around this exploding market for Kubernetes.

JC:  What is the customer profile? When does it make sense to go from Kubernetes to Kubernetes plus Rancher?

Shannon Williams, Rancher Labs: Anywhere that Kubernetes and containers are being adopted, really. Our customers talk about the D-K-R stack: Docker-Kubernetes-Rancher.

JC: Is there a particular threshold or requirement that drives the need for Rancher?

Shannon Williams, Rancher Labs: Rancher is often something that users discover early in their exploration of Docker or Kubernetes. Once they have a cluster deployed, they start to wonder how they are going to manage it on an ongoing basis. This often occurs right at the beginning of a container deployment program -- day 1, day 2 or day 3.

As with any other open source software company, users can download our software for free. The point when a Rancher user becomes a Rancher customer usually comes when the deployment has reached a mission-critical level. When their business actually runs on the Kubernetes cluster, that's when we are asked to step in to provide support. We end up establishing a business relationship to support them with everything we build.

JC: And how does the business model work in a world of open source, container management? 

Shannon Williams, Rancher Labs: Customers purchase support subscriptions on an annual basis.

JC: Are you charging based on the number of clusters or nodes? 

Shannon Williams, Rancher Labs: Yes, based on the number of clusters and hosts. A team that is running their critical business systems on Kubernetes gets a lot of benefit in knowing that everything from the lowest level up, including the container runtime, the Kubernetes engine, the management platform, logging and monitoring, comes with unified support from us.

JC: Does support mean that you actually run the clusters on behalf of the clients? 

Shannon Williams, Rancher Labs: Well, no, they're running it on their systems or in the cloud. Like other open source software developers, we can provide incident response for issues like "why is this running differently in Amazon than on-prem?" We also provide training for their teams and collaboration on the technology evolution.

JC: What about the company itself? What are the big milestones for Rancher Labs?

Shannon Williams, Rancher Labs: We're growing really fast and now have about 85 employees, with offices in Australia, Japan, the UK and elsewhere, and we are still expanding. We have about 170 customer accounts worldwide, over 10,000 organizations using the product, and over 4 million downloads to date. The big goals are rolling out Version 2.0, which is now in commercial release, and driving adoption of Kubernetes across the board. We're hoping to get lots of feedback as Version 2.0 gets rolled out. So much of the opportunity now concerns the workload management layer. How do we make it easier for customers to deploy containerized applications? How can we smooth the rollout of containerized databases in a Kubernetes world? How do we solve the storage portability challenge? There are enormous opportunities to innovate in these areas. It is really exciting.

JC: What is needed to scale your company to the next level?

Shannon Williams, Rancher Labs: Right now we are in a good spot. We benefit from the magic of open source. We were able to grow this fast just on our Series B funding round because thousands of people downloaded our software and loved it. This has given us inroads with companies that often are the biggest in their industries. Lots of the Fortune 500 are now using Rancher to run critical business functions for their teams. We get to work with the most innovative parts of most organizations.

Sheng Liang, Rancher Labs: There is a lot of excitement. We just have to make sure that we keep our quality high and that we make our customers successful. I feel the market is still in its early days. There is a lot more work to make Kubernetes really the next big thing.

Shannon Williams, Rancher Labs: We're still a tiny minority inside of IT. It will be a ten-year journey but the pieces are coming together.


Tuesday, May 22, 2018

First release of open source Kata Containers

The open source Kata Containers project marked its version 1.0 release.

Kata Containers 1.0 delivers the fully integrated code bases of the two technologies contributed to form the foundation of the project: Intel Clear Containers from Intel Corporation and runV technology from Hyper.sh.

The developers say Kata Containers offer a fast and secure deployment option for anything from highly regulated workloads to untrusted code, spanning public/private cloud, containers-as-a-service and edge computing use cases.

The Kata Containers project was launched in December 2017 by the OpenStack Foundation. Arm, Canonical, Dell/EMC, Intel and Red Hat have announced financial support for the project. Other companies supporting the project include 99cloud, AWcloud, China Mobile, City Network, CoreOS, EasyStack, Fiberhome, Google, Huawei, JD.com, Mirantis, NetApp, SUSE, Tencent, Ucloud and UnitedStack.

Sunday, May 20, 2018

Project Airship aims for fully containerized clouds - OpenStack on Kubernetes

AT&T is working with SKT, Intel and the OpenStack Foundation to launch Project Airship, a new open infrastructure project that will offer a unified, declarative, fully containerized, and cloud-native platform. The idea is to let cloud operators manage sites at every stage from creation through minor and major updates, including configuration changes and OpenStack upgrades.

AT&T said the project builds on the foundation laid by the OpenStack-Helm project launched in 2017. In a blog posting, Amy Wheelus, vice president of Cloud and Domain 2.0 Platform Integration, says the initial focus is "to introduce OpenStack on Kubernetes (OOK) and the lifecycle management of the resulting cloud, with the scale, speed, resiliency, flexibility, and operational predictability demanded of network clouds."

She states that AT&T will use Airship as the foundation of its network cloud running over its 5G core, which will support the launch of 5G services in 12 cities later this year.  Airship will also be used by Akraino Edge Stack, which is a new Linux Foundation project for creating an open source software stack supporting high-availability cloud services optimized for edge computing systems and applications.

"We are pleased to bring continued innovation with Airship, extending the work we started in 2016 with the OpenStack and Kubernetes communities to create a continuum for modern and open infrastructure. Airship will bring new network edge capabilities to these stacks and Intel is committed to working with this project and the many other upstream projects to continue our focus of upstream first development and accelerating the industry," stated Imad Sousou, corporate vice president and general manager of the Open Source Technology Center at Intel.

http://airshipit.org

AT&T seeds Akraino project for carrier-scale edge computing

The Linux Foundation will host a new Akraino project to create an open source software stack supporting high-availability cloud services optimized for edge computing systems and applications.

To seed the project, AT&T is contributing code designed for carrier-scale edge computing applications running in virtual machines and containers.

“This project will bring the extensive work AT&T has already done to create low-latency, carrier-grade technology for the edge that addresses latency and reliability needs,” said Jim Zemlin, Executive Director of The Linux Foundation. “Akraino complements LF Networking projects like ONAP in automating services from edge to core. We’re pleased to welcome it to The Linux Foundation and invite the participation of others as we work together to form Akraino and establish its governance.”

“Akraino, coupled with ONAP and OpenStack, will help to accelerate progress towards development of next-generation, network-based edge services, fueling a new ecosystem of applications for 5G and IoT,” said Mazin Gilbert, Vice President of Advanced Technology at AT&T Labs.

Tuesday, May 8, 2018

IBM to adopt Red Hat OpenShift Container Platform for all its software

IBM will extend its private cloud platforms (IBM Cloud Private and IBM Cloud Private for Data) and its middleware offerings to Red Hat OpenShift Container Platform as Red Hat Certified Containers.

The agreement builds on IBM’s recent move to re-engineer its entire software portfolio with containers, including WebSphere, MQ Series and Db2.

The companies said there is growing consensus that container technologies are the best way to move applications across multiple IT footprints, from existing data centers to the public cloud and vice versa.

Under their agreement, enterprise customers will be able to more easily adopt a hybrid cloud strategy with IBM Cloud Private and Red Hat OpenShift serving as the common foundation. This will enable the IBM Cloud Private container platform to provide a single view of all enterprise data.


“With IBM’s recent move to containerize its middleware, today’s landmark partnership between IBM and Red Hat provides customers with more choice and flexibility. Our common vision for hybrid cloud using container architectures allows millions of enterprises – from banks, to airlines, to government organizations - to access leading technology from both companies without having to choose between public and private cloud,” stated Arvind Krishna, Senior Vice president, IBM Hybrid Cloud.

“Today’s enterprises need a succinct roadmap for digital transformation as well as confidence in deployment consistency across every IT footprint. By extending our long-standing collaboration with IBM, we’re bringing together two leading enterprise application platforms in Red Hat OpenShift Container Platform and IBM Cloud Private and adding the power of IBM’s software and cloud solutions. Together, we’re providing customers with a supported, consistent offering across their computing environments,” said Paul Cormier, President, Products and Technologies, Red Hat.


Red Hat OpenShift Kubernetes to extend across Azure and on-prem

Microsoft and Red Hat announced an expanded alliance to enable enterprises to run container-based applications across Microsoft Azure and on-premises using the Red Hat OpenShift Kubernetes platform.

Red Hat OpenShift on Azure is a fully-managed service that provides the flexibility to freely move applications between on-premises environments and Azure with a consistent platform.

The companies said they can enable applications to connect faster, and with enhanced security, between Azure and on-premises OpenShift clusters with hybrid networking. From the containers, enterprises will be able to access other Microsoft Azure services like Azure Cosmos DB, Azure Machine Learning, and Azure SQL DB.

A preview of Red Hat OpenShift on Azure is expected in the coming months. Red Hat OpenShift Container Platform and Red Hat Enterprise Linux on Azure and Azure Stack are currently available.

“Microsoft and Red Hat are aligned in our vision to deliver simplicity, choice and flexibility to enterprise developers building cloud-native applications. Today, we’re combining both companies’ leadership in Kubernetes, hybrid cloud and enterprise operating systems to simplify the complex process of container management, with an industry-first solution on Azure,” stated Scott Guthrie, executive vice president, Cloud and Enterprise Group, Microsoft.

Tuesday, April 18, 2017

Cloudways Launches Auto-scalable Cloud Hosting on Bare-metal containers

Cloudways, a Malta-based provider of managed Container-as-a-Service for web apps, has unveiled managed, auto-scalable cloud hosting on Kyup bare-metal containers, designed to scale up and down without human intervention and with virtually no downtime.

The company noted that auto-scalability is a key requirement for web app deployment, and websites deployed on managed Kyup containers from Cloudways are able to auto-scale in response to fluctuating traffic volumes. As a website registers high traffic levels, the server (RAM and CPU) scales up to prevent downtime, before reverting to normal when traffic declines, allowing improved resource utilisation.

Leveraging Cloudways Cloud Platform, users can quickly launch Kyup containers, while developers can select from a range of provided installers to create web apps. Users can also launch a simple PHP stack for deploying custom PHP-based applications.

In addition, Cloudways ThunderStack, a stack built on the latest web-server and caching technologies, is claimed to enable apps to run up to 300% faster than on other cloud platforms.

Cloudways also features CloudwaysBot, a bot that notifies website owners within the platform and via email, Slack, or HipChat when the server scales up and down. The bot can also provide users with server- and app-related insights.


Together with Kyup, Cloudways claims to offer the first managed Container-as-a-Service for web apps. With Cloudways, developers and designers can use managed, auto-scalable Kyup containers to efficiently deploy PHP-based apps. The platform features over 50 one-click features, including browser-based SSH, free SSL, Git and staging areas.

Wednesday, February 22, 2017

Diamanti Releases its Hyper-Converged Container Platform, Raises $18M

Diamanti (previously Datawise.io), a start-up based in San Jose, California, announced general availability of its Diamanti D10 hyper-converged container platform.

Diamanti said its platform enables enterprise IT organizations to deploy Docker containers in seconds with guaranteed service levels at a fraction of the cost of traditional data center infrastructure. The Diamanti D10 appliance ships pre-integrated with all of the container software, compute, networking and storage resources necessary to optimally deploy and operate high-performance containerized applications at production scale. Diamanti’s network and storage API contributions to popular Kubernetes open-source container orchestration software automate the container deployment process across all infrastructure resources, enabling production deployment in seconds versus days and weeks of manual approaches.

The company also announced the close of an $18 million Series B funding round to drive growth across product development, support, sales, and marketing, bringing the company’s total funding to more than $30 million. New investor Northgate Capital led the round, with additional investments from CRV, DFJ, Translink, and GSR Ventures.

“Modern enterprise container adopters are targeting data intensive applications,” said Jeff Chou, Diamanti CEO and co-founder. “Enterprises are delivering large-scale digital services in private and public clouds built around Docker containers faster than ever. These data-driven applications include real-time data pipelines and analytics that can overwhelm existing infrastructure built for monolithic virtualized applications. Serving this new class of agile digital services requires an operational model that brings predictable network and storage I/O service levels to workloads including Cassandra, MongoDB, and PostgreSQL across multiple environments as customers adopt varying hybrid cloud models.”

https://www.diamanti.com/

Tuesday, June 14, 2016

Portworx Debuts Enterprise-Class Storage for Containers

Portworx, a start-up based in Redwood City, California, introduced its purpose-built, enterprise-class storage for containers.

While containers are distributed and fast, legacy storage is siloed and slow, and cannot be quickly provisioned or scaled to respond to changing container workloads. Portworx said its solution for container storage provides per-container storage management and can keep pace with container scalability and bursts.

The company also claims that it can cut the cost of traditional storage arrays and virtual machines by up to 70 percent.

Key features and benefits of PX-Enterprise include:

  • Scale-out block storage deployed as a container, which minimizes required resources
  • Container-granular controls for storage capacity, performance and availability
  • Container-granular snapshots, which require less storage capacity
  • Replication, which adds an extra measure of redundancy
  • The ability to deploy in the cloud, on-premises or both, thus preventing vendor lock-in
  • A RESTful API and a command-line interface, which allow easy deployment and integration with containers (a usage sketch follows this list)
  • Unified storage with a global file namespace that makes storage easy to deploy and manage
  • Multi-cluster visibility for unified storage management
  • Predictive capacity management that provides proactive alerts before storage capacity is reached
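
As a rough sketch of what integration against such a REST API could look like, the snippet below polls volumes and flags any nearing capacity, echoing the predictive alerts above. The port, path, and field names are hypothetical placeholders, not Portworx's documented API.

```python
# Hypothetical sketch: poll a storage node's REST API and warn on volumes
# nearing capacity. Endpoint, port, and field names are placeholders.
import requests

PX_NODE = "http://px-node.example.com:9001"  # placeholder management endpoint

volumes = requests.get(f"{PX_NODE}/v1/volumes", timeout=5).json()
for vol in volumes:
    used, total = vol.get("used", 0), vol.get("size", 1)
    if used / total > 0.8:
        print(f"volume {vol.get('name')} is {used / total:.0%} full")
```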

“Like containers themselves, Portworx container-defined storage is radically simple, and the cost-savings are undeniable,” said Murli Thirumale, CEO and co-founder of Portworx. “For the first time, we’ve eliminated the complex and expensive decision of purchasing storage for containers in production. Portworx will enable enterprises of all sizes to realize the true potential of containers.”

http://www.portworx.com

Wednesday, February 17, 2016

Avi Launches Container Service Fabric for Mesosphere, Docker

Avi Networks and Mesosphere introduced an integrated solution to help enterprises build and deploy microservices applications at scale using Docker Containers.

The Avi Vantage Platform works with the Mesosphere Datacenter Operating System (DCOS) and Docker containers to provide a dynamic service fabric for micro-segmentation, service discovery, graphical application maps, load balancing, and autoscaling capabilities deployed across thousands of DCOS nodes.

Key capabilities of Avi Vantage and Mesosphere DCOS:

  • Full integration with the Mesosphere DCOS for automated, policy-driven deployments of apps and services
  • Comprehensive security services including micro-segmentation, DDoS protection, SSL, and web application security
  • Software load balancer with support for East-West traffic and automated configuration updates
  • Visibility to inter-service relationships with graphical application maps and application performance analytics
  • Predictive autoscaling of applications based on real time performance metrics

In conjunction with the launch, Avi Networks is also announcing free development and test licenses of its products that application developers can download along with complete documentation and a knowledge base.

“Mesosphere simplifies building and running distributed applications at scale, but until now supporting application services such as service discovery and load balancing have been disparate point solutions that can’t match the agility and automation that Mesosphere provides,” said Ranga Rajagopalan, CTO of Avi Networks. “Avi’s container service fabric automates application services and accelerates the path to production deployments.”

http://www.avinetworks.com

Monday, January 4, 2016

Blueprint: The (Near) Future of Enterprise Apps, Analytics, Big Data and The Cloud

by Derek Collison, Founder and CEO of Apcera

In 2016, technical innovation, combined with evolutionary trends, will bring rapid organizational changes and new competitive advantages to enterprises capable of adopting new technologies. Not surprisingly, however, the same dynamics will mean competitive risk for organizations that have not positioned themselves to easily absorb (and profit from) new technological changes. The following predictions touch on some of the areas in IT that I think will see the biggest evolutions in 2016 and beyond.
  1. Hadoop: old news in 24 months. Within the next two years, no one will be talking about big data and Apache Hadoop—at least, not as we think of the technology today. Machine Learning and AI will become so good and so fast that it will be possible to extract patterns, perform real-time predictions, and gain insight around causation and correlation without human intervention to model or prepare raw data. In order to function effectively, automated analytics typically need to be embedded in other systems that bring forth data. Next-generation AI-enabled machine learning systems (aka “big data,” even though this term will soon fade away), will be able to automatically assemble and deliver financial, marketing, scientific and other insights to managers, researchers, executive decision makers and consumers—giving them new levels of competitive advantage.
  2. Microservices will change how applications are developed. Containers will disrupt the industry by giving organizations the ability to build less and assemble more, since the isolation context is so small, fast and cheap. While microservices are inherently complex, new platforms are emerging that will make it possible for IT organizations to innovate at speed without compromising security, or performing the undifferentiated heavy lifting needed to construct these microservice systems in production. With robust auditing and logging tools, these platforms will be able to reason and decide how to effectively manage all IT resources, including containers, VMs and hybrids.
  3. The container ecosystem will continue to diversify and evolve. The coming year will see significant evolution in the container management space. Some container products will simply vanish from the market, while certain companies, not wanting to miss out on the hype, will simply acquire existing technology to claim a spot in the new ecosystem. This consolidation will shrink the size of the playing field, making viable container management choices easier for IT decision makers to identify. Over time, as container vendors seek to differentiate themselves, those that survive will be the ones that demonstrate the ability to orchestrate complex and blended workloads, in a manner that enterprises can manage with trust. The container will slowly become the most unimportant piece of the equation.
  4. True isolation and security will continue to push technology forward. Next year, look for creative advances in enabling technology, such as hybrid solutions, consisting of fast and lightweight virtual machines (VMs) that wrap containers, micro-task virtualization and unikernels. This is already beginning to happen. For example, Intel's Clear Containers (which are actually stripped-down VMs) use no more than 20 MB of memory each, making them look more like containers in terms of server overhead, and spin up in just 100-200 milliseconds. The goal here is to provide the isolation and security required by the enterprise, combined with the speed of the minimalist “Clear Linux OS.” Unikernels, another emerging technology, possess meaningful security benefits for organizations because they have an extremely small code footprint, which, by definition, reduces the size of the “attack surface.” In addition, unikernels feature low boot times, a performance characteristic always in favor with online customers who have dollars to spend and the burgeoning micro-services crowd.
This coming year is set to be a busy one. Technology is advancing at a pace that has never been seen before. The rise of machine learning in agile enterprises will truly transform the way information is gathered, analyzed and used. Microservices and containers are going to change the way software systems are designed and built, and we’ll see a lot of movement and acquisitions within the container ecosystem. And, as always, security will be a prominent concern; however, much of the new technology adopted next year will be built upon a foundation of isolation and security, not bolted on as an afterthought. Innovation that doesn’t compromise security will be a welcome change. 2016 is shaping up to be an exciting year.

About the Author

Derek Collison is CEO and founder of Apcera, provider of the trusted cloud platform for global 2000 companies. An industry veteran and pioneer in large-scale distributed systems and enterprise computing, Derek has held executive positions at TIBCO Software, Google and VMware. While at Google, he co-founded the AJAX APIs group and went on to VMware to design and architect the industry’s first open PaaS, Cloud Foundry. With numerous software patents and frequent speaking engagements, Derek is a recognized leader in distributed systems design and architecture and emerging cloud platforms.


Got an idea for a Blueprint column?  We welcome your ideas on next gen network architecture.
See our guidelines.

Monday, December 21, 2015

Oracle Acquires StackEngine for Container Management

Oracle has acquired StackEngine, a start-up specializing in container operations management.  Financial terms were not disclosed.

StackEngine, which is based in Austin, offers software to manage and automate Docker applications, giving organizations the power to compose, deploy, and automate resilient container-native applications. Its flagship product, Container Application Center, is an end-to-end container application management solution for developers, DevOps and IT operations teams that brings users through the entire container application lifecycle, from development to deployment.

All StackEngine employees will be joining Oracle as part of Oracle Public Cloud.

http://www.stackengine.com/
https://www.oracle.com/corporate/acquisitions/stackengine/index.html

Tuesday, December 8, 2015

Open Container Initiative Cites Progress and Growing Membership

The Open Container Initiative (OCI), which was launched earlier this year with a mission to host an open source, technical community and build a vendor-neutral, portable and open specification and runtime for container-based solutions, cited a number of milestones and a growing membership base.

Founding members, including nine new companies that have committed to the OCI, are: Amazon Web Services, Apcera, Apprenda, AT&T, ClusterHQ, Cisco, CoreOS, Datera, Dell, Docker, EMC, Fujitsu Limited, Goldman Sachs, Google, Hewlett Packard Enterprise, Huawei, IBM, Infoblox, Intel, Joyent, Kismatic, Kyup, Mesosphere, Microsoft, Midokura, Nutanix, Odin, Oracle, Pivotal, Polyverse, Portworx, Rancher Labs, Red Hat, Resin.io, Scalock, Sysdig, SUSE, Twistlock, Twitter, Univa, Verizon Labs, VMware and Weaveworks.

Key points:

  • The OCI follows an open governance model that guides the project’s technical roadmap, currently available on GitHub. 
  • A Technical Developer Community (TDC) has been formed for the project and includes independent maintainers as well as maintainers from founding members including Docker, CoreOS, Google and Huawei. The TDC is responsible for maintaining the project and handling the releases of both the runtime and specification. 
  • A Technical Oversight Board (TOB) will be appointed by the members of the OCI and the TDC. The TOB will work closely with the TDC to ensure cross-project consistencies and workflows. The governance model also includes a Trademark Board to oversee the development and use of the OCI’s trademarks and certifications. 
  • As part of the original formation of the OCI in June of this year, Docker has donated both a draft specification for the base format and runtime and the code associated with a reference implementation of that specification. 
  • Since the OCI’s inception, there have been two releases of the specification and six releases of runc. Docker will be integrating the latest version of runc into future releases of Docker, and Cloud Foundry has implemented runc as part of its Garden Project (see the sketch below).
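
For a sense of what the donated runtime looks like in practice, here is a minimal runc session. This is a sketch only: it assumes runc is installed and a container root filesystem has already been unpacked into ./rootfs, and the subcommand names have shifted across the early releases mentioned above.

```
# Generate a default OCI runtime spec (config.json) in the current directory;
# it references ./rootfs as the container's root filesystem
runc spec

# Launch a container from that spec under an ID of our choosing
sudo runc run demo-container
```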

“Collaborative development continues to prove its ability to transform markets and advance emerging technologies. The OCI is a welcome addition to The Linux Foundation Collaborative Project ecosystem,” said Jim Zemlin, executive director, The Linux Foundation. “This level of industry support illustrates the prevalence of container technologies across IT infrastructures, much in the way we saw with virtualization 10 years ago. I’m very excited to support the work of this community.”

https://www.opencontainers.org
http://www.linuxfoundation.org

Thursday, November 12, 2015

Rancher Can Now Orchestrate Persistent Storage Services for Docker

Rancher Labs, a start-up based in Cupertino, California, now offers the ability to orchestrate Persistent Storage Services for Docker.  This makes it easier for developers to define and deploy the exact storage capabilities needed for their containerized applications.

Rancher, which is the company's flagship software tool for building a private container service, can be used with a Docker 1.9 volume plugin to:

  • Orchestrate the deployment and configuration of storage services directly on container hosts, utilizing any software-defined storage platform shipped as containers, such as Gluster, Ceph and NexentaEdge.
  • Launch applications using Docker Compose that can leverage these storage services to automatically create and mount persistent Docker volumes to support stateful application services such as traditional databases.
  • Utilize any vendor-specific advanced storage features offered by storage services such as snapshot, backup, remote replication and data analytics.
  • Deploy an application with all of its necessary storage services on any virtual machine or bare metal server, running in any public cloud or private data center.

For example, a user deploying a web application running on Tomcat and MySQL can also use Rancher to deploy a Persistent Storage Service based on Gluster. The Gluster storage service can then be distributed across multiple disks running on different hosts within the Rancher cluster. When the MySQL service is launched, Rancher can orchestrate the deployment of a block storage volume from the Gluster service, which is then connected to the MySQL container as a Docker Volume. Users can leverage GlusterFS geo-replication to back up the application to a remote site.
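
To make the mechanics concrete, here is a minimal sketch of the underlying Docker 1.9 volume-plugin flow. The `glusterfs` driver name is an assumption for illustration; in practice the driver is whatever storage service Rancher has deployed on the host.

```
# Create a named volume through a (hypothetical) Gluster volume plugin
docker volume create -d glusterfs --name mysql-data

# Attach the persistent volume to the MySQL container
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret \
    -v mysql-data:/var/lib/mysql mysql
```

If the container is later rescheduled to another host, the same named volume can be re-attached there, which is what makes the stateful service portable.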

Once the application is deployed, Rancher continues to manage both the application and storage services, monitoring hosts and disks for failures and responding whenever necessary to ensure the availability of the application.

“As developers begin to deploy more stateful applications using Docker, it is critical that storage services become as easy to deploy and migrate as regular Docker containers,” said Sheng Liang, co-founder and CEO of Rancher Labs. “We’re thrilled to announce this latest update to Rancher, as we believe persistent storage services will unleash a new wave of innovation for the Docker ecosystem, community and users.”

http://www.rancher.com

Rancher Labs Launches its Container Infrastructure Platform

Rancher Labs, a start-up based in Cupertino, California, announced the beta release of its platform for running Docker in production. It includes a fully integrated set of infrastructure services purpose-built for containers, including networking, storage management, load balancing, service discovery, monitoring, and resource management. Rancher connects these infrastructure services with standard Docker plugins and application management tools, such as Docker Compose, to make it simple for organizations to deploy and manage containerized workloads on any infrastructure.

Key Rancher features:

  • Cross-host networking: Rancher creates a private software-defined network for each user, allowing secure communication between containers across hosts and even clouds.
  • Container load balancing: Rancher provides an integrated, elastic load balancing service to distribute traffic between containers or services.
  • Storage management: Rancher supports live snapshot and backup of Docker volumes, enabling users to back up stateful containers and stateful services.
  • Service discovery: Rancher implements a distributed DNS-based service discovery function with integrated health checking that allows containers to automatically register themselves as services, as well as services to dynamically discover each other over the network.
  • Service upgrades: Rancher makes it easy for users to upgrade existing container services by allowing service cloning and redirection of service requests. This makes it possible to validate services against their dependencies before live traffic is directed to the newly upgraded services.
  • Resource management: Rancher supports Docker Machine, a powerful tool for provisioning hosts directly from cloud providers. Rancher then monitors host resources and manages container deployment (see the sketch after this list).
  • Native Docker support: Rancher supports native Docker management of containers. Users can directly provision containers using the Docker API or CLI, as well as use Docker Compose for more complex application management functions. Third-party tools that are built on the Docker API, such as Kubernetes, work seamlessly on Rancher.
  • Multi-tenancy and user management: Rancher is designed for multiple users and allows organizations to collaborate throughout the application lifecycle. By connecting with existing directory services, Rancher allows users to create separate development, testing, and production environments and invite their peers to collaboratively manage resources and applications.
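
As a rough sketch of the resource-management flow described above: Docker Machine provisions the host, and the Rancher server then manages containers on it. The `virtualbox` driver and the `rancher/server` image are assumptions for illustration.

```
# Provision a Docker host with Docker Machine
docker-machine create --driver virtualbox rancher-host-1

# Point the local Docker client at the new host
eval "$(docker-machine env rancher-host-1)"

# Launch the Rancher management server on that host
docker run -d --restart=always -p 8080:8080 rancher/server
```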

Monday, August 24, 2015

Latest Windows Server Preview Adds Container Capabilities

Microsoft released a new preview of Windows Server 2016 and System Center 2016 that includes the first public preview of Windows Server Containers, new Nano Server functionality, and software-defined data center enhancements.

Microsoft says Windows Server Containers create a highly agile Windows Server environment, enabling developers to work with the languages of their choice – whether .NET, ASP.NET and PowerShell, or Python, Ruby on Rails, Java, etc.

The release builds on a partnership with Docker to offer container and DevOps benefits to Linux and Windows Server users alike. Windows Server Containers are now part of the Docker open source project. These containers can be deployed and managed either using PowerShell or the Docker client.
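
As a rough illustration of that workflow, and assuming a preview host running the Docker daemon with the preview's `windowsservercore` base image installed (the image name is taken from the technical preview and may differ), driving a Windows Server Container from the Docker client looks much like it does on Linux:

```
# List the base images available on the Windows Server 2016 preview host
docker images

# Start an interactive Windows Server Container from the base image
docker run -it windowsservercore cmd
```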

In addition to working with Docker to deliver Windows Server Containers, Microsoft plans to support choice and flexibility around containers through:

  • Ensuring a first-rate experience for containers on Azure. Microsoft recently released Docker VM Extensions for Linux on Azure, Docker CLI support on Windows, and Visual Studio Tools for Docker.
  • Contributing to the open source development of the Docker Engine for Windows Server, with the goal of enabling the Docker client to manage multi-container applications using both Linux and Windows containers, regardless of the hosting environment or cloud provider.
  • Joining the Open Container Initiative to deliver an open, universal container image format and runtime under the Linux Foundation.
  • Expanding the ecosystem, through work with Canonical around the LXD REST API, a cross-platform container management layer that will bring new container innovation to Windows and Ubuntu developers.
  • Updates to Visual Studio and Visual Studio Online for Windows Server Container.
  • A future preview of Windows Server 2016 will include Hyper-V Containers, a second container deployment option that will provide higher isolation using optimized virtualization and a Windows Server operating system that separates containers from each other and from the host operating system. 

The latest preview of Windows Server 2016 also includes new Azure-inspired software-defined datacenter features, extending the functionality of Microsoft's leading operating system and application platform:

  • Enhanced Nano Server functionality: adds a new Emergency Management Console.
  • Simplified software-defined networking: a scalable network controller for centralized network configuration, as well as a software load balancer for high availability and performance.
  • Extended security: Shielded VMs, which enable isolation between the underlying host and virtual machines, help protect resources in shared environments. 
  • Management: System Center feature enhancements make it easier to manage virtualized environments, including support for rolling upgrades, shielded VMs and guarded host support, and automated maintenance windows.


http://www.microsoft.com/en-us/evalcenter/evaluate-windows-server-technical-preview

Monday, August 17, 2015

IBM Unveils LinuxONE Mainframes with Virtualization and Container Support

IBM introduced two Linux mainframe servers – called LinuxONE – designed for hybrid clouds and the new era of open systems.  With its new LinuxONE mainframe, IBM will enable open source and industry tools and software, including Apache Spark, Node.js, MongoDB, MariaDB, PostgreSQL, Chef and Docker. IBM, which helped pioneer virtualization on the mainframe, will now offer more choices for virtualization by enabling the new LinuxONE systems to be provisioned as a virtual machine through the open standards-based KVM hypervisor.

Some highlights of the announcement:

  • LinuxONE is a new portfolio of hardware, software and services solutions, providing two distinct Linux systems for large enterprises and mid-size businesses.
  • LinuxONE Emperor, based on the IBM z13, is described as the world’s most advanced Linux system with the fastest processor in the industry. The system is capable of analyzing transactions in “real time” and can be used to help prevent fraud as it is occurring. The system can scale up to 8,000 virtual machines or hundreds of thousands of containers – currently the most of any single Linux system.
  • LinuxONE Rockhopper, an entry into the portfolio, is designed for clients and emerging markets seeking the speed, security and availability of the mainframe but in a smaller package.
  • SUSE, which provides Linux distribution for the mainframe, will now support KVM, giving clients a new hypervisor option.
  • Canonical and IBM also announced plans to create an Ubuntu distribution for LinuxONE and z Systems.
  • IBM will contribute a large amount of its mainframe code to the open source community – this includes code to help enterprises identify issues and help prevent failures before they happen, help improve performance across platforms and enable better integration with the broader network and cloud.
  • IBM is joining the Linux Foundation's new “Open Mainframe Project,” which brings together a collaboration of nearly a dozen organizations across academia, government and corporate sectors to advance development and adoption of Linux on the mainframe.
"Fifteen years ago IBM surprised the industry by putting Linux on the mainframe, and today more than a third of IBM mainframe clients are running Linux,” said Tom Rosamilia, senior vice president, IBM Systems. “We are deepening our commitment to the open source community by combining the best of the open world with the most advanced system in the world in order to help clients embrace new mobile and hybrid cloud workloads. Building on the success of Linux on the mainframe, we continue to push the limits beyond the capabilities of commodity servers that are not designed for security and performance at extreme scale."

http://www.ibm.com/linuxone
http://ibm.biz/linuxONEimages

Wednesday, August 12, 2015

Docker Content Trust Ensures Integrity of Containers

A newly released Docker Content Trust capability uses digital signatures to ensure the integrity of Dockerized content. The idea is to allow Docker users to operate exclusively on signed content when building or deploying Dockerized applications. The capability is built using Notary and The Update Framework.

When enabled, Docker Content Trust ensures that all operations using a remote registry enforce the signing and verification of images. In particular, Docker’s central commands `push`, `pull`, `build`, `create` and `run` will only operate on images that either have content signatures or explicit content hashes.
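
As a quick illustration, content trust is enabled per shell session with an environment variable, after which those commands refuse to operate on unsigned tags:

```
# Opt in to Docker Content Trust for this shell session
export DOCKER_CONTENT_TRUST=1

# Pulls now verify signatures; an unsigned tag fails with an error
docker pull ubuntu:latest

# A single unsigned operation can still be forced explicitly
docker pull --disable-content-trust someorg/unsigned-image:latest
```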

Docker said it will be signing the Docker Hub Official Repos, providing users with a trusted set of base images they can use to build distributed applications.

“As organizations evolve from a monolithic software architecture to distributed applications, the secure distribution of software becomes increasingly difficult to solve,” said Diogo Mónica, Security Lead for Docker. “Without a standard method for validating the integrity of content, Docker has the unique opportunity to leapfrog the status quo and build a system that meets the strongest standard for software distribution. With Docker Content Trust, users have a solution that works across any infrastructure, offering security guarantees that were not previously available to them.”

Docker Content Trust also generates a Timestamp key that provides protection against replay attacks, which would allow a malicious actor to serve signed but expired content. Docker manages the Timestamp key for the user, reducing the hassle of having to constantly refresh the content client-side.

https://docs.docker.com/security/trust/content_trust/
