
Wednesday, December 20, 2017

Top 5 Container Predictions for 2018

by David Messina, CMO, Docker

Prediction #1: The next big security breach will be foiled by containers

As we witnessed with the Equifax breach in early September, data breaches can place personal data at risk and, in doing so, erode consumer confidence. But what if you could prevent a major breach simply by placing the software in a container? The Equifax breach occurred when a piece of web software was vulnerable and exposed to hackers. Containers reduce the attack surface available for exploitation, greatly increasing the difficulty, and minimizing the possibility, of many forms of compromise. In many cases, simple steps like using read-only containers will fully mitigate a broad range of attack vectors.

From being ephemeral and isolated in nature to enabling frequent patching and scanning against the latest CVEs, containers are vital to securing the software supply chain. Containers will be more widely relied upon in the coming year to combat future threats.
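The read-only step mentioned above is a small configuration change. A minimal sketch in Docker Compose (the service name and image are illustrative, not from the original post):

```yaml
services:
  web:
    image: nginx
    read_only: true   # root filesystem is mounted read-only
    tmpfs:
      - /tmp          # writable scratch space, discarded when the container stops
```

The same effect is available on the command line via the `--read-only` flag to `docker run`. With the root filesystem immutable, an attacker who compromises the process cannot persist tooling or tamper with the application's files.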

Prediction #2: Complexity and time to market will thwart PaaS adoption

As calls for accelerated cloud strategies grow louder across the Global 10K, it's becoming increasingly clear that outdated Platform as a Service (PaaS) frameworks are not equipped to handle the demands of managing all of the applications that make up today's modern enterprise. For the past few years, utilizing PaaS has been considered a cutting-edge approach to migrating apps to the cloud. What is often overlooked is the time required to set up PaaS frameworks, retrain employees and re-code each application - efforts that can take a year or more to complete. In 2018, we expect to see PaaS adoption stall as enterprises recognize that the time to value is too long for the current and future pace of business. This will give way to accelerated Container as a Service (CaaS) platform adoption as enterprises look to migrate more workloads to the cloud while achieving greater agility, innovation, and cost-efficiencies.

Prediction #3: Containers will break the 80/20 Rule for IT budgeting 

It’s widely understood that CIOs typically commit 80% of their budget to maintenance, leaving only 20% for innovation - a major roadblock on the path to digital transformation. We expect this to change in 2018 as CIOs rewrite the 80/20 rule in favor of innovation by unlocking new methods for managing and modernizing their legacy apps. In the past, application modernization required refactoring apps, ripping and replacing existing infrastructure, and implementing new processes. Instead, enterprises are now using containerization to achieve meaningful application modernization in days. Organizations will reap the benefits of cloud portability and security while using the significant cost-efficiencies to reinvest their savings in more strategic digitization efforts.

Prediction #4: Security, not orchestration, will write the next chapter of containerization

2016, and even some of 2017, may have been about the orchestration wars, but now that companies like Docker offer a choice of orchestrators, some might argue that orchestration has been largely commoditized. With container adoption expected to grow into a nearly $3 billion market by 2020, according to 451 Research, and Docker itself seeing more than one billion downloads bi-weekly, security will be the next frontier that companies need to address. Ironically, the threats will come from the applications themselves, making “container boundaries” imperative for segmenting and isolating threats. The container boundary can also make it harder for an attacker to exfiltrate data, improving the odds of detection. Securing the software supply chain will be paramount to safeguarding the application journey.

Prediction #5: CIOs will accelerate plans for digital transformation with containers

Although “digital transformation” has become somewhat of a buzzword of late, enterprises certainly accept the idea behind it - and with a greater sense of urgency. According to Gartner, as many as two-thirds of business leaders are concerned that their companies aren’t moving fast enough on the digital transformation front, leading to potential competitive disadvantages. In 2018, CIOs will increasingly feel the pressure to speed up digitization efforts and will accelerate their journey through containers. As businesses build out and implement strategies around cloud migration, DevOps and microservices, containers will play an increasingly important role in achieving these initiatives. By Dockerizing their applications, our enterprise customers have experienced the immediate benefits of digital transformation: faster app delivery times, portability across environments, hardened security and more.

Thursday, July 7, 2016

Can Docker Become the Dominant Port Authority for Workloads Between Clouds?


If you think these little snippets of Linux source code have limited revenue-bearing potential, given that anyone can activate them on an open source basis, then you might want to consider DockerCon 2016, which was held June 19-20 at the Washington State Convention Center in Seattle.  DockerCon is the annual technology conference of Docker Inc., the much-touted San Francisco-based start-up that developed and popularized Docker runtime Linux containers, which are no longer proprietary but hosted as an open source project under the Linux Foundation.  Docker Inc. (the company) is among the rarefied “unicorns” of Silicon Valley – start-ups with valuations exceeding $1 billion based on a really hot idea, but with nascent business models and perhaps limited revenue streams at this stage of their development.



Even with a conference ticket price of $990, DockerCon 2016 in Seattle was completely sold out.  Over 4,000 attendees showed up and there was a substantial waiting list. For comparison, last year, DockerCon in San Francisco had about 2,000 people. The inaugural DockerCon event in 2014 was attended by about 500 people. The conference featured company keynotes, technology demonstrations, customer testimonials, and an exhibition area with dozens of vendors rushing into this space. Big companies exhibiting at DockerCon included Microsoft, IBM, AWS, Cisco, NetApp, HPE and EMC.
 
Punching well above its weight, Docker rented Seattle's Space Needle and the EMP museum complex to feed and entertain all 4,000+ guests on the evening of the summer solstice.

Clearly, Docker’s investors are making a big bet that the company can grow beyond being the inventor of an open source standard.

Why should the networking and telecom community care about a new virtualization format at the OS level?

There is a game plan afoot to put Docker at the crossroads of application virtualization, cyber security, service orchestration, and cloud connectivity.  Docker enables applications to be packaged into a standard shipping container, so that the software within runs the same regardless of the underlying infrastructure.  Compared with virtual machines (VMs), containers launch more quickly.  The container includes the application and all of its dependencies.  At the same time, containers make better use of the underlying servers because they share the host's kernel, running as isolated processes in user space on the host operating system.

The vision is to allow these shipping containers to move easily between servers or between private and public clouds.  As such, by controlling the movement of containers, you essentially control the movement of workloads locally and across the wide area network.  The applications running within containers need to remain securely connected to data and processing resources wherever the container may be located.  Thus, software-defined networking becomes part of the containerization paradigm.  Not surprisingly, we are seeing a lot of Silicon Valley’s networking talent move from the established hardware vendors in San Jose to the new generation of software start-ups in San Francisco, as exemplified by Docker Inc.
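The "application plus all of its dependencies" packaging described above can be sketched with a minimal Dockerfile (the Python app and file names here are hypothetical, for illustration only):

```dockerfile
FROM python:3-slim                    # base image supplies the language runtime
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # dependencies are baked into the image
COPY app.py .
CMD ["python", "app.py"]              # runs as an isolated process, sharing the host kernel
```

Building this image (for example, `docker build -t myapp .`) produces an artifact that runs identically on any host with a Docker engine - the portability claim at the heart of the shipping-container analogy.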
 
The Timeline of Significant Events for Docker

Docker was started by Solomon Hykes as an internal project at dotCloud, a platform-as-a-service company based in France and founded around 2011. The initial Docker work appears to have started around 2012 or 2013, and the project soon grew to become the major focus of the company, which adopted the Docker name.  The official launch of Docker occurred on March 13, 2013 in a presentation by Solomon Hykes entitled “The Future of Linux Containers” at the PyCon industry conference.  Soon after, the Docker whale icon was posted and a developer community began to form.

In May 2013, dotCloud hired Ben Golub as CEO with a goal of pivoting from the PaaS business to the huge opportunity it now saw in building and orchestrating cloud containers. Previously, Golub was CEO of Gluster, another open source software company, but one focused on scale-out storage.  Gluster offered an open-source, software-based network-attached filesystem that could be installed on commodity hardware.  The Silicon Valley company successfully raised venture funding, grew its customer base quickly, and was acquired by Red Hat in 2011.

Within three months of joining Docker, Golub established an alliance with Red Hat. A second round of venture funding, led by Greylock Partners, brought in $15 million. Headquarters were moved to San Francisco.  In June 2014, Docker 1.0 was officially released, marking an important milestone for the project.

In August 2014, Docker sold off its original dotCloud (PaaS) business to Berlin-based cloudControl; the operation was shut down earlier this year after a two-year struggle. Other dotCloud engineers credited with work on the initial project include Andrea Luzzardi and Francois-Xavier Bourlet. A month later, in September 2014, Docker secured $40 million in a Series C funding round led by Sequoia Capital that included existing investors Benchmark, Greylock Partners, Insight Ventures, Trinity Ventures, and Jerry Yang.

In October 2014, Microsoft announced integration of the Docker engine into its upcoming Windows Server release, and native support for the Docker client role in Windows.  In December 2014, IBM announced a strategic partnership with Docker to integrate the container paradigm into the IBM Cloud.  Half a year later, in June 2015, IBM's Bluemix platform-as-a-service began supporting Docker containers. IBM Bluemix also supports Cloud Foundry and OpenStack as key tools for designing portable distributed applications. Additionally, IBM claims the industry's best performance of Java on Docker: IBM Java is optimized to run two times faster and occupy half the memory when used with the IBM Containers Service. Moreover, as a Docker-based service, IBM Containers include open features and interfaces such as the new Docker Compose orchestration services.

In March 2015, Docker acquired SocketPlane, a start-up focused on Docker-native software defined networking. SocketPlane had only been founded a few months earlier by Madhu Venugopal, who previously worked on SDN and OpenDaylight while at Cisco Systems, before joining Red Hat as Senior Principal Software Engineer.  These SDN capabilities are now being integrated into Docker.

In April 2015, Docker raised $95 million in a Series D round of funding led by Insight Venture Partners with new contributions from Coatue, Goldman Sachs and Northern Trust. Existing investors Benchmark, Greylock Partners, Sequoia Capital, Trinity Ventures and Jerry Yang’s AME Cloud Ventures also participated in the round.

In October 2015, Docker acquired Tutum, a start-up based in New York City. Tutum developed a cloud service that helps IT teams to automate their workflows when building, shipping or running distributed applications. Tutum launched its service in October 2013. 

In November 2015, Docker extended its Series D funding round by adding $18 million in new investment, bringing total funding for Docker to $180 million.

In January 2016, Docker acquired Unikernel Systems, a start-up focused on unikernel development, for an undisclosed sum. Unikernel Systems, which was based in Cambridge, UK, was founded by pioneers from Xen, the open-source virtualization platform. Unikernels are defined by the company as specialized, single-address-space machine images constructed by using library operating systems. The idea is to reduce complexity by compiling source code into a custom operating system that includes only the functionality required by the application logic. The unikernel technology, including orchestration and networking, is expected to be integrated with the Docker runtime, enabling users to choose how they ‘containerize’ and manage their application - from the data center to the cloud to the Internet of Things.

Finally, at this year’s DockerCon conference, Docker announced that it will add built-in orchestration capabilities to its Docker Engine.  This will enable IT managers to form a self-organizing, self-healing pool of machines on which to run multi-container distributed applications – both traditional apps and microservices – at scale in production. Specifically, Docker 1.12 will offer an optional “Swarm mode” feature that users can select to “turn on” built-in orchestration, or they can elect to use either their own custom tooling or third-party orchestrators that run on Docker Engine.

The upcoming Docker 1.12 release simplifies the process of creating groups of Docker Engines, also known as swarms, which are now backed by automated service discovery and a built-in distributed datastore. The company said that unlike other systems, the swarm itself has no single point of failure: the state of all services is replicated in real time across a group of managers, so containers can be rescheduled after any node failure. Docker orchestration includes a unique in-memory caching layer that maintains the state of the entire swarm, providing a non-blocking architecture that assures scheduling performance even during peak times. The company claims the new orchestration capabilities go above and beyond Kubernetes.
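The Swarm mode workflow described above reduces to a handful of commands. A sketch, assuming a Docker 1.12 or later engine is installed; the service name, replica count, and image are illustrative:

```shell
# turn on built-in orchestration on the first node
docker swarm init

# run a replicated service across the self-organizing pool
docker service create --name web --replicas 3 -p 80:80 nginx

# scaling (and rescheduling after node failure) is handled by the swarm
docker service scale web=5
```

Additional engines join the pool with `docker swarm join`, using the token printed by the init command; the managers then replicate service state among themselves as described above.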

Tuesday, June 21, 2016

Software-defined storage for Containers



Eric Carter provides a quick overview of Hedvig's software-defined storage for containers at #DockerCon.

See video: https://youtu.be/151KSqYPfQM


Monday, June 20, 2016

Datadog Sees Docker Deployments Increasing

Docker market share has grown 30% in one year with larger enterprise companies leading adoption, according to a recent survey by Datadog, which offers a monitoring service for dynamic cloud infrastructure.

Datadog’s Docker Adoption Research was based on a sample of 10,000 companies and tracks real usage, which the company calls the largest and most accurate review of Docker adoption published to date. Datadog found that two-thirds of companies that try Docker adopt it within one month and quintuple their usage within nine months. These statistics suggest that containerization is solving real, immediate problems for companies at scale.


Datadog also introduced its Automated Service Discovery, a service that enables teams to seamlessly monitor Dockerized infrastructure without interruption as it expands, contracts, and shifts across hosts, by continuously listening to Docker events. Whenever a container is created or started, the Datadog Agent identifies which service is running, then starts collecting and reporting metrics. Whenever a container is stopped or destroyed, the Agent recognizes that as well.

“Datadog has been adopted by thousands of leading enterprise companies transitioning away from legacy IT,” said Olivier Pomel, Co-Founder and CEO of Datadog. “This gives us unique insight into where the industry is heading, what innovative technologies are being adopted, and allows us to better service these customers.”

http://www.datadog.com
