Created July 24, 2014 17:04
ServiceVirtualization.com: What were the main challenges and limitations of the basic Linux container specification, and how does Docker address these?
Turnbull: Containers have been around for a while. In Linux, they are most strongly represented by LXC. The big challenge with LXC was that it was written by kernel engineers and it's not really user friendly. Hence you don't see a lot of LXC deployments aside from companies like Facebook, Etsy and Google with highly skilled engineering teams. Docker makes this easier by providing a user interface around the kernel's container features and a workflow for the things you want to build in containers.
ServiceVirtualization.com: What have been the limitations of virtual machines for providing consistency across software development, QA and production environments?
Turnbull: The big performance challenge with a traditional VM is overhead. Every time you deploy a VM, you are deploying a hypervisor on top of the OS infrastructure. Up to 20 percent of the hardware is consumed by the hypervisor, and every VM carries a full copy of the guest OS and kernel. You cannot deploy the application without deploying multiple gigabytes of additional software.
Docker has a thin layer between you and the container. You get back that 20 percent of your machine, plus each of the containers tends to be smaller than the VM.
On top of that, there are performance improvements. IBM has published research showing five to ten times the performance of VMs, because you are no longer going through the hypervisor. We are finding that most people get about ten times the performance compared with VMs. So you not only get additional performance from the containers, you also get considerably better bang for the buck out of the bare metal underneath.
There are also workflow improvements. Every time you deploy a VM, you have to take the OS image and build or apply the latest packages, install your applications and do management stuff. Then if you want to make a change, you need to upgrade the application on the VM or rebuild the VM.
Rebuilding and repackaging take time and lead to entropy, since every VM ends up slightly different. Setting up a VM can take hours, while a new Docker image can be built in three or four minutes. Launching a traditional VM might also take three to five minutes, versus under a second for a Docker container.
So, for example, tests running on traditional VMs take 10 minutes to set up and another 10 minutes to run. Using Docker you can eliminate that 10 minute set up time, cutting your test run time in half. In the continuous integration world, that is pretty powerful.
ServiceVirtualization.com: What is the best way to connect multiple applications together using containers?
Turnbull: When implementing complex applications, we generally recommend that people deploy each component in a separate container. You would have a database in one container, an application server in another, and the Web presentation layer in another. We provide features and tools for connecting those containers together.
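As a minimal sketch of that per-component layout, using Docker's container linking from this era (the `myorg/app-server` and `myorg/web-frontend` image names are hypothetical placeholders, and the commands assume a running Docker daemon):

```shell
# Database layer in its own container (official postgres image)
docker run -d --name db postgres

# Application server in a second container, linked to the database
# (--link injects the db container's address into the app container)
docker run -d --name app --link db:db myorg/app-server

# Web presentation layer in a third container, linked to the app
# server and published on the host's port 80
docker run -d --name web --link app:app -p 80:80 myorg/web-frontend
```

Each tier can then be rebuilt, replaced, or scaled independently of the others.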
One of the things we announced at DockerCon was a new orchestration tool called Libswarm that will be released next month. That will help you do cross-container orchestration for building and deploying large-scale applications with hundreds of containers.
ServiceVirtualization.com: In what ways does the Docker container address the goal of writing applications once and then leveraging them across multiple cloud environments, and what are the limitations for doing so today?
Turnbull: Docker provides an abstraction layer on top of platforms and cloud environments. You can deploy a Docker host on a VMware server, on Amazon, on bare metal or in a variety of other local and cloud platforms. You can then easily use Docker to build, move and run applications portably across those environments.
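A sketch of what that portability looks like in practice (the image name `myorg/myapp` and the `app.sh` script are hypothetical; the commands assume a Docker daemon is available):

```shell
# Write a minimal Dockerfile describing the application once
cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
COPY app.sh /app.sh
CMD ["/bin/bash", "/app.sh"]
EOF

# Build the image on any Docker host
docker build -t myorg/myapp .

# The identical run command then works unchanged on a laptop,
# a VMware guest, an EC2 instance, or bare metal
docker run -d myorg/myapp
```

The image, not the host, carries the application's dependencies, which is what makes the same artifact runnable across environments.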
ServiceVirtualization.com: What is the current state of tools and methodologies for creating, testing and deploying Dockerized applications that rely on multiple types of application services, such as databases, ERP and CRM?
Turnbull: The suite of tools around libswarm will help in this area, and there is already a bunch of tooling around this. We will be releasing more, too. In addition, others are releasing similar tooling, such as New Relic's Centurion and Google's Kubernetes, to make this easier.
ServiceVirtualization.com: How can tools for simulating software services, such as mocks, stubs and Service Virtualization, complement application containers based on Docker?
Turnbull: The reason you build virtual services and mocks is that it is hard to replicate the production environment in the dev/test environment. Docker containers are lightweight and easy to build. You can take the services you have in production, package them as Docker images and deploy them anywhere. Instead of creating a fake database or a mock API, you can run what production looks like in the dev/test environment and test against the real API. As a result, tests are likely to be more realistic.
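A sketch of that test-against-the-real-service pattern (the `run-tests.sh` script is a hypothetical stand-in for your test suite; assumes a Docker daemon):

```shell
# Instead of mocking the database, run the same image used in production
docker run -d --name test-db -p 5432:5432 postgres

# Point the test suite at the throwaway container rather than a stub
DATABASE_URL=postgres://postgres@localhost:5432/postgres ./run-tests.sh

# Discard the container when the run is finished
docker rm -f test-db
```

Because the container is disposable, each test run can start from a clean, production-identical database.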
However, Docker is Linux-centric, so you cannot replicate mainframe or Windows applications at the moment. Windows is on the roadmap, though, and we are working with Microsoft to support Docker in the future. Likewise, you cannot replicate external network services like Google Maps. The struggle with live services is that calling them in production makes a data update or some other change, like verifying a payment. In the Docker world, it is easy to build a layer between you and production services, because containers are cheap and disposable. Docker smooths that process, but it cannot replicate Google Maps or PayPal themselves.
It is also easier to scale up applications for performance testing with Docker, because it is easier to simulate multiple users and hosts. From the point of view of services, each Docker container looks like a separate host. For example, where you could spawn a hundred VMs for testing, with Docker you could deploy 1,000 containers and validate the application that much better.
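That scale-up can be sketched as a simple loop (the `myorg/service` image name is hypothetical, and in practice you would cap concurrency to the host's resources; assumes a Docker daemon):

```shell
# Spin up 1,000 identical service containers for a load test.
# Each container has its own network namespace, so each one
# appears to clients as a separate host.
for i in $(seq 1 1000); do
  docker run -d --name "svc-$i" myorg/service
done

# Tear them all down after the test
docker rm -f $(docker ps -aq --filter "name=svc-")
```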