In Project Atomic we implemented two models to install what one might call "system containers".
There's now "atomic install" and "atomic install --system". I will call the former "atomic install" and the latter "system containers".
These both use:
- Docker format
- Systemd for service dependency
The core difference is that system containers don't rely on dockerd; they use runc+ostree instead.
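As a sketch, a system container ends up driven by a generated systemd unit along these lines. The unit name, paths, and options here are illustrative assumptions, not the exact output of `atomic install --system`:

```
# /etc/systemd/system/example-container.service  (hypothetical name/path)
[Unit]
Description=Example system container (illustrative sketch)
After=network-online.target

[Service]
# A system container is launched directly with runc from an OSTree-backed
# checkout on disk -- no dockerd involved. Exact invocation is an assumption.
ExecStart=/usr/bin/runc run example-container
Restart=on-failure

[Install]
WantedBy=multi-user.target
```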
Neither of these has any concept of host-side dependencies. For everything related to Docker, a core problem we're ignoring is kernel version dependencies, but in practice glibc is likely to keep supporting older kernels for a long time.
For other host dependencies, then, this is really only relevant if one is executing code in a host context. If we look at my proposed direction in https://fedorapeople.org/~walters/2016.06-rhsummit-systemcontainers/#/ then everything which executes on the host should likely be a layered RPM.
However, let's take a look at what others are doing in this area.
Clear has the concept of a "bundle": https://clearlinux.org/documentation/bundles_overview.html
As far as I can tell, the implementation details of bundles are totally undocumented. But reading the code and playing around with the client a little bit, it appears that bundles are locked to the base OS version, and so when one goes to install a bundle, there must be one known to work with that exact version. For example, if I'm running OS version 9230, the client will read: https://download.clearlinux.org/packs/9230/
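That URL scheme can be sketched in shell. The version value and URL layout here are inferred from observed client behavior, not a documented API:

```shell
# Sketch: construct the Clear Linux "pack" URL for the running OS version.
# On a real Clear Linux install the version would come from something like
# /usr/lib/os-release (an assumption); it is hard-coded here for illustration.
version=9230
pack_url="https://download.clearlinux.org/packs/${version}/"
echo "$pack_url"
```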
Basically, then, bundles are version-locked.
(I assume that if a bundle isn't changed, it just gets propagated forward to the new version, and I would hope it shares storage on the server side.)
Advantages:
- Simple and avoids any "dependency" concepts besides single host version number
- Allows fully privileged bundles that clearly extend the host
Disadvantages:
- It's not possible to reuse upstream binary bundles with a "remixed" host OS
- Conversely, hard for an ISV to provide a bundle, as it requires knowledge of the OS version before it's released
- Bundles don't have any relation to containers
Comparison with Project Atomic: Most similar to package layering; perhaps technologically closest if we used the OSTree format instead of RPM, while keeping libsolv+rpm for dependency resolution.
CoreOS is heavily investing in rkt, which has a core advantage over Docker in that it's designed for use with systemd, and hence one can do "system containers".
But as far as I can tell there isn't anything in the default install that tries to manage code directly on the host. CoreOS is basically focused on rkt to launch containers, which might leave some files on the host.
This page is quite useful.
A really interesting example is how the CoreOS docs suggest doing Kubernetes configuration. The default OS instance comes with a /usr/lib/coreos/kubelet-wrapper which runs the kubelet via rkt, by default pulling from quay.io.
If for example the version of Kubernetes inside the image required a newer Docker on the host, as far as I know it would just fail at runtime.
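The wrapper is typically wired up from a unit file along these lines. The version value, variable name, and flags are placeholders from memory of the CoreOS docs, not an exact transcription:

```
# /etc/systemd/system/kubelet.service (abridged; values are placeholders)
[Service]
# kubelet-wrapper reads a requested kubelet version and runs the matching
# hyperkube image via rkt, pulled from quay.io by default.
Environment=KUBELET_VERSION=v1.3.0_coreos.0
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --api-servers=http://127.0.0.1:8080
Restart=always

[Install]
WantedBy=multi-user.target
```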
There's also their toolbox, which is basically the same as atomic run centos/tools. It mounts the host, but is otherwise still a container with no direct dependencies on the host.
Finally, one will see guides which basically download binaries into /opt, but to the best of my knowledge there isn't really much support for this - e.g. ensuring your binaries are built with a version of glibc compatible with the host's, etc.
Advantages:
- Everything in a container maximizes isolation and flexibility
- Docker is the one true format the host system tools speak
Disadvantages:
- No way to express dependencies of containers on host (e.g. new flannel needs some new kernel feature, new kubernetes needs new docker)
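Nothing in either stack can express this today. As a thought experiment, the kind of host-dependency check a system container's unit could run before start might look like the following; the threshold and the idea of wiring it in as an ExecStartPre= are entirely invented:

```shell
# Hypothetical pre-start check: refuse to start if the host kernel is older
# than what the containerized service needs. Neither Docker nor rkt metadata
# can express this today; the required version here is invented.
required=3.10
have=$(uname -r | cut -d. -f1,2)
# sort -V sorts version strings; the first line is the oldest version.
oldest=$(printf '%s\n%s\n' "$required" "$have" | sort -V | head -n1)
if [ "$oldest" = "$required" ]; then
    echo "ok: host kernel $have >= $required"
else
    echo "error: host kernel $have < $required" >&2
    exit 1
fi
```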
Comparison with Project Atomic:
- rkt ~= atomic install --system || docker run
- toolbox == atomic run
TODO