Containers are just normal Linux processes with additional configuration applied.
Docker can show information about a container's process, including its PID (process ID) and PPID (parent process ID):
docker top db
ps aux | grep <ppid>
The pstree command lists all the subprocesses:
pstree -c -p -A $(pgrep dockerd)
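The same PID/PPID relationship can be seen for any child process with plain POSIX shell, no container required; a container's process appears in the host's process table in exactly the same way:

```shell
# Spawn a child shell and print its own PID and its parent's PID.
sh -c 'echo "child pid: $$  parent pid: $PPID"'
# The same information via ps, for the current shell:
ps -o pid,ppid,comm -p $$
```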
One of the fundamental building blocks of a container is the namespace. Namespaces limit what a process can see and access on the system, such as network interfaces or other processes.
Under the covers, namespaces are exposed as inodes on disk, under /proc/<pid>/ns.
- Mount (mnt)
- Process ID (pid)
- Network (net)
- Interprocess Communication (ipc)
- User ID (user)
- Control Group (cgroup)
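The namespace inodes for the current shell can be listed directly (the inode numbers below will differ per system):

```shell
# Each entry is a symlink whose target names the namespace type and its
# inode number, e.g. net:[4026531992]. Two processes share a namespace
# when these inode numbers match.
ls -l /proc/self/ns
readlink /proc/self/ns/net
```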
An important part of a container process is the ability to have different files that are independent of the host.
Chroot provides the ability for a process to start with a different root directory from the parent OS. This allows a different set of files to appear in the root.
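A minimal sketch, assuming a statically linked busybox binary is available on the host; the /tmp/newroot path is arbitrary:

```shell
# Build a tiny root filesystem containing only busybox.
mkdir -p /tmp/newroot/bin
cp /bin/busybox /tmp/newroot/bin/ 2>/dev/null || true   # assumes busybox is installed
# chroot requires root, so only attempt it when running as root:
if [ "$(id -u)" -eq 0 ] && [ -x /tmp/newroot/bin/busybox ]; then
  chroot /tmp/newroot /bin/busybox ls /
fi
```

Inside the chroot, only the files copied into /tmp/newroot are visible as the root filesystem.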
Cgroups (control groups) limit the amount of resources a process can consume. The limits are values defined in files under the /sys/fs/cgroup hierarchy; a process's cgroup membership is visible in /proc/<pid>/cgroup.
By default, there is no memory limit on the container process.
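To impose a limit, pass --memory when starting the container. A hedged sketch (the container name and the 100m value are arbitrary; the cgroup file path assumes cgroup v1):

```shell
if command -v docker >/dev/null; then
  docker run -d --name capped --memory 100m busybox sleep 30
  # The limit shows up inside the container's memory cgroup:
  docker exec capped cat /sys/fs/cgroup/memory/memory.limit_in_bytes
  docker rm -f capped
fi
# 100m corresponds to this many bytes:
echo $((100 * 1024 * 1024))
```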
All actions on Linux are performed via syscalls. The kernel has around 330 system calls that perform operations such as reading files, closing handles and checking access rights.
AppArmor is an application-defined profile that describes which parts of the system a process can access.
Seccomp provides the ability to limit which system calls a process can make, blocking actions such as loading kernel modules or changing file permissions.
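A sketch of a custom seccomp profile that makes the chmod family of syscalls return an error; the file name and choice of syscalls are illustrative:

```shell
# Allow everything by default, but fail chmod-related syscalls with an errno.
cat > /tmp/no-chmod.json <<'EOF'
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    { "names": ["chmod", "fchmod", "fchmodat"], "action": "SCMP_ACT_ERRNO" }
  ]
}
EOF
if command -v docker >/dev/null; then
  # chmod now fails inside the container while everything else works:
  docker run --rm --security-opt seccomp=/tmp/no-chmod.json \
    busybox chmod 700 /tmp || echo "chmod blocked"
fi
```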
Capabilities are groupings of permissions that define what a process or user is allowed to do.
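The kernel exposes a process's capability sets in /proc, and Docker can drop individual capabilities per container (NET_RAW below is an illustrative choice):

```shell
# Capability sets of the current process, as hex bitmasks:
grep Cap /proc/self/status
if command -v docker >/dev/null; then
  # Without CAP_NET_RAW the container cannot open raw sockets, so ping fails:
  docker run --rm --cap-drop NET_RAW busybox ping -c1 8.8.8.8 || echo "ping blocked"
fi
```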
A container image is a tar file containing tar files. Each inner tar file is a layer. Once all the tar files have been extracted into the same location, you have the container's filesystem.
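The layering can be simulated with plain tar, no Docker required; the paths below are arbitrary:

```shell
# Two "layers", each its own tar file; the second overrides a file from
# the first, just as an upper image layer overrides a lower one.
mkdir -p /tmp/layer1 /tmp/layer2 /tmp/rootfs
echo "v1" > /tmp/layer1/release
echo "v2" > /tmp/layer2/release
tar -C /tmp/layer1 -cf /tmp/layer1.tar .
tar -C /tmp/layer2 -cf /tmp/layer2.tar .
# Extract both into the same location to get the merged filesystem:
tar -C /tmp/rootfs -xf /tmp/layer1.tar
tar -C /tmp/rootfs -xf /tmp/layer2.tar
cat /tmp/rootfs/release   # prints v2
```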
To create an empty image:
tar cv --files-from /dev/null | docker import - empty
To list the images:
docker images
An image can also be created without a Dockerfile. The following converts a directory into a tar archive and imports it into Docker as an image:
tar -C busybox -c . | docker import - busybox
docker run busybox cat /release
docker login && docker push <image name>
https://github.com/palantir/gradle-docker
https://spring.io/guides/gs/spring-boot-docker/
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.
- Define the application's image using a Dockerfile
- Define the services that make up the application in a docker-compose.yml file
- Run docker-compose up and Compose starts and runs your entire app
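A hypothetical docker-compose.yml with two services; the web/redis names and the redis:alpine image are illustrative, not from the source:

```shell
mkdir -p /tmp/compose-demo
cat > /tmp/compose-demo/docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    build: .           # built from the Dockerfile in this directory
    ports:
      - "8080:8080"
  redis:
    image: redis:alpine
EOF
# cd /tmp/compose-demo && docker-compose up -d   # would start both services
```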
To see the running containers: docker ps
To stop a running container: docker stop <container id>
To get a container's config: docker inspect <container id>
To list the containers: docker container ls
To build the image using the Gradle build task: ./gradlew build docker
https://github.com/palantir/gradle-docker
To list the images: docker images
Run the Docker image: docker run -p 8080:8080 -t com.example.bean/spring-boot-docker-hello
Check that the endpoint is up and running using curl: curl localhost:8080
https://kubernetes.io/docs/tutorials/hello-minikube/#create-a-deployment
Start minikube: minikube start
This installs minikube, hyperkit, the VM boot image, kubeadm and kubelet, and launches the Kubernetes components such as the apiserver.
Check the minikube status: minikube status
To set the kubectl context to minikube: kubectl config use-context minikube
To get cluster info: kubectl cluster-info
To open the minikube dashboard: minikube dashboard
To use the minikube Docker daemon: eval $(minikube docker-env)
To create a deployment: kubectl create deployment hello-node --image=com.example.bean/spring-boot-docker-hello
To get the deployment status: kubectl get deployments
To get the Pod status: kubectl get pods
To delete the deployment: kubectl delete -n default deployment hello-node
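To actually reach the deployment, one common next step is to expose it as a NodePort service. A sketch, assuming the hello-node deployment above exists and minikube is running:

```shell
if command -v kubectl >/dev/null; then
  kubectl expose deployment hello-node --type=NodePort --port=8080
  # minikube prints a reachable URL for the service:
  minikube service hello-node --url
fi
echo "exposed hello-node on port 8080"
```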
Runc is a CLI tool for spawning and running containers according to the OCI specification.
https://github.com/opencontainers/runc
runc currently supports running its test suite via Docker.
runc has the ability to run containers without root privileges. This is called rootless. You need to pass some parameters to runc in order to run rootless containers.
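A sketch of preparing a rootless runc container; assumes runc is installed and a root filesystem (e.g. an extracted busybox image) would go in the bundle's rootfs directory. The bundle path and container name are arbitrary:

```shell
mkdir -p /tmp/bundle/rootfs
cd /tmp/bundle
if command -v runc >/dev/null; then
  # --rootless generates a config.json with user-namespace mappings
  # so the container can run without root privileges:
  runc spec --rootless
  # runc run mycontainer   # would start it once rootfs is populated
fi
```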
https://docs.docker.com/registry/spec/api/
The Registry is a stateless, highly scalable server side application that stores and lets you distribute container images.
There are many options for standing up a container registry. For Kubernetes, a common approach is to install a registry through the stable Helm chart.
helm install stable/docker-registry --name private --namespace kube-system --set image.tag=2.7.1 --set service.type=NodePort --set service.nodePort=31500
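With the NodePort set to 31500, images can be tagged and pushed to the in-cluster registry. A sketch; the registry host depends on your cluster, and using minikube ip here is an assumption:

```shell
# Fall back to localhost when minikube is not available:
REGISTRY="$(minikube ip 2>/dev/null || echo localhost):31500"
echo "registry endpoint: $REGISTRY"
if command -v docker >/dev/null; then
  docker tag busybox "$REGISTRY/busybox" || true
  docker push "$REGISTRY/busybox" || true
fi
```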
libpod contains a tool called podman for managing Pods, Containers, and Container Images.
Libpod provides a library for applications looking to use the Container Pod concept popularized by Kubernetes. Libpod containers do not run via Docker.
The configuration of a container can be output via inspect. The output is compatible with the Docker API.
podman inspect http
The Metasploit Project is a computer security project that provides information about security vulnerabilities and aids in penetration testing and IDS signature development.
Metasploit is a collection of exploits that can be used to test vulnerabilities.
https://github.com/moby/buildkit
BuildKit is a toolkit for converting source code to build artifacts in an efficient, expressive and repeatable manner.
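Within Docker, BuildKit can be enabled per build via an environment variable (Docker 18.09 or later); the demo Dockerfile below is illustrative:

```shell
mkdir -p /tmp/bk-demo
cat > /tmp/bk-demo/Dockerfile <<'EOF'
FROM busybox
RUN echo hello > /greeting
EOF
if command -v docker >/dev/null; then
  # BuildKit runs independent build stages in parallel and caches better:
  DOCKER_BUILDKIT=1 docker build -t bk-demo /tmp/bk-demo || true
fi
```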
The cluster management and orchestration features embedded in the Docker Engine are built using swarmkit. Swarmkit is a separate project which implements Docker's orchestration layer and is used directly within Docker.
A swarm consists of multiple Docker hosts which run in swarm mode and act as managers and workers.
One of the key advantages of swarm services over standalone containers is that you can modify a service's configuration, including the networks and volumes it is connected to, without the need to manually restart the service.
When Docker is running in swarm mode, you can still run standalone containers on any of the Docker hosts participating in the swarm, as well as swarm services. A key difference between standalone containers and swarm services is that only swarm managers can manage a swarm, while standalone containers can be started on any daemon. Docker daemons can participate in a swarm as managers, workers, or both.
The swarm manager uses ingress load balancing to expose the services you want to make available externally to the swarm. The swarm manager can automatically assign the service a PublishedPort or you can configure a PublishedPort for the service.
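A sketch of publishing a service through the ingress load balancer; the service name, image and ports are illustrative, and the command must run on a swarm manager:

```shell
# Only attempt this when the daemon is already part of an active swarm:
if docker info 2>/dev/null | grep -q 'Swarm: active'; then
  # PublishedPort 8080 on every swarm node routes to port 80 in the tasks:
  docker service create --name web --replicas 2 \
    --publish published=8080,target=80 nginx
fi
echo "ingress publishes 8080 -> 80"
```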