Docker Containers

Install docker from https://docs.docker.com/engine/installation/

Run the Docker for Mac app. It must be running before you can call docker commands.

Docker containers are like lightweight virtual machines.
They can be created from scratch or downloaded from Docker Hub https://hub.docker.com/

An instance of an image is called a container. You have an image, which is a set of layers. If you start this image, you have a running container of this image. You can have many running containers of the same image. You can stop a container and later start it again and keep doing the things you were doing with it.

You can see all your images with docker images, whereas you can see your running containers with docker ps (and all containers, including stopped ones, with docker ps -a).

So a running instance of an image is a container.

Use docker pull image_name to download an image. Note: some images have multiple tags that identify different versions. You may need to append the tag name to the end of the image name, e.g.

docker pull image_name:1.2
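
For example, pulling a specific Ubuntu release (ubuntu:20.04 is just an illustration; any image and tag from Docker Hub works the same way):

docker pull ubuntu:20.04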

To view all the docker images on the system

docker images -a

To delete a docker image

docker rmi image_name

To view the status of any docker containers use:

docker ps

Docker containers are isolated from the host OS. They have to be started and then logged into.

Docker containers do not save their state back to the image, so if a container is removed and a new one is started from the image, all files that were saved in the old container are lost.

Mount a directory and save your files in it to get around this issue (see the example after the commit command below). Alternatively, you can save the state of a container with docker commit:

docker ps

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS              NAMES
c3f279d17e0a        ubuntu:12.04        /bin/bash           7 days ago          Up 25 hours                            desperate_dubinsky
197387f1b436        ubuntu:12.04        /bin/bash           7 days ago          Up 25 hours                            focused_hamilton

$ docker commit c3f279d17e0a  svendowideit/testimage:version3

Note that this creates a new image, which may take a few GB of space on your disk.
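
Going back to the mount approach, a minimal sketch (the $HOME/docker_data path and the ubuntu image are purely illustrative): anything written under /data inside the container lands in $HOME/docker_data on the host and survives after the container is removed.

docker run -it -v $HOME/docker_data:/data ubuntu:20.04 bash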

Start a container

If my docker image is called gcr.io/tensorflow/tensorflow:latest-devel then start a container from it with

docker run -it gcr.io/tensorflow/tensorflow:latest-devel

Here, -it allocates an interactive terminal attached to the created container.

You can log into the docker container as the root user (UID 0) instead of the provided default user when you use the -u option. This will grant you root (sudo) privileges.

docker run -u 0 -it mycontainer bash
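
To see the difference, compare the user IDs reported with and without -u 0 (mycontainer is the same placeholder image name as above):

docker run -it mycontainer bash -c 'id'        # default user defined by the image
docker run -u 0 -it mycontainer bash -c 'id'   # forced to root (uid=0)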

Share folders between a container and the OS

To mount a folder so it can be accessed by both the OS and the docker container start docker container with the following command:

docker run -it -v $HOME/directory/:/d_dir/ gcr.io/tensorflow/tensorflow:latest-devel

This starts our docker container and also mounts the host directory $HOME/directory into the container as /d_dir. This directory is now accessible from both the docker container and the host OS.
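
A quick way to confirm the share works (using the $HOME/directory to /d_dir mapping from the command above):

touch $HOME/directory/hello.txt    # on the host
ls /d_dir/hello.txt                # inside the container, the same file is visible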

Modifying an Existing Docker Image

To install a custom package or modify an existing docker image we need to

  1. run a docker container from the image we wish to modify
  2. modify the docker container
  3. commit the changes to the container as a docker image
  4. test changes made to image

1.) Running a docker container from an image

The command to do this is,

docker run -it yhat/scienceops-python:0.0.2 /bin/bash
  • The -i tells docker to attach stdin to the container

  • The -t tells docker to give us a pseudo-terminal

  • /bin/bash will run a terminal process in your container

2.) Modify the docker container

Once we are in our container we can install packages and set environment variables:

$ sudo apt-get install vim
$ export AWS_SECRET_KEY=mysecretkey123
$ export AWS_ACCESS_KEY=fooKey

Copy files/folders to the container or vice versa.

docker cp source_dir/. container_instance_name:/destination_dir/

Use the following to get the container instance name

docker ps --format "{{.Names}}"
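
For example, assuming a running container named focused_hamilton (one of the auto-generated names in the docker ps output earlier) and purely illustrative paths:

docker cp ./config/. focused_hamilton:/etc/myapp/
docker cp focused_hamilton:/var/log/myapp.log ./myapp.log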

When you are done modifying your container, exit it by running the exit command. Once outside the container, find its container ID by running

docker ps -a

3.) Commit the changes to the container as a new image

Copy the container ID for the container you just modified, and then run the docker commit command to commit changes to your container as an image.

docker commit [options] [container ID] [repository:tag]

An example docker commit command is the following.

docker commit e8f0671518a2 yhat/scienceops-python:0.0.2

Note: you must commit the changes with the same repository and tag as the existing scienceops image on your system so that it replaces the old image. To see your new image, run:

docker images

4.) Test changes made to image

To test your changes when adding an environment variable, run the image and echo the variable from inside the container (the single quotes stop your host shell from expanding it before it reaches the container):

$ docker run -it yhat/scienceops-python:0.0.2 /bin/bash -c 'echo $AWS_SECRET_KEY'

Note that variables set with export only live in that shell session and are not preserved by docker commit; to bake an environment variable into the image itself, use an ENV instruction in a Dockerfile or the --change flag of docker commit.

Define a container with Dockerfile

Create an empty directory and change directories (cd) into it.
Create a file called Dockerfile, with no file extension.
Copy and paste the following content into that file and save it. Take note of the comments that explain each statement in your new Dockerfile.

# Use an official Python runtime as a parent image
FROM python:2.7-slim

# Set the working directory to /app
WORKDIR /app

# Copy the current directory contents into the container at /app
ADD . /app

# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt

# Make port 80 available to the world outside this container
EXPOSE 80

# Define environment variable
ENV NAME World

# Run app.py when the container launches
CMD ["python", "app.py"]
This Dockerfile refers to a couple of files we haven’t created yet, namely app.py and requirements.txt. Let’s create those next.
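
A minimal sketch of those two files that fits the Dockerfile above (Flask is an assumption here, not something the gist specifies; any app that listens on the exposed port 80 would do):

requirements.txt

Flask

app.py

import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # NAME comes from the ENV instruction in the Dockerfile
    return "Hello, " + os.getenv("NAME", "world") + "!"

if __name__ == "__main__":
    # listen on the port exposed in the Dockerfile
    app.run(host="0.0.0.0", port=80)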

Building the image from the Dockerfile

From inside the my_build directory (where the Dockerfile is located), we'll use the docker build command, passing the -t flag to "tag" the new image with a name, which in this case will be my_image. The . indicates that the Dockerfile is in the current directory, along with the so-called "context", that is, the rest of the files that may be in that location:

cd ~/my_build
docker build -t my_image .

Once built, we can use the following to view it along with all our other docker images.

docker images
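
To try the image you just built, run it and map the container's exposed port 80 to a port on the host (4000 here is an arbitrary choice):

docker run -p 4000:80 my_image
# the app is then reachable at http://localhost:4000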

Create a new image based off a downloaded image

  1. Create an empty directory and copy or create script.sh in it (a Dockerfile sketch that actually copies script.sh into the image follows this list).
  2. Create a Dockerfile, named "Dockerfile", with the following content, where repo/image is the name of the downloaded image you want to base the new image on:

FROM repo/image
WORKDIR $HOME

  3. Run the following command from the new directory.

docker build -t="new_image_name_with_tag" .
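
If you want script.sh from step 1 to actually end up inside the new image, the Dockerfile needs a COPY instruction for it. A sketch, with repo/image still standing in for the downloaded base image and the CMD purely illustrative:

FROM repo/image
WORKDIR $HOME
# copy the script from the build context into the working directory and run it by default
COPY script.sh .
RUN chmod +x script.sh
CMD ["./script.sh"]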

Images and Containers

Using an object-oriented programming analogy, the difference between a Docker image and a Docker container is the same as that of the difference between a class and an object. An object is the runtime instance of a class. Similarly, a container is the runtime instance of an image.

Delete an image or container

Image:

List images
docker images

Remove one image
docker rmi image_name

Force remove one image
docker rmi -f image_name

Container:

List all containers
docker ps -a

Remove one container
docker rm container_id

Force remove one container
docker rm -f container_id
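
Putting the two together, a typical cleanup pass using the example container IDs and image from the docker ps output earlier in these notes might look like:

docker ps -a                  # find the container ID
docker rm c3f279d17e0a        # remove that container once it is stopped
docker rmi ubuntu:12.04       # then remove the image it was created from (fails if other containers still use it)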

Docker with GPU

Use NVIDIA's docker wrapper (nvidia-docker) to easily allow the GPU to be used from docker containers.
You can use standard docker as well, but more input parameters are required. See below.

For example, using TensorFlow with the GPU in a docker container:

Start the image with nvidia-docker-compose

This image can be easily started with the following docker-compose.yaml file in an empty directory.

version: '3'
services:
  tf:
    image: gcr.io/tensorflow/tensorflow:latest-gpu
    ports:
      - 8888:8888
    volumes:
      - .:/notebooks

To start the image use

nvidia-docker-compose up

or, with the following aliases in your .bashrc file,

alias doc='nvidia-docker-compose'
alias docl='doc logs -f --tail=100'

then run

doc up

To run an interactive shell use

doc run tf

Alternatively

sudo nvidia-docker run -u 0 -it gcr.io/tensorflow/tensorflow:latest-gpu bash
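
To confirm the GPU is actually visible from inside the container, a quick one-off check is to run nvidia-smi in the same image (this assumes nvidia-docker is installed and exposes the host driver utilities to the container):

nvidia-docker run --rm gcr.io/tensorflow/tensorflow:latest-gpu nvidia-smi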

To run with standard docker instead, pass the NVIDIA devices through explicitly:

docker run --rm --device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidiactl:/dev/nvidiactl --device /dev/nvidia-uvm:/dev/nvidia-uvm -u 0 -it gcr.io/tensorflow/tensorflow:latest-gpu bash

Use bash scripts to start and enter a running docker container

start_rtk_docker.sh

# Start the rtk_dev container with X11 forwarding, host networking and the current
# directory mounted as the container user's home, then install the RTI Connext DDS
# packages before dropping into a shell.
docker run -it -u rtkuser -e DISPLAY=$ip:0 \
    --env="DISPLAY" \
    --env="QT_X11_NO_MITSHM=1" \
    --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
    --mount source=visualcode,target=/opt/visualcode \
    -v $PWD/:/home/rtkuser/ \
    -v /home/calum/Documents/logs_rosbag2/:/home/rtkuser/logs_rosbag2 \
    -v /home/calum/Documents/notebooks/:/home/rtkuser/notebooks \
    --cap-add=SYS_PTRACE \
    --security-opt seccomp=unconfined \
    --network host \
    --hostname aht002 \
    --privileged \
    rtk_dev:latest \
    /bin/bash -c "sudo dpkg -i /home/rtkuser/rti_deb/packages/rti-connext-dds_6.1.0-1_amd64.deb /home/rtkuser/rti_deb/packages/rti-connext-dds-dev_6.1.0-1_amd64.deb && /bin/bash"

# Alternative (commented out): Wayland display forwarding instead of X11.
# docker run -it -u rtkuser -e XDG_RUNTIME_DIR=/tmp \
#     -e WAYLAND_DISPLAY=$WAYLAND_DISPLAY \
#     -v $XDG_RUNTIME_DIR/$WAYLAND_DISPLAY:/tmp/$WAYLAND_DISPLAY \
#     --user=$(id -u):$(id -g) \
#     --mount source=visualcode,target=/opt/visualcode -v $PWD/:/home/rtkuser/ --cap-add=SYS_PTRACE --security-opt seccomp=unconfined rtk_dev:latest /bin/bash

To enter the same container run this bash script.

#!/bin/bash

# Enter the docker container that has already been started.
# This expects the docker ps output for the running container to contain 'rtk' (e.g. the rtk_dev image).
docker exec -it $(docker ps | grep 'rtk' | awk '{ print $1 }') /bin/bash
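
Assuming the two snippets are saved as start_rtk_docker.sh and enter_rtk_docker.sh (the second filename is just a suggestion), make them executable and use one terminal to start the container and another to attach extra shells to it:

chmod +x start_rtk_docker.sh enter_rtk_docker.sh
./start_rtk_docker.sh    # terminal 1: starts the rtk_dev container
./enter_rtk_docker.sh    # terminal 2: opens another shell in the running container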
