@marcelrv
Last active September 2, 2024 13:23

Docker migrate to overlay2 from aufs script

Crazy that this is pretty much a forced change without a proper transition script.

Note: based on https://www.sylvaincoudeville.fr/2019/12/docker-migrer-le-stockage-aufs-vers-overlay2/ NOT WORKING!! If you follow the above, somehow the containers do not re-appear!

The only way I found that is somewhat automated is the docker-compose way.

Which is still not 100% and requires manual fixing of stupid errors of the docker-compose tool (mainly things that are not interpreted right, like dates & userIds that need to be manually surrounded by quotes etc.)

Also I needed to add network_mode: bridge or network_mode: host to the configs, as I did not want a network definition for each container, which seems to be the docker-compose default way.

Back up the whole crap before doing heart surgery

sudo systemctl stop docker
sudo cp -au /var/lib/docker /var/lib/docker.bk
sudo systemctl start docker
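
Optionally, a quick sanity check (my own addition, not part of the original recipe) that the copy actually landed; both trees should be roughly the same size:

# optional: compare the size of the live tree and the backup
sudo du -sh /var/lib/docker /var/lib/docker.bk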

Export Containers

Export all container configs

cd ~
mkdir ~/dockercontainers
sudo docker container list -a > ~/dockercontainers/containers.txt
cat  ~/dockercontainers/containers.txt

CIDS=$(docker container list -a | sed '1d' | awk '{print $1}')
for c in $CIDS; do mkdir ~/dockercontainers/$c ; docker run --rm -v /var/run/docker.sock:/var/run/docker.sock ghcr.io/red5d/docker-autocompose $c > ~/dockercontainers/$c/docker-compose.yml ; done

# or if local (with fixes)
for c in $CIDS; do mkdir ~/dockercontainers/$c ; ~/autocompose.py $c > ~/dockercontainers/$c/docker-compose.yml ; done
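
# the hacked script below can also emit plain 'docker run' command lines via its -o 2 mode
# (a sketch; assumes ~/autocompose.py is executable, and 'dockerrun.txt' is just a name I picked)
for c in $CIDS; do ~/autocompose.py -o 2 $c > ~/dockercontainers/$c/dockerrun.txt ; done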

# check if there are no errors in the created files
for c in $CIDS; do cd ~/dockercontainers/$c ; echo "======== $c ========" ; docker-compose config ; done

# Fix possible errors... e.g. adding quotes around strings docker-compose does not swallow (e.g. dates)
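
For the date labels specifically, a sed sketch like the following can save some manual editing. This is my own addition and assumes GNU sed and that the offending value is an unquoted org.label-schema.build-date label on its own line; adapt the pattern for other keys that need quoting:

for c in $CIDS; do sed -i "s/\(org\.label-schema\.build-date:\) \([^'].*\)/\1 '\2'/" ~/dockercontainers/$c/docker-compose.yml ; done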

Store all images (I think this is not needed if no persistent references are used)

cd ~
mkdir dockersave
sudo docker images > dockersave/list.txt
cat  dockersave/list.txt

IDS=$(docker images | sed '1d' | awk '{print $3}')
for c in $IDS; do docker save -o dockersave/$c.tar $c; done
cat dockersave/list.txt | sed '1d' | grep -v "<none>" | awk '{ print "docker tag "$3" "$1":"$2 }' >> dockersave/tag

Change driver

sudo systemctl stop docker
sudo nano  /etc/docker/daemon.json

if the file is already there, add the "storage-driver" : "overlay2" entry to the existing JSON object

if the file is empty, you can go with echo '{ "storage-driver" : "overlay2" }' > /etc/docker/daemon.json (run as root, or via sudo tee as shown below), then sudo systemctl start docker

What also works is to edit the file first and then just restart (so no need to stop first): sudo systemctl restart docker
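
For a box where daemon.json does not exist yet, a one-shot way to write it without opening a root shell is a sudo tee heredoc. A minimal sketch, assuming no other daemon settings need to be preserved:

sudo tee /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "overlay2"
}
EOF
sudo systemctl restart docker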

Restore images

cd dockersave/
IDS=$(ls *.tar)
for c in $IDS; do docker load -i $c; done
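
The images come back untagged (they were saved by ID), so replay the docker tag commands collected in the tag file earlier; assuming you are still in ~/dockersave:

bash tag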

Restore containers

for c in $CIDS; do cd ~/dockercontainers/$c ; docker-compose up -d --no-deps --build ; done
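
To check the result, compare what is running now against the list saved at the start (the recreated containers will have fresh IDs, so compare names and images by eye):

sudo docker container list -a
cat ~/dockercontainers/containers.txt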

When something goes horribly wrong you can remove the containers and try again

docker container stop `docker container ls -aq`
docker container rm `docker container ls -aq`
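
Once everything is confirmed working again, the backup taken at the start can be removed; do this only when you are truly convinced (see also the comments below):

sudo rm -rf /var/lib/docker.bk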

The hacked autocompose.py (referenced above as ~/autocompose.py):

#! /usr/bin/env python
# Copy of https://github.com/Red5d/docker-autocompose, dirty hacked with some fixes for dates and
# the possibility to create docker command lines from the existing containers by using the -o 2 parameter
import sys, argparse, pyaml, docker
from collections import OrderedDict


def main():
    parser = argparse.ArgumentParser(description='Generate docker-compose yaml definition from running container.')
    parser.add_argument('-v', '--version', type=int, default=3, help='Compose file version (1 or 3)')
    parser.add_argument('-o', '--output', type=int, default=0, help='Type of output (0 or 1,2)')
    parser.add_argument('cnames', nargs='*', type=str, help='The name of the container to process.')
    args = parser.parse_args()

    struct = {}
    networks = []
    for cname in args.cnames:
        cfile, networks, cmdline = generate(cname)
        struct.update(cfile)
        if args.output > 0:
            print("\r\n" + cmdline + "\r\n")
    if args.output < 2:
        render(struct, args, networks)


def render(struct, args, networks):
    # Render yaml file
    if args.version == 1:
        pyaml.p(OrderedDict(struct))
    else:
        pyaml.p(OrderedDict({'version': '"3"', 'services': struct, 'networks': networks}))


def generate(cname):
    c = docker.from_env()
    try:
        cid = [x.short_id for x in c.containers.list(all=True) if cname == x.name or x.short_id in cname][0]
    except IndexError:
        print("That container is not available.")
        sys.exit(1)

    cattrs = c.containers.get(cid).attrs

    # Build yaml dict structure
    cfile = {}
    cfile[cattrs['Name'][1:]] = {}
    ct = cfile[cattrs['Name'][1:]]
    cmdLine = "docker run --name " + str(cattrs['Name'][1:])

    values = {
        'cap_add': cattrs['HostConfig']['CapAdd'],
        'cap_drop': cattrs['HostConfig']['CapDrop'],
        'cgroup_parent': cattrs['HostConfig']['CgroupParent'],
        'container_name': cattrs['Name'][1:],
        'devices': [],
        'dns': cattrs['HostConfig']['Dns'],
        'dns_search': cattrs['HostConfig']['DnsSearch'],
        'environment': cattrs['Config']['Env'],
        'extra_hosts': cattrs['HostConfig']['ExtraHosts'],
        'image': cattrs['Config']['Image'],
        'labels': cattrs['Config']['Labels'],
        'links': cattrs['HostConfig']['Links'],
        #'log_driver': cattrs['HostConfig']['LogConfig']['Type'],
        #'log_opt': cattrs['HostConfig']['LogConfig']['Config'],
        #'logging': {'driver': cattrs['HostConfig']['LogConfig']['Type'], 'options': cattrs['HostConfig']['LogConfig']['Config']},
        'networks': {x for x in cattrs['NetworkSettings']['Networks'].keys() if x != 'bridge'},
        'security_opt': cattrs['HostConfig']['SecurityOpt'],
        'ulimits': cattrs['HostConfig']['Ulimits'],
        'volumes': cattrs['HostConfig']['Binds'],
        'volume_driver': cattrs['HostConfig']['VolumeDriver'],
        'volumes_from': cattrs['HostConfig']['VolumesFrom'],
        'entrypoint': cattrs['Config']['Entrypoint'],
        'user': cattrs['Config']['User'],
        'working_dir': cattrs['Config']['WorkingDir'],
        'domainname': cattrs['Config']['Domainname'],
        'hostname': cattrs['Config']['Hostname'],
        'ipc': cattrs['HostConfig']['IpcMode'],
        'mac_address': cattrs['NetworkSettings']['MacAddress'],
        'privileged': cattrs['HostConfig']['Privileged'],
        'restart': cattrs['HostConfig']['RestartPolicy']['Name'],
        'read_only': cattrs['HostConfig']['ReadonlyRootfs'],
        'stdin_open': cattrs['Config']['OpenStdin'],
        'tty': cattrs['Config']['Tty']
    }

    # Populate devices key if device values are present
    if cattrs['HostConfig']['Devices']:
        values['devices'] = [x['PathOnHost'] + ':' + x['PathInContainer'] for x in cattrs['HostConfig']['Devices']]

    # Containers without a user-defined network fall back to network_mode: bridge
    networks = {}
    if values['networks'] == set():
        del values['networks']
        values['network_mode'] = 'bridge'
    else:
        networklist = c.networks.list()
        for network in networklist:
            if network.attrs['Name'] in values['networks']:
                networks[network.attrs['Name']] = {'external': (not network.attrs['Internal'])}

    # Check for command and add it if present.
    if cattrs['Config']['Cmd'] is not None:
        values['command'] = " ".join(cattrs['Config']['Cmd'])

    # Check for exposed/bound ports and add them if needed.
    try:
        expose_value = list(cattrs['Config']['ExposedPorts'].keys())
        ports_value = [cattrs['HostConfig']['PortBindings'][key][0]['HostIp'] + ':' + cattrs['HostConfig']['PortBindings'][key][0]['HostPort'] + ':' + key for key in cattrs['HostConfig']['PortBindings']]

        # If bound ports found, don't use the 'expose' value.
        if (ports_value != None) and (ports_value != "") and (ports_value != []) and (ports_value != 'null') and (ports_value != {}) and (ports_value != "default") and (ports_value != 0) and (ports_value != ",") and (ports_value != "no"):
            for index, port in enumerate(ports_value):
                # Strip the leading ':' left over from an empty HostIp
                if port[0] == ':':
                    ports_value[index] = port[1:]
                # print("-p ", str(ports_value[index]))
                cmdLine = cmdLine + " -p " + str(ports_value[index])
            values['ports'] = ports_value
        else:
            values['expose'] = expose_value

    except (KeyError, TypeError):
        # No ports exposed/bound. Continue without them.
        ports = None

    # Fix some dates from causing errors: quote the build-date label so docker-compose accepts it.
    try:
        if values['labels']['org.label-schema.build-date'] != None:
            d = values['labels']['org.label-schema.build-date']
            values['labels']['org.label-schema.build-date'] = "'" + str(d) + "'"
    except KeyError:
        pass

    # Iterate through values to finish building yaml dict.
    for key in values:
        value = values[key]
        if (value != None) and (value != "") and (value != []) and (value != 'null') and (value != {}) and (value != "default") and (value != 0) and (value != ",") and (value != "no"):
            ct[key] = value

    # Build the equivalent 'docker run' command line (volumes, environment, image).
    if cattrs['HostConfig']['Binds']:
        for key in cattrs['HostConfig']['Binds']:
            cmdLine = cmdLine + ' -v ' + str(key)
    if cattrs['Config']['Env']:
        for key in values['environment']:
            cmdLine = cmdLine + ' -e ' + str(key.split('=', 1)[0])
            if len(key.split('=', 1)) > 1:
                cmdLine = cmdLine + '="' + str(key.split('=', 1)[1]) + '"'
    cmdLine = cmdLine + " " + str(cattrs['Config']['Image'])

    return cfile, networks, cmdLine


if __name__ == "__main__":
    main()
@cueedee commented Nov 9, 2023

Please correct me if I'm wrong, but...

The steps that appear to have worked for me were no more complicated than:

  • sudo systemctl stop docker.service;
  • Edit /etc/docker/daemon.json to include the "storage-driver" : "overlay2" tuple;
  • sudo systemctl start docker.service;

After this:

  • journalctl -u docker.service --since '10 minutes ago' no longer reported warnings about using aufs, like:
    ... dockerd[...]: time="..." level=info msg="[graphdriver] using prior storage driver: aufs"
    ... dockerd[...]: time="..." level=warning msg="[graphdriver] WARNING: the aufs storage-driver is deprecated, and will be removed in a future release"
    ... dockerd[...]: time="..." level=info msg="Docker daemon" commit=b0f5bc3 graphdriver(s)=aufs version=20.10.7
    
    ... and instead, reports:
    ... dockerd[...]: time="..." level=info msg="Docker daemon" commit=b0f5bc3 graphdriver(s)=overlay2 version=20.10.7
    
  • Moreover, both /var/lib/docker/ and /var/lib/docker/image/ now each have overlay2/ sub-directories that they didn't have before; (their respective aufs/ sub-directories are still there, too, btw);
  • docker image ls, docker container ls and docker service ls all report the same data as they did before...
    ... except for:
    • docker image ls which now lists <none> for tags;
  • Not to mention that the deployed stack also still appears to work as before;

Do note that this all is on a host in swarm mode (i.e., it had been docker swarm init ... and docker stack deploy --compose-file docker-compose.yml ...-ed at some point);

It would appear that all images were automatically converted from aufs to overlay2 by this, except for the replication of their tags, and hence the only remaining housekeeping to do is to get those reattached.

Additionally, I think I should rm -r /var/lib/docker{,/image}/aufs once truly convinced that they're no longer needed.

@agostof commented Jan 24, 2024

For restoring and tagging the images in the same step, a restore script can be created as follows:

cat tag | awk '{ OFS="";print "docker load < ",$3, ".tar; ", $0}' > restore_images.sh

This assumes that tag contents look like this:

#tag contents
docker tag e34e831650c1 ubuntu:latest
docker tag f3d89a2abe0d postgres:15

The restore_images.sh script will look like:

docker load < e34e831650c1.tar; docker tag e34e831650c1 ubuntu:latest
docker load < f3d89a2abe0d.tar; docker tag f3d89a2abe0d postgres:15

Then run bash restore_images.sh.
