```shell
apt-get clean autoclean
apt-get autoremove --yes
rm -rf /var/lib/{apt,dpkg,cache,log}/
```
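In a Dockerfile, cleanup like this only shrinks the image if it runs in the same `RUN` layer as the install; files deleted in a later layer still exist in the earlier one. A minimal sketch of that pattern (base image and package name are illustrative, not from the gist):

```dockerfile
FROM debian:bookworm-slim

# Install and clean up in a single RUN so the deleted files
# never persist in an intermediate layer. Note that removing
# /var/lib/dpkg makes apt permanently unusable in this image.
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl \
 && apt-get clean \
 && apt-get autoremove --yes \
 && rm -rf /var/lib/{apt,dpkg,cache,log}/
```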
Permanently making apt non-functional is actually a very desirable goal; CIS 4.x guidelines would dictate this for security reasons. In a proper container deployment environment you would never need to run apt again after the Docker image was built: you would rebuild the Docker image and push the new image instead.

In fact, it is often necessary to install new binaries on an instance when something is going wrong and you need to debug it.
@danekantner slight correction: those guidelines refer to deployed images. Saying that you don't need apt "after the image was built" is a misinterpretation, otherwise you'd never have base images. You can (and should) disable apt via the entrypoint, but not via the Dockerfile itself.
How does one disable apt via the entrypoint, and why is that beneficial over doing so in the Dockerfile? You can certainly have base images without apt in the base image itself, if it's the last thing that is removed.
@danekantner I think @andrei-dascalu was referring to the situation where you use one image as a base image for another one (where you want to install further packages via `apt`). If you remove `apt` itself in the base image already, you can't do that.
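To illustrate the breakage (image names hypothetical): a downstream Dockerfile that tries to `apt-get install` on top of a base that removed apt will fail at build time.

```dockerfile
# base/Dockerfile -- removes apt as its last step
FROM debian:bookworm-slim
RUN apt-get update \
 && apt-get install -y --no-install-recommends ca-certificates \
 && apt remove apt --autoremove -y --allow-remove-essential

# child/Dockerfile -- this build now fails, since apt-get is gone:
# FROM myregistry/base:latest
# RUN apt-get update && apt-get install -y curl
```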
it is insecure
Ideally you wouldn't have root on your container anyway, so you won't be able to do apt thingies in the first place.
@redthor, I think @JimmyChatz's point still stands for docker though; if you `rm -rf /var/lib/{apt,dpkg,cache,log}/` and make it impossible to use `apt` after that point, you are preventing anyone from using your image as a base image and making modifications with `apt`.

If you're making a conscious decision to do that in exchange for 200 bytes and provide documentation warning people about it, it's probably fine. I, however, think that 200 bytes vs ruining the image's ability to be a base image is a bad tradeoff.
Also, `apt-get clean` is a superset of `apt-get autoclean`, so you only need to run `clean`. As per the docs (emphasis mine): https://linux.die.net/man/8/apt-get

> **clean**
>
> Clears out the local repository of retrieved package files. It removes **everything but the lock file** from /var/cache/apt/archives/ and /var/cache/apt/archives/partial/.
>
> **autoclean**
>
> Like clean, autoclean clears out the local repository of retrieved package files. The difference is that it **only removes package files that can no longer be downloaded**, and are largely useless. This allows a cache to be maintained over a long period of time without it growing out of control. The configuration option APT::Clean-Installed will prevent installed packages from being erased if it is set to off.
Making apt unusable is for security reasons. If someone were to ssh into the pod they wouldn't be able to install malicious packages, or even install ftp, sftp or scp and transfer secrets, certs or files from the code inside the container to their remote server.
> If someone were to ssh into the pod they wouldn't be able to install malicious packages

Uh?

- If you have an SSH server on your container, remove your SSH server ASAP. It is not needed to enter inside. That is a FAQ.
- If an un-trusted user is able to enter your container as root, your container is TOTALLY COMPROMISED. NUKE IT ASAP.
- Destabilizing APT to hamper an "un-trusted root user", so that they cannot use APT, is really nonsense, since I do not know even one cracker that uses APT to download malicious software. Malicious software is directly executed in other low-level ways, like opening a TCP tunnel to a resource and piping the response to a shell. Trust me, a cracker will not run `apt install supertuxkart` or similar.
If you remove the apt lists and make apt unusable, you might as well remove apt entirely with `RUN apt remove apt --autoremove -y --allow-remove-essential` to save 10 MB.
Hi guys, simple question: what's the meaning of `&& rm -rf /var/lib/apt/lists/*` given by the docker doc, and should I do it in my Dockerfile?
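For context, the Docker docs show that fragment appended to an install step, so the package lists fetched by `apt-get update` don't remain in the layer. A sketch of the pattern (package name illustrative):

```dockerfile
FROM ubuntu:22.04
# The lists under /var/lib/apt/lists/ exist only to resolve
# package versions during this RUN; deleting them in the same
# layer keeps the image smaller without breaking apt itself,
# since a later `apt-get update` can recreate them.
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl \
 && rm -rf /var/lib/apt/lists/*
```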
Anyone who is coming to this gist to remove the apt cache in their docker images: I recommend you install the `dive` tool and check which directories consume the most space in your image. For me, the `/var/lib` folder itself was 53 MB, and I could have saved a bunch of MBs in other directories.

A tool for exploring each layer in a docker image: https://github.com/wagoodman/dive
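Usage is just the image reference (assuming `dive` is installed and a docker daemon is running; the image name is hypothetical):

```shell
# Browse layer-by-layer contents and wasted space of an image
dive myapp:latest

# Or analyze a tarball exported with `docker save`
docker save myapp:latest -o myapp.tar
dive docker-archive://myapp.tar
```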
@leiless Thanks for introducing that, really nice.
> Hi guys, simple question: what's the meaning of `&& rm -rf /var/lib/apt/lists/*` given by the docker doc, and should I do it in my Dockerfile?

@ZYinMD the `ubuntu:22.04` image ships with that directory empty, and it grew again even when I ran `apt-get clean`. Removing it would not harm anything.
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/

> Official Debian and Ubuntu images automatically run apt-get clean, so explicit invocation is not required.

https://github.com/moby/moby/blob/03e2923e42446dbb830c654d0eec323a0b4ef02a/contrib/mkimage/debootstrap#L82-L105