# Assumes the database container is named 'db'
DOCKER_DB_NAME="$(docker-compose ps -q db)"
DB_HOSTNAME=db
DB_USER=postgres
LOCAL_DUMP_PATH="path/to/local.dump"

docker-compose up -d db
docker exec -i "${DOCKER_DB_NAME}" pg_restore -C --clean --no-acl --no-owner -U "${DB_USER}" -d "${DB_HOSTNAME}" < "${LOCAL_DUMP_PATH}"
docker-compose stop db
Ok that makes sense, thank you :)
pg_restore: [archiver] input file is too short (read 0, expected 5) error why ?
x2
pg_restore: [archiver] input file is too short (read 0, expected 5) error why ?
Did you ever solve it?
Your dump is probably empty, or the arguments to docker exec are missing. Could you cat your dump and paste the command you use?
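One quick way to check whether the dump itself is the problem: a custom-format archive produced by `pg_dump -Fc` starts with the five-byte magic string `PGDMP`, which is exactly why the error says "expected 5". A minimal local sketch (the `/tmp` file paths are stand-ins, not real dumps):

```shell
# Stand-in files simulating a valid and an empty dump.
printf 'PGDMP' > /tmp/fake.dump   # a real custom-format archive begins with this magic
: > /tmp/empty.dump               # an empty file, like a dump that failed silently

head -c 5 /tmp/fake.dump          # shows the PGDMP magic
wc -c < /tmp/empty.dump           # shows 0 bytes -- pg_restore on this file reports
                                  # "input file is too short (read 0, expected 5)"
```

If `head -c 5 your.dump` does not print `PGDMP` (or the file is zero bytes), the restore command is not the issue; the dump is.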
pg_restore: [archiver] input file is too short (read 0, expected 5) error why ?
x2
x3
Did you forget the -i option in the docker command, by chance?
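That is a likely culprit: without `-i`, `docker exec` attaches nothing to the process's stdin, so pg_restore sees immediate EOF and reads zero bytes, exactly as if input had been redirected from /dev/null. You can see the effect without Docker:

```shell
# A reader attached to an empty stdin sees immediate EOF -- zero bytes read.
# This is what pg_restore experiences when `docker exec` is run without -i.
wc -c < /dev/null
```

The count printed is 0, matching the "read 0, expected 5" in the error message.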
Did somebody solve it? The tips above didn't help.
This is an awkward bug kept for backwards compatibility. It is hard to understand unless you know you're in a pseudo-TTY and one side is interactive. The -i and -T flags have exactly opposite meanings depending on whether you use docker or docker-compose (which should be docker compose now).
I am guessing the original docker-compose implementation did some sort of odd low-level work here, so -i and -T, while poorly implemented, probably had to deal with calling a single container; compose was more or less a convenience layer living in its own process/memory space.
So you're originally getting the name of the db from a separate command (docker-compose), then exec'ing with docker. Who knows whether the db is the literal text "db" or maps to a process somehow. Furthermore, I don't know how interactive and TTY modes behave when executed the way you're doing it.
I was partly right, I think; it is closed-ish... and it exposed that a single command could be executed on both a TLS and a non-TLS port, which doesn't make sense and allowed it to escape the container.
There seem to be several possible causes, but the unencrypted socket looks like the problem. I bet you have two different versions of TLS on the containers, and/or pg_restore switches to another container, or even to local, for whatever reason.
I use Firecracker and a hypervisor to isolate containers in pods, and the same for the pods themselves. I also run gVisor on top of seccomp/SELinux. People think I'm crazy, but secure connections with rotating keys in a "microVM" inside a hypervisor with TDX is about as much defense in depth as possible.
It sounds like chasing the latest tech, but really it's old tech with new marketing. Better than being stuck on Docker because there are a billion clusters running on it with workarounds for this mess.
@lud Since you run the command docker-compose exec ... on your computer, the < operator is interpreted on your computer as well, not in the container ;-) I mean: the command run in the container is pg_restore -C --clean --no-acl --no-owner -U "${DB_USER}" -d "${DB_NAME}", without the < "${LOCAL_DUMP_PATH}" suffix; that suffix is handled on your computer. Since pg_restore is run without a filename argument, it expects the archive to be fed via standard input, and < provides data to standard input.
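The point about < being interpreted by the local shell can be demonstrated without Docker at all (the /tmp path is just illustrative):

```shell
# The shell opens the file and wires it to the command's stdin;
# `cat` itself never receives a filename argument.
printf 'hello\n' > /tmp/demo.txt
cat < /tmp/demo.txt
```

This prints "hello" even though cat was given no filename. It is also why the redirection only works when stdin is actually forwarded: the local docker (or docker-compose) client receives the dump on its stdin and must pass it through to the container, hence `docker exec -i` (or `docker-compose exec -T`).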