```yaml
postgres:
  image: postgres:9.4
  volumes:
    - ./init.sql:/docker-entrypoint-initdb.d/init.sql
```

init.sql:

```sql
create table sometable(id int);
```
@poloche make sure you aren't setting the PGDATABASE environment variable. I had a script that created this database, but got an error that the database didn't exist before the init scripts had a chance to run
I was facing the same issue of init scripts getting ignored.
I HAVE FOUND A SOLUTION if it helps anyone !!
```yaml
version: '2'
services:
  common_db:
    image: postgres:10.4
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres123
      - POSTGRES_MULTIPLE_DATABASES=db1,db2
    volumes:
      - ./db.sql:/docker-entrypoint-initdb.d/db.sql
      - ./postgres-data:/var/lib/postgresql/data
    ports:
      - 5436:5432
    networks:
      - app-network
volumes:
  postgres-data:
networks:
  app-network:
    driver: bridge
```
Now as you can see I have the `postgres-data` volume mounted. If I remove this directory before starting the docker-compose file, things work!! If that directory/volume is present, the init scripts are ignored!!
enjoy!!
Init scripts are run only when the `postgres-data` folder is first created. If the `postgres-data` folder is already initialized, the init scripts will be ignored.
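This matches the check in the official image's `docker-entrypoint.sh`: it looks for an existing `PG_VERSION` file in the data directory and skips the whole init phase when one is found (so to force a re-run you remove the volume first, e.g. with `docker compose down -v`). A minimal sketch of that decision, with a temp directory standing in for `PGDATA`:

```shell
# Sketch of the entrypoint's "already initialized?" check.
# PGDATA here is a temp dir standing in for /var/lib/postgresql/data.
PGDATA=$(mktemp -d)

init_state() {
  # The real entrypoint tests for a non-empty PG_VERSION file.
  if [ -s "$PGDATA/PG_VERSION" ]; then
    echo "skip init scripts"
  else
    echo "run init scripts"
  fi
}

first=$(init_state)              # fresh volume: scripts run
echo "14" > "$PGDATA/PG_VERSION" # simulate an initialized data dir
second=$(init_state)             # populated volume: scripts skipped
echo "$first / $second"
rm -r "$PGDATA"
```

This is why renaming or deleting the host data directory "fixes" the problem: the data dir looks fresh again, so initdb and the init scripts run.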
My `docker-compose.yaml` file looks like this below. I've tried all combinations of possible configurations:

```yaml
volumes:
  - ./dumps:/docker-entrypoint-initdb.d/dumps
  - ./dumps/portal.sql:/docker-entrypoint-initdb.d/portal.sql
  - ./dumps:/docker-entrypoint-initdb.d/dumps.sql
```

And `./dumps` contains `portal.sql`. However, I get this error:

```
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/dumps
```

I am following the directions, so what gives?

UPDATE: I got it to work! Here is what works:

```yaml
volumes:
  - ./dumps/portal2.sql:/docker-entrypoint-initdb.d/1-portal.sql
...
volumes:
  dumps:
```

This works. Apparently you have to have a subdirectory which you make a volume down below; then your scripts sit in it and can be linked as volumes directly inside the service.
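A note on the `1-` prefix in the working mount above: the entrypoint processes the entries of `/docker-entrypoint-initdb.d` in sorted filename order (executing `*.sh`, feeding `*.sql` to psql), so numeric prefixes are how you control sequencing across several scripts. A quick sketch of that ordering, with made-up filenames:

```shell
# Simulate the entrypoint iterating /docker-entrypoint-initdb.d/*:
# shell globs expand in sorted (lexical) order.
dir=$(mktemp -d)
touch "$dir/10-data.sql" "$dir/2-schema.sql" "$dir/init.sh"
cd "$dir"
set -- *          # positional params now hold the sorted file list
order="$*"
cd - >/dev/null
echo "$order"     # note: "10-" sorts BEFORE "2-" lexically
rm -r "$dir"
```

The gotcha the sketch shows: `10-data.sql` runs before `2-schema.sql`, so zero-pad your prefixes (`02-`, `10-`) if you have ten or more scripts.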
@rjurney are you sure that is the reason? I wouldn't think a top-level named volume would have anything to do with a local folder. Does anyone know?
> Init scripts are run when the `postgres-data` folder is created. If the `postgres-data` folder is already initialized, the init scripts will be ignored.
nice caveat, thanks
I'm running into an issue where my db init process is in fact running in the container, at least batching a bunch of CREATES and INSERTS of my dump data, but then is abruptly shut down and restarted.
```
strends_psql | CREATE DATABASE
strends_psql | You are now connected to database "rivers" as user "usr".
strends_psql | CREATE SCHEMA
strends_psql | SET
strends_psql | SET
strends_psql | SET
strends_psql | SET
strends_psql | SET
strends_psql | SET
strends_psql | SET
strends_psql | CREATE TABLE
strends_psql | INSERT 0 4
strends_psql |
strends_psql |
strends_psql | 2019-04-29 17:43:06.633 UTC [41] LOG:  received fast shutdown request
strends_psql | waiting for server to shut down....2019-04-29 17:43:06.637 UTC [41] LOG:  aborting any active transactions
```

When I get a shell into my db container, the database `rivers` was created but is empty, with no tables, relations, or data from my batched inserts.
@honestcomrade did you manage to fix this?
@honestcomrade any luck?
> I'm running into an issue where my db init process is in fact running in the container, at least batching a bunch of CREATEs and INSERTs of my dump data, but then is abruptly shut down and restarted. When I get a shell into my db container, the database `rivers` was created but is empty, with no tables, relations, or data from my batched inserts.
I had the same issue but after a few hours of trying, I found a working combination of docker-compose and init script. Btw, I read here that the container shutting down and restarting is (apparently) part of the initialization process.
This is part of my initdb.sh:

```shell
#!/bin/bash
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" -d "$POSTGRES_DB" <<-EOSQL
create schema if not exists $SCHEMA;
create table $SCHEMA.todos (
  id serial primary key,
  done boolean not null default false,
  task text not null,
  due timestamptz
);
create role $ANON nologin;
create role $AUTHENTICATOR noinherit login password '$POSTGRES_PASSWORD';
grant $ANON to $AUTHENTICATOR;
EOSQL
```
This is part of my docker-compose.yml:

```yaml
version: '3.7'
services:
  db:
    image: postgres
    restart: always
    volumes:
      - ./initdb.sh:/docker-entrypoint-initdb.d/initdb.sh
      - ./pgdata:/var/lib/postgresql/data
    ports:
      - 8002:5432
    environment:
      POSTGRES_PASSWORD: docker
      POSTGRES_DB: mydb
      SCHEMA: public
      ANON: web_anon
      AUTHENTICATOR: authenticator
```
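What makes `$SCHEMA`, `$ANON` and `$AUTHENTICATOR` work in a setup like this is that `.sh` files in `/docker-entrypoint-initdb.d` run under the shell with the container's environment, and the heredoc delimiter is unquoted, so the shell expands the variables before psql ever sees the text. A minimal sketch of that expansion, with `cat` standing in for `psql`:

```shell
# These would come from the compose file's environment section.
SCHEMA=public
ANON=web_anon

# Unquoted delimiter (EOSQL rather than 'EOSQL') => variables expand.
sql=$(cat <<EOSQL
create schema if not exists $SCHEMA;
create role $ANON nologin;
EOSQL
)
echo "$sql"
```

Quoting the delimiter (`<<'EOSQL'`) would pass the text through literally, which is the usual reason a literal `$SCHEMA` shows up in Postgres error messages.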
@Juliannnnshipit this looks like it might help me out, thanks a bunch for the reply.
Hi there! Thank you! Is there a way to update my SQL script and then update the existing image without data loss? For example, I need to add a trigger to an existing postgres image...
Hi, the initdb.d scripts are called only if there is no database yet, so if you need to migrate a current database it's better to run those SQL migrations from the psql CLI against the already-created databases; the docker-entrypoint-initdb.d scripts will run only for new, clean instances.
FYI: do not pass the host (localhost) in the initialization script, or you're likely to encounter an error about not being able to connect to the database.
In docker-compose.yml, map the file onto the directory, i.e. `path/file.sql:/path/to/docker-entrypoint-initdb.d/`; in fact, point the file at the directory.
Hint: remember where the working dir is, based on the Dockerfile of the image you are using to compose this container.

```yaml
volumes:
  - ./init.sql:/docker-entrypoint-initdb.d/
```
Did anyone get this to work properly? I am currently facing similar issues.
It is easier to make a Dockerfile and COPY the init.sql to /docker-entrypoint-initdb.d.
> Did anyone get this to work properly? I am currently facing similar issues.

Well, what worked for me was deleting all previous containers of the same image:

```shell
docker rm -f -v postgres_container_name
```

I restarted and it worked like a charm after that.
What is the best way to add tables from SQL files after the first init? At the moment I have a docker-compose file with mariadb and phpmyadmin. docker-entrypoint-initdb.d works the first time, but I need the possibility to create tables from SQL files after `docker-compose up`. Can you help me?
After init, I'm using a separate file with database migrations (using an app-specific DB migration system) and run them from the application's entrypoint.sh; from time to time I merge the migrations into the init script and clear the migrations file.
I had some trouble getting it to work. My docker-compose.yml looks like this:

```yaml
volumes:
  - ./:/app
services:
  db:
    image: postgres:12
    restart: unless-stopped
    volumes:
      - ./project/data/init.sql:/docker-entrypoint-initdb.d/init.sql
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=picker
      - POSTGRES_PASSWORD=#
      - POSTGRES_DB=pricepicker-v2
volumes:
  postgres_data:
```

This is because Postgres does not execute init scripts if the `postgres-data` folder is present. So I just renamed it to `postgres` and it works like a charm!
Thanks @Juliannnnshipit for your help. I got it working.

My setup.sh file looks something like this:

```shell
#!/usr/bin/env bash
psql "postgres://$POSTGRES_USER:$POSTGRES_PASSWORD@$POSTGRES_HOST/$POSTGRES_DB?sslmode=disable" <<-EOSQL
-- YOUR SQL Statements here
EOSQL
```

docker-compose.yml:

```yaml
postgresdb:
  image: postgres:9.4
  container_name: postgresdb
  environment:
    - POSTGRES_USER=${DB_USER}
    - POSTGRES_PASSWORD=${DB_PASS}
    - POSTGRES_DB=${DB_NAME}
    - DATABASE_HOST=${DB_HOST}
  ports:
    - 5433:5432
  volumes:
    - ./contrib/setup.sh:/docker-entrypoint-initdb.d/initdb.sh
    - ./pgdata:/var/lib/postgresql/data
```
Thank you so much. It saved me a lot of time.
I struggled with this same issue and thought the only solution was to not use a SQL file in the docker-entrypoint-initdb.d directory, but actually the only problem was that I wasn't granting privileges to the user I was logging in as, so the schemas were getting created, just invisible to my user...
Don't have to do this:

```shell
#!/bin/bash
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" -d "$POSTGRES_DB" <<-EOSQL
create schema if not exists $SCHEMA;
create table $SCHEMA.todos (
  id serial primary key,
  done boolean not null default false,
  task text not null,
  due timestamptz
);
create role $ANON nologin;
create role $AUTHENTICATOR noinherit login password '$POSTGRES_PASSWORD';
grant $ANON to $AUTHENTICATOR;
EOSQL
```

You can do this...

create_tables.sql:

```sql
CREATE DATABASE audit_service;
GRANT ALL PRIVILEGES ON DATABASE audit_service to "postgres";
```

docker-compose.yml:

```yaml
db:
  image: postgres:14.3-alpine
  restart: unless-stopped
  networks:
    - my-appliance
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=postgres
  ports:
    - '5432:5432'
  volumes:
    - ./postgres/sql/create_tables.sql:/docker-entrypoint-initdb.d/create_tables.sql
    - db_data:/var/lib/postgresql/data
```
> When I get a shell into my db container, the database `rivers` was created but is empty, with no tables, relations, or data from my batched inserts.

> I had the same issue but after a few hours of trying, I found a working combination of docker-compose and init script.
If you put a CREATE DATABASE statement at the top of your actual SQL script but don't supply POSTGRES_DB, the entrypoint uses POSTGRES_USER as the DB name, so the tables don't go where they should.
If you supply POSTGRES_DB and also put a CREATE DATABASE statement at the top of the SQL script, you will get an error about the DB already existing, since POSTGRES_DB creates the DB before the script runs. You can see this order of operations in the logs.
I think the real fix here is to:
- set POSTGRES_DB to the name of the DB you want to create and don't CREATE DATABASE in your script
- add the SQL scripts to the volume per the docs
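A sketch of that arrangement as a compose fragment (service, image tag, and file names are illustrative, not from the thread):

```yaml
# Sketch: let POSTGRES_DB create the database, keep the script to tables.
services:
  db:
    image: postgres:14
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: audit_service   # entrypoint creates this DB first
    volumes:
      # Script contains only CREATE TABLE / GRANT statements,
      # no CREATE DATABASE -- it already runs connected to POSTGRES_DB.
      - ./sql/create_tables.sql:/docker-entrypoint-initdb.d/create_tables.sql
      - db_data:/var/lib/postgresql/data
volumes:
  db_data:
```

This avoids both failure modes above: the DB exists before the script runs, and the script never tries to re-create it.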
> I was facing the same issue of init scripts getting ignored. I HAVE FOUND A SOLUTION if it helps anyone!! [...] As you can see I have the `postgres-data` volume mounted. If I remove this directory before starting the docker-compose file, things work!! If that directory/volume is present, the init scripts are ignored!! enjoy!!
GOD!!!
I know it's maybe off topic, but asking out of cough cough despair: did anyone find a way to do something like this with an Oracle database? "docker-entrypoint-initdb.d" doesn't seem to work for Oracle, but worked perfectly when I tried with postgres.
> did anyone find a way to do something like this with an oracle database? "docker-entrypoint-initdb.d" doesn't seem to work for oracle...

It depends on the docker image you're using, whether it supports entry scripts and how. For the postgres docker image you can see the execution here:
For your image you need to check the documentation or source code, if it's available.
```yaml
postgres:
  image: postgres
  container_name: postgres
  environment:
    - POSTGRES_USER=user
    - POSTGRES_PASSWORD=password
    - POSTGRES_DB=postgresdb
  ports:
    - "5432:5432"
  volumes:
    - ./app/db/init_db/:/docker-entrypoint-initdb.d/
    - postgres_data:/var/lib/postgresql/data
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U user -d postgresdb"]
    interval: 10s
    timeout: 5s
    retries: 5

volumes:
  postgres_data:
```

And remember to `chmod 644` your `.sql` file; in my case `chmod 644 ./app/db/init_db/init.sql`. This is very, very important.
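The permissions point is easy to check locally: the init scripts are read inside the container by the postgres user, not by your host user, so world-readable mode 644 is the safe default for mounted `.sql` files. A quick sketch of what the chmod does:

```shell
# Create a stand-in init script and make it world-readable,
# so a non-owner user (like postgres inside the container) can read it.
f=$(mktemp)               # mktemp creates the file owner-only (600)
chmod 644 "$f"            # rw-r--r--: readable by everyone
perms=$(stat -c '%a' "$f" 2>/dev/null || stat -f '%Lp' "$f")
echo "$perms"
rm "$f"
```

With 600 the entrypoint may silently skip (or fail to read) the script, which looks exactly like "init scripts are ignored".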
I followed this thread and figured out a docker-compose.yml that works. But when I try to create a new database or access the created table or its data, I see no output and no error; the new database and table just don't get created. (I tried connecting to the database using `docker exec -it containerId bash`, then `psql -h <hostname> -p <port> -U <username> -d <database>`.)
Surprisingly, when I do the GET operation from my codebase, it does retrieve the inserted data. Is this expected, or am I missing something?