This will guide you through setting up a replica set in a Docker environment using:
- Docker Compose
- MongoDB Replica Sets
- Mongoose
- Mongoose Transactions
Thanks to https://gist.github.com/asoorm for helping with their docker-compose file!
```yaml
mongo-setup:
  container_name: mongo-setup
  image: mongo
  restart: on-failure
  networks:
    default:
  volumes:
    - ./scripts:/scripts
  entrypoint: [ "/scripts/setup.sh" ] # Make sure this file exists (see below for the setup.sh)
  depends_on:
    - mongo1
    - mongo2
    - mongo3

mongo1:
  hostname: mongo1
  container_name: localmongo1
  image: mongo
  expose:
    - 27017
  ports:
    - 27017:27017
  restart: always
  entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0", "--journal", "--dbpath", "/data/db", "--enableMajorityReadConcern", "false" ]
  volumes:
    - <VOLUME-DIR>/mongo/data1/db:/data/db # This is where your volume will persist. e.g. VOLUME-DIR = ./volumes/mongodb
    - <VOLUME-DIR>/mongo/data1/configdb:/data/configdb

mongo2:
  hostname: mongo2
  container_name: localmongo2
  image: mongo
  expose:
    - 27017
  ports:
    - 27018:27017
  restart: always
  entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0", "--journal", "--dbpath", "/data/db", "--enableMajorityReadConcern", "false" ]
  volumes:
    - <VOLUME-DIR>/mongo/data2/db:/data/db # Note the data2, it must be different to the original set.
    - <VOLUME-DIR>/mongo/data2/configdb:/data/configdb

mongo3:
  hostname: mongo3
  container_name: localmongo3
  image: mongo
  expose:
    - 27017
  ports:
    - 27019:27017
  restart: always
  entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0", "--journal", "--dbpath", "/data/db", "--enableMajorityReadConcern", "false" ]
  volumes:
    - <VOLUME-DIR>/mongo/data3/db:/data/db
    - <VOLUME-DIR>/mongo/data3/configdb:/data/configdb
```
NOTE: This is the simplest way of achieving a replica set in MongoDB with Docker. However, if you would like a more automated approach, please see the setup.sh file and the docker-compose file which includes this startup script.

Run this after setting up docker-compose; it will instantiate the replica set. The `_id` and hostnames can be tailored to your liking; however, they MUST match the docker-compose file above.

```sh
docker-compose up -d
docker exec -it localmongo1 mongo
```

Then, inside the mongo shell:

```js
rs.initiate(
  {
    _id : 'rs0',
    members: [
      { _id : 0, host : "mongo1:27017" },
      { _id : 1, host : "mongo2:27017" },
      { _id : 2, host : "mongo3:27017", arbiterOnly: true }
    ]
  }
)
exit
```
```js
// If on a Linux server, use the hostname provided by the docker-compose file,
// e.g. HOSTNAME = mongo1, mongo2, mongo3.
// If on macOS, add the following to your /etc/hosts file:
//   127.0.0.1 mongo1
//   127.0.0.1 mongo2
//   127.0.0.1 mongo3
// and use localhost as the HOSTNAME.
mongoose.connect('mongodb://<HOSTNAME>:27017,<HOSTNAME>:27018,<HOSTNAME>:27019/<DBNAME>', {
  useNewUrlParser : true,
  useFindAndModify: false, // optional
  useCreateIndex  : true,
  replicaSet      : 'rs0', // This must match the --replSet value in the entrypoint in the docker-compose file
});
```
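For reference, the multi-host connection string above can also be assembled programmatically. This is a minimal sketch; the `buildReplicaSetUri` helper is purely illustrative and not part of Mongoose or the MongoDB driver:

```javascript
// Build a mongodb:// URI for a replica set from a list of host:port pairs.
// Hypothetical helper for illustration; a hand-written URI works identically.
function buildReplicaSetUri(hosts, dbName, replicaSet) {
  const hostList = hosts.map(({ host, port }) => `${host}:${port}`).join(',');
  return `mongodb://${hostList}/${dbName}?replicaSet=${replicaSet}`;
}

// Matches the ports published by the docker-compose file above.
const uri = buildReplicaSetUri(
  [
    { host: 'localhost', port: 27017 },
    { host: 'localhost', port: 27018 },
    { host: 'localhost', port: 27019 },
  ],
  'mydb',
  'rs0'
);
// uri === 'mongodb://localhost:27017,localhost:27018,localhost:27019/mydb?replicaSet=rs0'
```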
```sh
#!/bin/bash

#MONGODB1=`ping -c 1 mongo1 | head -1 | cut -d "(" -f 2 | cut -d ")" -f 1`
#MONGODB2=`ping -c 1 mongo2 | head -1 | cut -d "(" -f 2 | cut -d ")" -f 1`
#MONGODB3=`ping -c 1 mongo3 | head -1 | cut -d "(" -f 2 | cut -d ")" -f 1`

MONGODB1=mongo1
MONGODB2=mongo2
MONGODB3=mongo3

echo "**********************************************" ${MONGODB1}
echo "Waiting for startup.."
until curl http://${MONGODB1}:27017/serverStatus\?text\=1 2>&1 | grep uptime | head -1; do
  printf '.'
  sleep 1
done

# echo curl http://${MONGODB1}:28017/serverStatus\?text\=1 2>&1 | grep uptime | head -1
# echo "Started.."

echo SETUP.sh time now: `date +"%T" `
mongo --host ${MONGODB1}:27017 <<EOF
var cfg = {
  "_id": "rs0",
  "protocolVersion": 1,
  "version": 1,
  "members": [
    {
      "_id": 0,
      "host": "${MONGODB1}:27017",
      "priority": 2
    },
    {
      "_id": 1,
      "host": "${MONGODB2}:27017",
      "priority": 0
    },
    {
      "_id": 2,
      "host": "${MONGODB3}:27017",
      "priority": 0
    }
  ],
  settings: { chainingAllowed: true }
};
rs.initiate(cfg, { force: true });
rs.reconfig(cfg, { force: true });
rs.slaveOk();
db.getMongo().setReadPref('nearest');
db.getMongo().setSlaveOk();
EOF
```
```js
async function transaction() {
  // Start the transaction.
  const session = await ModelA.startSession();
  session.startTransaction();
  try {
    const options = { session };
    // Try to perform the operation on the model.
    const a = await ModelA.create([{ ...args }], options);
    // If the first operation succeeds, this next one will get called.
    await ModelB.create([{ ...args }], options);
    // If all succeeded with no errors, commit and end the session.
    await session.commitTransaction();
    session.endSession();
    return a;
  } catch (e) {
    // If any error occurred, the whole transaction fails and throws.
    // This undoes any changes that may have happened.
    await session.abortTransaction();
    session.endSession();
    throw e;
  }
}
```
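Transactions can also fail with transient errors (e.g. write conflicts) that are safe to retry. A generic retry wrapper is sketched below, under the assumption that the driver tags such errors with the `TransientTransactionError` label (the MongoDB driver convention); the wrapper itself is hypothetical, not part of Mongoose:

```javascript
// True if the error carries the driver's transient-transaction label.
function hasTransientLabel(err) {
  return Array.isArray(err.errorLabels) &&
    err.errorLabels.includes('TransientTransactionError');
}

// Retry an async function while it throws transient errors,
// up to maxAttempts attempts; rethrow anything non-transient.
async function runWithRetry(fn, maxAttempts = 3) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (e) {
      lastError = e;
      if (!hasTransientLabel(e)) throw e; // non-transient: give up immediately
    }
  }
  throw lastError;
}

// Usage with the transaction() function above:
//   const result = await runWithRetry(() => transaction());
```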
When I initialize the replica set, I get this errmsg:

```
replSetInitiate quorum check failed because not all proposed set members responded affirmatively: mongo2:27017 failed with Error connecting to mongo2:27017 :: caused by :: Could not find address for mongo2:27017: SocketException: Host not found (authoritative), mongo3:27017 failed with Error connecting to mongo3:27017 :: caused by :: Could not find address for mongo3:27017: SocketException: Host not found (authoritative)
```

How come these members can't connect to each other?
I got this error:

```
Failed to connect to mongo on startup - retrying in 1 sec MongoNetworkError: failed to connect to server [mongo3:27019] on first connect [Error: connect ECONNREFUSED 172.26.0.4:27019
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1142:16) {
      name: 'MongoNetworkError',
      [Symbol(mongoErrorContextSymbol)]: {}
    }]
    at Pool.<anonymous> (/usr/app/node_modules/mongodb/lib/core/topologies/server.js:438:11)
    at Pool.emit (events.js:315:20)
    at Pool.EventEmitter.emit (domain.js:485:12)
    at /usr/app/node_modules/mongodb/lib/core/connection/pool.js:561:14
    at /usr/app/node_modules/mongodb/lib/core/connection/pool.js:1008:9
    at /usr/app/node_modules/mongodb/lib/core/connection/connect.js:31:7
    at callback (/usr/app/node_modules/mongodb/lib/core/connection/connect.js:264:5)
    at Socket.<anonymous> (/usr/app/node_modules/mongodb/lib/core/connection/connect.js:294:7)
    at Object.onceWrapper (events.js:422:26)
    at Socket.emit (events.js:315:20)
    at Socket.EventEmitter.emit (domain.js:485:12)
    at emitErrorNT (internal/streams/destroy.js:100:8)
    at emitErrorCloseNT (internal/streams/destroy.js:68:3)
    at processTicksAndRejections (internal/process/task_queues.js:84:21) {
  [Symbol(mongoErrorContextSymbol)]: {}
```
@jessequinn Thanks, I have adapted that and updated the gist.
@funfungo & @Sashakil12 try the new updated docker compose + setup.sh
What if I have services that require waiting for the replica set initialization?
@harveyconnor I am using the docker-compose and setup.sh combination above and I am getting this error:

```
mongo-setup | standard_init_linux.go:211: exec user process caused "no such file or directory"
mongo-setup exited with code 1
```

Although when I visit http://localhost:27017/ I see the message:

> It looks like you are trying to access MongoDB over HTTP on the native driver port.

Ideas?
If anyone else faces the same error on Windows: please change the line endings in setup.sh to LF [if they are CRLF]. For VS users, look at the right part of the status bar [at the bottom].
Thanks for reporting and solving :)
@harveyconnor, @thearabbit Hi, I am having the same issues as thearabbit: I can only connect using the host IP address. Is there any way I can connect using Docker hostnames or localhost? I have posted a detailed question here. Can someone tell me what I am missing?
@AnushaPulichintha
This might be of help: https://github.com/harveyconnor/mongo-docker-local
First, thanks @harveyconnor for all the work here.
My main issue while following the steps above was that I could connect directly from the host to a single mongo container, but not to the replica set. To make it work I had to change a few configurations in both docker-compose.yml and setup.sh, so here's what I did:
TLDR: Here's a gist with my custom setup.
My first issue was that setup.sh couldn't connect to the other containers, so the setup wasn't occurring properly. To fix this, I added ALL containers to the same network as mongo-setup, i.e.:
```yaml
mongo1:
  # other options
  networks:
    default:
```
The second issue was that the hosts and ports known by the replica set are the same hosts and ports that I need to use (on my local machine, Docker's host) in order to establish the connection properly, so I had to:

1. Add all hostnames to /etc/hosts (I added the IP printed by docker network inspect <network_name>, haven't tested with 127.0.0.1). Here's a great tutorial to make it automatic;
2. Change each container_name to match the service name, i.e.: service mongo1 -> container_name mongo1, ...;
3. Give each member its own port in the entrypoint, like: entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0", "--journal", "--dbpath", "/data/db", "--enableMajorityReadConcern", "false", "--port", "27018" ]. I chose 27017 for mongo1, 27018 for mongo2 and 27019 for mongo3;
4. Change expose and ports for each container according to the port used in step 3;
5. Change cfg inside setup.sh, changing each host's port to the one chosen in step 3.

Note: I'm currently using Pop!_OS 21.10 (Ubuntu-based)
This worked for me, but only sometimes. Empirically, I find that if I have N replicas in the set, then I have to wait until they are all ready before sending the JSON config to one of them. E.g., with 3 replicas defined in docker-compose.yaml with service names mongo1, mongo2, and mongo3:

```sh
# setup.sh
...
for n in $(seq 3); do
  until mongo --host "mongo${n}" --eval "print(\"waited for connection\")"; do
    echo -n .; sleep 2
  done
done
...
```
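The same wait-until-ready idea can be expressed as a generic polling helper on the application side. This is a sketch; the `check` predicate is a stand-in for a real readiness probe (e.g. attempting a driver connection to each member):

```javascript
// Poll an async predicate until it returns true or attempts run out.
// Returns true on success, false if all attempts were exhausted.
async function waitUntil(check, { attempts = 30, delayMs = 1000 } = {}) {
  for (let i = 0; i < attempts; i++) {
    if (await check()) return true;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return false;
}

// In a real setup, `check` would try to open a connection to each
// replica member and return true once all of them respond.
```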
You might need authentication with the remote mongo replica. I used this script and it went well; somehow you need to supply the schema for this to work:

```sh
#!/bin/bash
mongosh -u root -p example mongodb://mongo1:27017 << EOF
rs.initiate(
  {
    _id: 'rs',
    members: [
      {_id: 0, host: "mongo1:27017"},
      {_id: 1, host: "mongo2:27017"},
      {_id: 2, host: "mongo3:27017"}
    ]
  }
);
EOF
```
> This worked for me. But only sometimes. Empirically, I find that if I have N replicas in the set, then I have to wait until they are all ready before sending the json config to one of them.
Thank you, this works for me.
I'm using Linux. I configured the replica set successfully and even connected via a server running on the same network as the mongo containers. However, all my attempts to connect MongoDB Compass failed:

```
mongodb://<USER>:<PASS>@mongo1:27017,mongo2:27018,mongo3:27019/?replicaSet=rs0       // failed
mongodb://<USER>:<PASS>@localhost:27017,localhost:27018,localhost:27019/?replicaSet=rs0  // failed
```

Even using my local IP address failed to connect. I do see Docker accepting the connection, but it then fails with the following:
```
mongo1  | {"t":{"$date":"2024-03-23T17:23:58.511+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"192.168.2.84:56344","uuid":{"uuid":{"$uuid":"17e133d8-6eb9-448a-9af2-5cb2d024766a"}},"connectionId":78,"connectionCount":11}}
mongo2  | {"t":{"$date":"2024-03-23T17:23:58.511+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"192.168.2.84:58228","uuid":{"uuid":{"$uuid":"03d389c1-c3f6-474a-8b28-2e1428df858c"}},"connectionId":81,"connectionCount":12}}
mongo3  | {"t":{"$date":"2024-03-23T17:23:58.511+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"192.168.2.84:40516","uuid":{"uuid":{"$uuid":"2184a73b-af1b-46c6-ab68-5284f2fc4823"}},"connectionId":92,"connectionCount":22}}
mongo1  | {"t":{"$date":"2024-03-23T17:23:58.512+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn78","msg":"client metadata","attr":{"remote":"192.168.2.84:56344","client":"conn78","negotiatedCompressors":[],"doc":{"application":{"name":"MongoDB Compass"},"driver":{"name":"nodejs","version":"6.5.0"},"platform":"Node.js v18.18.2, LE","os":{"name":"linux","architecture":"x64","version":"6.5.0-26-generic","type":"Linux"}}}}
mongo2  | {"t":{"$date":"2024-03-23T17:23:58.512+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn81","msg":"client metadata","attr":{"remote":"192.168.2.84:58228","client":"conn81","negotiatedCompressors":[],"doc":{"application":{"name":"MongoDB Compass"},"driver":{"name":"nodejs","version":"6.5.0"},"platform":"Node.js v18.18.2, LE","os":{"name":"linux","architecture":"x64","version":"6.5.0-26-generic","type":"Linux"}}}}
mongo3  | {"t":{"$date":"2024-03-23T17:23:58.512+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn92","msg":"client metadata","attr":{"remote":"192.168.2.84:40516","client":"conn92","negotiatedCompressors":[],"doc":{"application":{"name":"MongoDB Compass"},"driver":{"name":"nodejs","version":"6.5.0"},"platform":"Node.js v18.18.2, LE","os":{"name":"linux","architecture":"x64","version":"6.5.0-26-generic","type":"Linux"}}}}
mongo1  | {"t":{"$date":"2024-03-23T17:23:58.513+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn78","msg":"Connection ended","attr":{"remote":"192.168.2.84:56344","uuid":{"uuid":{"$uuid":"17e133d8-6eb9-448a-9af2-5cb2d024766a"}},"connectionId":78,"connectionCount":10}}
mongo2  | {"t":{"$date":"2024-03-23T17:23:58.513+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn81","msg":"Connection ended","attr":{"remote":"192.168.2.84:58228","uuid":{"uuid":{"$uuid":"03d389c1-c3f6-474a-8b28-2e1428df858c"}},"connectionId":81,"connectionCount":11}}
mongo3  | {"t":{"$date":"2024-03-23T17:23:58.514+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn92","msg":"Connection ended","attr":{"remote":"192.168.2.84:40516","uuid":{"uuid":{"$uuid":"2184a73b-af1b-46c6-ab68-5284f2fc4823"}},"connectionId":92,"connectionCount":21}}
```
Running rs.status() gives me the following:
```js
{
  set: 'rs0',
  date: ISODate('2024-03-23T17:24:32.085Z'),
  myState: 2,
  term: Long('2'),
  syncSourceHost: 'mongo3:27019',
  syncSourceId: 2,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1711214668, i: 1 }), t: Long('2') },
    lastCommittedWallTime: ISODate('2024-03-23T17:24:28.813Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1711214668, i: 1 }), t: Long('2') },
    appliedOpTime: { ts: Timestamp({ t: 1711214668, i: 1 }), t: Long('2') },
    durableOpTime: { ts: Timestamp({ t: 1711214668, i: 1 }), t: Long('2') },
    lastAppliedWallTime: ISODate('2024-03-23T17:24:28.813Z'),
    lastDurableWallTime: ISODate('2024-03-23T17:24:28.813Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1711214658, i: 1 }),
  electionParticipantMetrics: {
    votedForCandidate: true,
    electionTerm: Long('2'),
    lastVoteDate: ISODate('2024-03-23T17:11:28.736Z'),
    electionCandidateMemberId: 2,
    voteReason: '',
    lastAppliedOpTimeAtElection: { ts: Timestamp({ t: 1711213450, i: 1 }), t: Long('1') },
    maxAppliedOpTimeInSet: { ts: Timestamp({ t: 1711213450, i: 1 }), t: Long('1') },
    priorityAtElection: 1,
    newTermStartDate: ISODate('2024-03-23T17:11:28.780Z'),
    newTermAppliedDate: ISODate('2024-03-23T17:11:28.802Z')
  },
  members: [
    {
      _id: 0,
      name: 'mongo1:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 794,
      optime: { ts: Timestamp({ t: 1711214668, i: 1 }), t: Long('2') },
      optimeDate: ISODate('2024-03-23T17:24:28.000Z'),
      lastAppliedWallTime: ISODate('2024-03-23T17:24:28.813Z'),
      lastDurableWallTime: ISODate('2024-03-23T17:24:28.813Z'),
      syncSourceHost: 'mongo3:27019',
      syncSourceId: 2,
      infoMessage: '',
      configVersion: 1,
      configTerm: 2,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 1,
      name: 'mongo2:27018',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 793,
      optime: { ts: Timestamp({ t: 1711214668, i: 1 }), t: Long('2') },
      optimeDurable: { ts: Timestamp({ t: 1711214668, i: 1 }), t: Long('2') },
      optimeDate: ISODate('2024-03-23T17:24:28.000Z'),
      optimeDurableDate: ISODate('2024-03-23T17:24:28.000Z'),
      lastAppliedWallTime: ISODate('2024-03-23T17:24:28.813Z'),
      lastDurableWallTime: ISODate('2024-03-23T17:24:28.813Z'),
      lastHeartbeat: ISODate('2024-03-23T17:24:31.652Z'),
      lastHeartbeatRecv: ISODate('2024-03-23T17:24:31.118Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: 'mongo3:27019',
      syncSourceId: 2,
      infoMessage: '',
      configVersion: 1,
      configTerm: 2
    },
    {
      _id: 2,
      name: 'mongo3:27019',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 793,
      optime: { ts: Timestamp({ t: 1711214668, i: 1 }), t: Long('2') },
      optimeDurable: { ts: Timestamp({ t: 1711214668, i: 1 }), t: Long('2') },
      optimeDate: ISODate('2024-03-23T17:24:28.000Z'),
      optimeDurableDate: ISODate('2024-03-23T17:24:28.000Z'),
      lastAppliedWallTime: ISODate('2024-03-23T17:24:28.813Z'),
      lastDurableWallTime: ISODate('2024-03-23T17:24:28.813Z'),
      lastHeartbeat: ISODate('2024-03-23T17:24:31.652Z'),
      lastHeartbeatRecv: ISODate('2024-03-23T17:24:31.118Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1711213888, i: 1 }),
      electionDate: ISODate('2024-03-23T17:11:28.000Z'),
      configVersion: 1,
      configTerm: 2
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1711214668, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('VClxfA6lXgDXDuYJP68mAP6veSw=', 0),
      keyId: Long('7349605288829255686')
    }
  },
  operationTime: Timestamp({ t: 1711214668, i: 1 })
}
```
Any idea how to use the MongoDB Compass app with Docker containers running a mongo replica set?
I managed to make it work by adding IP mappings in /etc/hosts:

```
127.0.0.1 mongo1
127.0.0.1 mongo2
127.0.0.1 mongo3
```

Then the connection worked as expected.
Great work here; however, if I may offer a suggestion: the above script will/should correctly create the replica set you desire without needing to build it manually first. You may reduce the sleep time.