#MongoDB - Setting up a Replica Set
First, create a data directory for each member of the replica set:
cd \mongodbdir\
mkdir db1
mkdir db2
mkdir db3
###Primary
mongod --dbpath ./db1 --port 30000 --replSet "demo"
###Secondary
mongod --dbpath ./db2 --port 40000 --replSet "demo"
###Arbiter
mongod --dbpath ./db3 --port 50000 --replSet "demo"
Connect to the primary with the mongo shell:
mongo --port 30000
db.getMongo()
###Configuration
You need to connect to one of the members and set up the configuration. The mongo client is a JavaScript REPL, so you can use all the usual JavaScript commands.
var demoConfig = {
    _id: "demo",
    members: [
        {
            _id: 0,
            host: 'localhost:30000',
            priority: 10
        },
        {
            _id: 1,
            host: 'localhost:40000'
        },
        {
            _id: 2,
            host: 'localhost:50000',
            arbiterOnly: true
        }
    ]
};
###Initialize the Replica Set
The rs object stands for replica set.
rs.initiate(demoConfig)
Please note that this can take some time while the databases are initially set up. When it is complete it will return:
{
    "info" : "Config now saved locally. Should come online in about a minute.",
    "ok" : 1
}
The prompt will change to demo:PRIMARY>.
###Test
To test, add some data on the primary:
db.foo.save({_id:1, value:'hello world'})
Check that the data is saved on the primary:
db.foo.find()
This will return:
{ "_id" : 1, "value" : "hello world" }
Then log onto the secondary, which will have a prompt of demo:SECONDARY>.
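You can connect to it with the same mongo client, this time using the secondary's port (40000 in this setup):
mongo --port 40000
Then query the collection again: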
db.foo.find()
This will return:
{ "_id" : 1, "value" : "hello world" }
####Error not master and slaveOk=false
You may initially get the error:
error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
when you try to run db.foo.find(). This is because the secondary (i.e. the slave) is not set up to perform reads by default. Reads can be enabled using the following command:
db.setSlaveOk()
You should then be able to access the data.
###Replica Set Status
To check on the status of the replica set, just use:
rs.status()
Sorry, I don't know much about connecting from pymongo, mongoengine or flask-mongoengine, but normally there is a way to connect to a replica set. Check out https://api.mongodb.org/python/2.2/examples/replica_set.html
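For what it's worth, a rough sketch with a reasonably recent pymongo might look like the following (treat this as an untested sketch and check the docs linked above; older 2.x drivers used a separate replica-set client class):

```python
from pymongo import MongoClient

# Seed the driver with some of the members and give the replica set
# name; it will discover the remaining members on its own.
client = MongoClient("mongodb://localhost:30000,localhost:40000/?replicaSet=demo")

db = client.test              # the default database used in the shell examples above
print(db.foo.find_one())      # should print {'_id': 1, 'value': 'hello world'}
```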
I suspect that the reason it does not fail over in your 4-server configuration is that a strict majority of the members is needed to elect a new primary, and with 4 servers and 2 down you only have 2 of 4, which is 50% rather than a majority. You are better off with an odd number of servers, such as 5.
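Just to make the arithmetic explicit, here is a small sketch (the function is purely illustrative, not anything from MongoDB):

```python
def has_voting_majority(total_members: int, members_up: int) -> bool:
    """A replica set can elect a primary only while a strict majority
    of its voting members is reachable."""
    return members_up > total_members // 2

print(has_voting_majority(4, 2))  # False: 2 of 4 is only 50%
print(has_voting_majority(5, 3))  # True: 3 of 5 is a majority
```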