You'll need to build Elasticsearch from source. For that we will use Gradle (https://gradle.org/).
Make sure that your JAVA_HOME
environment variable is set by running:
$ echo $JAVA_HOME
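On macOS it typically points at a JDK home; the exact path depends on which JDK you have installed, so the one below is only an illustration:
/Library/Java/JavaVirtualMachines/jdk-11.0.2.jdk/Contents/Home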
If not, add the following to your .bash_profile or .zshrc (see https://stackoverflow.com/a/6588410):
export JAVA_HOME="$(/usr/libexec/java_home -v 11)"
...and reload it
source ~/.bash_profile
# or
source ~/.zshrc
$ cd /path/to/dev/elasticsearch # your local checkout of the ES repo
$ git checkout master
$ git pull upstream master
$ ./gradlew assemble --parallel
$ cd distribution/archives/zip/build/distributions
$ unzip elasticsearch-7.0.0-alpha1-SNAPSHOT.zip
$ cd elasticsearch-7.0.0-alpha1-SNAPSHOT
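Before launching anything, you can quickly confirm which snapshot you just unpacked by asking the launcher for its version (it prints the build info and exits):
$ bin/elasticsearch --version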
Launch each instance in a separate terminal window
bin/elasticsearch -E cluster.name=prod1
# and
bin/elasticsearch -E cluster.name=prod2 -E node.max_local_storage_nodes=2 -E transport.tcp.port=9400
Check that both instances are up and running as two distinct clusters:
$ curl 'http://localhost:9200/'
$ curl 'http://localhost:9201/'
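If you just want to see which cluster answered on which port, grepping the cluster_name field out of each response is enough (the first should report prod1, the second prod2):
$ curl -s 'http://localhost:9200/?pretty' | grep cluster_name
$ curl -s 'http://localhost:9201/?pretty' | grep cluster_name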
In the examples below we will treat :9200 as the leader cluster and :9201 as the follower cluster.
- Activate a trial license on both instances
curl -X POST 'http://localhost:9200/_xpack/license/start_trial?acknowledge=true' -H 'Content-Type: application/json'
curl -X POST 'http://localhost:9201/_xpack/license/start_trial?acknowledge=true' -H 'Content-Type: application/json'
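To double-check that the trial actually took effect, read the license back on both instances; each response should report a trial license with an active status:
curl 'http://localhost:9200/_xpack/license?pretty'
curl 'http://localhost:9201/_xpack/license?pretty'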
- Check that the CCR stats API exists
GET _ccr/stats
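Requests written in this console style can be pasted into Kibana Dev Tools; if you prefer to stay in the terminal, the equivalent is a plain curl against whichever instance you are targeting, for example:
curl 'http://localhost:9200/_ccr/stats?pretty'
curl 'http://localhost:9201/_ccr/stats?pretty'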
- Register the remote clusters (send the request to the follower instance)
PUT _cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "prod1": {
          "seeds": ["127.0.0.1:9300"]
        },
        "prod2": {
          "seeds": ["127.0.0.1:9400"]
        }
      }
    }
  }
}
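For reference, the same settings update as a plain curl (pointed at :9201 here; adjust the port if you also want to register the remotes on the other instance):
curl -X PUT 'http://localhost:9201/_cluster/settings' -H 'Content-Type: application/json' -d '
{
  "persistent": {
    "cluster": {
      "remote": {
        "prod1": { "seeds": ["127.0.0.1:9300"] },
        "prod2": { "seeds": ["127.0.0.1:9400"] }
      }
    }
  }
}'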
- Check the connection
GET _remote/info
You should see that 1 node is connected to each cluster.
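The fields to look at per remote are connected and num_nodes_connected; a trimmed response (other fields omitted, your values may differ slightly) looks roughly like:
{
  "prod1" : {
    "seeds" : [ "127.0.0.1:9300" ],
    "connected" : true,
    "num_nodes_connected" : 1,
    "skip_unavailable" : false
  },
  "prod2" : {
    "seeds" : [ "127.0.0.1:9400" ],
    "connected" : true,
    "num_nodes_connected" : 1,
    "skip_unavailable" : false
  }
}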
Great! Let's now verify that Cross-Cluster Replication works.
- Create the leader index
Important: soft_deletes has to be enabled on the index.
Execute the request on the leader instance (:9200):
PUT my_index
{
  "settings": {
    "number_of_shards": 1,
    "soft_deletes.enabled": true
  }
}
GET _cat/indices?v
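You can also read the index settings back to confirm that soft deletes were really picked up (look for index.soft_deletes.enabled set to "true"):
GET my_index/_settings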
- Create the follower index and make it follow the leader
Execute this on the follower cluster (e.g. http://localhost:9201):
PUT my_index_f/_ccr/follow
{
  "remote_cluster" : "prod1",
  "leader_index" : "my_index"
}
GET _cat/indices?v
GET my_index_f/_search
Add a document to the leader index and verify that it shows up in the follower index
# localhost:9200
PUT my_index/_doc/1
{
  "foo": "bar"
}
# localhost:9201
GET my_index_f/_search
Add another document
# localhost:9200
PUT my_index/_doc/2
{
  "foo": "qux"
}
# localhost:9201
GET my_index_f/_search
Delete a document
# localhost:9200
DELETE my_index/_doc/1
# localhost:9201
GET my_index_f/_search
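As a last sanity check, the document counts on both sides should match once the delete has replicated (this can take a moment); _count gives a quick way to compare:
# localhost:9200
GET my_index/_count
# localhost:9201
GET my_index_f/_count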