require('dotenv').config({ path: '.env' });
var elasticSearch = require('elasticsearch');
var esRWClient = require('./esClient');         // read-write ES client initialization
var esRClient = require('./esClient_readOnly'); // read-only ES client
var bunyan = require('bunyan');
var target_index = process.env.target || 'reindex'; // Change this to your target index.
var source_index = process.env.source;              // Change this to your source index.
var global_scroll_id;
#Reindexing an Elasticsearch index is painful when resources are limited and the cluster must keep serving traffic at the same time.
#Hence it is advisable to size up the document count first and break the job into chunks based on time.
#Look to Kibana: the breakdown is already done for you as you perform your search.
#Just pop open the request and the date-histogram aggregation query is there.
#Using this, you can tally your document count per time range to verify your progress.
#I needed to do this because of resource constraints: the Logstash input plugin sometimes hits an error and restarts.
#When it restarts, the query gets executed again. With logstash-input-elasticsearch, it resumes with a new search,
#and any previous scroll ID is discarded. This is something you do not want happening:
#you can end up with more documents in the target than in the source.
#Thus breaking the job into time chunks limits the corruption and makes remediation easier.
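The chunking idea above can be sketched as follows. This is a minimal, hedged sketch, not the author's actual script: `makeTimeChunks` is a hypothetical helper that splits a time span into `[gte, lt)` windows, `reindexChunk` copies one window with the legacy `elasticsearch` client's `search`/`scroll`/`bulk` calls, and `@timestamp` is an assumed date field. If a chunk fails, only that window needs to be re-run and cleaned up.

```javascript
// Sketch: break a reindex into time-based chunks so a restart only
// re-runs one window instead of the whole scroll. The client objects
// and index names mirror the setup snippet above; '@timestamp' and
// the helper names are assumptions for illustration.

// Pure helper: split [startMs, endMs) into consecutive [gte, lt) windows.
function makeTimeChunks(startMs, endMs, stepMs) {
  var chunks = [];
  for (var t = startMs; t < endMs; t += stepMs) {
    chunks.push({ gte: t, lt: Math.min(t + stepMs, endMs) });
  }
  return chunks;
}

// Copy one time window from source to target using scroll + bulk.
// Calls done(err) when the window is fully copied or on first error.
function reindexChunk(esRClient, esRWClient, sourceIndex, targetIndex, chunk, done) {
  esRClient.search({
    index: sourceIndex,
    scroll: '1m',
    size: 500,
    body: {
      query: { range: { '@timestamp': { gte: chunk.gte, lt: chunk.lt } } }
    }
  }, function next(err, resp) {
    if (err) { return done(err); }
    var hits = resp.hits.hits;
    if (hits.length === 0) { return done(null); } // window drained
    var bulkBody = [];
    hits.forEach(function (hit) {
      bulkBody.push({ index: { _index: targetIndex, _id: hit._id } });
      bulkBody.push(hit._source);
    });
    esRWClient.bulk({ body: bulkBody }, function (bulkErr) {
      if (bulkErr) { return done(bulkErr); }
      // Continue the same scroll for the next page of this window.
      esRClient.scroll({ scrollId: resp._scroll_id, scroll: '1m' }, next);
    });
  });
}
```

With this shape, you would loop over `makeTimeChunks(...)` and call `reindexChunk` one window at a time, tallying the per-window document counts against the Kibana aggregation to verify each chunk before moving on.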