"transform": {
"script": {
"source": "return [ 'dups': ctx.payload.aggregations.my_buckets.buckets.stream().filter(t -> { return t.doc_count > 1 }).map(t -> { return ['key': t.key ] }).collect(Collectors.toList()) ]",
"lang": "painless"
}
}
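To see what the Painless transform above computes, here is a minimal Python sketch of the same logic: keep only the aggregation buckets seen more than once and return their keys. The payload below is hypothetical sample data shaped like ctx.payload in the watch.

```python
# Hypothetical sample shaped like ctx.payload in the watch
payload = {
    "aggregations": {
        "my_buckets": {
            "buckets": [
                {"key": "a", "doc_count": 3},
                {"key": "b", "doc_count": 1},
                {"key": "c", "doc_count": 2},
            ]
        }
    }
}

def find_dups(payload):
    # filter(t -> t.doc_count > 1).map(t -> ['key': t.key]) in Painless
    buckets = payload["aggregations"]["my_buckets"]["buckets"]
    return {"dups": [{"key": b["key"]} for b in buckets if b["doc_count"] > 1]}

print(find_dups(payload))
```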
curl -o /dev/null -s -w 'Establish Connection: %{time_connect}s\nTTFB: %{time_starttransfer}s\nTotal: %{time_total}s\n' https://your_endpoint_url:port
You'll need to set NODE_OPTIONS in your environment variables:
export NODE_OPTIONS="--max-old-space-size=2048"
(2048 MB is equivalent to 2 GB, for example.)
Please note that a Kibana restart is needed for the change to take effect.
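If you'd rather not manage environment variables, and assuming Kibana 7.9 or later (which reads config/node.options at startup), the same flag can be placed in that file instead:

```
## config/node.options
--max-old-space-size=2048
```

A restart is still required either way.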
jq -c '.hits.hits[] | { index: {_index:._index, _type:._type, _id:._id}}, ._source' file.json | curl -XPOST -H "Content-Type: application/x-ndjson" localhost:9200/_bulk --data-binary @- | jq .
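The jq filter above turns each hit of a search response into two NDJSON lines: an index action and the document source. A minimal Python sketch of the same transformation, using a hypothetical sample response in place of file.json:

```python
import json

# Hypothetical sample standing in for file.json (an ES search response)
response = {
    "hits": {
        "hits": [
            {"_index": "logs", "_type": "_doc", "_id": "1", "_source": {"msg": "hello"}},
            {"_index": "logs", "_type": "_doc", "_id": "2", "_source": {"msg": "world"}},
        ]
    }
}

def to_bulk_ndjson(response):
    lines = []
    for hit in response["hits"]["hits"]:
        # one action line, then one source line per document
        action = {"index": {"_index": hit["_index"], "_type": hit["_type"], "_id": hit["_id"]}}
        lines.append(json.dumps(action))
        lines.append(json.dumps(hit["_source"]))
    # the _bulk API requires a trailing newline
    return "\n".join(lines) + "\n"

print(to_bulk_ndjson(response))
```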
GET .monitoring-es*/_search
{
  "_source": ["node_stats.process.cpu.percent"],
  "size": 200,
  "query": {
    "exists": {
      "field": "node_stats.process.cpu.percent"
    }
  }
}
An Elasticsearch Painless script that calculates the difference in days between a date indexed in a document and the current date.
GET days_compare/_search
{
  "script_fields": {
    "diffdate": {
      "script": {
        "lang": "painless",
        "source": """
          if (doc['field'].size() != 0) {
            // 'field' is the name of your indexed date field;
            // do what operation you need, e.g. return the difference in days
            return (new Date().getTime() - doc['field'].value.toInstant().toEpochMilli()) / 86400000L;
          }
          return null;
        """
      }
    }
  }
}
A custom analyzer using the lowercase tokenizer:
PUT my_lowercase_tokenizer/
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": {
          "tokenizer": "lowercase"
        }
      }
    }
  }
}
- Choose the CSV file to import, including your coordinate data, and upload it:
location.csv
lat,long,timestamp
41.12,-71.34,1569476964
38.85896,-106.01665,1569476964
65.47629,18.61576,1569476964
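The timestamp column above is in epoch seconds; an ISO-8601 string is often easier to map as a date field during import. A small sketch converting one row of location.csv (the row values come from the sample above):

```python
from datetime import datetime, timezone

def epoch_to_iso(ts):
    # convert epoch seconds to an ISO-8601 UTC string
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

row = {"lat": 41.12, "long": -71.34, "timestamp": 1569476964}
print(epoch_to_iso(row["timestamp"]))
```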