# GOAL: Secure a cluster and an index using Elasticsearch Security
# INITIAL SETUP: (i) a running Elasticsearch cluster with at least one node and a Kibana instance, (ii) no index named `hamlet`
# Copy-paste the following instructions into your Kibana console, and work directly from there
# Enable X-Pack security on the cluster
# Set the password of the `elastic` and `kibana` built-in users, using the pattern "{{username}}-password" (e.g., "elastic-password") (a sketch follows the bulk command below)
# Log in to Kibana using the `elastic` user credentials
# Create the index `hamlet` and add some documents by running the following _bulk command
PUT hamlet/_doc/_bulk
{"index":{"_index":"hamlet","_id":0}}
# GOAL: Deploy an Elasticsearch cluster that satisfies a given set of requirements
# INITIAL SETUP: /
# Download the latest 6.x version of Elasticsearch and Kibana
# Deploy the cluster `eoc-01-cluster`, so that it satisfies the following requirements: (i) has three nodes, named `node1`, `node2`, and `node3`, (ii) all nodes are master-eligible
# Configure the nodes to avoid the split-brain scenario (a sketch of the quorum setting follows these steps)
# Configure `node1` so that the node (i) is a data node but not an ingest node, (ii) is bound to the network address "192.168.0.100" and HTTP port "9201", (iii) doesn't allow swapping on its host
# Configure the Zen Discovery module of `node2` and `node3` to use the address and default transport port of `node1`
# Configure the JVM settings of each node so that it uses a minimum and maximum of 8 GB for the heap
# Configure the logging settings of each node so that (i) the logs directory is not the default one, (ii) the log level for transport-related events is set to "debug"
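# A minimal sketch of the split-brain step only, assuming the three master-eligible nodes above:
# with three master-eligible nodes the quorum is (3 / 2) + 1 = 2; the same value can instead be set
# statically as discovery.zen.minimum_master_nodes in each node's elasticsearch.yml before startup
PUT _cluster/settings
{
  "persistent": {
    "discovery.zen.minimum_master_nodes": 2
  }
}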
# GOAL: Allocate the shards in a way that satisfies a given set of requirements
# INITIAL SETUP: /
# Download the latest 6.x version of Elasticsearch and Kibana
# Deploy the cluster `eoc-06-cluster`, with three nodes named `node1`, `node2`, and `node3`
# Configure the Zen Discovery module of each node so that the nodes can communicate with each other
# Connect a Kibana instance to `node3`
# Start the cluster
# Create the index `hamlet-1` with two primary shards and one replica (a sketch follows these steps)
# Add some documents to `hamlet-1` by running the following _bulk command
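# A minimal sketch of the index-creation step above (two primary shards, one replica), assuming
# the cluster is up and all other settings are left at their defaults
PUT hamlet-1
{
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1
  }
}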
# GOAL: Backup and cross-cluster search
# INITIAL SETUP: /
# Download the latest 6.x version of Elasticsearch and Kibana
# Deploy the cluster `eoc-06-earth-cluster`, with one node named `node-earth`
# Connect a Kibana instance to `node-earth`
# Start the cluster
# Create the index `hamlet` and add some documents by running the following _bulk command
PUT hamlet/_doc/_bulk
{"index":{"_index":"hamlet","_id":0}}
# GOAL: Create, update, and delete indices while satisfying a given set of requirements
# INITIAL SETUP: (i) a running Elasticsearch cluster with at least one node and a Kibana instance, (ii) no index or index template that starts with `hamlet`
# Copy-paste the following instructions into your Kibana console, and work directly from there
# Create the index `hamlet-raw` with one primary shard and four replicas (a sketch of this and the next two steps follows the list)
# Add a document to `hamlet-raw`, so that the document (i) has id "1", (ii) has the default type, (iii) has a field named `line` with value "To be, or not to be: that is the question"
# Update the document with id "1" by adding a field named `line_number` with value "3.1.64"
# Add a new document to `hamlet-raw`, so that the document (i) has no explicit id, (ii) has the default type, (iii) has a field named `text_entry` with value "Whether tis nobler in the mind to suffer", (iv) has a field named `line_number` with value "3.1.66"
# Update the last document by setting the value of `line_number` to "3.1.65"
# In one
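# A minimal sketch of the first three steps above, assuming a 6.x cluster and the `_doc` type
PUT hamlet-raw
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 4
  }
}
PUT hamlet-raw/_doc/1
{
  "line": "To be, or not to be: that is the question"
}
POST hamlet-raw/_doc/1/_update
{
  "doc": {
    "line_number": "3.1.64"
  }
}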
# GOAL: Create index templates that satisfy a given set of requirements
# INITIAL SETUP: (i) a running Elasticsearch cluster with at least one node and a Kibana instance, (ii) no index or index template that starts with `hamlet`
# Copy-paste the following instructions into your Kibana console, and work directly from there
# Create the index template `hamlet_template`, so that the template (i) matches any index that starts with "hamlet_" or "hamlet-", (ii) allocates one primary shard and no replicas for each matching index (a sketch follows these steps)
# Create the indices `hamlet2` and `hamlet_test`
# Verify that only `hamlet_test` applies the settings defined in `hamlet_template`
# In one request, delete both `hamlet2` and `hamlet_test`
# Update `hamlet_template` by defining a mapping for the type "_doc", so that (i) the type has three fields, named `speaker`, `line_number`, and `text_entry`, (ii) `speaker` and `line_number` map to an unanalyzed string, (iii) `text_entry` uses the "english" analyzer
# Create the index `hamlet-1` and add some
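# A minimal sketch of the template steps above, folding the initial settings and the later "_doc"
# mapping update into a single request; "unanalyzed string" is taken to mean the keyword datatype
PUT _template/hamlet_template
{
  "index_patterns": ["hamlet_*", "hamlet-*"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  },
  "mappings": {
    "_doc": {
      "properties": {
        "speaker": { "type": "keyword" },
        "line_number": { "type": "keyword" },
        "text_entry": { "type": "text", "analyzer": "english" }
      }
    }
  }
}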
# GOAL: Create an alias, reindex indices, and create ingest pipelines
# INITIAL SETUP: (i) a running Elasticsearch cluster with at least one node and a Kibana instance, (ii) no index or index template that starts with `hamlet`
# Copy-paste the following instructions into your Kibana console, and work directly from there
# Create the indices `hamlet-1` and `hamlet-2`, each with two primary shards and no replicas (a sketch follows the bulk command below)
# Add some documents to `hamlet-1` by running the following _bulk command
PUT hamlet-1/_doc/_bulk
{"index":{"_index":"hamlet-1","_id":0}}
{"line_number":"1.1.1","speaker":"BERNARDO","text_entry":"Whos there?"}
{"index":{"_index":"hamlet-1","_id":1}}
# ** EXAM OBJECTIVE: INSTALLATION AND CONFIGURATION **
# GOAL: Set up an Elasticsearch cluster that satisfies a given set of requirements
# REQUIRED SETUP: /
# Open Distro SQL query returning CSV output; %3B is the URL-encoded ";" passed as the separator parameter
POST _opendistro/_sql?format=csv&separator=%3B
{
  "query": "SELECT * FROM *"
}
# Based on the query published in the question, I added the following documents to the index.
[
  {
    "@timestamp" : "2020-01-15",
    "condition" : "B",
    "value" : 10,
    "conditionType" : "ABCD"
  },
  {