An extended exercise that covers the "Installation and Configuration" and "Cluster Administration" objectives of the Elastic exam.
# ** EXAM OBJECTIVES: INSTALLATION AND CONFIGURATION + CLUSTER ADMINISTRATION **
# Our goal is to deploy an Elasticsearch cluster named `training-cluster`, which satisfies all the requirements that follow
# Add three nodes to `training-cluster`, and name them `node1`, `node2`, and `node3`
# Configure each node to be eligible as a master node
# Configure each node to be a data node, but not an ingest node
# Configure each node to disable swapping on its host
# Configure the JVM on each node to use a minimum and maximum of 8 GB for the heap
# Bind `node1` to the network address "192.168.0.100"
# Configure the Zen Discovery module of `node2` and `node3` to use the address (and default transport port) of `node1`
# Configure `training-cluster` so as to avoid the split brain scenario (see the configuration sketch below)
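# A possible elasticsearch.yml sketch covering the requirements above (one valid layout among several; assumes Elasticsearch 6.x, as the Zen Discovery requirement implies). Values shown for `node1`; adjust `node.name` and `network.host` on the other nodes
cluster.name: training-cluster
node.name: node1
node.master: true
node.data: true
node.ingest: false
bootstrap.memory_lock: true
network.host: 192.168.0.100
# On `node2` and `node3`, point Zen Discovery at `node1` (default transport port 9300)
discovery.zen.ping.unicast.hosts: ["192.168.0.100"]
# With three master-eligible nodes, requiring a quorum of two masters avoids split brain
discovery.zen.minimum_master_nodes: 2
# In jvm.options, pin the heap to 8 GB (minimum equal to maximum); note that `bootstrap.memory_lock` also requires the OS to let the elasticsearch user lock memory
-Xms8g
-Xmx8g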
# Connect a Kibana instance to `node3`
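# A minimal kibana.yml sketch (assumes Kibana/Elasticsearch 6.x; the address of `node3` is a placeholder)
elasticsearch.url: "http://<node3-address>:9200"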
# Start the cluster
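# One way to start each node from an archive installation (daemonized); package installations would use the service manager instead, e.g. `systemctl start elasticsearch`
bin/elasticsearch -d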
# Create the index `hamlet_raw`, with four primary shards, one replica, and a refresh interval of 30 seconds
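# A possible request satisfying all three index requirements
PUT hamlet_raw
{
  "settings": {
    "number_of_shards": 4,
    "number_of_replicas": 1,
    "refresh_interval": "30s"
  }
}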
# Populate `hamlet_raw` by running the _bulk command with the request-body below (a possible invocation follows the body)
{"index":{"_index":"hamlet_raw","_id":0}} | |
{"line_number":"1","speaker":"BERNARDO","text_entry":"Whos there?"} | |
{"index":{"_index":"hamlet_raw","_id":1}} | |
{"line_number":"2","speaker":"FRANCISCO","text_entry":"Nay, answer me: stand, and unfold yourself."} | |
{"index":{"_index":"hamlet_raw","_id":2}} | |
{"line_number":"3","speaker":"BERNARDO","text_entry":"Long live the king!"} | |
{"index":{"_index":"hamlet_raw","_id":3}} | |
{"line_number":"4","speaker":"FRANCISCO","text_entry":"Bernardo?"} | |
{"index":{"_index":"hamlet_raw","_id":4}} | |
{"line_number":"5","speaker":"BERNARDO","text_entry":"He."} | |
{"index":{"_index":"hamlet_raw","_id":5}} | |
{"line_number":"6","speaker":"FRANCISCO","text_entry":"You come most carefully upon your hour."} | |
{"index":{"_index":"hamlet_raw","_id":6}} | |
{"line_number":"7","speaker":"BERNARDO","text_entry":"Tis now struck twelve; get thee to bed, Francisco."} | |
{"index":{"_index":"hamlet_raw","_id":7}} | |
{"line_number":"8","speaker":"FRANCISCO","text_entry":"For this relief much thanks: tis bitter cold,"} | |
{"index":{"_index":"hamlet_raw","_id":8}} | |
{"line_number":"9","speaker":"FRANCISCO","text_entry":"And I am sick at heart."} | |
{"index":{"_index":"hamlet_raw","_id":9}} | |
{"line_number":"10","speaker":"BERNARDO","text_entry":"Have you had quiet guard?"} | |
# Check the shard distribution of `hamlet_raw` by using the _cat API. For example, you can run the command below
GET _cat/shards/hamlet_raw?v&s=shard,node
# Let's assume that we deploy the cluster across two availability zones, named `zone1` and `zone2`. Add the attribute `my_zone` to the nodes' configuration, and set its value to "zone1" for `node1` and `node2`, and to "zone2" for `node3`. Finally, restart the cluster
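# A possible elasticsearch.yml line for `node1` and `node2` (use the value `zone2` on `node3`)
node.attr.my_zone: zone1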
# Configure `training-cluster` to force shard allocation awareness taking into account availability zones, and make this setting persistent across cluster restarts
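# A possible request; `persistent` cluster settings survive full cluster restarts
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "my_zone",
    "cluster.routing.allocation.awareness.force.my_zone.values": "zone1,zone2"
  }
}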
# Verify the success of the last action by re-checking the shard distribution of `hamlet_raw` with the _cat API
# Configure `training-cluster` to reflect a hot/warm architecture, with `node1` as the only hot node
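# A common convention marks node temperature with a `box_type` attribute in elasticsearch.yml (the attribute name is arbitrary); set the value to `warm` on `node2` and `node3`, then restart the nodes
node.attr.box_type: hot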
# Configure `hamlet_raw` to allocate its shards only to warm nodes
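# A possible index-level allocation filter
PUT hamlet_raw/_settings
{
  "index.routing.allocation.require.box_type": "warm"
}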
# Verify the success of the last action by re-checking the shard distribution of `hamlet_raw`
# Remove the hot/warm shard filtering configuration from the nodes' configuration
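# Besides deleting the `node.attr.box_type` line from each elasticsearch.yml (followed by a restart), reset the index-level filter by setting it to null
PUT hamlet_raw/_settings
{
  "index.routing.allocation.require.box_type": null
}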
# Let's assume that each node has either a "large" or "small" local storage, and that `node2` is the only one with small storage. Configure each node with an attribute named `my_storage` that binds the node to a storage size
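# A possible elasticsearch.yml line (use the value `small` on `node2`), followed by a node restart
node.attr.my_storage: large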
# Configure `hamlet_raw` to allocate its shards only to nodes with a large storage size
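# A possible allocation filter
PUT hamlet_raw/_settings
{
  "index.routing.allocation.require.my_storage": "large"
}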
# Verify the success of the last action by re-checking the shard distribution of `hamlet_raw`
# Enable X-Pack security on `training-cluster`
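# A minimal sketch: add the line below to each node's elasticsearch.yml and restart (on 6.x this also requires a license that includes security, e.g. a trial license)
xpack.security.enabled: true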
# Configure the password of each built-in user by using the password pattern "{{username}}-password" (e.g., "kibana-password")
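# The bundled tool prompts for a password for each built-in user
bin/elasticsearch-setup-passwords interactive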
# Create the role `francisco_role` in the X-Pack Security native realm, satisfying the following criteria: (i) assign "monitor" privileges on the cluster; (ii) assign all privileges to the `hamlet_raw` index
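# A possible request (6.x endpoint; later versions use `_security/role` instead)
POST _xpack/security/role/francisco_role
{
  "cluster": ["monitor"],
  "indices": [
    {
      "names": ["hamlet_raw"],
      "privileges": ["all"]
    }
  ]
}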
# Create the role `bernardo_role` in the X-Pack Security native realm, satisfying the following criteria: (i) assign "monitor" privileges on the cluster; (ii) assign read-only privileges to the `hamlet_raw` index, and only for the documents that have "BERNARDO" as a `speaker`; (iii) allow the role to see only the `text_entry` field
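# A possible request combining document-level security (the `query` clause) with field-level security (the `field_security` clause)
POST _xpack/security/role/bernardo_role
{
  "cluster": ["monitor"],
  "indices": [
    {
      "names": ["hamlet_raw"],
      "privileges": ["read"],
      "query": {"match": {"speaker": "BERNARDO"}},
      "field_security": {"grant": ["text_entry"]}
    }
  ]
}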
# Create the user `francisco` with password "francisco-password", and assign the role `francisco_role` to the user (see the sketch after this block of tasks)
# Log in using the `francisco` user credentials, and run some queries on `hamlet_raw` to verify that the role privileges were correctly set
# Create the user `bernardo` with password "bernardo-password", and assign the role `bernardo_role` to the user
# Log in using the `bernardo` user credentials, and run some queries on `hamlet_raw` to verify that the role privileges were correctly set
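# A possible sketch, shown for `francisco`; the same pattern applies to `bernardo`. The verification query assumes a node reachable on localhost:9200
POST _xpack/security/user/francisco
{
  "password": "francisco-password",
  "roles": ["francisco_role"]
}
# then, from a shell:
curl -u francisco:francisco-password "http://localhost:9200/hamlet_raw/_search?pretty"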
# Change the password of the `bernardo` user to "poor-bernardo"
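# A possible request
POST _xpack/security/user/bernardo/_password
{
  "password": "poor-bernardo"
}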
# Log in using the `elastic` user credentials
# Create the `hamlet_backup` file system repository to store snapshots of the `hamlet_raw` index
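# A possible sketch; the repository location is a hypothetical path, which must also be whitelisted via `path.repo` in each node's elasticsearch.yml
PUT _snapshot/hamlet_backup
{
  "type": "fs",
  "settings": {
    "location": "/path/to/hamlet_backup"
  }
}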
# Create the snapshot `hamlet_snapshot_1` of `hamlet_raw` and store it into `hamlet_backup`
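# A possible request; `wait_for_completion` blocks the call until the snapshot finishes
PUT _snapshot/hamlet_backup/hamlet_snapshot_1?wait_for_completion=true
{
  "indices": "hamlet_raw"
}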
# Delete the index `hamlet_raw`
# Restore the index `hamlet_raw` using `hamlet_snapshot_1`
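# A possible sequence for the two steps above
DELETE hamlet_raw
POST _snapshot/hamlet_backup/hamlet_snapshot_1/_restore
{
  "indices": "hamlet_raw"
}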
# Enable cross-cluster search on `training-cluster`, satisfying the following criteria: (i) the name of the second cluster is `training-cross-cluster`; (ii) the seed for the second cluster is a node named `cross1`, which is listening on the default transport port; (iii) the cross-cluster configuration must not persist across multiple restarts
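# A possible sketch (assumes Elasticsearch 6.5+, where remote clusters are configured under `cluster.remote`; the seed assumes `cross1` is resolvable as a hostname). `transient` settings do not survive a full cluster restart
PUT _cluster/settings
{
  "transient": {
    "cluster.remote.training-cross-cluster.seeds": ["cross1:9300"]
  }
}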