Exam Objectives
1 Compare authentication methods
1a Describe authentication methods
1b Choose an authentication method based on use case
1c Differentiate human vs. system auth methods
2 Create Vault policies
2a Illustrate the value of Vault policy
2b Describe Vault policy syntax: path
2c Describe Vault policy syntax: capabilities
2d Craft a Vault policy based on requirements
3 Assess Vault tokens
3a Describe Vault token
3b Differentiate between service and batch tokens. Choose one based on use-case
3c Describe root token uses and lifecycle
3d Define token accessors
3e Explain time-to-live
3f Explain orphaned tokens
3g Create tokens based on need
4 Manage Vault leases
4a Explain the purpose of a lease ID
4b Renew leases
4c Revoke leases
5 Compare and configure Vault secrets engines
5a Choose a secret method based on use case
5b Contrast dynamic secrets vs. static secrets and their use cases
5c Define transit engine
5d Define secrets engines
6 Utilize Vault CLI
6a Authenticate to Vault
6b Configure authentication methods
6c Configure Vault policies
6d Access Vault secrets
6e Enable Secret engines
6f Configure environment variables
7 Utilize Vault UI
7a Authenticate to Vault
7b Configure authentication methods
7c Configure Vault policies
7d Access Vault secrets
7e Enable Secret engines
8 Be aware of the Vault API
8a Authenticate to Vault via Curl
8b Access Vault secrets via Curl
9 Explain Vault architecture
9a Describe the encryption of data stored by Vault
9b Describe cluster strategy
9c Describe storage backends
9d Describe the Vault agent
9e Describe secrets caching
9f Be aware of identities and groups
9g Describe Shamir secret sharing and unsealing
9h Be aware of replication
9i Describe seal/unseal
9j Explain response wrapping
9k Explain the value of short-lived, dynamically generated secrets
10 Explain encryption as a service
10a Configure transit secret engine
10b Encrypt and decrypt secrets
10c Rotate the encryption key
What is Vault?
Benefits:
- Store long-lived, static secrets
- Dynamically generate secrets upon request
- API
- EaaS
- Intermediate CA
- Identity-based access
Core components:
- Storage BE
- Secret Engines
- Auth methods
- Audit devices
# Create an alias v so you don't have to type vault:
alias 'v=vault'
# Run in dev mode, stores nothing permanently (`inmem` storage type):
vault server -dev
# Export environment variable of the API:
export VAULT_ADDR=http://127.0.0.1:8200
# This would also work:
export VAULT_ADDR=http://localhost:8200
- Log in with the root token displayed in the output.
- In the GUI, you will see that the key/value v2 engine is enabled.
- To create a secret:
- Specify: Path for this secret
- Specify: Maximum number of versions
- Specify: Secret data
key=value
- After the creation, you can create a new version.
- When you try to delete the secret, there are options:
  - Delete this version (can be undeleted with `v kv undelete`)
  - Destroy this version (`v kv destroy` is permanent, be careful!)
  - Destroy all versions (this is very dangerous, obviously)
# Help for specific PATH:
v path-help $PATH
# CLI alternative of the KV creation/deletion:
v kv put secret/foo password=s3cr3tX
v kv get secret/foo
> password=s3cr3tX
# When you delete, reference is still there:
v kv delete -version=2 secret/foo
# Delete a key and all existing versions:
v kv metadata delete secret/foo
# Any command can specify the output with -format=(table|json|yaml), or set it globally:
export VAULT_FORMAT=json
# Difference between data/ and metadata/ can be seen here:
v kv get kvv2/apps/circleci
v kv metadata get kvv2/apps/circleci
# See a specific version:
v kv get -version=8 kv/app/db
# Destroy only specific version:
v kv destroy -version=3 kv/app/db
# Writes the data to the corresponding path in the key-value store without replacing the whole secret (unlike put, which overwrites):
v kv patch
# Restores a given previous version to the current version at the given path:
v kv rollback
- Tokens map to information and are used to access Vault
- Most importantly, policies are attached to tokens
- Option "Do not attach default policy to generated tokens"
- If a policy is assigned later, you need to issue a new token (token policies are attached at creation & renewal); however, changing the contents of a policy already associated with a token takes effect immediately
- Token creation:
- Auth method
- Parent token
- Root token via special process
- The current token is stored by the token helper in `~/.vault-token`
- `VAULT_TOKEN` overrides the token from the token helper
- Effective max TTL set on the auth method overrides the sys/mount max (using the `v write` cmd)
- Effective max TTL must be < system max and < mount max TTL
- Mount max TTL overrides the system max TTL (768h) (using the `v tune` cmd)
- Initial token1 = parent
- Token2 created by initial token1 = child
- If parent is revoked/expires, so do all of its children (including children of children)
- Orphan tokens (`orphan=true`) are not created as children of their parent, thus do not expire due to parent expiry; useful when a token hierarchy is not desirable (they can still expire via their own TTL)
- Periodic tokens have no max TTL and must be renewed periodically (within the time period); root or sudo can generate them and they may live as long as they are renewed (indefinitely)
- Batch tokens (encrypted blobs) are lightweight with no storage cost, but: cannot be root tokens, cannot create child tokens, cannot be renewed, cannot be periodic, have no max TTL (only a fixed lifetime), no accessors, no cubbyhole
- Service tokens (the majority) start with `s.` and batch tokens start with `b.` (starting from Vault 1.10, this is different: `hvs.XXXX` for service tokens, `hvb.XXXX` for batch tokens and `hvr.XXXX` for recovery tokens)
- The initial root token should be revoked after the first setup of other auth methods
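The prefix rules above can be sketched as a small helper (illustrative only; the prefixes are the documented ones, but the function itself is made up for this note):

```python
def token_type(token: str) -> str:
    """Classify a Vault token by its prefix (pre-1.10 and 1.10+ forms)."""
    prefixes = {
        "hvs.": "service", "s.": "service",
        "hvb.": "batch",   "b.": "batch",
        "hvr.": "recovery",
    }
    for prefix, kind in prefixes.items():
        if token.startswith(prefix):
            return kind
    return "unknown"
```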
Example - how to generate these different types of tokens automatically?
v auth enable approle
v write auth/approle/role/training policies="training" token_type="batch" token_ttl="60s"
v write auth/approle/role/jenkins policies="jenkins" period="72h"
Metadata attached to the token:
- Accessor
- Policies
- TTL
- Max TTL
- Number of uses left
- Orphaned token
- Renewal status
# CLI:
# If $TOKEN is not specified, it shows the current:
v token lookup $TOKEN
# Create token with TTL:
v token create --ttl=600
# Create token; explicit-max-ttl takes precedence and must be <= effective max TTL:
v token create --ttl=600 --explicit-max-ttl=700
# Renewal works with both token & accessor:
v token renew --increment=30m $TOKEN
v token renew --increment=30m $ACCESSOR
# See the configuration:
v read sys/auth/token/tune
> default_lease_ttl = 768h # 32 days
max_lease_ttl = 768h
# Create orphan token, requires root/sudo in path auth/token/create-orphan (this will lead to orphan=true):
v token create -orphan
# Periodic tokens (this will lead to explicit_max_ttl=0s, period=24h, renewable=true):
v token create -policy=default -period=24h
# 120m is pointless below, it gets overwritten by "period" value 24h:
v token renew -increment=120m $TOKEN
v token revoke -self
# Create batch token, if type is not specified then it will be service token:
v token create -type batch -policy default
# Limited use tokens (this will lead to num_uses=2, will expire at the end of last use regardless of remaining TTL):
v token create -policy="training" -use-limit=2
- Look up a token without exposing the actual token value
- Accessors cannot be used to create or reverse-engineer tokens
- Reference to a token can perform limited actions:
- Lookup token's properties without token ID
- Lookup token's capabilities on a path
- Renew token
- Revoke token
# CLI:
v list auth/token/accessor
v token lookup -accessor $ACCESSOR
v token renew -accessor $ACCESSOR
v token revoke -accessor $ACCESSOR
- The reason you deploy Vault!
- Plugins used to handle sensitive data
- They store (static), generate (dynamic) and encrypt
- Must be enabled at given unique path
- Generic
  - KV (key/value)
    - v1, no versioning by default; run `v kv enable-versioning <PATH>`
    - v2
  - PKI certificates
  - SSH
  - Transit
  - TOTP (time-based one-time passwords)
- Cloud
- Active Directory (AD)
- AliCloud
- Amazon Web Services (AWS)
- Azure
- Google Cloud
- Google Cloud KMS
- Infra
- HashiCorp Consul
- Databases
- Microsoft SQL
- PostgreSQL
- MongoDB
- HashiCorp Nomad
- RabbitMQ
- Identity Engine (special)
- Cubbyhole Engine (special)
- Lifecycle of the SE:
- Enable
- Disable (deletes all of the data!)
- Move
# CLI:
v secrets enable -path=demopath/ -version=2 -description="demo" kv
v secrets tune $PATH
v secrets move $OLD_PATH $NEW_PATH # Revokes all existing leases
# This will show 3 properties:
v read demopath/config
> cas_required, max_versions, delete_versions_after
# To identify KV versions, map[] vs map[version:2]
v secrets list -detailed
# All secrets are immediately revoked when disabled:
v secrets disable demopath/
- All auth & secret backends are considered plugins
- Pass the plugin directory argument or set `plugin_directory` in the config
# CLI:
# Plugin must already be compiled in `./vault/plugins`:
v server -dev -dev-root-token-id=root -dev-plugin-dir=./vault/plugins
export VAULT_ADDR="http://127.0.0.1:8200"
v secrets list
v secrets enable $MY_NEW_PLUGIN
v secrets list
v write my-mock-plugin/test msg="Hello"
v read my-mock-plugin/test
> msg="Hello"
- The majority of secrets engines support dynamic secrets, but not all of them do
- When user requests secret, they are generated at that time
- Secrets are automatically revoked after Time-To-Live (TTL)
- It does not provide stronger crypto key generation
Example of AWS DS:
- You need to create an AWS IAM user to be used by Vault
- Attach a policy that allows it to create new users
- Create an IAM role that will be attached to these new users
- Run
v write aws/config/root access_key=... secret_key=... region=...
- It is a good idea to rotate the provided key immediately
v write -f aws/config/rotate-root
- Create a Vault role (type can be `iam_user`, `assumed_role`, `federation_token`):
v write aws/roles/readonly policy_arns=arn:aws:iam::aws:policy/ReadOnlyAccess credential_type=iam_user
- See if it was created:
v read aws/roles/readonly
- To generate creds (returns access_key, secret_key):
v read aws/creds/readonly
Example of AWS Assume Role DS:
- Vault is deployed with EC2 Instance policy - e.g.:
Sid: PermitAccessToCrossAccountRole
Effect: Allow
Action: sts:AssumeRole
Resource: [
arn:aws:iam::<other_aws_account_number>:role/vault-role-bucket-access
]
- Create a Vault role
v write aws/roles/s3-access role_arns=arn:aws:iam::<other_aws_account_number>:role/vault-role-bucket-access credential_type=assumed_role
- To generate creds (returns access_key, secret_key):
v write aws/sts/s3-access -ttl=60m
Example of database DS:
v write database/config/prod-db plugin_name=... connection_url=... allowed_roles=... username=... password=...
- Each role maps to a set of permissions on the targeted platform (db); you create roles inside Vault and map them to the proper permissions on those systems
- In the web UI, it is under Access -> Leases
- Invalidated when expiration reached, or manually revoked
- When a token is revoked, Vault will revoke all leases that were created using that token
- They control the DS lifecycle by including metadata about secrets:
  - `lease_id`
  - `lease_duration` # countdown timer
  - `lease_renewable` # true/false
- Default lease TTL = automatic value for secrets
- Max lease TTL = you can renew up to this value
- Inheritance is on different levels:
- System
- Mount
- Object
- If you try to increment above the Max lease TTL, it will use Max
- Path-based revocation requires sudo permissions
- All revocations are queued
- "Credentials could not be found" error happens in situations where the secret in the target
secret engine was manually removed, you need
-force
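The TTL inheritance and renewal capping described above can be sketched as follows (hypothetical helper names; Vault applies the same "most specific wins, then cap at max" logic):

```python
SYSTEM_MAX_TTL = 768 * 3600  # system default max lease TTL (768h), in seconds

def effective_max_ttl(mount_max=None, object_max=None):
    """The most specific configured max TTL wins: object > mount > system."""
    for ttl in (object_max, mount_max, SYSTEM_MAX_TTL):
        if ttl is not None:
            return ttl

def renew(requested_increment, max_ttl):
    """A renewal increment above the max lease TTL is capped, not rejected."""
    return min(requested_increment, max_ttl)
```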
# CLI:
# See leases in $PATH:
v list sys/leases/lookup/$PATH
# To view leases properties, strangely use write:
v write sys/leases/lookup lease_id=demopath/$ID
# Extend the lease. Strangely it will reset the lease to 120m:
v lease renew -increment=120m demopath/$LEASE_ID
# Specific revocation:
v lease revoke demopath/$LEASE_ID
# Path-based revocation - all will be revoked in aws/ (dangerous):
v lease revoke -prefix aws/
# Delete the lease from Vault even if the secret engine revocation fails:
v lease revoke -force -prefix demopath/
- Needs to be enabled first
v secrets enable transit
- Encryption-as-a-Service, does not store any data
- Default cipher is aes256-gcm96
- Supports convergent encryption (same data produces same ciphertext), depends on the cipher used
- Start with creating encryption key, then it will give you options to:
- Encrypt
- Decrypt
- Datakey
- Rewrap
- HMAC
- Verify
- Encrypted data looks like this:
vault:v1:<BASE64>==
- Rotate encryption keys at regular intervals (v1.10 can rotate automatically now); this limits the amount of data encrypted with the same key
- All versions of the keys are stored
- Key versions are split into a working set vs. an archive set (archived versions are no longer held in memory)
- You can manage configuration and version archiving by appending `/config` and `/trim`, e.g.:
  - `min_decryption_version` - versions lower than this are automatically archived and can no longer decrypt
  - `min_encryption_version` - encryption requests must use this key version or newer
# CLI:
v path-help transit/
# Force key creation, with optional cipher type:
v write -force transit/demo-key [type="rsa-4096"]
# Encrypt plaintext (reserved word) in B64 with current version of demo-key, returns ciphertext:
v write transit/encrypt/demo-key plaintext='<BASE64>=='
# Decrypt with current version of demo-key, returns base64 plaintext:
v write transit/decrypt/demo-key ciphertext='vault:v1:<BASE64>=='
# Rotate key:
v write -force transit/keys/$KEYNAME/rotate
# See the latest_version:
v read transit/keys/$KEYNAME
# Change the min_decryption_version (this does not delete anything, just older keys will stop working):
v write transit/keys/$KEYNAME/config min_decryption_version=4
# Rewrap data with the latest version of the key (returns new key_version, e.g. ciphertext=vault:v4:<BASE64>):
v write transit/rewrap/$KEYNAME ciphertext="vault:v1:<BASE64>=="
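The `vault:vN:<base64>` ciphertext format makes it easy to tell which key version encrypted a blob, and therefore whether it needs a rewrap before raising `min_decryption_version`. A sketch (helper names are made up):

```python
def ciphertext_version(ciphertext: str) -> int:
    """Extract the key version from transit ciphertext 'vault:vN:<base64>'."""
    prefix, version, _ = ciphertext.split(":", 2)
    if prefix != "vault":
        raise ValueError("not a transit ciphertext")
    return int(version.lstrip("v"))

def needs_rewrap(ciphertext: str, min_decryption_version: int) -> bool:
    """Data encrypted below min_decryption_version becomes undecryptable,
    so it should be rewrapped before the config change."""
    return ciphertext_version(ciphertext) < min_decryption_version
```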
- Multi-factor authentication
# CLI:
# Generator of the QR code:
v secrets enable totp
v write --field=barcode totp/keys/lucian generate=true \
  issuer=vault account_name=foo | base64 -d > totp.png
# Provider - it returns the 2FA numbers:
v read totp/code/lucian
# In AWS (otpauth is the content of the QR code provided by AWS):
v write totp/keys/aws url=otpauth://<...>?secret=<...>
v read totp/code/aws
- Engine generates dynamic x509 certificates
- Can act as an intermediate certification authority (CA)
- Reducing/eliminating certificate revocations
- Reduces time to get certificate (eliminate CSRs)
Example of the certificate generation:
# Enable the pki secrets engine at the pki path
v secrets enable pki
# Tune the pki secrets engine to issue certificates with a maximum time-to-live (TTL) of 87600 hours
v secrets tune -max-lease-ttl=87600h pki
# Generate the example.com root CA, give it an issuer name, and save its certificate in the file root_2022_ca.crt:
v write -field=certificate pki/root/generate/internal \
common_name="example.com" \
issuer_name="root-2022" \
ttl=87600h > root_2022_ca.crt
# List the issuer information for the root CA
v list pki/issuers/
v read pki/issuer/09c2c9a0-a874-36d2-de85-d79a7a51e373 | tail -n 6
# Create a role for the root CA
v write pki/roles/2022-servers allow_any_name=true
# Configure the CA and CRL URLs
v write pki/config/urls \
issuing_certificates="$VAULT_ADDR/v1/pki/ca" \
crl_distribution_points="$VAULT_ADDR/v1/pki/crl"
# Generate intermediate CA, enable the pki secrets engine at the pki_int path
v secrets enable -path=pki_int pki
v secrets tune -max-lease-ttl=43800h pki_int
# Generate an intermediate and save the CSR as pki_intermediate.csr
v write -format=json pki_int/intermediate/generate/internal \
common_name="example.com Intermediate Authority" \
issuer_name="example-dot-com-intermediate" \
| jq -r '.data.csr' > pki_intermediate.csr
# Sign the intermediate certificate with the root CA private key, and save the generated certificate as intermediate.cert.pem
v write -format=json pki/root/sign-intermediate \
issuer_ref="root-2022" \
csr=@pki_intermediate.csr \
format=pem_bundle ttl="43800h" \
| jq -r '.data.certificate' > intermediate.cert.pem
# Once the CSR is signed and the root CA returns a certificate, it can be imported back into Vault
v write pki_int/intermediate/set-signed [email protected]
# Create a role named example-dot-com which allows subdomains, and specify the default issuer ref ID as the value of issuer_ref
v write pki_int/roles/example-dot-com \
issuer_ref="$(vault read -field=default pki_int/config/issuers)" \
allowed_domains="example.com" \
allow_subdomains=true \
max_ttl="720h"
# Now, you can request certificates!
v write pki_int/issue/example-dot-com common_name="test.example.com" ttl="24h"
# And also revoke them
v write pki_int/revoke serial_number=<serial_number>
- Identity secrets engine is mounted by default
- Auth methods are mounted under `/auth` (but configured via `/sys/auth`)
- Identity engine is enabled by default, cannot be moved or disabled
- Auth methods cannot be moved, only disabled
- Different options:
- Token (enabled by default, cannot be disabled)
- Username/Password (`userpass`)
- LDAP username/password
- Okta
- JWT role
- OIDC role
- RADIUS username/password
- Github token (but considered user-oriented method)
- Kubernetes
- AWS/Azure/GCP/Ali
- AppRole
- TLS certificates
# CLI:
v auth enable -path=my-login userpass
v auth list
# Add description to auth method via tune:
v auth tune -description="Foobar" TestAppRole
# Authenticate using username/password; avoid passing password= on the command line, enter it interactively instead:
v login -method=userpass username=lucian
> Please enter password:
# All users logged out:
v auth disable $METHOD
- Entities are linked to tokens, entity = single user/system
- Entity (e.g. Bob) maps to multiple aliases (e.g. Bob in AD, Bob in Github...)
- A group can contain multiple entities
- Aliases work for groups in the same way; alias = combination of the auth method + ID
- If the entity has a policy and each alias also has a policy, the policies are combined
- Many orgs already have groups defined within their external identity providers like AD
- External groups allow linking with an external auth provider; otherwise the default is an internal group
- You can create internal group (e.g. Devs) which consists of AD groups (e.g. Devs1, Devs2)
# CLI:
v write identity/entity name=ford_entity
v write identity/alias name=f_alias mount_accessor=$mount-id canonical_id=$canonical-id
- Secret engine for own private storage
- No one else can read, including root
- Cannot be disabled, moved, enabled multiple times
- When a token expires, its cubbyhole is destroyed
- No token can access another token's cubbyhole
- Created per service token
- Authorization aspect, similar to passports/visas
- Policy is associated with token/authmethod/entity/group
- Two default policies:
- Root policy can do anything (superuser)
- Default policy is attached to all tokens, but may be explicitly excluded at creation time
- Most specific rule wins
- Empty policy grants no permission (deny by default)
- Plus sign (`+`) in the path stands for a single directory/level wildcard match (e.g. `secrets/+/apikey`)
- Glob/wildcard (`*`) can only be used at the end of the path
- Default policy is built-in, cannot be removed (but can be modified) and contains basic functionality, e.g. the ability for a token to look up data about itself and to use its cubbyhole
- Root policy is built-in, cannot be modified or removed; it is attached to the root token created at initialization
- Identity policies assigned to entity or group are dynamic (evaluated at every request):
token_policies = ["default"]
identity_policies = ["vaultadmins"]
policies = ["default", "vaultadmins"]
# combination
Example policy that allows creating identities:
path "auth/*" {
capabilities = ["create"]
}
Example policy that grants read but explicitly denies one path:
path "secret/*" {
capabilities = ["read"]
}
# /data/ contains the actual secret value in v2 API
path "secret/data/supersecret" {
capabilities = ["deny"]
}
Question - does this permit access to kv/apps/webapp?
path "kv/apps/webapp/*" {
capabilities = ["read"] # Answer: no, it only permits after kv/apps/webapp/..!
}
Question - does it permit to browse webapp in UI?
path "kv/apps/webapp/*" {
capabilities = ["read", "list"] # Answer: no, it only permits list/read at the listed path, not paths leading up to the desired path!
}
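The `+` and `*` matching rules behind both answers can be sketched with a regex (illustrative only; the helper name is made up, but the semantics follow the rules above: `+` matches exactly one path segment, a trailing `*` matches any suffix):

```python
import re

def policy_path_matches(policy_path: str, request_path: str) -> bool:
    """Sketch of Vault policy path matching: '+' matches a single path
    segment, and a glob '*' is only honored at the end of the path."""
    if policy_path.endswith("*"):
        pattern = re.escape(policy_path[:-1]) + ".*"
    else:
        pattern = re.escape(policy_path)
    # re.escape turned '+' into '\+'; make it a single-segment wildcard
    pattern = pattern.replace(r"\+", "[^/]+")
    return re.fullmatch(pattern, request_path) is not None
```

Note that `kv/apps/webapp/*` does not match `kv/apps/webapp` itself, which is exactly the trap in the first question above.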
Protected paths:
auth/token
auth/token/accessors
auth/token/create-orphan
sys/audit
sys/mounts
sys/rotate
sys/seal
sys/step_down
pki/root/sign-self-issued
- etc.
Capabilities include:
- (C)reate (remember, there is no Write!)
- (R)ead
- (U)pdate
- (D)elete
- List (does not automatically allow Read!)
- Sudo (allows access to paths, that are root-protected)
- Deny - overrides all others (takes precedence)
# CLI:
v secrets list
> Permission denied http://127.0.0.1:8200/v1/sys/mounts
# Go to Access > Auth methods > userpass > admin > Edit user > Generated token's policies:
# policy_2
v login -method=userpass username=foo
> policies = ["default", "policy_2"]
# Create token with specific policy, good for testing:
v token create -policy="policy-name"
# Assign policy to a user:
v write auth/userpass/users/foo token_policies="policy-name"
# Create a policy from a file:
v policy write admin /tmp/admin-policy.hcl
# List policies:
v policy list
The `metadata/` endpoint returns a list of key names at the location. That is important only for the web UI, API and policies, not the CLI:
path "secret/metadata" {
capabilities = ["list"]
}
There is also `secret/metadata/$SECRET`, which contains details about a specific secret:
path "secret/metadata/foo" {
capabilities = ["read"]
}
Table - what capabilities do you need:
Description | Path | Capability |
---|---|---|
Writing & reading versions | data/ | ["create", "read", "update"] |
Listing keys | metadata/ | ["list"] |
Reading versions | metadata/ | ["read"] |
Destroy versions of $SECRET | destroy/ | ["update"] |
Destroy ALL versions & metadata for key | metadata/ | ["delete"] |
- To see what token can do (capabilities) at given path:
# CLI:
# No token provided as argument = /sys/capabilities-self
v token capabilities sys/
> deny
# Token provided as argument = /sys/capabilities
v token capabilities $TOKEN $PATH
> create, read, sudo, update
- Instead of having `/secret/data/user1`, `/secret/data/user2`...
- You can use `/secret/data/{{identity.entity.name}}/*`
- identity.entity.[id, metadata.<key>, aliases.<mount_accessor>, ...]
- identity.groups.[ids.<group_id>.name, names.<group_name>.id]
Available Templating Parameters:
Name | Description |
---|---|
identity.entity.id | The entity's ID |
identity.entity.name | The entity's name |
identity.entity.metadata.<metadata key> | Metadata associated with the entity for the given key |
identity.entity.aliases.<mount accessor>.id | Entity alias ID for the given mount |
identity.entity.aliases.<mount accessor>.name | Entity alias name for the given mount |
identity.entity.aliases.<mount accessor>.metadata.<metadata key> | Metadata associated with the alias for the given mount and metadata key |
identity.entity.aliases.<mount accessor>.custom_metadata.<custom metadata key> | Custom metadata associated with the entity alias |
identity.groups.ids.<group id>.name | The group name for the given group ID |
identity.groups.names.<group name>.id | The group ID for the given group name |
identity.groups.ids.<group id>.metadata.<metadata key> | Metadata associated with the group for the given key |
identity.groups.names.<group name>.metadata.<metadata key> | Metadata associated with the group for the given key |
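The substitution itself is simple string templating, evaluated by Vault on every request. A toy rendering sketch (hypothetical helper; real resolution happens inside Vault against the requesting token's entity):

```python
import re

def render_policy_path(template: str, params: dict) -> str:
    """Substitute {{identity...}} templating parameters into a policy path."""
    return re.sub(r"\{\{(.+?)\}\}",
                  lambda m: params[m.group(1).strip()],
                  template)
```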
- Go to Access > Auth methods > Enable new method > AppRole
- Each role might be associated with policy that the application needs
- Uses role ID and secret ID (you can push or pull for the secret ID)
- Steps to configure it:
- Create policy and role for application
- Get `role-id`
- Generate a new `secret-id`
- Give role ID & secret ID to the application
- App authenticates
- Vault returns a token
- More secure method uses "wrapping architecture"
- You can also constrain secret ID by CIDR address (CIDR-bound token is like a regular service token with additional configuration, can only be used by specific host or from within a certain network)
# CLI:
# Create a role:
v write auth/approle/role/jenkins token_policies="jenkins-role-policy"
# Get a role ID:
v read auth/approle/role/jenkins/role-id
# Generate a new secret ID:
v write -f auth/approle/role/jenkins/secret-id
> secret_id, secret_id_accessor
# Authenticate:
v write auth/approle/login role_id="$role-id" secret_id="$secret-id"
- Wrapping tokens = single use
- Insert the response into token's cubbyhole with short TTL
- Process is following:
- Mount AppRole auth backend
- Create policy and role for app
- Get role ID
- Deliver role ID to the app (not considered sensitive information)
- Get wrapped secret ID
- Vault returns wrapping token
- Deliver wrapping token to the app (not considered sensitive information)
- Unwrap secret ID with the use of wrapping token (can only be done once)
- Login using role ID & secret ID
- App gets the token
# CLI:
v token create -wrap-ttl=600
> wrapping_token
v unwrap $WRAPPING_TOKEN
> token
- All routes are prefixed with `/v1/`
- Example GET cURL:
curl \
-H "X-Vault-Token: XXXXXXXXXX:" \
-X GET \
http://127.0.0.1:8200/v1/secret/foo
# Also try:
# http://127.0.0.1:8200/v1/secret/data/foo?version=2
v print token
- When using the API to create/put data to Vault, the JSON payload needs a specific `data` format - e.g.:
{
"data": {
"course": "vault-associate",
"instructor": "zeal"
}
}
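Building that payload programmatically is just a matter of nesting the secret under `data` before serializing (sketch; the function name is made up):

```python
import json

def kv2_payload(secret: dict) -> str:
    """KV v2 API writes require the secret nested under a top-level 'data' key."""
    return json.dumps({"data": secret})
```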
- Example POST cURL:
curl \
--request POST \
--data @auth.json \
https://127.0.0.1:8200/v1/auth/approle/login
# This will return X-Vault-Token and you can use that for subsequent calls
- Example adding policy with cURL:
curl \
--request PUT \
--header "X-Vault-Token:<...>" \
--data @payload.json \
http://vault:8200/v1/sys/policy/admin
Table - mapping capability to a HTTP method:
Capability | HTTP method |
---|---|
create | POST/PUT |
read | GET |
update | POST/PUT |
delete | DELETE |
list | LIST |
- Same binary, runs as a service
- Automatically authenticates, no storage - all is in memory
- Keeps tokens renewed until renewal is no longer allowed
- Config file needs vault server, authentication method and sink location:
auto_auth {
method "approle" {
mount_path = "auth/approle"
config = {
role_id_file_path = "/tmp/role-id"
secret_id_file_path = "/tmp/secret-id"
remove_secret_id_file_after_reading = false
}
}
sink "file" {
config {
path = "/tmp/token"
}
}
}
vault {
address = "http://192.168.1.1:8200"
}
- Clients are not required to provide a token with the requests they make to the agent
- Your application does not need its own token; by exporting the Agent address (`VAULT_AGENT_ADDR`) the Agent will use the cached token stored locally
- Fault tolerant; caching allows client-side storage of responses containing newly created tokens:
...
cache {
use_auto_auth_token = true
}
listener {
address = "127.0.0.1:8007"
tls_disable = true
}
...
# CLI:
v agent -config=agent.hcl
# Clients need to know where the Agent is listening:
export VAULT_AGENT_ADDR=http://127.0.0.1:8007
VAULT_TOKEN=$(cat /tmp/token) v token create
- Uses Consul template markup language
- Renders secrets to files in their own format
- Sets permissions on those generated files
- Runs arbitrary commands (<30s) - e.g. create env, update app etc.
- Config example:
template {
source = "template.ctmpl"
destination = "rendered.txt"
perms = 0640
}
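A minimal `template.ctmpl` for the config above might read a secret from KV v2 (the path and field name here are hypothetical; the `secret` function and `.Data.data` access are Consul Template syntax):

```
{{ with secret "secret/data/myapp" }}
DB_PASSWORD={{ .Data.data.password }}
{{ end }}
```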
- Storage backend is not trusted by design
- Not all storage backends support high availability (HA)
- For clustering, storage must also support locking mechanism
- Vault cluster = multiple Vault nodes + one storage BE
- Storage types:
- Object
- Database
- Key/Value
- File
- Memory
- Integrated (Raft)
- Local storage
- HA
- Replicated
- HashiCorp technical support is officially only for:
- In-memory
- Filesystem
- Consul
- Raft
- Unseal keys should be provided from different users and can be via CLI or UI
- Rekeying updates unseal & master keys
- Rotation updates encryption key for future operations, previous version is saved for decryption purposes
- Configuration file (HashiCorp Configuration Language = HCL) for Vault server example:
storage "file" {
path = "/root/vault-data"
}
listener "tcp" {
address = "0.0.0.0:8200"
tls_disable = 1
# Where to listen for cluster communication >
cluster_address = "127.0.0.1:8201"
tls_cert_file = "/path/to/cert.crt"
tls_key_file = "/path/to/cert.key"
}
ui = "true"
address = "127.0.0.1:8200"
- Raft storage example config:
...
storage "raft" {
path = "/path/to/data"
node_id = "unique_node_id123"
retry_join {
leader_api_addr = "https://node1.local:8200"
leader_ca_cert_file = "/path/to/cert.crt"
}
retry_join {
leader_api_addr = "https://node2.local:8200"
leader_ca_cert_file = "/path/to/cert.crt"
}
# ...as many retry_join stanzas as there are nodes; must include itself!
}
# Only for Raft:
disable_mlock = true
# URL handed out for incoming cluster communication:
cluster_addr = "https://server1:8201"
# URL clients should use:
api_addr = "https://server1:8200"
ui = true
# Useful commands:
# vault operator raft join https://active-vault:8200
# vault operator raft list-peers
Available stanzas in the config file:
- seal
- listener
- storage
- telemetry
- main config like `ui = true` outside of the above stanzas
Example of a complete config file `vault.hcl`:
storage "consul" {
address = "127.0.0.1:8500"
path = "vault/"
token = "1a2b3c4d-1234-abdc-1234-1a2b3c4d5e6a"
}
listener "tcp" {
address = "0.0.0.0:8200"
cluster_address = "0.0.0.0:8201"
tls_disable = 0
tls_cert_file = "/etc/vault.d/client.pem"
tls_key_file = "/etc/vault.d/cert.key"
tls_disable_client_certs = "true"
}
seal "awskms" {
region = "us-east-1"
kms_key_id = "12345678-abcd-1234-abcd-123456789101"
endpoint = "example.kms.us-east-1.vpce.amazonaws.com"
}
api_addr = "https://vault-us-east-1.example.com:8200"
cluster_addr = " https://node-a-us-east-1.example.com:8201"
cluster_name = "vault-prod-us-east-1"
ui = false
log_level = "INFO"
license_path = "/opt/vault/vault.hclic"
# CLI:
# Check the current status:
v status
# Start Vault with specific configuration:
v server -config demo.hcl
# Initialize Vault, GnuPG can actually be used in OPTIONS:
v operator init [OPTIONS]
# Unseal the Vault manually:
v operator unseal
> Provide 3 unseal keys shown in the init
v login
> Use root token from the init
v operator key-status
# Rekey, you can also change the options in the process (e.g. key-shares/key-threshold):
v operator rekey -init
# Rotate encryption keyring:
v operator rotate
# Seal the Vault
v operator seal
How to create a root token in an emergency using unseal/recovery keys:
1. A quorum of unseal key holders can re-generate a new root token
2. `v operator generate-root -init`
3. Get the OTP from the output of step 2
4. Each person from the quorum runs `v operator generate-root` and enters their portion of the unseal key
5. The last person gets the encoded token
6. `v operator generate-root -otp="<OTP from step 2>" -decode="<encoded token from step 5>"`
Open-source Vault does not include:
- Replication capabilities (scalability is limited)
- Enterprise integrations (MFA, HSM, auto-backups...)
- Namespaces for multi-tenancy
- Policy-as-Code using Sentinel
- Access to the snapshot agent for auto-DR
Features:
- Namespaces a.k.a. "Vault within Vault"
- Dedicated to each team in the org, an isolated entity with its own:
  - Policies
  - Auth methods
  - Secret engines
  - Tokens
  - Identity entities & groups
- Disaster Recovery (DR)
- No automatic promotion of the secondary
- Existing data on the secondary is overwritten/destroyed
- Ability to fully restore all types of data (local & cluster)
- Syncs everything
- Secondaries do not handle client requests, can be promoted to new primary
- Replication
- Performance replication between two clusters (primary -> secondary, based on `sys/leader`)
- Replication happens at the Vault node level, never directly between storage BEs (you can even have different storage BEs)
- Typically for scenarios in different regions
- Uses port `8201` for communication
- Secondaries keep track of their own tokens & leases and service reads locally (local data is not replicated)
- However, they share the underlying config, policies and secrets
- Monitoring
- Multi-factor authentication
- Auto-unseal with HSM
- Can be a combination of DR and performance replication (e.g. 1x active cluster, 1x DR cluster, 1x performance cluster)
- HashiCorp does not recommend using load balancers in front of Vault
- Request forwarding is the default model for handling requests; the cluster acts as one server from the client's perspective
- If request forwarding fails, Vault falls back to client redirection: it tells the client which node is the primary/active one, and the client must make a new request to that server
- Master key cut into multiple pieces (ideally owned by multiple people)
- Number of shares and minimum threshold is configurable, even during rekey:
v operator init -key-shares=5 -key-threshold=3
- Can be disabled = master key used directly for unsealing
- Unsealing = reconstructing the master key
- Auto-unseal: Cloud-based key -> Master key -> Encryption keys
- Seal types can be AWS KMS, Transit secret engine, Azure Key Vault, HSM, GCP Cloud KMS...
- This is set in the server config, not done during init
- The procedure is the following:
- AWS Key Management Service (KMS):
1. Create a key - Symmetric - named e.g. vault-autounseal
2. Create a new IAM user vault with admin access - copy the new user's access key & secret key
3. Adjust your Vault server config autounseal.hcl, for example:
...
seal "awskms" {
  region     = "us-east-1"
  access_key = "abcdefg"
  secret_key = "abcdefg"
  kms_key_id = "abcdefg"
  endpoint   = "VPC_ENDPOINT"
}
...
- The endpoint is used for private connectivity and is not required
4. Start the server with this config:
v server -config autounseal.hcl
5. Initialize, optionally adding -recovery-shares=5 -recovery-threshold=3:
v operator init
6. Check that Vault is initialized and unsealed:
v status
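The key hierarchy behind auto-unseal (the cloud key wraps the master key, which in turn wraps the encryption keyring) can be sketched in Python. This is purely illustrative: the XOR-keystream stand-in below is not real cryptography (Vault and KMS use authenticated encryption such as AES-GCM), it only shows the layering:

```python
import hashlib, os

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Stand-in cipher: XOR with a SHAKE-256 keystream derived from the key.
    # Purely illustrative -- NOT real cryptography.
    stream = hashlib.shake_256(key).digest(len(plaintext))
    return bytes(p ^ s for p, s in zip(plaintext, stream))

toy_decrypt = toy_encrypt  # XOR with the same keystream is its own inverse

kms_key    = os.urandom(32)   # lives inside AWS KMS and never leaves it
master_key = os.urandom(32)   # Vault's master key
data_key   = os.urandom(32)   # keyring entry that encrypts the storage backend

# What lands on disk: master key wrapped by KMS, keyring wrapped by master key
wrapped_master  = toy_encrypt(kms_key, master_key)
wrapped_keyring = toy_encrypt(master_key, data_key)

# Auto-unseal at startup: ask KMS to unwrap the master key, then unwrap the keyring
assert toy_decrypt(kms_key, wrapped_master) == master_key
assert toy_decrypt(master_key, wrapped_keyring) == data_key
```

The point of the layering: rotating the KMS key only requires re-wrapping the master key, not re-encrypting the whole datastore.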
- You can also auto-unseal with another Vault, which supports key rotation if needed (the other Vault must run the transit secret engine):
...
seal "transit" {
  address    = "..."  # the other Vault's address
  token      = "..."  # token from the other Vault
  key_name   = "..."  # key from the other Vault
  mount_path = "..."  # mount from the other Vault
  namespace  = "..."
  # TLS configuration
}
...
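The Shamir splitting used for manual unsealing can be illustrated with a small Python sketch: a random polynomial of degree threshold-1 hides the master key in its constant term, and any threshold shares recover it via Lagrange interpolation. The prime field and integer secret are simplifications for the demo; Vault splits raw key bytes:

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for a 16-byte secret

def split(secret: int, shares: int = 5, threshold: int = 3):
    # Random polynomial of degree threshold-1 with the secret as constant term;
    # each share is a point (x, f(x)) on that polynomial
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, shares + 1)]

def combine(shares):
    # Lagrange interpolation at x=0 recovers the constant term (the secret)
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

master_key = random.randrange(PRIME)
pieces = split(master_key, shares=5, threshold=3)   # as in -key-shares=5 -key-threshold=3
assert combine(pieces[:3]) == master_key            # any 3 of the 5 shares suffice
assert combine([pieces[0], pieces[2], pieces[4]]) == master_key
```

With fewer than threshold shares the interpolation yields a value that reveals nothing about the secret, which is why a quorum is required to unseal.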
Audit devices:
- Not enabled by default
- Good practice is to use more than 1
- Write-Ahead-Logging (WAL) guarantees it is first recorded into the log, then to datastore, especially important in HA scenario
- Types:
- File
- Syslog
- Socket
- Everything in the log is hashed by hmac-sha256
- Log contains: time, type, auth, request, response
- When enabled but unable to write (audit logging is blocking, e.g. due to insufficient free space or a network issue), Vault becomes unresponsive until it can write to at least one audit device again
# CLI:
# To enable, can also accept -path="audit-path":
v audit enable file file-path=vault.log
v audit list
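The hmac-sha256 hashing of sensitive audit-log fields can be mimicked locally. The salt key below is made up (Vault derives a per-device salt itself), but the sketch shows why a known secret can still be matched against the log without the log ever exposing it:

```python
import hmac, hashlib

def hmac_field(key: bytes, value: str) -> str:
    # Audit devices HMAC sensitive values with SHA-256 so the raw secret
    # never appears in the log, yet a candidate value can be compared to it
    return "hmac-sha256:" + hmac.new(key, value.encode(), hashlib.sha256).hexdigest()

audit_key = b"per-audit-device-salt"   # hypothetical; Vault manages its own salt
entry = {
    "time": "2022-11-08T08:53:08Z",
    "type": "response",
    "auth_token": hmac_field(audit_key, "hvs.ExampleToken"),  # hypothetical token
    "request_path": "secret/data/app",
}

# To check whether a suspected leaked token appears in the log,
# HMAC the candidate with the same key and compare:
assert entry["auth_token"] == hmac_field(audit_key, "hvs.ExampleToken")
```

This is also why grepping an audit log for a plaintext secret finds nothing: you must HMAC the value first (Vault exposes this via the `/sys/audit-hash` endpoint).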
- Metrics:
- Core
- Runtime
- Policy
- Token, identity, lease
- Audit
- Resource quota
- Replication
- Secret engines
- Storage backend
- Raft health
- The URL is, for example, /v1/sys/metrics?format=prometheus
- Config file accepts telemetry stanza:
...
telemetry {
statsite_address = "statsite.company.local:8125"
}
...
- Vault tutorials: https://learn.hashicorp.com/vault
- Zeal Vora's Github: https://github.com/zealvora/hashicorp-certified-vault-associate
- Ned1313's Github:
- Adnan's study part1: https://drive.google.com/file/d/1swqnge2FvNs9KkjxqeFlY3Iiq3J6WnkF/view
- Adnan's study part2: https://drive.google.com/file/d/1CoH0kZ2cMDdIvpdWTcMjYBRP4TzBLNvc/view
- Adnan's study part3: https://drive.google.com/file/d/14I1jOcaw2_JWYyGU-wXoMvXNZPYOxiT2/view
- ismet55555's notes: https://github.com/ismet55555/Hashicorp-Certified-Vault-Associate-Notes
- Tools (from the sys/tools endpoint):
- Wrap - securely encrypts data and generates a single-use wrap token
- Lookup - see details about a wrapping token
- Unwrap - decrypts the token; once unwrapped, it cannot be unwrapped again
- Rewrap
- Random - e.g. /sys/tools/random/164
- Hash - e.g. /sys/tools/hash/sha2-512
Usage: vault <command> [args]
Common commands:
read Read data and retrieves secrets
write Write data, configuration, and secrets
delete Delete secrets and configuration
list List data or secrets
login Authenticate locally
agent Start a Vault agent
server Start a Vault server
status Print seal and HA status
unwrap Unwrap a wrapped secret
Other commands:
audit Interact with audit devices
auth Interact with auth methods
debug Runs the debug command
kv Interact with Vault's Key-Value storage
lease Interact with leases
monitor Stream log messages from a Vault server
namespace Interact with namespaces
operator Perform operator-specific tasks
path-help Retrieve API help for paths
plugin Interact with Vault plugins and catalog
policy Interact with policies
print Prints runtime configurations
secrets Interact with secrets engines
ssh Initiate an SSH session
token Interact with tokens
Author: @luckylittle
Last update: Tue Nov 8 08:53:08 UTC 2022
Shortened link: https://bit.ly/3yY08Iv