The Vault Admin is responsible for ensuring the Vault service and its backend are available. They also manage the bringup process in the event of an outage, including coordinating the Shamir secret holders while unsealing the Vault.
- Much more consideration is needed for production :)
- Vault binary
vaultserver$ vault server -dev
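In production (rather than `-dev` mode, where the server starts initialized and unsealed), outage bringup looks roughly like the following. This is a sketch using the same pre-0.9 CLI style as the rest of this doc; the config path, key-share count, and threshold are assumptions.

```shell
# Start the server from a real config (backend, listener, etc.) instead of -dev.
vaultserver$ vault server -config=/etc/vault.d/server.hcl &

# First-time setup only: generate the Shamir key shares and the initial root token.
vaultserver$ vault init -key-shares=5 -key-threshold=3

# After any restart, coordinate the secret holders; each runs this once
# and enters their share at the prompt, until the threshold is reached.
vaultserver$ vault unseal

# Confirm the vault is unsealed and serving requests.
vaultserver$ vault status
```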
The Policy Admin is responsible for creating, updating, and deleting vault policies. Vault policies are HCL documents that describe what parts of Vault a user is allowed to access.
- Vault Policy: policy-admin? (a policy with access to make policies)
- Vault binary w/ access to port 8200 of the vault server
$ cat > vault-policy-test-ssh-read.hcl <<EOF
path "secret/keys/test-ssh" {
  policy = "read"
}
path "auth/token/lookup-self" {
  policy = "read"
}
EOF
$ cat > vault-policy-test-ssh-write.hcl <<EOF
path "secret/keys/test-ssh" {
  policy = "write"
}
path "auth/token/lookup-self" {
  policy = "read"
}
EOF
$ export VAULT_ADDR='http://VAULT_SERVER_NAME:8200'
$ vault policy-write test-ssh-read vault-policy-test-ssh-read.hcl
$ vault policy-write test-ssh-write vault-policy-test-ssh-write.hcl
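After writing the policies it can be worth verifying they registered correctly and minting a test token against one. A sketch, again with the pre-0.9 CLI used above:

```shell
# List all policies to confirm both were registered.
$ vault policies

# Print the rendered policy document for one of them.
$ vault policies test-ssh-read

# Mint a token limited to the read policy, e.g. for a consumer to test with.
$ vault token-create -policy=test-ssh-read
```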
The SSH Key Admin is responsible for generating new SSH keys, updating Vault with the new keys, removing the private key from disk, and distributing the public key to the appropriate targets.
- Vault Policy: test-ssh-write
- Vault binary w/ access to port 8200 of the vault server
- ssh-keygen binary
- Public SSH key distribution mechanism
$ ssh-keygen -t rsa -b 2048 -f ./test-ssh
$ export VAULT_ADDR='http://VAULT_SERVER_NAME:8200'
$ vault write secret/keys/test-ssh value=@./test-ssh
$ rm ./test-ssh
- EX:
ssh-copy-id -i ./test-ssh.pub USER@HOSTNAME
- EX:
ansible-playbook distribute_new_ssh_key.yml
- EX:
consul kv put /appropriate/namespace/for/public/keys @./test-ssh.pub
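Before the `rm` above deletes the only local copy of the private key, it is worth confirming the write round-tripped. A sketch (the temp-file path is arbitrary; depending on CLI version the read-back may differ from the original by a trailing newline):

```shell
$ export VAULT_ADDR='http://VAULT_SERVER_NAME:8200'

# Read back only the key material and compare it against the local copy.
$ vault read -field=value secret/keys/test-ssh > /tmp/test-ssh.check
$ diff ./test-ssh /tmp/test-ssh.check

# Clean up the check copy as well.
$ rm /tmp/test-ssh.check
```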
The Nomad Admin is responsible for ensuring Nomad is up, available, and integrated with Vault. This involves generating a Vault token and supplying it to the Nomad service at startup. Once the Nomad service is up, it will rotate the token itself and maintain the integration with Vault. If the box is rebooted or the Nomad service needs to be restarted, a new token must be generated and passed to Nomad as an environment variable during service startup.
- Nomad binary
- A machine to run the Nomad service as a server
- Nomad server must have access to port 8200 of the vault server
$ mkdir /etc/nomad.d; mkdir /var/lib/nomad
$ chown USER:GROUP /etc/nomad.d; chown USER:GROUP /var/lib/nomad
$ cat > /etc/nomad.d/server.hcl <<EOF
bind_addr = "0.0.0.0" # the default
data_dir  = "/var/lib/nomad"

server {
  enabled          = true
  bootstrap_expect = 1
}

client {
  enabled       = true
  network_speed = 10

  options {
    "driver.raw_exec.enable" = "1"
  }
}

vault {
  enabled = true
  address = "http://VAULT_SERVER_GOES_HERE:8200"
}
EOF
$ VAULT_TOKEN=VAULT_TOKEN_GOES_HERE nomad agent -dev -config=/etc/nomad.d
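The VAULT_TOKEN passed above has to come from somewhere. A sketch of minting one with the pre-0.9 CLI; the period and flags here are assumptions (a production setup would typically scope this with a token role and a policy allowing Nomad to create child tokens for jobs):

```shell
$ export VAULT_ADDR='http://VAULT_SERVER_GOES_HERE:8200'

# Mint a renewable periodic token for the Nomad server; Nomad keeps renewing
# it while running. -orphan keeps it alive if the parent token is revoked.
$ vault token-create -period=72h -orphan
```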
The Nomad User is responsible for submitting jobs to the Nomad server and monitoring the output for pass/fail and results.
- Nomad binary
- A machine to run Nomad as a client
$ cat > test-ssh.nomad <<'EOF'
job "test-ssh" {
  datacenters = ["dc1"]
  type        = "batch"

  constraint {
    attribute = "${attr.kernel.name}"
    value     = "linux"
  }

  update {
    stagger      = "10s"
    max_parallel = 1
  }

  group "test-group" {
    count = 1

    restart {
      attempts = 1
      interval = "5m"
      delay    = "25s"
      mode     = "fail"
    }

    task "ssh-to-remote-host" {
      driver = "docker"

      vault {
        policies = ["test-ssh-read"]
      }

      env {
        VAULT_ADDR = "http://VAULT_SERVER_NAME_GOES_HERE:8200"
      }

      template {
        data        = "{{with secret \"secret/keys/test-ssh\"}}{{.Data.value}}{{end}}"
        destination = "local/test-ssh"
      }

      config {
        image   = "ansible/centos7-ansible:stable"
        command = "sh"
        args    = [ "-c", "chmod 0400 local/test-ssh; ssh -i local/test-ssh -o StrictHostKeyChecking=no REMOTE_USER_GOES_HERE@REMOTE_HOST_GOES_HERE uname -a" ]
      }

      resources {
        cpu    = 100 # MHz
        memory = 128 # MB

        network {
          mbits = 10
          port "testssh" {}
        }
      }
    }
  }
}
EOF
$ nomad run test-ssh.nomad
$ nomad alloc-status ALLOC_ID_GOES_HERE
$ nomad logs ALLOC_ID_GOES_HERE
$ nomad logs -stderr ALLOC_ID_GOES_HERE
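The ALLOC_ID used above can be found from the job status, which lists the job's allocations:

```shell
# The Allocations section of the output includes the ID to feed
# into the alloc-status and logs commands above.
$ nomad status test-ssh
```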