Vault Admin

The Vault Admin is responsible for ensuring the Vault service and its storage backend are available. They also manage the bring-up process in the event of an outage, including coordinating the holders of the Shamir unseal key shares while unsealing the Vault.
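
Outside of dev mode, that unseal flow looks roughly like this with the era-appropriate CLI (newer releases spell these vault operator init and vault operator unseal):

$ vault init -key-shares=5 -key-threshold=3   # prints 5 key shares and the initial root token
$ vault unseal                                # run 3 times, each time entering a different holder's share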

Requires:

  • Much more consideration for production :)
  • Vault binary

For PoC purposes, just use the in-memory Vault dev mode:

vaultserver$ vault server -dev
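
The dev server starts already unsealed, listens on 127.0.0.1:8200 by default, and prints a root token on startup. In another shell on the same host:

$ export VAULT_ADDR='http://127.0.0.1:8200'
$ export VAULT_TOKEN=ROOT_TOKEN_GOES_HERE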

Policy Admin

The Policy Admin is responsible for creating, updating, and deleting Vault policies. Vault policies are HCL documents that describe which parts of Vault a user is allowed to access.

Requires:

  • Vault Policy: a policy with permission to manage policies (e.g. a policy-admin policy)
  • Vault binary w/ access to port 8200 of the Vault server

Create read policy:

$ cat > vault-policy-test-ssh-read.hcl <<EOF
path "secret/keys/test-ssh" {
  policy = "read"
}

path "auth/token/lookup-self" {
  policy = "read"
}
EOF

Create write policy:

$ cat > vault-policy-test-ssh-write.hcl <<EOF
path "secret/keys/test-ssh" {
  policy = "write"
}

path "auth/token/lookup-self" {
  policy = "read"
}
EOF
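
Note: policy = "read" is the legacy policy syntax. On newer Vault releases the equivalent stanza uses capabilities, and the registration command is spelled vault policy write:

path "secret/keys/test-ssh" {
  capabilities = ["read"]
}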

Register the policies with the Vault server:

$ export VAULT_ADDR='http://VAULT_SERVER_NAME:8200'
$ vault policy-write test-ssh-read  vault-policy-test-ssh-read.hcl
$ vault policy-write test-ssh-write vault-policy-test-ssh-write.hcl
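
To confirm they registered (era CLI; with no argument this lists all policies):

$ vault policies
$ vault policies test-ssh-read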

SSH Key Admin

The SSH Key Admin is responsible for generating new SSH keys, writing the new keys to Vault, removing the private key from disk, and distributing the public key to the appropriate targets.

Requires:

  • Vault Policy: test-ssh-write
  • Vault binary w/ access to port 8200 of the Vault server
  • ssh-keygen binary
  • Public SSH key distribution mechanism

Generate an SSH key with no passphrase (the Nomad job will use it non-interactively):

$ ssh-keygen -t rsa -b 2048 -f ./test-ssh -N ""

Write the private key to Vault:

$ export VAULT_ADDR='http://VAULT_SERVER_NAME:8200'
$ vault write secret/keys/test-ssh value=@./test-ssh
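
To sanity-check the write (this assumes the KV v1 engine mounted at secret/, as in older dev-mode defaults; on KV v2 you would use vault kv get instead):

$ vault read -field=value secret/keys/test-ssh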

Remove the private key from disk:

$ rm ./test-ssh
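
On local filesystems where it applies, shred overwrites the bytes before unlinking, which is more thorough than rm:

$ shred -u ./test-ssh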

Distribute the public SSH key as appropriate:

  • EX: ssh-copy-id -i ./test-ssh.pub USER@HOSTNAME
  • EX: ansible-playbook distribute_new_ssh_key.yml (a sketch follows below)
  • EX: consul kv put appropriate/namespace/for/public/keys @./test-ssh.pub
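
A hypothetical sketch of what distribute_new_ssh_key.yml might contain (the inventory group and user variable are illustrative, not from this PoC):

---
- hosts: ssh_targets                                   # illustrative inventory group
  tasks:
    - name: Add the new public key to authorized_keys
      authorized_key:
        user: "{{ remote_user_name }}"                 # illustrative variable
        key: "{{ lookup('file', 'test-ssh.pub') }}"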

Nomad Admin

The Nomad Admin is responsible for ensuring Nomad is up, available, and integrated with Vault. This involves generating a Vault token and passing it to the Nomad service at startup. Once the Nomad service is up, it continues to rotate the token and maintains the Vault integration on its own. If the box is rebooted or the Nomad service needs to be restarted, a new token must be generated and passed to Nomad as an env var during service startup.

Requires:

  • Nomad binary
  • A host to run the Nomad service in server mode
  • Nomad server must have access to port 8200 of the Vault server

Prepare directories:

$ mkdir -p /etc/nomad.d /var/lib/nomad
$ chown USER:GROUP /etc/nomad.d /var/lib/nomad

Create the Nomad server config file:

$ cat > /etc/nomad.d/server.hcl <<EOF
bind_addr = "0.0.0.0" # the default

data_dir  = "/var/lib/nomad"

server {
  enabled          = true
  bootstrap_expect = 1
}

client {
  enabled       = true
  network_speed = 10
  options {
    "driver.raw_exec.enable" = "1"
  }
}

vault {
  enabled     = true
  address     = "http://VAULT_SERVER_GOES_HERE:8200"
}
EOF
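
Generate the Vault token that Nomad will use. For this PoC the dev-mode root token is the simplest thing to pass below; a sketch of minting a dedicated periodic token instead (era CLI, run with a privileged VAULT_TOKEN already exported; a real integration needs a token allowed to create child tokens for the job policies):

$ export VAULT_ADDR='http://VAULT_SERVER_GOES_HERE:8200'
$ vault token-create -period="72h"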

Start Nomad in dev mode and pass the Vault token:

$ VAULT_TOKEN=VAULT_TOKEN_GOES_HERE nomad agent -dev -config=/etc/nomad.d
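
Outside of dev mode you would typically run Nomad under a service manager and inject the token there, per the restart note above; a hypothetical systemd sketch:

# /etc/systemd/system/nomad.service (illustrative)
[Unit]
Description=Nomad agent
After=network.target

[Service]
Environment=VAULT_TOKEN=VAULT_TOKEN_GOES_HERE
ExecStart=/usr/local/bin/nomad agent -config=/etc/nomad.d
Restart=on-failure

[Install]
WantedBy=multi-user.target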

Nomad User

The Nomad User is responsible for submitting jobs to the Nomad server and monitoring the output for pass/fail and results.

Requires:

  • Nomad binary
  • A host from which to run the Nomad CLI against the server

Create the Nomad job file (the heredoc delimiter is quoted so the shell does not expand Nomad's ${attr.kernel.name} interpolation):

$ cat > test-ssh.nomad <<'EOF'
job "test-ssh" {
  datacenters = ["dc1"]
  type        = "batch"

  constraint {
    attribute = "${attr.kernel.name}"
    value     = "linux"
  }

  update {
    stagger      = "10s"
    max_parallel = 1
  }

  group "test-group" {
    count = 1

    restart {
      attempts = 1
      interval = "5m"
      delay    = "25s"
      mode     = "fail"
    }

    task "ssh-to-remote-host" {
      driver = "docker"

      # Ask Nomad to obtain a Vault token with this policy for the task
      vault {
        policies = ["test-ssh-read"]
      }

      env {
        VAULT_ADDR = "http://VAULT_SERVER_NAME_GOES_HERE:8200"
      }

      # Render the private key from Vault into the task's local/ directory
      template {
        data        = "{{with secret \"secret/keys/test-ssh\"}}{{.Data.value}}{{end}}"
        destination = "local/test-ssh"
      }

      config {
        image   = "ansible/centos7-ansible:stable"
        command = "sh"
        args    = [ "-c", "chmod 0400 local/test-ssh; ssh -i local/test-ssh -o StrictHostKeyChecking=no REMOTE_USER_GOES_HERE@REMOTE_HOST_GOES_HERE uname -a" ]
      }

      resources {
        cpu    = 100 # MHz
        memory = 128 # MB

        network {
          mbits = 10

          port "testssh" {}
        }
      }
    }
  }
}
EOF

Submit the job to the Nomad server:

$ nomad run test-ssh.nomad
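
The allocation ID for the next steps comes from the job status output (the Allocations table at the bottom lists each ALLOC_ID):

$ nomad status test-ssh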

Check the allocation status:

$ nomad alloc-status ALLOC_ID_GOES_HERE

Check the stdout logs if/when successful:

$ nomad logs ALLOC_ID_GOES_HERE

Check the stderr logs if/when NOT successful:

$ nomad logs -stderr ALLOC_ID_GOES_HERE