vault list database/config
vault read database/config/postgres
vault list database/roles
vault read database/roles/readonly
# Step 0: Enable dynamic database credential service
vault secrets enable database
# Step 1: Configure the connection string
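# A minimal sketch of Step 1 for a local PostgreSQL instance (the plugin name is the standard
# postgresql-database-plugin; the connection URL and credentials are placeholders):
vault write database/config/postgres \
    plugin_name=postgresql-database-plugin \
    connection_url="postgresql://vaultadmin:vaultadminpassword@localhost:5432/postgres?sslmode=disable" \
    allowed_roles="readonly"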
@stenio123
stenio123 / README.md
Last active August 31, 2018 16:09
Shows example step-by-step workflows of integrating Vault with long-running applications

Vault and Long Running Apps

Solving Secure Token Introduction

Assuming the applications have a client token, Chef cookbooks can leverage the Vault Ruby gem, direct API calls, native language integrations or the Vault client installed in the VM.
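For instance, a minimal sketch of the direct-API option, assuming the application already holds a client token in VAULT_TOKEN and reads a secret at an illustrative path secret/myapp/config:

curl --header "X-Vault-Token: $VAULT_TOKEN" \
     "$VAULT_ADDR/v1/secret/myapp/config"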

Traditionally, the Trusted Entity model is used to deliver the credentials needed to retrieve this client token. This works well when deploying in the cloud (AWS, Azure, GCP), or when using Kubernetes or Jenkins as part of a CI/CD pipeline.

However, for applications with no guarantee of ever being redeployed, but that have Chef agents running at a recurring interval, there are at least two potential approaches:


cat vault_audit.log | jq 'select(.request.path | startswith("secret"))'
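# The jq filter above assumes a file audit device writing to vault_audit.log; a minimal sketch of
# enabling it (the file path is an assumption):
vault audit enable file file_path=./vault_audit.log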
# Log into your vault instance if you haven't already
vault login root
# Enable the transit secret engine
vault secrets enable transit
# Create a key
vault write -f transit/keys/my-key
# Read the key, nothing up my sleeves
vault read transit/keys/my-key
# Encrypt some base64-encoded data via the transit endpoint
vault write transit/encrypt/my-key plaintext=$(base64 <<< "my secret data")
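# To recover the value, the ciphertext returned above can be passed to the decrypt endpoint; the
# response's plaintext field is base64-encoded (the ciphertext shown here is illustrative):
vault write transit/decrypt/my-key ciphertext="vault:v1:..."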
@stenio123
stenio123 / PeriodicToken.sh
Created July 3, 2018 14:48
Shows the difference between a regular token and a periodic token
# All tokens within Vault have an associated TTL (the root token is the exception, with an "infinite" TTL).
# For long-running services, Vault allows the creation of "periodic tokens".
# These are special tokens created for long-running services - for example a Jenkins server.
# Every token in Vault needs a TTL, yet we expect such a service to be long-lived. Periodic tokens reconcile the two:
# they can be renewed indefinitely, which lets a Vault admin apply different max_ttl strategies without impacting
# long-running services. The "period" parameter acts as the TTL for the token, and the token must be renewed within
# that period; if it is not, the token expires and Vault will reject any further requests made with it.
# Example, considering the default system max_ttl and default_ttl:
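# For illustration (assuming the 2018-era CLI flags), a periodic token could be created like this:
vault token create -period=24h -policy=default
# The service (or a helper job) must then renew it within every 24h period:
vault token renew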
@stenio123
stenio123 / Test.sh
Last active July 3, 2018 14:34
Showing max_ttl lease precedence behavior in vault: system | mount | config
# Mount database backend
vault mount database
# Configure MySQL connection
vault write database/config/mysql \
plugin_name=mysql-legacy-database-plugin \
connection_url="vaultadmin:vaultadminpassword@tcp(127.0.0.1:3306)/" \
allowed_roles="readonly"
# Create MySQL readonly role
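# A sketch of the readonly role (the creation SQL and TTLs are illustrative; db_name must match the config above):
vault write database/roles/readonly \
    db_name=mysql \
    creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}'; GRANT SELECT ON *.* TO '{{name}}'@'%';" \
    default_ttl=1h \
    max_ttl=24h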
@stenio123
stenio123 / stale_security_groups.py
Created November 10, 2017 15:50 — forked from astrikos/stale_security_groups.py
Script to detect stale AWS security groups
#!/usr/bin/env python
import boto3
import argparse
class StaleSGDetector(object):
"""
Class to hold the logic for detecting AWS security groups that are stale.
"""
def __init__(self, **kwargs):
@stenio123
stenio123 / dk-clean.sh
Created October 23, 2017 19:31 — forked from zeg-io/dk-clean.sh
Clean all Docker images older than 4 weeks
oldContainers="$(docker ps -f "status=exited" | grep -E 'Exited \(.*\) [5-9] h|Exited \(.*\) [0-9][0-9] h' | awk '{ print $1 }')"
echo -e -n "\nRemoving containers older than 4 hours"
if [ "$oldContainers" != "" ]; then
echo ""
docker rm $oldContainers
else
echo "...none found."
fi
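# The snippet above only removes exited containers; for images older than 4 weeks (as the description says),
# a hedged sketch using Docker's built-in prune (672h is roughly 4 weeks):
docker image prune -a --force --filter "until=672h"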
@stenio123
stenio123 / GitHub-Forking.md
Created October 18, 2017 13:42 — forked from Chaser324/GitHub-Forking.md
GitHub Standard Fork & Pull Request Workflow

Whether you're trying to give back to the open source community or collaborating on your own projects, knowing how to properly fork and generate pull requests is essential. Unfortunately, it's quite easy to make mistakes or not know what you should do when you're initially learning the process. I know that I certainly had considerable initial trouble with it, and I found a lot of the information on GitHub and around the internet to be rather piecemeal and incomplete - part of the process described here, another there, common hangups in a different place, and so on.

In an attempt to collate this information for myself and others, this short tutorial is what I've found to be fairly standard procedure for creating a fork, doing your work, issuing a pull request, and merging that pull request back into the original project.

Creating a Fork

Just head over to the GitHub page and click the "Fork" button. It's just that simple. Once you've done that, you can use your favorite git client to clone your repo or j
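A sketch of those first steps on the command line (the user and repository names are placeholders):

git clone git@github.com:YOUR-USERNAME/forked-repo.git
cd forked-repo
git remote add upstream git@github.com:ORIGINAL-OWNER/original-repo.git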

@stenio123
stenio123 / Consul server log
Last active December 29, 2016 04:58
This is the Consul log at DEBUG level on the server.
Dec 29 04:50:52 ip-10-228-32-92 consul[10098]: raft: Initial configuration (index=0): []
Dec 29 04:50:52 ip-10-228-32-92 consul[10098]: raft: Node at 10.228.32.92:8300 [Follower] entering Follower state (Leader: "")
Dec 29 04:50:52 ip-10-228-32-92 consul[10098]: serf: EventMemberJoin: consul-i-0ada6114c8d15aca7-stenio 10.228.32.92
Dec 29 04:50:52 ip-10-228-32-92 consul[10098]: serf: EventMemberJoin: consul-i-0ada6114c8d15aca7-stenio.dc1 10.228.32.92
Dec 29 04:50:52 ip-10-228-32-92 consul[10098]: consul: Adding LAN server consul-i-0ada6114c8d15aca7-stenio (Addr: tcp/10.228.32.92:8300) (DC: dc1)
Dec 29 04:50:52 ip-10-228-32-92 consul[10098]: consul: Adding WAN server consul-i-0ada6114c8d15aca7-stenio.dc1 (Addr: tcp/10.228.32.92:8300) (DC: dc1)
Dec 29 04:50:52 ip-10-228-32-92 consul[10098]: agent: Joining cluster...
Dec 29 04:50:52 ip-10-228-32-92 consul[10098]: agent: No EC2 region provided, querying instance metadata endpoint...
Dec 29 04:50:52 ip-10-228-32-92 consul[10098]: agent: Discovered 6 servers from EC