Picking the right architecture = Picking the right battles + Managing trade-offs
- Clarify and agree on the scope of the system
- Use cases (descriptions of sequences of events that, taken together, lead to the system doing something useful)
- Who is going to use it?
- How are they going to use it?
#!/bin/bash
# Helper variables for talking to Vault: the local node, whichever node is
# currently unsealed (resolved via Consul DNS), and the cluster leader.
export vault=/usr/local/bin/vault
export VAULT_TOKEN=$(cat /root/.vault-token)
vault_cacert='-ca-cert=/path/to/your/ca.pem'
local_vault="-address=https://$(hostname -f):8200"
unsealed_vault="-address=https://$(getent hosts $(dig +short vault.service.consul | tail -n 1) | awk '{ print $2 }'):8200"
leader_vault="-address=https://$($vault status $vault_cacert $unsealed_vault 2> /dev/null | grep Leader | awk '{ print $2 }' | sed 's/^http\(\|s\):\/\///g'):8200"
vault_read="$vault read $vault_cacert $leader_vault"
vault_unseal="$vault unseal $vault_cacert $local_vault"
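With those helpers in place, an unseal step looks something like the sketch below. The secret path and field name are assumptions rather than part of the script above; the idea is simply to read an unseal key from the already-unsealed leader and feed it to the sealed local node:

# Hypothetical path and field; adjust to wherever your unseal key actually lives.
unseal_key=$($vault_read -field=key secret/vault/unseal-key 2> /dev/null)
$vault_unseal "$unseal_key"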
That's our RC:
$ cat ws-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: webserver-rc
spec:
  replicas: 5
  selector:
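The manifest is cut off at selector:. Purely as a sketch of the shape, a complete version might look like the following; the selector labels, image, and port are assumptions, not values recovered from ws-rc.yaml:

apiVersion: v1
kind: ReplicationController
metadata:
  name: webserver-rc
spec:
  replicas: 5
  selector:
    app: webserver          # label value is an assumption
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: webserver
        image: nginx:1.9.1  # image is an assumption
        ports:
        - containerPort: 80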
cat $HOME/.docker/config.json|jq '.auths'|sed "s/http:/https:/g"|tr '\n' ' '|tr -d '[[:space:]]'|base64
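The resulting base64 string is what goes into an image-pull Secret. A minimal sketch, assuming a secret named regsecret (the older kubernetes.io/dockercfg type matches the .auths-only payload that command emits):

apiVersion: v1
kind: Secret
metadata:
  name: regsecret           # name is an assumption
type: kubernetes.io/dockercfg
data:
  .dockercfg: <paste the base64 output here>

Pods can then pull from the private registry by listing regsecret under spec.imagePullSecrets.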
---
apiVersion: v1
kind: Pod
metadata:
  name: server
spec:
  containers:
  - image: resouer/sample:v2
    name: war
    lifecycle:
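The manifest stops at lifecycle:. One plausible continuation, shown purely as a sketch (the hook and the source/target paths are assumptions), is a postStart hook that copies the packaged application out of the image:

    lifecycle:
      postStart:
        exec:
          command: ["cp", "/sample.war", "/app"]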
Expects one argument: the name of the production stack file for Tutum
(see https://support.tutum.co/support/solutions/articles/5000569899-stacks ).
Requires these environment variables to be set:
- CLOUDFLARE_DOMAIN - root domain of your app, e.g. example.com
- CLOUDFLARE_KEY - your Cloudflare API key
- CLOUDFLARE_EMAIL - your Cloudflare email address e.g. [email protected]
- PROJECT_NAME - a short name for your project e.g. example
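A typical invocation, with placeholder values (the script name and stack file name are assumptions):

export CLOUDFLARE_DOMAIN=example.com
export CLOUDFLARE_KEY=0123456789abcdef       # placeholder
export CLOUDFLARE_EMAIL=admin@example.com    # placeholder
export PROJECT_NAME=example
./deploy.sh production-stack.yml             # script and stack file names are assumptions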
The point of this is to use cheap machines with small/slow storage to coordinate client requests, while dedicating the machines with the big, fast storage to doing what they do best. I found that request coordination was contributing, on average, to about half the CPU usage on our Cassandra nodes, and solid-state storage is expensive enough to nearly double the cost of typical hardware, so offloading coordination to cheap proxy nodes keeps that work off the machines whose fast disks you are paying for. The split also means that if you have control over hardware placement within the network, you can put proxy nodes closer to the clients without affecting the cluster's storage footprint or fault-tolerance characteristics.
This is accomplished in Cassandra by passing the -Dcassandra.join_ring=false option when the process is started. These nodes will connect to the seeds, cache the gossip data, load the schema, and begin listening for client requests. Messages like "/x.x.x.x is now UP!" will appear on the other nodes.
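For example, the property can be appended to JVM_OPTS in cassandra-env.sh, or passed directly for a one-off start; the config file path here is an assumption about your packaging:

echo 'JVM_OPTS="$JVM_OPTS -Dcassandra.join_ring=false"' >> /etc/cassandra/cassandra-env.sh
# or, for a single run:
cassandra -Dcassandra.join_ring=false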
There are also some more practical benefits to this. Handling client requests caused us to push the NewSize of the heap up