
Ansible Configurations

The Ansible configuration consists primarily of the following 3 components:

  • Ansible Roles - These are reusable chunks of code that define some element of a server.
  • Ansible Playbooks - These are used to fully configure a server; a playbook is composed of multiple roles and should contain minimal code (see the sketch after this list).
  • Hashicorp Vault - All sensitive information is stored in Vault and retrieved with the built-in hashi_vault plugin.
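
To make the relationship concrete, here is a minimal sketch of a playbook that composes roles and pulls a secret with the hashi_vault lookup; the playbook name, host pattern, role names, and Vault path are all placeholders rather than entries from this repository, and it assumes VAULT_ADDR and VAULT_TOKEN are exported as described under Environment Configuration:

# playbooks/example.yml - illustrative sketch only; names and the Vault path are made up
- hosts: 'pop*'
  become: true
  vars:
    # Assumes VAULT_ADDR and VAULT_TOKEN are exported in the environment
    example_api_key: "{{ lookup('hashi_vault', 'secret=secret/example:api_key') }}"
  roles:
    - { role: baseline, tags: ['baseline'] }
    - { role: example-service, tags: ['example-service'] }

Keeping the playbook down to a list of roles like this keeps the per-server logic inside the roles themselves.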

Inventory

Inventory is dynamically built using the built-in dynamic inventory scripts.

  • All inventory is managed with tags and/or names. It is of the utmost importance that servers are tagged/named properly, as this is currently a manual task until we start doing IaC.
  • Adding a provider:
    • Copy the Python script and corresponding ini file to inventory_scripts.
    • Append the script to the inventory variable in ansible.cfg; separate scripts with a comma.

Dynamic Inventory Scripts

All dynamic inventory scripts must support the following options:

  • --list List instances (default: True)
  • --host HOST Get all the variables about a specific instance

You can run these scripts on an ad hoc basis to scan the inventory and determine the best management option.

Environment Configuration:

# GCE
export GCE_EMAIL=[email protected]
export GCE_PROJECT=ta-infrastructure
export GCE_CREDENTIALS_FILE_PATH=~/.creds/gce-ansible.json

# AWS
export AWS_ACCESS_KEY_ID=AKIAJZB5SJXO6GASFH5A
export AWS_SECRET_ACCESS_KEY=XXXXXX

# Packet
export PACKET_API_TOKEN=XXXXXX

# Rackspace
export RAX_CREDS_FILE=~/.creds/rax-ansible.txt

# Vault
export VAULT_ADDR=http://example.com
export VAULT_TOKEN=XXXXXX

# Turn off fork safety
export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES

Provider Specific Notes

  • GCE
    • To access GCE with the ansible service account, copy the JSON credentials from Vault to ~/.creds/gce-ansible.json.
  • AWS
    • The AWS_SECRET_ACCESS_KEY for the ansible_service_account can be found in Vault.
  • Rackspace
    • To access Rackspace, create ~/.creds/rax-ansible.txt:
[rackspace_cloud]
username = ansible_service_account
api_key = XXXXXX
    • Tags in the Rackspace Control Panel are disconnected from the server resource and cannot be used to manage inventory; the current version of rackspace-novaclient is also very buggy, making it difficult to set additional metadata. To circumvent these problems, manage these servers by hostname wildcard, e.g. ansible -m ping 'pop*'.

ScaleFT

The ScaleFT file, .scaleft.cfg, is configured for Mac. If you are not running a Mac, you can rebuild the file by running: sft ssh-config --via bastion.com > .scaleft.cfg.

Executing

  • Playbook: ansible-playbook playbooks/<playbook.yml>
  • Role/Task: ansible-playbook -t <tag> playbooks/<playbook.yml>
  • Limit Hosts: ansible-playbook playbooks/<playbook.yml> --limit pop01*
  • Negate Hosts: ansible-playbook playbooks/<playbook.yml> --limit 'all:!tag_Name_r_*'
  • Query facts: ansible all -m setup -a 'filter=ansible_kernel'
  • Query Inventory: ansible-inventory --list
  • Inventory Tree: ansible-inventory --graph

NOTE: In order to execute specific role(s) from a playbook, you must make sure they are tagged in the playbook.
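
For example, a tagged playbook might be laid out as follows; the playbook layout and role names here are hypothetical:

# playbooks/jenkins-slaves.yml - hypothetical layout for illustration only
- hosts: 'jenkins*'
  become: true
  roles:
    - { role: baseline, tags: ['baseline'] }
    - { role: jenkins-slave, tags: ['jenkins-slave'] }

With the roles tagged this way, ansible-playbook -t jenkins-slave playbooks/jenkins-slaves.yml runs only the jenkins-slave role.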

Creating a new role

  • ansible-galaxy init <ROLE>
    • The <ROLE> name should be descriptive of the role.
  • Fill out the README.md generated in <ROLE>; this is a requirement if you intend to publish to Ansible Galaxy (see the sketch below).
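
ansible-galaxy init also generates meta/main.yml, whose galaxy_info section Galaxy reads when a role is imported. A minimal sketch, with every value a placeholder:

# <ROLE>/meta/main.yml - every value below is a placeholder
galaxy_info:
  author: your_name
  description: Short description of what this role configures
  license: MIT
  min_ansible_version: 2.4
  platforms:
    - name: Ubuntu
      versions:
        - xenial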

Role Structure

  • tasks: Contains the main list of tasks to be executed by the role.
  • handlers: Contains handlers, which may be used by this role or anywhere outside it. Handlers are best used to restart services and trigger reboots; you probably won’t need them for much else (see the sketch after this list).
  • defaults: Default variables for the role.
  • vars: Other variables for the role.
  • files: Contains files which can be deployed via this role.
  • templates: Contains templates which can be deployed via this role.
  • meta: Defines some metadata for this role.
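
As a sketch of how tasks and handlers fit together, a task can notify a handler by name and the handler restarts the service at the end of the play; the nginx service and file names below are purely illustrative:

# <ROLE>/tasks/main.yml - illustrative; the service and file names are made up
- name: Deploy nginx configuration
  template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
  notify: restart nginx

# <ROLE>/handlers/main.yml - illustrative
- name: restart nginx
  service:
    name: nginx
    state: restarted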

Facts

Ansible executes the setup module to gather facts at the beginning of each run. Custom facts can be added to /etc/ansible/facts.d/*.fact; these facts are gathered by setup and stored in the ansible_local JSON property: ansible -m setup -a 'filter=ansible_local' all

NOTE: The filter option filters only the first level subkey below ansible_facts.

Adding Custom Facts

  • Copy templates/custom_linux.fact to custom_linux_facts.
  • Set ${PACKAGE} to the name of the package to check; this will be the property name for the fact.
  • Set ${VERSION} to the command that needs to be run; this will be the property value for the fact.
    • If the package is not installed, the value defaults to not_installed.

Refer to the custom_linux_facts role for more details.
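
Assuming the fact file ends up named custom_linux.fact and ${PACKAGE} was set to nginx (both are assumptions for illustration), the resulting fact could be consumed in a role like this:

# Illustrative only; the fact file and property names are assumptions about
# how custom_linux_facts is configured
- name: Install nginx if the custom fact reports it missing
  apt:
    name: nginx
    state: present
  when: ansible_local.custom_linux.nginx | default('not_installed') == 'not_installed'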

Example Workflow

  • Run the scaleft playbook to bootstrap the new server(s) with ScaleFT. This must be done by someone with access to the root infrastructure SSH key(s):

    • ansible-playbook -u root --private-key ~/.ssh/id_rsa.ta-ops playbooks/scaleft.yml --limit 'new_servers*'
  • Run the playbook that corresponds to the server(s)' function. All playbooks include baseline; however, the baseline playbook can be run separately if desired. Alternatively, a role can be applied without running the baseline if required.

    • ansible-playbook playbooks/jenkins-slaves.yml --limit 'jenkins*02*'
    • ansible-playbook playbooks/baseline.yml --skip-tags 'baseline' --limit 'new_servers*'

Troubleshooting

  • Connectivity Issues
    • Verify that you have a valid ScaleFT token by running sft list-accounts; sft login will create a new token if required.
  • Ansible is attempting to connect as your username rather than root; host is unreachable.
    • Verify that ScaleFT is installed. If it is not installed, run the following: ansible-playbook -u root --private-key /path/to/key playbooks/scaleft.yml --limit <HOST(s)>
  • Connection fails: WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
    • Verify that any old entries are removed from the ScaleFT console: sft dash
    • Check the local known_hosts and ssh_known_hosts files.
  • Vault is sealed.
    • Contact either BJ or Wes to unseal Vault.

TODO

  • Consider moving all software installs in playbooks/jenkins-slaves.yml to a common.yml role.
  • Update ubuntu-updates role to only restart if a new kernel was added.
  • Modify ubuntu-updates role to work on CentOS/RedHat as well.
  • Make set-hostname work on all providers.
  • Consolidate some of the initial playbooks.

NOTES

Sources
