The Ansible configuration consists primarily of the following 3 components:
- Ansible Roles - These are reusable chunks of code that define some element of a server.
- Ansible Playbooks - These are used to fully configure a server; a playbook is composed of multiple roles and should contain minimal code.
- HashiCorp Vault - All sensitive information is stored in Vault and retrieved with the built-in `hashi_vault` lookup plugin.
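As a minimal sketch (the secret path, field, and variable name below are illustrative, not real entries in our Vault), a variable can be pulled with the `hashi_vault` lookup:

```yaml
# Illustrative only: path, field, and variable name are placeholders.
# The lookup typically picks up VAULT_ADDR/VAULT_TOKEN from the environment
# (see the exports further down) unless url/token are passed in the term string.
some_api_key: "{{ lookup('hashi_vault', 'secret=secret/myapp:api_key') }}"
```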
Inventory is dynamically built using the built-in dynamic inventory scripts.
- All inventory is managed with tags and/or names. It is of the utmost importance that servers are tagged/named properly, as this is currently a manual task until we start doing IaC.
- Adding a provider:
  - Copy the Python script and corresponding `ini` file to `inventory_scripts`.
  - Append the script to the `inventory` variable in `ansible.cfg`; separate scripts with a `,` (see the example below).
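For reference, the resulting `ansible.cfg` entry might look like the following (the script names are illustrative):

```ini
# ansible.cfg (script names are illustrative)
[defaults]
inventory = inventory_scripts/gce.py,inventory_scripts/ec2.py,inventory_scripts/rax.py
```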
All dynamic inventory scripts must support the following options:
- `--list` - List instances (default: True)
- `--host HOST` - Get all the variables about a specific instance

You can run them on an ad hoc basis to scan the inventory and determine the best management option.
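For example, once the credentials below are exported, a script can be run directly to see what it returns (the script name is illustrative):

```bash
# Dump every instance the GCE script can see, then inspect a single host.
python inventory_scripts/gce.py --list
python inventory_scripts/gce.py --host <instance-name>
```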
```bash
# GCE
export GCE_EMAIL=<ansible service account email>
export GCE_PROJECT=ta-infrastructure
export GCE_CREDENTIALS_FILE_PATH=~/.creds/gce-ansible.json
# AWS
export AWS_ACCESS_KEY_ID=AKIAJZB5SJXO6GASFH5A
export AWS_SECRET_ACCESS_KEY=XXXXXX
# Packet
export PACKET_API_TOKEN=XXXXXX
# Rackspace
export RAX_CREDS_FILE=~/.creds/rax-ansible.txt
# Vault
export VAULT_ADDR=http://example.com
export VAULT_TOKEN=XXXXXX
# Turn off fork safety
export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
```
- GCE
  - To access GCE with the `ansible` service account, copy the JSON credentials from Vault to `~/.creds/gce-ansible.json` (see the CLI sketch after this list).
- AWS
  - The `AWS_SECRET_ACCESS_KEY` for the `ansible_service_account` can be found in Vault.
- Rackspace
  - To access Rackspace, create `~/.creds/rax-ansible.txt`:

    ```ini
    [rackspace_cloud]
    username = ansible_service_account
    api_key = XXXXXX
    ```
  - Tags in the Rackspace Control Panel are disconnected from the server resource and cannot be used to manage inventory; the current version of `rackspace-novaclient` is also very buggy, making it difficult to set additional metadata. To circumvent these problems, manage the server by hostname wildcard, e.g. `ansible -m ping pop*`.
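The credential files referenced above can be pulled from Vault with the CLI along these lines (the paths and field names here are illustrative, not the real locations):

```bash
# Illustrative Vault paths/fields; look up the real locations in Vault.
vault read -field=credentials secret/ansible/gce > ~/.creds/gce-ansible.json
vault read -field=aws_secret_access_key secret/ansible/aws
```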
The ScaleFT file, `.scaleft.cfg`, is configured for macOS. If you are not running macOS, you can rebuild the file by running: `sft ssh-config --via bastion.com > .scaleft.cfg`.
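How `.scaleft.cfg` gets used depends on the SSH setup; one common approach, sketched here as an assumption rather than a description of how this repo is actually wired, is to point Ansible's SSH connection at the file:

```ini
# ansible.cfg (assumption: the repo may already handle this differently)
[ssh_connection]
ssh_args = -F .scaleft.cfg
```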
- Playbook: `ansible-playbook playbooks/<playbook.yml>`
- Role/Task: `ansible-playbook -t <tag> playbooks/<playbook.yml>`
- Limit Hosts: `ansible-playbook playbooks/<playbook.yml> --limit pop01*`
- Negate Hosts: `ansible-playbook playbooks/<playbook.yml> --limit 'all:!tag_Name_r_*'`
- Query facts: `ansible all -m setup -a 'filter=ansible_kernel'`
- Query Inventory: `ansible-inventory --list`
- Inventory Tree: `ansible-inventory --graph`
NOTE: In order to execute specific role(s) from a playbook, you must make sure the role is tagged in the playbook (see the sketch below).
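A minimal sketch of that tagging (the playbook and role names here are illustrative):

```yaml
# playbooks/example.yml (illustrative)
- hosts: all
  roles:
    - { role: baseline, tags: ['baseline'] }
    - { role: custom_linux_facts, tags: ['custom_linux_facts'] }
```

With the tags in place, `ansible-playbook -t custom_linux_facts playbooks/example.yml` runs only that role.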
`ansible-galaxy init <ROLE>`

- The `<ROLE>` name should be descriptive of the role.
- Fill out the `README.md` generated in `<ROLE>`; this is a requirement if you intend to publish to Ansible Galaxy.
- tasks: Contains the main list of tasks to be executed by the role.
- handlers: Contains handlers, which may be used by this role or even anywhere outside this role. Handlers are best used to restart services and trigger reboots. You probably won’t need them for much else.
- defaults: Default variables for the role.
- vars: Other variables for the role.
- files: Contains files which can be deployed via this role.
- templates: Contains templates which can be deployed via this role.
- meta: Defines some metadata for this role.
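As a sketch of how `tasks` and `handlers` interact (the package and service names are illustrative), a task notifies a handler, and the handler restarts the service only when something changed:

```yaml
# tasks/main.yml (illustrative)
- name: Install nginx
  apt:
    name: nginx
    state: present
  notify: restart nginx

# handlers/main.yml (illustrative)
- name: restart nginx
  service:
    name: nginx
    state: restarted
```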
Ansible executes the `setup` module to gather facts at the beginning of each run. Custom facts can be added to `/etc/ansible/facts.d/*.fact`; these facts are gathered by `setup` and stored under the `ansible_local` JSON property: `ansible -m setup -a 'filter=ansible_local' all`

NOTE: The `filter` option filters only the first-level subkeys below `ansible_facts`.
- Copy `templates/custom_linux.fact` to `custom_linux_facts`.
- Set `${PACKAGE}` to the name of the package to check; this will be the property name for the fact.
- Set `${VERSION}` to the command that needs to be run; this will be the property value for the fact.
  - If the package is not installed, the value defaults to `not_installed`.

Refer to the `custom_linux_facts` role for more details.
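For illustration only (the real template lives in the `custom_linux_facts` role), a rendered fact along these lines would report a package version, assuming the script is dropped into `/etc/ansible/facts.d/` and marked executable:

```bash
#!/bin/bash
# Illustrative rendering of custom_linux.fact with ${PACKAGE}=nginx and
# ${VERSION} set to the version command to run; executable .fact files must
# print JSON, which the setup module exposes under ansible_local.
if command -v nginx >/dev/null 2>&1; then
  VERSION=$(nginx -v 2>&1 | awk -F/ '{print $2}')
else
  VERSION="not_installed"
fi
echo "{\"nginx\": \"${VERSION}\"}"
```

`ansible -m setup -a 'filter=ansible_local' all` would then show the value under `ansible_local`.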
- Run the `scaleft` playbook to bootstrap the new server(s) with ScaleFT. This must be done by someone with access to the root infrastructure SSH key(s):

  ```bash
  ansible-playbook -u root --private-key ~/.ssh/id_rsa.ta-ops playbooks/scaleft.yml --limit 'new_servers*'
  ```

- Run the playbook that corresponds to the function of the server(s). All playbooks include `baseline`; however, the `baseline` playbook can be run separately if desired. Alternatively, a role can be applied without running `baseline` if required.

  ```bash
  ansible-playbook playbooks/jenkins-slaves.yml --limit 'jenkins*02*'
  ansible-playbook playbooks/baseline.yml --skip-tags 'baseline' --limit 'new_servers*'
  ```
- Connectivity Issues
  - Verify that you have a valid ScaleFT token by running `sft list-accounts`; `sft login` will create a new token if required.
- Ansible is attempting to connect as your username rather than `root`; host is unreachable.
  - Verify that ScaleFT is installed. If it is not installed, run the following: `ansible-playbook -u root --private-key /path/to/key playbooks/scaleft.yml --limit <HOST(s)>`
- Connection fails: `WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!`
  - Verify that any old entries are removed from the ScaleFT console: `sft dash`
  - Check the local `known_hosts` and `ssh_known_hosts` files (see the commands after this list).
- Vault is sealed.
  - Contact either BJ or Wes to unseal Vault.
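Two quick checks that may help with the last two items (the hostname is illustrative, and the second command assumes the `vault` CLI is installed with `VAULT_ADDR` exported):

```bash
# Remove a stale known_hosts entry for a rebuilt server
ssh-keygen -R pop01.example.com

# Confirm whether Vault is actually sealed before contacting BJ or Wes
vault status
```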
- Consider moving all software installs in `playbooks/jenkins-slaves.yml` to a `common.yml` role.
- Update the `ubuntu-updates` role to only restart if a new kernel was added.
- Modify the `ubuntu-updates` role to work on CentOS/RedHat as well.
- Make `set-hostname` work on all providers.
- Consolidate some of the initial playbooks.
- Ansible must be run from the root of this repo, as `ansible.cfg` sets relative paths.
- The `[ERROR]:` seen when running `ansible-playbook` was recently addressed upstream.
- In order to run the `hvac` (HashiCorp Vault API client) module on High Sierra, fork safety must be turned off. This is a known issue. Upgrading to the latest Python 3 may resolve the problem; further testing is required.