I'll be using DreamCompute as my OpenStack provider, but there are dozens to choose from. I assume you already have Ansible and the OpenStack CLI tools installed.
With the proliferation of OpenStack public clouds offering free and intro tiers, it's becoming very easy to run a simple application for free or nearly free. And with the emergence of Ansible, you don't need to learn and deploy complicated tools to do configuration management.
However, a typical limitation of free OpenStack offerings is that you don't get many public IPs to work with. This makes it a little annoying to manage your instances with a "push" configuration management tool like Ansible, because you have to run the tool from inside the private network.
Of course you could use something like Salt, where an agent running on each instance connects back to a master process running on another instance on the private network. Salt and friends (Chef, Puppet, ...), though, are much more complicated than Ansible, and I don't have a devops team or a lot of time to dedicate to a side project running on a couple of free VMs!
You could just install Ansible on one instance with a public IP, push your playbooks there, then SSH into that host to run them. I don't particularly like this option because now I have to either install all the OpenStack CLI tools on that box too, or run Ansible on the remote host but the OpenStack tools from my laptop. And any time I make a change to a playbook, I have to push it to my remote Ansible box. Ansible is getting a lot more complicated all of a sudden...
Luckily SSH has a feature called agent forwarding that solves this problem. The folks at DualSpark cobbled together the scant information on using this with Ansible and kindly wrote a blog post about it. Here I'm just going to tie that information together with a few more details for an end-to-end example of how to get up and running quickly.
From the OpenStack management UI, create a new security group and allow ICMP and SSH for it. You may also want to create a new keypair. If so, download the private key and add it to your SSH agent (using ssh-add).
nova keypair-add mykey > mykey.pem
nova secgroup-create mygroup "SSH and ICMP access"
nova secgroup-add-rule mygroup icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule mygroup tcp 22 22 0.0.0.0/0
(mykey and mygroup are placeholder names; substitute your own.)
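If you created the keypair from the CLI as above, the private key ends up in mykey.pem (again, a placeholder name). Tighten its permissions and load it into your agent:

chmod 600 mykey.pem
ssh-add mykey.pem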
Download the OpenStack RC file from the "Access and Security" section of the dashboard. This is just a convenient shell script that sets some OpenStack-related environment variables. Source the script.
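Assuming the downloaded file is named something like openrc.sh (the exact name depends on your provider and project), sourcing it looks like this; it will typically prompt for your OpenStack password:

source openrc.sh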
If you're setting this up on a free account (say DreamCompute) you probably only have a single floating IP. Considering this limitation, we're going to use one of our servers as both a web server (in this case) and an SSH bastion. If you're running a legitimate business, you'd probably create a dedicated jump box and pay for another IP.
Spin up a new instance in the security group you created earlier. To begin with, this will just be an SSH jump box, although later it will also turn into a web server (or whatever). Associate a floating IP with the instance.
nova boot --image IMAGE --flavor FLAVOR --key-name mykey --security-groups mygroup jumpbox
nova floating-ip-associate jumpbox YOURFLOATINGIP
(Pick IMAGE and FLAVOR from nova image-list and nova flavor-list; jumpbox is just a name for the instance.)
Now you need to be able to connect from this instance to other instances on the private network using SSH. Since I'm lazy I'm just going to push my OpenStack keypair up to the host.
scp yourkeypair.pem dhc-user@YOURFLOATINGIP:.ssh/id_rsa
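SSH is picky about private key permissions, so it's worth locking the copied key down on the bastion (same dhc-user and floating IP as above):

ssh dhc-user@YOURFLOATINGIP 'chmod 600 ~/.ssh/id_rsa'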
In order for agent forwarding and the SSH ProxyCommand setup Ansible will use to work properly, you'll need netcat installed on the bastion. We'll also go ahead and update everything to be sure we have any security patches, etc.
sudo yum -y update
sudo yum -y install nc
Configure Ansible to connect to private IPs through the bastion (on your floating IP) as described here. This will allow us to manage all instances with Ansible by their private IPs, including the bastion host itself.
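Here's a minimal sketch of that configuration, assuming your tenant's private subnet is 10.10.10.0/24 and that you keep a separate ssh.cfg next to your playbooks (the subnet and file names are placeholders; adjust to your environment).

ssh.cfg:

Host 10.10.10.*
    User dhc-user
    ForwardAgent yes
    # Hop through the bastion's floating IP and use netcat to reach the private IP
    ProxyCommand ssh dhc-user@YOURFLOATINGIP nc %h %p

ansible.cfg:

[ssh_connection]
# Point Ansible's SSH connections at the config above and reuse connections
ssh_args = -F ./ssh.cfg -o ControlMaster=auto -o ControlPersist=30m
control_path = ~/.ssh/ansible-%%r@%%h:%%p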
Add the private IPs of your instances to Ansible's inventory.
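For example, an inventory.ini with made-up private addresses might look like:

[bastion]
10.10.10.4

[webservers]
10.10.10.5
10.10.10.6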
You should now be able to ping all of your instances. For DreamCompute, we connect as dhc-user (similar to ec2-user):
ansible -vvvv -i inventory.ini all -m ping -u dhc-user
Create a simple playbook to run on each instance. In this case we'll just enable some extra repositories for CentOS, install Python 3 via Software Collections, and create a new user to run our application.
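A minimal sketch of such a playbook, assuming the EPEL and Software Collections release packages, the rh-python36 collection (swap in whichever Python collection your CentOS release offers), and a made-up application user named myapp:

---
- hosts: all
  become: yes
  tasks:
    - name: enable EPEL and Software Collections repositories
      yum:
        name:
          - epel-release
          - centos-release-scl
        state: present

    - name: install Python 3 from Software Collections
      yum:
        name: rh-python36
        state: present

    - name: create a user to run the application
      user:
        name: myapp
        shell: /bin/bash
        state: present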
OK now run your playbook!
ansible-playbook -i inventory.ini -u dhc-user playbook.yml