This document serves as a reference for a minimal Openstack POC deployment.
The Openstack ecosystem contains a wealth of tools and services for extending the functionality of your Openstack cloud.
In this document we will only focus on the core components of Openstack and their corresponding charms:
Openstack Service | Docs/Homepage | Charm |
---|---|---|
Keystone (Identity) | https://docs.openstack.org/keystone/latest/ | Keystone |
Percona-Cluster (database) | https://www.percona.com/software/mysql-database/percona-xtradb-cluster | Percona-Cluster |
Glance (image storage) | https://docs.openstack.org/glance/latest/ | Glance |
Cinder (block storage) | https://docs.openstack.org/cinder/latest/ | Cinder |
Neutron (network) | https://docs.openstack.org/neutron/latest/ | Neutron-API, Neutron-Gateway, Neutron-Openvswitch |
Nova (hypervisor/compute) | https://docs.openstack.org/nova/latest/ | Nova, Nova-Cloud-Controller |
Horizon (dashboard/gui) | https://docs.openstack.org/horizon/latest/ | Openstack-Dashboard |
RabbitMQ Server (messaging) | https://www.rabbitmq.com/ | RabbitMQ-Server |
Beyond these core services, the Openstack ecosystem offers many additional components, as well as pluggable storage and network backends (more on these later).
The Keystone service provides identity management and authentication for services and users in the Openstack environment.
Glance is responsible for tracking, storing, and providing access to the images available to the Openstack cloud.
Cinder provides block storage to virtual machines and containers launched in Openstack (root OS disk, attached volumes).
Neutron sits on top of an SDN stack (openvswitch by default) and gives Openstack a high-level API for advanced network operations.
Nova is the hypervisor wrapper/manager; configure it to control the compute resources of your cloud.
This is a basic Openstack deployment using one physical server. The server used for this demo need only have a single disk that the host OS runs on, and two network connections.
The following openstack-vlan-min-bundle.yaml file can be used to deploy Openstack with vlan networking. This bundle omits the neutron-gateway component because networking is handled directly from the compute node via neutron-openvswitch (no l3 networking is needed).
```yaml
# openstack-vlan-min-bundle.yaml
series: xenial
description: |
  Openstack Min Bundle
applications:
  keystone:
    charm: cs:keystone
    num_units: 1
    options:
      openstack-origin: cloud:xenial-queens
      worker-multiplier: 0.1
      admin-password: openstack
    to:
    - "lxd:0"
  mysql:
    charm: cs:percona-cluster
    num_units: 1
    options:
      source: cloud:xenial-queens
    to:
    - "lxd:0"
  glance:
    charm: cs:glance
    num_units: 1
    options:
      openstack-origin: cloud:xenial-queens
      worker-multiplier: 0.1
    to:
    - "lxd:0"
  cinder:
    charm: cs:cinder
    num_units: 1
    options:
      openstack-origin: cloud:xenial-queens
      glance-api-version: 2
      block-device: /srv/cinderfile.img|20G
      overwrite: "true"
      worker-multiplier: 0.1
    to:
    - "0"
  neutron-api:
    charm: cs:neutron-api
    num_units: 1
    options:
      openstack-origin: cloud:xenial-queens
      worker-multiplier: 0.1
      # vlan-ranges config specifies the vlans that will be available to
      # openstack and on what provider (physnet1)
      vlan-ranges: physnet1:200:205
      # Enable DVR so we can have egress routing from the compute nodes.
      # This eliminates the need for a neutron-gateway charm/networking
      # component. Notice this bundle does not contain neutron-gateway.
      enable-dvr: true
      global-physnet-mtu: 9000
      neutron-security-groups: false
    to:
    - "lxd:0"
  neutron-openvswitch:
    charm: cs:neutron-openvswitch
    num_units: 0
    options:
      # vlan-ranges, data-port, and bridge-mappings are the configuration
      # that creates the networking on the compute nodes, connecting the
      # openvswitch software networks to the physical networks in your
      # datacenter.
      #
      # We want to ensure a bridge exists on the interface connected to the
      # trunk switch port so we can pass management of vlans available on
      # the trunk through to openstack.
      data-port: br-data:enp4s0f1
      bridge-mappings: physnet1:br-data
      vlan-ranges: physnet1:200:205
      disable-security-groups: true
      # We don't have a neutron-gateway component so we need to provide
      # dhcp and metadata locally on the compute nodes
      enable-local-dhcp-and-metadata: true
  nova-cloud-controller:
    charm: cs:nova-cloud-controller
    num_units: 1
    options:
      worker-multiplier: 0.1
      network-manager: Neutron
      openstack-origin: cloud:xenial-queens
      ram-allocation-ratio: '64'
      cpu-allocation-ratio: '64'
    to:
    - "lxd:0"
  nova-compute:
    charm: cs:nova-compute
    num_units: 1
    options:
      enable-live-migration: False
      enable-resize: False
      migration-auth-type: ssh
      openstack-origin: cloud:xenial-queens
      force-raw-images: False
    to:
    - "0"
  rabbitmq-server:
    charm: cs:rabbitmq-server
    num_units: 1
    options:
      source: cloud:xenial-queens
    to:
    - "lxd:0"
relations:
- - nova-compute:amqp
  - rabbitmq-server:amqp
- - keystone:shared-db
  - mysql:shared-db
- - nova-cloud-controller:identity-service
  - keystone:identity-service
- - glance:identity-service
  - keystone:identity-service
- - neutron-api:identity-service
  - keystone:identity-service
- - neutron-openvswitch:neutron-plugin-api
  - neutron-api:neutron-plugin-api
- - neutron-api:shared-db
  - mysql:shared-db
- - neutron-api:amqp
  - rabbitmq-server:amqp
- - glance:shared-db
  - mysql:shared-db
- - glance:amqp
  - rabbitmq-server:amqp
- - nova-cloud-controller:image-service
  - glance:image-service
- - nova-compute:image-service
  - glance:image-service
- - nova-cloud-controller:cloud-compute
  - nova-compute:cloud-compute
- - nova-cloud-controller:amqp
  - rabbitmq-server:amqp
- - nova-compute:neutron-plugin
  - neutron-openvswitch:neutron-plugin
- - neutron-openvswitch:amqp
  - rabbitmq-server:amqp
- - nova-cloud-controller:shared-db
  - mysql:shared-db
- - nova-cloud-controller:neutron-api
  - neutron-api:neutron-api
#- - cinder:image-service
#  - glance:image-service
- - cinder:amqp
  - rabbitmq-server:amqp
- - cinder:identity-service
  - keystone:identity-service
- - cinder:cinder-volume-service
  - nova-cloud-controller:cinder-volume-service
- - cinder:shared-db
  - mysql:shared-db
machines:
  "0":
    series: xenial
    constraints: "tags=openstack-demo"
```
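The ram-allocation-ratio and cpu-allocation-ratio options above deserve a note: a value of 64 tells the Nova scheduler to overcommit physical resources 64x, which is reasonable only for a throwaway POC on a single box. A quick sketch of what that means (the host sizes here are hypothetical, not taken from this deployment):

```shell
# Hypothetical compute host: 16 physical cores, 64 GB of RAM.
CORES=16
RAM_GB=64
RATIO=64   # matches cpu-allocation-ratio / ram-allocation-ratio in the bundle
SCHEDULABLE_VCPUS=$((CORES * RATIO))
SCHEDULABLE_RAM_GB=$((RAM_GB * RATIO))
echo "vCPUs the scheduler will place: ${SCHEDULABLE_VCPUS}"      # 1024
echo "RAM the scheduler will place:   ${SCHEDULABLE_RAM_GB} GB"  # 4096
```

For a production cloud you would size these ratios to your real workload (commonly 1.0 for RAM and a low single digit for CPU).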
This bundle deploys all of the Openstack services to containers on a single host, and deploys the Nova service to the host itself. The neutron-gateway charm is not deployed because it is not needed: in this scenario we only need vlan networking on the compute nodes themselves, which doesn't require an SDN network service.
Once you have the bundle file in place, the process looks like this:
- Create a model in the dcmaas Juju controller to deploy this test in.
$ juju switch dcmaas
$ juju add-model my-openstack-test
- Deploy the bundle from above (notice the constraints tag that targets the server in MAAS with the openstack-demo tag).
$ juju deploy openstack-vlan-min-bundle.yaml
Resolving charm: cs:cinder
Resolving charm: cs:glance
Resolving charm: cs:keystone
Resolving charm: cs:percona-cluster
Resolving charm: cs:neutron-api
Resolving charm: cs:neutron-openvswitch
Resolving charm: cs:nova-cloud-controller
Resolving charm: cs:nova-compute
Resolving charm: cs:rabbitmq-server
Executing changes:
- upload charm cs:cinder-270 for series xenial
- deploy application cinder on xenial using cs:cinder-270
- upload charm cs:glance-264 for series xenial
- deploy application glance on xenial using cs:glance-264
- upload charm cs:keystone-278 for series xenial
- deploy application keystone on xenial using cs:keystone-278
- upload charm cs:percona-cluster-261 for series xenial
- deploy application mysql on xenial using cs:percona-cluster-261
- upload charm cs:neutron-api-258 for series xenial
- deploy application neutron-api on xenial using cs:neutron-api-258
- upload charm cs:neutron-openvswitch-249 for series xenial
- deploy application neutron-openvswitch on xenial using cs:neutron-openvswitch-249
- upload charm cs:nova-cloud-controller-307 for series xenial
- deploy application nova-cloud-controller on xenial using cs:nova-cloud-controller-307
- upload charm cs:nova-compute-282 for series xenial
- deploy application nova-compute on xenial using cs:nova-compute-282
- upload charm cs:rabbitmq-server-73 for series xenial
- deploy application rabbitmq-server on xenial using cs:rabbitmq-server-73
- add new machine 0
- add relation nova-compute:amqp - rabbitmq-server:amqp
- add relation keystone:shared-db - mysql:shared-db
- add relation nova-cloud-controller:identity-service - keystone:identity-service
- add relation glance:identity-service - keystone:identity-service
- add relation neutron-api:identity-service - keystone:identity-service
- add relation neutron-openvswitch:neutron-plugin-api - neutron-api:neutron-plugin-api
- add relation neutron-api:shared-db - mysql:shared-db
- add relation neutron-api:amqp - rabbitmq-server:amqp
- add relation glance:shared-db - mysql:shared-db
- add relation glance:amqp - rabbitmq-server:amqp
- add relation nova-cloud-controller:image-service - glance:image-service
- add relation nova-compute:image-service - glance:image-service
- add relation nova-cloud-controller:cloud-compute - nova-compute:cloud-compute
- add relation nova-cloud-controller:amqp - rabbitmq-server:amqp
- add relation nova-compute:neutron-plugin - neutron-openvswitch:neutron-plugin
- add relation neutron-openvswitch:amqp - rabbitmq-server:amqp
- add relation nova-cloud-controller:shared-db - mysql:shared-db
- add relation nova-cloud-controller:neutron-api - neutron-api:neutron-api
- add relation cinder:image-service - glance:image-service
- add relation cinder:amqp - rabbitmq-server:amqp
- add relation cinder:identity-service - keystone:identity-service
- add relation cinder:cinder-volume-service - nova-cloud-controller:cinder-volume-service
- add relation cinder:shared-db - mysql:shared-db
- add unit cinder/0 to new machine 0
- add unit nova-compute/0 to new machine 0
- add lxd container 0/lxd/0 on new machine 0
- add lxd container 0/lxd/1 on new machine 0
- add lxd container 0/lxd/2 on new machine 0
- add lxd container 0/lxd/3 on new machine 0
- add lxd container 0/lxd/4 on new machine 0
- add lxd container 0/lxd/5 on new machine 0
- add lxd container 0/lxd/6 on new machine 0
- add unit glance/0 to 0/lxd/0
- add unit keystone/0 to 0/lxd/1
- add unit mysql/0 to 0/lxd/2
- add unit neutron-api/0 to 0/lxd/3
- add unit nova-cloud-controller/0 to 0/lxd/4
- add unit rabbitmq-server/0 to 0/lxd/5
Deploy of bundle completed.
- Watch the Juju status until the server has powered on and all of the Openstack components have completed deployment and configuration.
$ watch -n 1 -c juju status --color
After some time, juju status
will show a completed and settled deployment.
$ juju status
Model Controller Cloud/Region Version SLA
my-openstack-test dcmaas dcmaas 2.3.5 unsupported
App Version Status Scale Charm Store Rev OS Notes
cinder 12.0.0 active 1 cinder jujucharms 270 ubuntu
glance 16.0.0 active 1 glance jujucharms 264 ubuntu
keystone 13.0.0 active 1 keystone jujucharms 278 ubuntu
mysql 5.6.37-26.21 active 1 percona-cluster jujucharms 261 ubuntu
neutron-api 12.0.0 active 1 neutron-api jujucharms 258 ubuntu
neutron-openvswitch 12.0.0 active 1 neutron-openvswitch jujucharms 249 ubuntu
nova-cloud-controller 17.0.1 active 1 nova-cloud-controller jujucharms 307 ubuntu
nova-compute 17.0.1 active 1 nova-compute jujucharms 282 ubuntu
rabbitmq-server 3.6.10 active 1 rabbitmq-server jujucharms 73 ubuntu
Unit Workload Agent Machine Public address Ports Message
cinder/0* active idle 0 10.10.54.155 8776/tcp Unit is ready
glance/0* active idle 0/lxd/0 10.10.54.156 9292/tcp Unit is ready
keystone/0* active idle 0/lxd/1 10.10.54.161 5000/tcp Unit is ready
mysql/0* active idle 0/lxd/2 10.10.54.158 3306/tcp Unit is ready
neutron-api/0* active idle 0/lxd/3 10.10.54.159 9696/tcp Unit is ready
nova-cloud-controller/0* active idle 0/lxd/4 10.10.54.157 8774/tcp,8778/tcp Unit is ready
nova-compute/0* active idle 0 10.10.54.155 Unit is ready
neutron-openvswitch/0* active idle 10.10.54.155 Unit is ready
rabbitmq-server/0* active idle 0/lxd/5 10.10.54.162 5672/tcp Unit is ready
Machine State DNS Inst id Series AZ Message
0 started 10.10.54.155 64n6wm xenial openstack-a Deployed
0/lxd/0 started 10.10.54.156 juju-4fe0eb-0-lxd-0 xenial openstack-a Container started
0/lxd/1 started 10.10.54.161 juju-4fe0eb-0-lxd-1 xenial openstack-a Container started
0/lxd/2 started 10.10.54.158 juju-4fe0eb-0-lxd-2 xenial openstack-a Container started
0/lxd/3 started 10.10.54.159 juju-4fe0eb-0-lxd-3 xenial openstack-a Container started
0/lxd/4 started 10.10.54.157 juju-4fe0eb-0-lxd-4 xenial openstack-a Container started
0/lxd/5 started 10.10.54.162 juju-4fe0eb-0-lxd-5 xenial openstack-a Container started
Relation provider Requirer Interface Type Message
cinder:cinder-volume-service nova-cloud-controller:cinder-volume-service cinder regular
cinder:cluster cinder:cluster cinder-ha peer
glance:cluster glance:cluster glance-ha peer
glance:image-service cinder:image-service glance regular
glance:image-service nova-cloud-controller:image-service glance regular
glance:image-service nova-compute:image-service glance regular
keystone:cluster keystone:cluster keystone-ha peer
keystone:identity-service cinder:identity-service keystone regular
keystone:identity-service glance:identity-service keystone regular
keystone:identity-service neutron-api:identity-service keystone regular
keystone:identity-service nova-cloud-controller:identity-service keystone regular
mysql:cluster mysql:cluster percona-cluster peer
mysql:shared-db cinder:shared-db mysql-shared regular
mysql:shared-db glance:shared-db mysql-shared regular
mysql:shared-db keystone:shared-db mysql-shared regular
mysql:shared-db neutron-api:shared-db mysql-shared regular
mysql:shared-db nova-cloud-controller:shared-db mysql-shared regular
neutron-api:cluster neutron-api:cluster neutron-api-ha peer
neutron-api:neutron-api nova-cloud-controller:neutron-api neutron-api regular
neutron-api:neutron-plugin-api neutron-openvswitch:neutron-plugin-api neutron-plugin-api regular
neutron-openvswitch:neutron-plugin nova-compute:neutron-plugin neutron-plugin subordinate
nova-cloud-controller:cluster nova-cloud-controller:cluster nova-ha peer
nova-compute:cloud-compute nova-cloud-controller:cloud-compute nova-compute regular
nova-compute:compute-peer nova-compute:compute-peer nova peer
rabbitmq-server:amqp cinder:amqp rabbitmq regular
rabbitmq-server:amqp glance:amqp rabbitmq regular
rabbitmq-server:amqp neutron-api:amqp rabbitmq regular
rabbitmq-server:amqp neutron-openvswitch:amqp rabbitmq regular
rabbitmq-server:amqp nova-cloud-controller:amqp rabbitmq regular
rabbitmq-server:amqp nova-compute:amqp rabbitmq regular
rabbitmq-server:cluster rabbitmq-server:cluster rabbitmq-ha peer
At this point your minimal Openstack cloud should be deployed, and ready for you to start interacting with it.
Once your Openstack cloud has been successfully deployed, source the following file to export your Openstack access credentials into your shell environment.
```bash
#!/bin/bash
# Openstack creds to shell env
# openrc

# Clear any previous OS_* environment variables
_OS_PARAMS=$(env | awk 'BEGIN {FS="="} /^OS_/ {print $1;}' | paste -sd ' ')
for param in $_OS_PARAMS; do
    unset $param
done
unset _OS_PARAMS

# Get information about deployment from Juju
juju_status=$(juju status keystone)
KEYSTONE_UNIT=$(
    echo "$juju_status" | grep -i workload -A1 | tail -n1 | awk '{print $1}' \
        | tr -d '*')
KEYSTONE_IP=$(juju run --unit ${KEYSTONE_UNIT} 'unit-get private-address')
KEYSTONE_MAJOR_VERSION=$(
    echo "$juju_status" | grep -i version -A1 | tail -n1 | awk '{print $2}' \
        | cut -f1 -d\.
)
KEYSTONE_PREFERRED_API_VERSION=$(juju config keystone preferred-api-version)

# Keystone API v2.0 was removed in Keystone version 13,
# shipped with OpenStack Queens
if [ $KEYSTONE_MAJOR_VERSION -ge 13 -o \
     "$KEYSTONE_PREFERRED_API_VERSION" = '3' ];
then
    echo Using Keystone v3 API
    export OS_AUTH_URL=${OS_AUTH_PROTOCOL:-http}://${KEYSTONE_IP}:5000/v3
    export OS_USERNAME=admin
    export OS_PASSWORD=openstack
    export OS_DOMAIN_NAME=admin_domain
    export OS_USER_DOMAIN_NAME=admin_domain
    export OS_PROJECT_DOMAIN_NAME=admin_domain
    export OS_PROJECT_NAME=admin
    export OS_REGION_NAME=RegionOne
    export OS_IDENTITY_API_VERSION=3
    # Swift needs this:
    export OS_AUTH_VERSION=3
else
    echo Using Keystone v2.0 API
    export OS_USERNAME=admin
    export OS_PASSWORD=openstack
    export OS_TENANT_NAME=admin
    export OS_REGION_NAME=RegionOne
    export OS_AUTH_URL=${OS_AUTH_PROTOCOL:-http}://${KEYSTONE_IP}:5000/v2.0
fi
```
$ source openrc
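The unit-name and version extraction in the openrc script can look opaque; it is just slicing columns out of the tabular `juju status` output. As a sketch, the same pipelines run here against captured (hypothetical) slices of that output:

```shell
# Captured slices of 'juju status keystone' output (values are illustrative)
status_units='Unit         Workload  Agent  Machine  Public address  Ports     Message
keystone/0*  active    idle   0/lxd/1  10.10.54.161    5000/tcp  Unit is ready'
status_apps='App       Version  Status  Scale  Charm     Store       Rev  OS
keystone  13.0.0   active  1      keystone  jujucharms  278  ubuntu'

# Unit-name extraction: grab the line after the "Workload" header,
# take the first column, and strip the leader marker '*'
KEYSTONE_UNIT=$(echo "$status_units" | grep -i workload -A1 | tail -n1 \
    | awk '{print $1}' | tr -d '*')

# Major-version extraction: line after the "Version" header, second
# column, then everything before the first dot
KEYSTONE_MAJOR_VERSION=$(echo "$status_apps" | grep -i version -A1 | tail -n1 \
    | awk '{print $2}' | cut -f1 -d.)

echo "$KEYSTONE_UNIT $KEYSTONE_MAJOR_VERSION"   # keystone/0 13
```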
You should now be able to start interacting with Openstack via the CLI; let's perform a few simple operations to initialize our cloud for use.
- Upload Image
Grab an Ubuntu cloud server image and upload it to the glance image store.
$ wget https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
$ openstack image create --disk-format qcow2 --container-format bare \
--public --file ./xenial-server-cloudimg-amd64-disk1.img xenial
- Create a VLAN Network (the commented block below is the equivalent using the newer unified openstack CLI; the neutron client commands that follow do the same thing).
#openstack network create --share \
# --provider-physical-network physnet1 \
# --provider-network-type vlan \
# --provider-segment 200 vlan200_net
neutron net-create vlan200_net --shared \
--provider:physical_network physnet1 \
--provider:network_type vlan \
--provider:segmentation_id 200
neutron subnet-create vlan200_net 192.168.200.0/24 \
--name vlan200_net_subnet --gateway 192.168.200.1 \
--allocation-pool start=192.168.200.100,end=192.168.200.190 \
--host-route destination=10.10.0.0/16,nexthop=192.168.200.1 \
--dns-nameserver 192.168.200.1
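A quick sanity check on the allocation pool above: the .100-.190 range leaves 91 addresses for Neutron to hand out, while the rest of the /24 (including the .1 gateway) stays outside Neutron's control for static use. The arithmetic, as a sketch:

```shell
# Allocation pool bounds from the subnet-create command above
POOL_START=100
POOL_END=190
POOL_SIZE=$((POOL_END - POOL_START + 1))
echo "addresses available for instances: ${POOL_SIZE}"   # 91
```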
- Add SSH Key
$ openstack keypair create --public-key ~/.ssh/id_rsa.pub os-test-key
- Create Flavor
$ openstack flavor create --ram 2048 --disk 5 --vcpus 2 --public o2.small
- Deploy Instance
$ openstack server create --image xenial --flavor o2.small \
--key-name os-test-key --network vlan200_net os-test-server --wait
+-------------------------------------+----------------------------------------------------------+
| Field | Value |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | os-util-00 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | os-util-00.maas |
| OS-EXT-SRV-ATTR:instance_name | instance-00000001 |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2018-04-18T14:45:04.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | vlan200_net=192.168.200.109 |
| adminPass | rWCuE3fDUXMZ |
| config_drive | |
| created | 2018-04-18T14:44:52Z |
| flavor | o2.small (da4dcaa4-63b7-4ce3-8674-31f8210ff200) |
| hostId | b986f11c04e5a4b8de882a877efee8f5b2b8aebba72adb36d83954d0 |
| id | 32623247-22f3-4089-a321-acf3ae61b102 |
| image | xenial (b5749a10-bc83-4a95-a645-f54e57cbc05d) |
| key_name | os-test-key |
| name | os-test-server |
| progress | 0 |
| project_id | bf7ee551a04f47e495a1a20b1b7471e6 |
| properties | |
| security_groups | name='default' |
| status | ACTIVE |
| updated | 2018-04-18T14:45:05Z |
| user_id | fe48f382adf544e7b1f398d3fdf0c92e |
| volumes_attached | |
+-------------------------------------+----------------------------------------------------------+
Notice the "addresses" row above. You can now ssh into your openstack instance at its IP address (e.g. ssh ubuntu@192.168.200.109, using the os-test-key uploaded earlier).