This document serves as a reference for a minimal OpenStack proof-of-concept (POC) deployment with VLAN plus external provider networking. The deployment uses distributed virtual routing (DVR), so routing between the VLAN and the external provider network takes place locally on each compute node.
# openstack-vlan-external-min-bundle.yaml
series: xenial
description: |
Openstack VLAN + External Min Bundle
applications:
keystone:
charm: cs:keystone
num_units: 1
options:
openstack-origin: cloud:xenial-queens
worker-multiplier: 0.1
admin-password: openstack
to:
- "lxd:0"
mysql:
charm: cs:percona-cluster
num_units: 1
options:
source: cloud:xenial-queens
to:
- "lxd:0"
glance:
charm: cs:glance
num_units: 1
options:
openstack-origin: cloud:xenial-queens
worker-multiplier: 0.1
to:
- "lxd:0"
cinder:
charm: cs:cinder
num_units: 1
options:
openstack-origin: cloud:xenial-queens
glance-api-version: 2
block-device: /srv/cinderfile.img|20G
overwrite: "true"
worker-multiplier: 0.1
to:
- "0"
neutron-api:
charm: cs:neutron-api
num_units: 1
options:
openstack-origin: cloud:xenial-queens
worker-multiplier: 0.1
# vlan-ranges config specifies the vlans that will be available to openstack
# and on what provider (physnet1)
vlan-ranges: physnet1:200:205 physnet2
# Enable DVR so we can have egress routing from the compute nodes.
# This eliminates the need for a neutron-gateway charm/networking component.
# Notice this bundle does not contain neutron-gateway.
enable-dvr: true
global-physnet-mtu: 9000
neutron-security-groups: false
flat-network-providers: physnet2
to:
- "lxd:0"
neutron-gateway:
charm: cs:neutron-gateway
num_units: 1
options:
openstack-origin: cloud:xenial-queens
# vlan-ranges, data-port, bridge-mappings
# are the configuration that create the networking
# on the network nodes that connect the openvswitch software networks to
# the physical networks in your datacenter.
#
# We want to ensure a bridge exists on the interface connected to the trunk switch port
# and on the vlan94 access mode nic
# so we can pass management of vlans available on the trunk through to openstack.
#
data-port: br-data:enp4s0f1 br-ex:enp2s0
bridge-mappings: physnet1:br-data physnet2:br-ex
vlan-ranges: physnet1:200:205 physnet2
flat-network-providers: physnet2
to:
- "1"
neutron-openvswitch:
charm: cs:neutron-openvswitch
num_units: 0
options:
# vlan-ranges, data-port, bridge-mappings
# are the configuration that create the networking
# on the compute nodes that connect the openvswitch software networks to
# the physical networks in your datacenter.
#
# We want to ensure a bridge exists on the interface connected to the trunk switch port
# so we can pass management of vlans available on the trunk through to openstack.
#
data-port: br-data:enp4s0f1 br-ex:enp2s0
bridge-mappings: physnet1:br-data physnet2:br-ex
vlan-ranges: physnet1:200:205 physnet2
disable-security-groups: true
# provide dhcp and metadata locally on the compute nodes
enable-local-dhcp-and-metadata: true
flat-network-providers: physnet2
nova-cloud-controller:
charm: cs:nova-cloud-controller
num_units: 1
options:
worker-multiplier: 0.1
network-manager: Neutron
openstack-origin: cloud:xenial-queens
ram-allocation-ratio: '64'
cpu-allocation-ratio: '64'
to:
- "lxd:0"
nova-compute:
charm: cs:nova-compute
num_units: 1
options:
enable-live-migration: False
enable-resize: False
migration-auth-type: ssh
openstack-origin: cloud:xenial-queens
force-raw-images: False
to:
- "0"
openstack-dashboard:
charm: cs:openstack-dashboard
num_units: 1
options:
openstack-origin: cloud:xenial-queens
to:
- "lxd:0"
rabbitmq-server:
charm: cs:rabbitmq-server
num_units: 1
options:
source: cloud:xenial-queens
to:
- "lxd:0"
relations:
- - keystone:identity-service
- openstack-dashboard:identity-service
- - nova-compute:amqp
- rabbitmq-server:amqp
- - keystone:shared-db
- mysql:shared-db
- - nova-cloud-controller:identity-service
- keystone:identity-service
- - glance:identity-service
- keystone:identity-service
- - neutron-api:identity-service
- keystone:identity-service
- - neutron-openvswitch:neutron-plugin-api
- neutron-api:neutron-plugin-api
- - neutron-api:shared-db
- mysql:shared-db
- - neutron-api:amqp
- rabbitmq-server:amqp
- - glance:shared-db
- mysql:shared-db
- - glance:amqp
- rabbitmq-server:amqp
- - nova-cloud-controller:image-service
- glance:image-service
- - nova-compute:image-service
- glance:image-service
- - nova-cloud-controller:cloud-compute
- nova-compute:cloud-compute
- - nova-cloud-controller:amqp
- rabbitmq-server:amqp
- - nova-compute:neutron-plugin
- neutron-openvswitch:neutron-plugin
- - neutron-openvswitch:amqp
- rabbitmq-server:amqp
- - nova-cloud-controller:shared-db
- mysql:shared-db
- - nova-cloud-controller:neutron-api
- neutron-api:neutron-api
#- - cinder:image-service
# - glance:image-service
- - cinder:amqp
- rabbitmq-server:amqp
- - cinder:identity-service
- keystone:identity-service
- - cinder:cinder-volume-service
- nova-cloud-controller:cinder-volume-service
- - cinder:shared-db
- mysql:shared-db
- - neutron-gateway:amqp
- rabbitmq-server:amqp
- - neutron-gateway:neutron-plugin-api
- neutron-api:neutron-plugin-api
- - nova-cloud-controller:quantum-network-service
- neutron-gateway:quantum-network-service
machines:
0:
series: xenial
constraints: "tags=openstack-demo"
1:
series: xenial
constraints: "tags=openstack-demo"
This bundle deploys the OpenStack services to LXD containers, with three exceptions: nova-compute and cinder run directly on machine 0, and neutron-gateway runs directly on machine 1.
Once you have the bundle file in place, the process looks like this:
- Create a model in the dcmaas Juju controller to deploy this test in.
$ juju switch dcmaas
$ juju add-model my-openstack-test
- Deploy the bundle from above (notice the constraints tag that targets the servers in MAAS with the openstack-demo tag).
$ juju deploy openstack-vlan-external-min-bundle.yaml
Resolving charm: cs:cinder
Resolving charm: cs:glance
Resolving charm: cs:keystone
Resolving charm: cs:percona-cluster
Resolving charm: cs:neutron-api
Resolving charm: cs:neutron-gateway
Resolving charm: cs:neutron-openvswitch
Resolving charm: cs:nova-cloud-controller
Resolving charm: cs:nova-compute
Resolving charm: cs:rabbitmq-server
Executing changes:
- upload charm cs:cinder-271 for series xenial
- deploy application cinder on xenial using cs:cinder-271
- upload charm cs:glance-264 for series xenial
- deploy application glance on xenial using cs:glance-264
- upload charm cs:keystone-278 for series xenial
- deploy application keystone on xenial using cs:keystone-278
- upload charm cs:percona-cluster-264 for series xenial
- deploy application mysql on xenial using cs:percona-cluster-264
- upload charm cs:neutron-api-259 for series xenial
- deploy application neutron-api on xenial using cs:neutron-api-259
- upload charm cs:neutron-gateway-249 for series xenial
- deploy application neutron-gateway on xenial using cs:neutron-gateway-249
- upload charm cs:neutron-openvswitch-249 for series xenial
- deploy application neutron-openvswitch on xenial using cs:neutron-openvswitch-249
- upload charm cs:nova-cloud-controller-309 for series xenial
- deploy application nova-cloud-controller on xenial using cs:nova-cloud-controller-309
- upload charm cs:nova-compute-282 for series xenial
- deploy application nova-compute on xenial using cs:nova-compute-282
- upload charm cs:rabbitmq-server-73 for series xenial
- deploy application rabbitmq-server on xenial using cs:rabbitmq-server-73
- add new machine 0
- add new machine 1
- add relation nova-compute:amqp - rabbitmq-server:amqp
- add relation keystone:shared-db - mysql:shared-db
- add relation nova-cloud-controller:identity-service - keystone:identity-service
- add relation glance:identity-service - keystone:identity-service
- add relation neutron-api:identity-service - keystone:identity-service
- add relation neutron-openvswitch:neutron-plugin-api - neutron-api:neutron-plugin-api
- add relation neutron-api:shared-db - mysql:shared-db
- add relation neutron-api:amqp - rabbitmq-server:amqp
- add relation glance:shared-db - mysql:shared-db
- add relation glance:amqp - rabbitmq-server:amqp
- add relation nova-cloud-controller:image-service - glance:image-service
- add relation nova-compute:image-service - glance:image-service
- add relation nova-cloud-controller:cloud-compute - nova-compute:cloud-compute
- add relation nova-cloud-controller:amqp - rabbitmq-server:amqp
- add relation nova-compute:neutron-plugin - neutron-openvswitch:neutron-plugin
- add relation neutron-openvswitch:amqp - rabbitmq-server:amqp
- add relation nova-cloud-controller:shared-db - mysql:shared-db
- add relation nova-cloud-controller:neutron-api - neutron-api:neutron-api
- add relation cinder:amqp - rabbitmq-server:amqp
- add relation cinder:identity-service - keystone:identity-service
- add relation cinder:cinder-volume-service - nova-cloud-controller:cinder-volume-service
- add relation cinder:shared-db - mysql:shared-db
- add relation neutron-gateway:amqp - rabbitmq-server:amqp
- add relation neutron-gateway:neutron-plugin-api - neutron-api:neutron-plugin-api
- add relation nova-cloud-controller:quantum-network-service - neutron-gateway:quantum-network-service
- add unit cinder/0 to new machine 0
- add unit neutron-gateway/0 to new machine 1
- add unit nova-compute/0 to new machine 0
- add lxd container 0/lxd/0 on new machine 0
- add lxd container 0/lxd/1 on new machine 0
- add lxd container 0/lxd/2 on new machine 0
- add lxd container 0/lxd/3 on new machine 0
- add lxd container 0/lxd/4 on new machine 0
- add lxd container 0/lxd/5 on new machine 0
- add unit glance/0 to 0/lxd/0
- add unit keystone/0 to 0/lxd/1
- add unit mysql/0 to 0/lxd/2
- add unit neutron-api/0 to 0/lxd/3
- add unit nova-cloud-controller/0 to 0/lxd/4
- add unit rabbitmq-server/0 to 0/lxd/5
Deploy of bundle completed.
- Watch the Juju status until the servers have powered on and the OpenStack components have completed deployment and configuration.
$ watch -n 1 -c juju status --color
After some time, juju status will show a completed and settled deployment:
$ juju status
Model              Controller  Cloud/Region  Version  SLA
my-openstack-test  dcmaas      dcmaas        2.3.7    unsupported
App Version Status Scale Charm Store Rev OS Notes
cinder 12.0.0 active 1 cinder jujucharms 271 ubuntu
glance 16.0.0 active 1 glance jujucharms 264 ubuntu
keystone 13.0.0 active 1 keystone jujucharms 278 ubuntu
mysql 5.6.37-26.21 active 1 percona-cluster jujucharms 264 ubuntu
neutron-api 12.0.1 active 1 neutron-api jujucharms 259 ubuntu
neutron-gateway 12.0.1 active 1 neutron-gateway jujucharms 249 ubuntu
neutron-openvswitch 12.0.1 active 1 neutron-openvswitch jujucharms 249 ubuntu
nova-cloud-controller 17.0.1 active 1 nova-cloud-controller jujucharms 309 ubuntu
nova-compute 17.0.1 active 1 nova-compute jujucharms 282 ubuntu
openstack-dashboard 13.0.0 active 1 openstack-dashboard jujucharms 258 ubuntu
rabbitmq-server 3.6.10 active 1 rabbitmq-server jujucharms 73 ubuntu
Unit Workload Agent Machine Public address Ports Message
cinder/2* active idle 2 10.10.55.53 8776/tcp Unit is ready
glance/2* active idle 2/lxd/0 10.10.55.56 9292/tcp Unit is ready
keystone/2* active idle 2/lxd/1 10.10.55.59 5000/tcp Unit is ready
mysql/2* active idle 2/lxd/2 10.10.55.58 3306/tcp Unit is ready
neutron-api/2* active idle 2/lxd/3 10.10.55.60 9696/tcp Unit is ready
neutron-gateway/0* active idle 3 10.10.55.54 Unit is ready
nova-cloud-controller/2* active idle 2/lxd/4 10.10.55.55 8774/tcp,8778/tcp Unit is ready
nova-compute/2* active idle 2 10.10.55.53 Unit is ready
neutron-openvswitch/2* active idle 10.10.55.53 Unit is ready
openstack-dashboard/2* active idle 2/lxd/6 10.10.55.64 80/tcp,443/tcp Unit is ready
rabbitmq-server/2* active idle 2/lxd/5 10.10.55.57 5672/tcp Unit is ready
Machine State DNS Inst id Series AZ Message
2 started 10.10.55.53 64n6wm xenial openstack-a Deployed
2/lxd/0 started 10.10.55.56 juju-4043c6-2-lxd-0 xenial openstack-a Container started
2/lxd/1 started 10.10.55.59 juju-4043c6-2-lxd-1 xenial openstack-a Container started
2/lxd/2 started 10.10.55.58 juju-4043c6-2-lxd-2 xenial openstack-a Container started
2/lxd/3 started 10.10.55.60 juju-4043c6-2-lxd-3 xenial openstack-a Container started
2/lxd/4 started 10.10.55.55 juju-4043c6-2-lxd-4 xenial openstack-a Container started
2/lxd/5 started 10.10.55.57 juju-4043c6-2-lxd-5 xenial openstack-a Container started
2/lxd/6 started 10.10.55.64 juju-4043c6-2-lxd-6 xenial openstack-a Container started
3 started 10.10.55.54 wscepf xenial openstack-b Deployed
Relation provider Requirer Interface Type Message
cinder:cinder-volume-service nova-cloud-controller:cinder-volume-service cinder regular
cinder:cluster cinder:cluster cinder-ha peer
glance:cluster glance:cluster glance-ha peer
glance:image-service nova-cloud-controller:image-service glance regular
glance:image-service nova-compute:image-service glance regular
keystone:cluster keystone:cluster keystone-ha peer
keystone:identity-service cinder:identity-service keystone regular
keystone:identity-service glance:identity-service keystone regular
keystone:identity-service neutron-api:identity-service keystone regular
keystone:identity-service nova-cloud-controller:identity-service keystone regular
keystone:identity-service openstack-dashboard:identity-service keystone regular
mysql:cluster mysql:cluster percona-cluster peer
mysql:shared-db cinder:shared-db mysql-shared regular
mysql:shared-db glance:shared-db mysql-shared regular
mysql:shared-db keystone:shared-db mysql-shared regular
mysql:shared-db neutron-api:shared-db mysql-shared regular
mysql:shared-db nova-cloud-controller:shared-db mysql-shared regular
neutron-api:cluster neutron-api:cluster neutron-api-ha peer
neutron-api:neutron-api nova-cloud-controller:neutron-api neutron-api regular
neutron-api:neutron-plugin-api neutron-gateway:neutron-plugin-api neutron-plugin-api regular
neutron-api:neutron-plugin-api neutron-openvswitch:neutron-plugin-api neutron-plugin-api regular
neutron-gateway:cluster neutron-gateway:cluster quantum-gateway-ha peer
neutron-gateway:quantum-network-service nova-cloud-controller:quantum-network-service quantum regular
neutron-openvswitch:neutron-plugin nova-compute:neutron-plugin neutron-plugin subordinate
nova-cloud-controller:cluster nova-cloud-controller:cluster nova-ha peer
nova-compute:cloud-compute nova-cloud-controller:cloud-compute nova-compute regular
nova-compute:compute-peer nova-compute:compute-peer nova peer
openstack-dashboard:cluster openstack-dashboard:cluster openstack-dashboard-ha peer
rabbitmq-server:amqp cinder:amqp rabbitmq regular
rabbitmq-server:amqp glance:amqp rabbitmq regular
rabbitmq-server:amqp neutron-api:amqp rabbitmq regular
rabbitmq-server:amqp neutron-gateway:amqp rabbitmq regular
rabbitmq-server:amqp neutron-openvswitch:amqp rabbitmq regular
rabbitmq-server:amqp nova-cloud-controller:amqp rabbitmq regular
rabbitmq-server:amqp nova-compute:amqp rabbitmq regular
rabbitmq-server:cluster rabbitmq-server:cluster rabbitmq-ha peer
At this point your minimal OpenStack cloud with VLAN + external networking should be deployed and ready for you to start interacting with it.
Once your OpenStack cloud has been successfully deployed, source the following file to export your OpenStack access credentials into your shell environment.
#!/bin/bash
# Openstack creds to shell env
# openrc
# Clear any previous OS_* environment variables
_OS_PARAMS=$(env | awk 'BEGIN {FS="="} /^OS_/ {print $1;}' | paste -sd ' ')
for param in $_OS_PARAMS; do
unset $param
done
unset _OS_PARAMS
# Get information about deployment from Juju
juju_status=$(juju status keystone)
KEYSTONE_UNIT=$(
echo "$juju_status"|grep -i workload -A1|tail -n1|awk '{print $1}' \
|tr -d '*')
KEYSTONE_IP=$(juju run --unit ${KEYSTONE_UNIT} 'unit-get private-address')
KEYSTONE_MAJOR_VERSION=$(
echo "$juju_status"|grep -i version -A1|tail -n1|awk '{print $2}' \
|cut -f1 -d\.
)
KEYSTONE_PREFERRED_API_VERSION=$(juju config keystone preferred-api-version)
# Keystone API v2.0 was removed in Keystone version 13
# shipped with OpenStack Queens
if [ $KEYSTONE_MAJOR_VERSION -ge 13 -o \
"$KEYSTONE_PREFERRED_API_VERSION" = '3' ];
then
echo Using Keystone v3 API
export OS_AUTH_URL=${OS_AUTH_PROTOCOL:-http}://${KEYSTONE_IP}:5000/v3
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_DOMAIN_NAME=admin_domain
export OS_USER_DOMAIN_NAME=admin_domain
export OS_PROJECT_DOMAIN_NAME=admin_domain
export OS_PROJECT_NAME=admin
export OS_REGION_NAME=RegionOne
export OS_IDENTITY_API_VERSION=3
# Swift needs this:
export OS_AUTH_VERSION=3
else
echo Using Keystone v2.0 API
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_TENANT_NAME=admin
export OS_REGION_NAME=RegionOne
export OS_AUTH_URL=${OS_AUTH_PROTOCOL:-http}://${KEYSTONE_IP}:5000/v2.0
fi
source openrc
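Before running any CLI commands, a quick sanity check (an illustrative sketch, not part of the original openrc; the variable list is an assumption based on the v3 branch above) can confirm the expected OS_* variables made it into the environment:

```shell
#!/bin/bash
# Sketch: verify the core OS_* variables are set after sourcing openrc.
check_openrc() {
  local var missing=0
  for var in OS_AUTH_URL OS_USERNAME OS_PASSWORD OS_REGION_NAME; do
    if [ -z "${!var}" ]; then          # bash indirect expansion
      echo "missing: $var"
      missing=1
    fi
  done
  [ "$missing" -eq 0 ] && echo "openrc OK"
}

# check_openrc   # prints "openrc OK" when all variables are present
```

If the check passes, openstack token issue is a good end-to-end test of the credentials themselves.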
You should now be able to start interacting with OpenStack via the CLI; let's perform a few simple operations to initialize our cloud for use.
Grab an Ubuntu cloud server image and upload it to the Glance image store.
$ wget https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img
$ openstack image create --disk-format qcow2 --container-format bare \
--public --file ./bionic-server-cloudimg-amd64.img bionic
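To confirm the upload from a script rather than by eyeballing the table, the CLI's "-f value" output is easy to parse. The helper below is an illustrative sketch; it expects "name status" pairs on stdin, as produced by openstack image list:

```shell
#!/bin/bash
# Sketch: succeed only if the named image exists and is active.
# Reads "name status" pairs, e.g. from:
#   openstack image list -f value -c Name -c Status
image_is_active() {
  awk -v img="$1" '$1 == img && $2 == "active" { found = 1 } END { exit !found }'
}

# Usage (assumes openrc has been sourced):
# openstack image list -f value -c Name -c Status \
#   | image_is_active bionic && echo "bionic image ready"
```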
#openstack network create --share \
# --provider-physical-network physnet1 \
# --provider-network-type vlan \
# --provider-segment 200 vlan200_net
neutron net-create vlan200_net --shared \
--provider:physical_network physnet1 \
--provider:network_type vlan \
--provider:segmentation_id 200
neutron subnet-create vlan200_net 192.168.200.0/24 \
--name vlan200_net_subnet --gateway 192.168.200.99 \
--allocation-pool start=192.168.200.100,end=192.168.200.190 \
--host-route destination=10.10.0.0/16,nexthop=192.168.200.1 \
--host-route destination=10.0.0.0/16,nexthop=192.168.200.1 \
--host-route destination=10.1.8.0/24,nexthop=192.168.200.1 \
--dns-nameserver 192.168.200.1
neutron net-create ext_net --shared \
--provider:physical_network physnet2 \
--provider:network_type flat \
--router:external True
neutron subnet-create ext_net 216.151.20.192/26 \
--name ext_net_subnet --gateway 216.151.20.193 \
--allocation-pool start=216.151.20.200,end=216.151.20.220
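The neutron client used above is deprecated in favor of the unified openstack client. A sketch of the equivalent openstack commands for the external network and the two subnets (same CIDRs, pools, and routes; note that OSC spells the host-route next hop as gateway=):

```shell
# Sketch: unified-CLI equivalents of the neutron net/subnet-create calls above.
openstack network create --share --external \
  --provider-network-type flat \
  --provider-physical-network physnet2 ext_net

openstack subnet create --network vlan200_net \
  --subnet-range 192.168.200.0/24 --gateway 192.168.200.99 \
  --allocation-pool start=192.168.200.100,end=192.168.200.190 \
  --host-route destination=10.10.0.0/16,gateway=192.168.200.1 \
  --host-route destination=10.0.0.0/16,gateway=192.168.200.1 \
  --host-route destination=10.1.8.0/24,gateway=192.168.200.1 \
  --dns-nameserver 192.168.200.1 vlan200_net_subnet

openstack subnet create --network ext_net \
  --subnet-range 216.151.20.192/26 --gateway 216.151.20.193 \
  --allocation-pool start=216.151.20.200,end=216.151.20.220 \
  ext_net_subnet
```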
openstack router create ext-router
openstack router set --enable --distributed \
--description "External Range <-> VLAN router" \
--external-gateway ext_net --enable-snat \
--fixed-ip subnet=ext_net_subnet,ip-address=216.151.20.199 \
ext-router
openstack router add subnet ext-router vlan200_net_subnet
$ openstack keypair create --public-key ~/.ssh/id_rsa.pub os-test-key
$ openstack flavor create --ram 2048 --disk 5 --vcpus 2 --public o2.small
$ openstack server create --image bionic --flavor o2.small \
--key-name os-test-key --network vlan200_net os-test-server --wait
+-------------------------------------+----------------------------------------------------------+
| Field | Value |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | os-util-00 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | os-util-00.maas |
| OS-EXT-SRV-ATTR:instance_name | instance-00000001 |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2018-04-18T14:45:04.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | vlan200_net=192.168.200.109 |
| adminPass | rWCuE3fDUXMZ |
| config_drive | |
| created | 2018-04-18T14:44:52Z |
| flavor | o2.small (da4dcaa4-63b7-4ce3-8674-31f8210ff200) |
| hostId | b986f11c04e5a4b8de882a877efee8f5b2b8aebba72adb36d83954d0 |
| id | 32623247-22f3-4089-a321-acf3ae61b102 |
| image | bionic (b5749a10-bc83-4a95-a645-f54e57cbc05d) |
| key_name | os-test-key |
| name | os-test-server |
| progress | 0 |
| project_id | bf7ee551a04f47e495a1a20b1b7471e6 |
| properties | |
| security_groups | name='default' |
| status | ACTIVE |
| updated | 2018-04-18T14:45:05Z |
| user_id | fe48f382adf544e7b1f398d3fdf0c92e |
| volumes_attached | |
+-------------------------------------+----------------------------------------------------------+
openstack floating ip create ext_net
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| created_at | 2018-05-02T23:58:59Z |
| description | |
| fixed_ip_address | None |
| floating_ip_address | 216.151.20.210 |
| floating_network_id | 265efede-c529-474d-8a9c-86c76afdee53 |
| id | e5e45269-5f89-4f99-ac32-fe7c4fb211e1 |
| name | 216.151.20.210 |
| port_id | None |
| project_id | 92e7ba4964794bb8900a8b84d7a64126 |
| qos_policy_id | None |
| revision_number | 0 |
| router_id | None |
| status | DOWN |
| subnet_id | None |
| updated_at | 2018-05-02T23:58:59Z |
+---------------------+--------------------------------------+
Note the "name" or "floating_ip_address" field.
openstack server add floating ip os-test-server 216.151.20.210
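For scripting, the floating IP can be captured directly with "-f value" instead of read from the table, and a small poll loop (a generic illustrative helper, not an OpenStack command) can wait for the association to settle:

```shell
#!/bin/bash
# Sketch: retry any command until it prints the wanted value, up to a
# given number of one-second attempts; prints TIMEOUT on failure.
wait_for_status() {
  local want=$1 tries=$2 i
  shift 2
  for i in $(seq 1 "$tries"); do
    [ "$("$@")" = "$want" ] && { echo "$want"; return 0; }
    sleep 1
  done
  echo "TIMEOUT"; return 1
}

# Usage (assumes openrc has been sourced):
# FIP=$(openstack floating ip create ext_net -f value -c floating_ip_address)
# openstack server add floating ip os-test-server "$FIP"
# wait_for_status ACTIVE 30 \
#   openstack floating ip show "$FIP" -f value -c status
```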
You should now be able to SSH into both the floating and VLAN IP addresses.
ssh [email protected]
ssh [email protected]