@linuxsimba
Last active December 22, 2016 04:25
MOS 8 LBaaSv2 setup using the HAProxy driver

LBaaS v2 Setup on MOS 8

Create a Subnet with Two Web Server VMs

Create VM instances the usual way, placing two VMs that each run a web server on a single subnet.
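The web servers only need to return something distinguishable per VM (the transcripts later in this guide show backends replying `web01` and `web02`). A minimal sketch of how each VM could serve such a reply; the hard-coded body and the busybox `nc` listener line are illustrative assumptions, not part of the original setup:

```shell
# Build a minimal HTTP response whose body identifies the backend VM.
# On a real VM you would use BODY="$(hostname)"; it is hard-coded here.
BODY="web01"
printf 'HTTP/1.0 200 OK\r\nContent-Length: %s\r\n\r\n%s\n' "${#BODY}" "$BODY"
# On the VM itself, loop the response into a listener, for example:
#   while true; do <the printf above> | nc -l -p 80; done
```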

Install LBaaSv2 Using the HAProxy Driver

Install lbaasv2-agent

Install the neutron-lbaasv2-agent package on all controllers:

sudo apt-get install neutron-lbaasv2-agent

Configure the lbaas_agent.ini file

Update /etc/neutron/lbaas_agent.ini to contain the following:

[DEFAULT]
verbose = False
debug = False

periodic_interval = 10

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

[haproxy]
user_group = nogroup

send_gratuitous_arp = 3

Apply the following diff to /etc/neutron/neutron.conf on all controllers:

--- /etc/neutron/neutron.conf.old       2016-12-19 23:20:35.987369011 +0000
+++ /etc/neutron/neutron.conf   2016-12-19 20:02:23.735785952 +0000
@@ -30,6 +30,7 @@

 # The service plugins Neutron will use (list value)
 #service_plugins =
+service_plugins =  neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.metering.metering_plugin.MeteringPlugin,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2

 # The base MAC address Neutron will use for VIFs. The first 3 octets will remain unchanged. If the 4th octet is not 00, it will also be
 # used. The others will be randomly generated. (string value)
@@ -1361,3 +1362,8 @@

 # Sets the list of available ciphers. value should be a string in the OpenSSL cipher list format. (string value)
 #ciphers = <None>
+
+[service_providers]
+service_provider = LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

Restart the Neutron server and LBaaS agent

sudo service neutron-server restart
sudo service neutron-lbaasv2-agent restart

Configure a Load Balancer

Create the Load Balancer

First, determine the ID of the subnet containing the hosts you wish to load balance across.


root@node-2:~# openstack server list -f json
[
  {
    "Status": "ACTIVE",
    "Networks": "admin_internal_net=10.109.4.45, 10.109.3.173",
    "ID": "877f29c4-dee7-4ac8-b2a4-37f010bba04f",
    "Name": "web02"
  },
  {
    "Status": "ACTIVE",
    "Networks": "admin_internal_net=10.109.4.44, 10.109.3.172",
    "ID": "6473c31c-60e4-410f-93c8-1c6bfbe1c908",
    "Name": "web01"
  },
  {
    "Status": "ACTIVE",
    "Networks": "admin_internal_net=10.109.4.35, 10.109.3.170",
    "ID": "5afe10cb-d302-4c3d-923c-0ed01cf3f8f5",
    "Name": "lbtest"
  }
]

root@node-2:~# openstack network list -f json
[
  {
    "Subnets": "449a4a98-5bae-42ed-a4cd-a2a24bd27a6b",
    "ID": "9f178fd6-8914-426d-9f5e-d4ad9a073484",
    "Name": "admin_floating_net"
  },
  {
    "Subnets": "139cc698-f079-4c46-ac0f-6364ad3238d5",
    "ID": "2a165d79-9fdc-486b-8a7b-8db076ebee20",
    "Name": "admin_internal_net"
  }
]
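The subnet ID can also be pulled out of that JSON programmatically. A small sketch, with the `openstack` call stubbed by the captured output above; on a real controller you would replace the `printf` with `openstack network list -f json`:

```shell
# Stub of "openstack network list -f json" using the output captured above.
printf '%s' '[{"Subnets": "139cc698-f079-4c46-ac0f-6364ad3238d5",
  "ID": "2a165d79-9fdc-486b-8a7b-8db076ebee20",
  "Name": "admin_internal_net"}]' |
python3 -c '
import json, sys
for net in json.load(sys.stdin):
    if net["Name"] == "admin_internal_net":
        # The subnet ID to pass to lbaas-loadbalancer-create
        print(net["Subnets"])
'
```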

root@node-2:~# neutron lbaas-loadbalancer-create --name test-lb 139cc698-f079-4c46-ac0f-6364ad3238d5
Created a new loadbalancer:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| description         |                                      |
| id                  | 33fcb82e-ab1d-4c71-90b4-6ce04998b993 |
| listeners           |                                      |
| name                | test-lb                              |
| operating_status    | OFFLINE                              |
| pools               |                                      |
| provider            | haproxy                              |
| provisioning_status | PENDING_CREATE                       |
| tenant_id           | 9388b4bab91e4ac8a8cb96877df6af40     |
| vip_address         | 10.109.4.41                          |
| vip_port_id         | 458fac9d-755e-4c10-ba54-2186076059a4 |
| vip_subnet_id       | 139cc698-f079-4c46-ac0f-6364ad3238d5 |
+---------------------+--------------------------------------+

root@node-2:~# neutron lbaas-loadbalancer-show test-lb
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| description         |                                      |
| id                  | 33fcb82e-ab1d-4c71-90b4-6ce04998b993 |
| listeners           |                                      |
| name                | test-lb                              |
| operating_status    | ONLINE                               |
| pools               |                                      |
| provider            | haproxy                              |
| provisioning_status | ACTIVE                               |
| tenant_id           | 9388b4bab91e4ac8a8cb96877df6af40     |
| vip_address         | 10.109.4.41                          |
| vip_port_id         | 458fac9d-755e-4c10-ba54-2186076059a4 |
| vip_subnet_id       | 139cc698-f079-4c46-ac0f-6364ad3238d5 |
+---------------------+--------------------------------------+

Notice that the operating status is ONLINE but there are no listeners yet. A listener defines a port the load balancer accepts and distributes traffic on, for example HTTP port 80.

The Mitaka LBaaS documentation says you should be able to ping the vip_address at this point. With OVS you cannot, because the LBaaS network namespace has not yet been created. This is the current state:

root@node-2:~# ip netns ls
qdhcp-2a165d79-9fdc-486b-8a7b-8db076ebee20
haproxy
vrouter

To create the ip netns namespace, add a listener to the load balancer object.

root@node-2:~# neutron lbaas-listener-create --name test-lb-http --loadbalancer test-lb --protocol HTTP --protocol-port 80
Created a new listener:
+---------------------------+------------------------------------------------+
| Field                     | Value                                          |
+---------------------------+------------------------------------------------+
| admin_state_up            | True                                           |
| connection_limit          | -1                                             |
| default_pool_id           |                                                |
| default_tls_container_ref |                                                |
| description               |                                                |
| id                        | 9ea6d06e-a214-4369-a37c-4e612883c76b           |
| loadbalancers             | {"id": "33fcb82e-ab1d-4c71-90b4-6ce04998b993"} |
| name                      | test-lb-http                                   |
| protocol                  | HTTP                                           |
| protocol_port             | 80                                             |
| sni_container_refs        |                                                |
| tenant_id                 | 9388b4bab91e4ac8a8cb96877df6af40               |
+---------------------------+------------------------------------------------+
root@node-2:~# neutron lbaas-loadbalancer-show test-lb
+---------------------+------------------------------------------------+
| Field               | Value                                          |
+---------------------+------------------------------------------------+
| admin_state_up      | True                                           |
| description         |                                                |
| id                  | 33fcb82e-ab1d-4c71-90b4-6ce04998b993           |
| listeners           | {"id": "9ea6d06e-a214-4369-a37c-4e612883c76b"} |
| name                | test-lb                                        |
| operating_status    | ONLINE                                         |
| pools               |                                                |
| provider            | haproxy                                        |
| provisioning_status | ACTIVE                                         |
| tenant_id           | 9388b4bab91e4ac8a8cb96877df6af40               |
| vip_address         | 10.109.4.41                                    |
| vip_port_id         | 458fac9d-755e-4c10-ba54-2186076059a4           |
| vip_subnet_id       | 139cc698-f079-4c46-ac0f-6364ad3238d5           |
+---------------------+------------------------------------------------+
root@node-2:~# ip netns list
qlbaas-33fcb82e-ab1d-4c71-90b4-6ce04998b993
qdhcp-2a165d79-9fdc-486b-8a7b-8db076ebee20
haproxy
vrouter

Notice above that the qlbaas-xxxx namespace carries the same ID as the load balancer, in this case 33fcb82e-ab1d-4c71-90b4-6ce04998b993.
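Since the agent simply prefixes the load balancer ID with qlbaas-, the namespace name can be derived directly from the ID. A sketch (the exec line in the comment assumes you are on the controller hosting the namespace):

```shell
# Namespace name = "qlbaas-" + load balancer ID
LB_ID="33fcb82e-ab1d-4c71-90b4-6ce04998b993"
NS="qlbaas-${LB_ID}"
echo "$NS"
# Then, on the controller:
#   ip netns exec "$NS" ip addr show
```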

Now it is possible to ping a test VM from within the load balancer namespace, which differs from what the Mitaka docs describe.


root@node-2:~# ip netns exec qlbaas-33fcb82e-ab1d-4c71-90b4-6ce04998b993 netshow l3
--------------------------------------------------------------------
To view the legend,  rerun "netshow" cmd with the  "--legend" option
--------------------------------------------------------------------
    Name            Speed      MTU  Mode          Summary
--  --------------  -------  -----  ------------  ------------------------
UP  lo              N/A      65536  Loopback      IP: 127.0.0.1/8, ::1/128
UP  tap458fac9d-75  N/A       1500  Interface/L3  IP: 10.109.4.41/24

root@node-2:~# openstack server list -f json
[
  {
    "Status": "ACTIVE",
    "Networks": "admin_internal_net=10.109.4.35, 10.109.3.170",
    "ID": "5afe10cb-d302-4c3d-923c-0ed01cf3f8f5",
    "Name": "lbtest"
  },
  {
    "Status": "ACTIVE",
    "Networks": "admin_internal_net=10.109.4.33, 10.109.3.168",
    "ID": "5773ebae-0882-460d-a464-1f97876a6db6",
    "Name": "ex-2ofp-5e5rdjgonxds-z3s44d7kaxjf-server-hak6j4smwxag"
  },
  {
    "Status": "ACTIVE",
    "Networks": "admin_internal_net=10.109.4.32, 10.109.3.167",
    "ID": "f5752b90-7f24-4127-8b45-90e694455f5a",
    "Name": "ex-2ofp-5z22nlq5cuyz-ju4nl35q5nvt-server-z4nje5cpxjsk"
  }
]

root@node-2:~# ip netns exec qlbaas-33fcb82e-ab1d-4c71-90b4-6ce04998b993 ping -c4 10.109.4.35
PING 10.109.4.35 (10.109.4.35) 56(84) bytes of data.
64 bytes from 10.109.4.35: icmp_seq=1 ttl=64 time=4.47 ms
64 bytes from 10.109.4.35: icmp_seq=2 ttl=64 time=1.17 ms
64 bytes from 10.109.4.35: icmp_seq=3 ttl=64 time=1.08 ms
64 bytes from 10.109.4.35: icmp_seq=4 ttl=64 time=0.911 ms

--- 10.109.4.35 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 0.911/1.910/4.477/1.484 ms

Next create a security group that allows the VIP port to accept HTTP and ICMP traffic.

root@node-2:~# neutron security-group-create lbaas
root@node-2:~# neutron security-group-rule-create \
  --direction ingress \
  --protocol tcp \
  --port-range-min 80 \
  --port-range-max 80 \
  --remote-ip-prefix 0.0.0.0/0 \
  lbaas
root@node-2:~# neutron security-group-rule-create \
  --direction ingress \
  --protocol icmp \
  lbaas

Apply the security group to the VIP port of the load balancer.

root@node-2:~# neutron lbaas-loadbalancer-show test-lb
+---------------------+------------------------------------------------+
| Field               | Value                                          |
+---------------------+------------------------------------------------+
| admin_state_up      | True                                           |
| description         |                                                |
| id                  | 33fcb82e-ab1d-4c71-90b4-6ce04998b993           |
| listeners           | {"id": "9ea6d06e-a214-4369-a37c-4e612883c76b"} |
| name                | test-lb                                        |
| operating_status    | ONLINE                                         |
| pools               |                                                |
| provider            | haproxy                                        |
| provisioning_status | ACTIVE                                         |
| tenant_id           | 9388b4bab91e4ac8a8cb96877df6af40               |
| vip_address         | 10.109.4.41                                    |
| vip_port_id         | 458fac9d-755e-4c10-ba54-2186076059a4           |
| vip_subnet_id       | 139cc698-f079-4c46-ac0f-6364ad3238d5           |
+---------------------+------------------------------------------------+

root@node-2:~# neutron port-update \
 --security-group lbaas 458fac9d-755e-4c10-ba54-2186076059a4

Now add a load balancer pool and pool members.

root@node-2:~# neutron lbaas-pool-create --name test-lb-pool-http  \
  --lb-algorithm ROUND_ROBIN --listener test-lb-http \
  --protocol HTTP

root@node-2:~# neutron lbaas-member-create \
  --subnet admin_internal_net__subnet \
  --address 10.109.4.45 --protocol-port 80 test-lb-pool-http

root@node-2:~# neutron lbaas-member-create \
  --subnet admin_internal_net__subnet \
  --address 10.109.4.44 --protocol-port 80 test-lb-pool-http

At this point it is possible to test the load balancer, though not in the way the Mitaka docs describe. Run the curl test from within the namespace using the ip netns exec command.

root@node-2:~# ip netns exec qlbaas-33fcb82e-ab1d-4c71-90b4-6ce04998b993 curl 10.109.4.41
web02

root@node-2:~# ip netns exec qlbaas-33fcb82e-ab1d-4c71-90b4-6ce04998b993 curl 10.109.4.41
web01

root@node-2:~# ip netns exec qlbaas-33fcb82e-ab1d-4c71-90b4-6ce04998b993 curl 10.109.4.41
web02

root@node-2:~# ip netns exec qlbaas-33fcb82e-ab1d-4c71-90b4-6ce04998b993 curl 10.109.4.41
web01
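To confirm the traffic really alternates evenly, the responses can be tallied. A sketch that stubs the four captured replies above with a `printf`; on the controller you would replace the stub with a loop of the `ip netns exec ... curl` command, as shown in the comment:

```shell
# Tally responses per backend; the printf stubs the four captured replies.
printf 'web02\nweb01\nweb02\nweb01\n' | sort | uniq -c
# Real collection, run on the controller, would look like:
#   for i in 1 2 3 4; do
#     ip netns exec qlbaas-33fcb82e-ab1d-4c71-90b4-6ce04998b993 curl -s 10.109.4.41
#   done | sort | uniq -c
```

With ROUND_ROBIN balancing, each backend should show an equal count.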

Next configure a floating IP for the VIP

First, find an unassociated floating IP to assign to the VIP port. Run neutron floatingip-list and pick an entry with no port_id.

root@node-2:~# neutron floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 19dd10c7-a4ed-421a-b517-f54afa3717b3 | 10.109.4.45      | 10.109.3.173        | bca43a8a-8e72-460a-9ea1-b78d819c2a35 |
| 653fef19-d743-4d1b-a861-49670395b1e0 | 10.109.4.44      | 10.109.3.172        | 2ca2cec7-336a-4678-a03f-d0dc66a25e19 |
| 82d45e45-d189-4c9a-af50-723233dbd757 | 10.109.4.35      | 10.109.3.170        | 101e3532-2bc1-4636-b638-0143ea5dbc24 |
| 863e966c-3cc2-44b5-9327-1416ddeff696 |                  | 10.109.3.139        |                                      |
| abfed9a4-61ca-4afb-8af9-02f1ef0b62c8 |                  | 10.109.3.169        |                                      |
| ac5860f6-d22a-4011-a103-e18d0c8971d3 |                  | 10.109.3.171        |                                      |
| b7c3a815-f7ef-4abf-8399-34e78b878e95 | 10.109.4.32      | 10.109.3.167        | a2b2ca9d-206a-445f-8959-7b625a4d8550 |
| d2885818-c3e8-4219-9e12-59fce163b7f9 | 10.109.4.33      | 10.109.3.168        | 42570562-24a9-49ee-a72a-c7ddd383893c |
+--------------------------------------+------------------+---------------------+--------------------------------------+

Let's use the ID associated with 10.109.3.139. Next, get the VIP port ID, which is 458fac9d-755e-4c10-ba54-2186076059a4:

root@node-2:~# neutron lbaas-loadbalancer-show test-lb
+---------------------+------------------------------------------------+
| Field               | Value                                          |
+---------------------+------------------------------------------------+
| admin_state_up      | True                                           |
| description         |                                                |
| id                  | 33fcb82e-ab1d-4c71-90b4-6ce04998b993           |
| listeners           | {"id": "9ea6d06e-a214-4369-a37c-4e612883c76b"} |
| name                | test-lb                                        |
| operating_status    | ONLINE                                         |
| pools               | {"id": "0fe66435-849b-4299-9c0e-b46b037a7662"} |
| provider            | haproxy                                        |
| provisioning_status | ACTIVE                                         |
| tenant_id           | 9388b4bab91e4ac8a8cb96877df6af40               |
| vip_address         | 10.109.4.41                                    |
| vip_port_id         | 458fac9d-755e-4c10-ba54-2186076059a4           |
| vip_subnet_id       | 139cc698-f079-4c46-ac0f-6364ad3238d5           |
+---------------------+------------------------------------------------+

With the floating IP ID and vip_port_id, assign the VIP a floating IP of 10.109.3.139.

root@node-2:~# neutron floatingip-associate 863e966c-3cc2-44b5-9327-1416ddeff696 458fac9d-755e-4c10-ba54-2186076059a4
Associated floating IP 863e966c-3cc2-44b5-9327-1416ddeff696
root@node-2:~# neutron floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 19dd10c7-a4ed-421a-b517-f54afa3717b3 | 10.109.4.45      | 10.109.3.173        | bca43a8a-8e72-460a-9ea1-b78d819c2a35 |
| 653fef19-d743-4d1b-a861-49670395b1e0 | 10.109.4.44      | 10.109.3.172        | 2ca2cec7-336a-4678-a03f-d0dc66a25e19 |
| 82d45e45-d189-4c9a-af50-723233dbd757 | 10.109.4.35      | 10.109.3.170        | 101e3532-2bc1-4636-b638-0143ea5dbc24 |
| 863e966c-3cc2-44b5-9327-1416ddeff696 | 10.109.4.41      | 10.109.3.139        | 458fac9d-755e-4c10-ba54-2186076059a4 |
| abfed9a4-61ca-4afb-8af9-02f1ef0b62c8 |                  | 10.109.3.169        |                                      |
| ac5860f6-d22a-4011-a103-e18d0c8971d3 |                  | 10.109.3.171        |                                      |
| b7c3a815-f7ef-4abf-8399-34e78b878e95 | 10.109.4.32      | 10.109.3.167        | a2b2ca9d-206a-445f-8959-7b625a4d8550 |
| d2885818-c3e8-4219-9e12-59fce163b7f9 | 10.109.4.33      | 10.109.3.168        | 42570562-24a9-49ee-a72a-c7ddd383893c |
+--------------------------------------+------------------+---------------------+--------------------------------------+

Now, from the OpenStack controller's shell, use curl to access the VIP's floating IP. Load balancing should work through the floating IP as well.

root@node-2:~# curl 10.109.3.139
web02
root@node-2:~# curl 10.109.3.139
web01
root@node-2:~# curl 10.109.3.139
web02
root@node-2:~# curl 10.109.3.139
web01