@cabecada
Created February 2, 2023 13:22
haproxy 101 with chatgpt
Vijaykumar Jain
Let's learn HAProxy today. I want to understand basic TCP load balancing using HAProxy. Can you explain with examples?
Sure! I can explain basic TCP load balancing with HAProxy.
HAProxy is a free, open-source, high-performance load balancer and proxy for TCP and HTTP-based applications. Load balancing is a technique for distributing workloads across multiple servers to optimize resource utilization, maximize throughput, minimize response time, and avoid overload of any single resource.
Here's an example of how to configure HAProxy for TCP load balancing:
Install HAProxy on your server.
Open the HAProxy configuration file, usually located at /etc/haproxy/haproxy.cfg
Define the list of backend servers to receive the traffic:
backend backend_servers
    server server1 192.168.1.10:80 weight 1
    server server2 192.168.1.11:80 weight 1
Define a frontend to listen for incoming traffic and forward it to the backend servers:
frontend tcp_load_balancer
    bind *:80
    mode tcp
    default_backend backend_servers
Save and close the configuration file.
Start the HAProxy service:
systemctl start haproxy
In this example, HAProxy will listen on port 80 for incoming traffic and forward it to the backend servers, server1 and server2, with equal weight. The load balancing algorithm used is round-robin, meaning that the traffic will be distributed evenly among the backend servers.
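For reference, here is what the pieces above look like as a single minimal haproxy.cfg, with the round-robin algorithm made explicit. The global and defaults sections are typical boilerplate and the timeout values are only illustrative:

global
    log /dev/log local0
    maxconn 2000

defaults
    mode tcp
    log global
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend tcp_load_balancer
    bind *:80
    default_backend backend_servers

backend backend_servers
    balance roundrobin
    server server1 192.168.1.10:80 weight 1
    server server2 192.168.1.11:80 weight 1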
That's it! With these simple steps, you have set up basic TCP load balancing using HAProxy.
Vijaykumar Jain
How to make use of health check for backends
Health checks are a critical feature of HAProxy that allows you to monitor the status of your backend servers and remove unhealthy servers from the load-balancing pool. Here's an example of how to configure health checks for backend servers in HAProxy:
Open the HAProxy configuration file, usually located at /etc/haproxy/haproxy.cfg
Define the list of backend servers, including a health check:
backend backend_servers
    server server1 192.168.1.10:80 weight 1 check
    server server2 192.168.1.11:80 weight 1 check
In this example, the check option in the server line enables health checks for each backend server.
Configure the health check parameters in the same backend:
backend backend_servers
    option httpchk GET /health
    server server1 192.168.1.10:80 check inter 5000 rise 2 fall 3
    server server2 192.168.1.11:80 check inter 5000 rise 2 fall 3
Here, option httpchk makes HAProxy send an HTTP GET request to /health for each check instead of a plain TCP connection test.
In this example, the inter parameter sets the interval between health checks (5000 milliseconds, or 5 seconds), the rise parameter sets the number of consecutive successful health checks needed to consider a server healthy (2), and the fall parameter sets the number of consecutive failed health checks needed to consider a server unhealthy (3).
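Put together, these numbers determine how quickly HAProxy reacts: with inter 5000 and fall 3, a server that stops responding is marked down after roughly 3 × 5 s = 15 s, and with rise 2 it is marked up again after about 2 × 5 s = 10 s of passing checks.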
Save and close the configuration file.
Start or restart the HAProxy service:
systemctl restart haproxy
With these steps, HAProxy will perform periodic health checks on the backend servers, and remove any server that fails the check from the load-balancing pool until it becomes healthy again.
Note that you need to configure the /health endpoint on each backend server to return an HTTP status code of 200 to indicate that the server is healthy.
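If you want to be strict about which response counts as healthy, the http-check expect directive lets you pin the check to an exact status code; a small sketch building on the backend above (by default, any 2xx or 3xx response passes the check):

backend backend_servers
    option httpchk GET /health
    http-check expect status 200
    server server1 192.168.1.10:80 check inter 5000 rise 2 fall 3
    server server2 192.168.1.11:80 check inter 5000 rise 2 fall 3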
Vijaykumar Jain
I don't have a web service at the backend for an HTTP-based check. Can I use a simple shell script for the health check?
Yes, you can use a shell script as a health check for your backend servers by using HAProxy's external-check feature. Here's an example of how to do that:
Create a shell script to perform the health check. For example, you can create a file called healthcheck.sh:
#!/bin/bash
# HAProxy's external-check invokes this script with the arguments:
#   <proxy_address> <proxy_port> <server_address> <server_port>
# Exit 0 to report the server healthy, non-zero to report it unhealthy.
nc -z -w 2 "$3" "$4"
In this example, the script uses the nc (netcat) command to check whether the backend server being probed accepts TCP connections, using the server address and port that HAProxy passes as the third and fourth arguments.
Make the script executable:
chmod +x healthcheck.sh
Open the HAProxy configuration file, usually located at /etc/haproxy/haproxy.cfg
Define the list of backend servers, including a health check:
backend backend_servers
    server server1 192.168.1.10:80 weight 1 check
    server server2 192.168.1.11:80 weight 1 check
Configure HAProxy to run the shell script by enabling external checks:
global
    external-check

backend backend_servers
    option external-check
    external-check command /path/to/healthcheck.sh
    server server1 192.168.1.10:80 check inter 5000 rise 2 fall 3
    server server2 192.168.1.11:80 check inter 5000 rise 2 fall 3
In this example, option external-check switches the servers' health checks to an external command, and external-check command specifies the path to the health check script. The external-check keyword in the global section is needed because running external programs is disabled by default as a security precaution.
Save and close the configuration file.
Start or restart the HAProxy service:
systemctl restart haproxy
With these steps, HAProxy will run the health check script for each backend server, and remove any server that fails the check from the load-balancing pool until it becomes healthy again.
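You can also sanity-check the script by hand by calling it with the same four positional arguments HAProxy would pass; the values below are just the listener and server examples used above:

/path/to/healthcheck.sh 0.0.0.0 80 192.168.1.10 80
echo $?    # 0 means the check passed, anything else means it failed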
Vijaykumar Jain
What if the health check takes very long. Is there any timeout to make the health check fail
Yes, you can set a timeout for health checks in HAProxy with the timeout check directive. If timeout check is not set, HAProxy falls back to the inter interval (2000 milliseconds by default) as the overall check timeout. Here's an example of how to set it in the backend:
backend backend_servers
    option external-check
    external-check command /path/to/healthcheck.sh
    timeout check 5000
    server server1 192.168.1.10:80 check inter 5000 rise 2 fall 3
    server server2 192.168.1.11:80 check inter 5000 rise 2 fall 3
In this example, timeout check sets the health check timeout to 5000 milliseconds (5 seconds). If the health check takes longer than 5 seconds to complete, HAProxy will consider it a failed check.
You should set the timeout value based on your specific requirements and the expected response time of the health check script. A longer timeout will allow more time for the health check to complete, but may increase the time it takes to detect an unhealthy server. A shorter timeout will reduce the time it takes to detect an unhealthy server, but may also cause false failures if the health check takes longer than expected to complete.
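timeout check can also go in the defaults section next to the other timeouts, so every backend inherits it; the values below are only illustrative:

defaults
    timeout connect 5s
    timeout client  30s
    timeout server  30s
    timeout check   5s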
Vijaykumar Jain
Can I drain any backend without dropping connections
Yes, you can drain a backend server in HAProxy without dropping existing connections. A drained server stops receiving new connections, while existing and persistent connections are allowed to complete normally. In the configuration file, the closest equivalent is to set the server's weight to 0, which removes it from load balancing but still lets it accept persistent connections:
backend backend_servers
    server server1 192.168.1.10:80 weight 1 check
    server server2 192.168.1.11:80 weight 0 check
In this example, server2 will not receive any new connections, but connections already in progress will continue to be served until they complete.
You can also drain a backend dynamically using the HAProxy management interface. This can be useful if you need to perform maintenance on a backend server, for example, and want to drain traffic to it gradually, without affecting existing connections.
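With the runtime API (covered in the next answer), the drain state can be set and cleared on the fly; this assumes a stats socket is configured at /var/run/haproxy.sock:

echo "set server backend_servers/server2 state drain" | socat stdio /var/run/haproxy.sock
echo "show servers state backend_servers" | socat stdio /var/run/haproxy.sock
echo "set server backend_servers/server2 state ready" | socat stdio /var/run/haproxy.sock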
Vijaykumar Jain
I don't like to make config changes every time. Is there any runtime API that I can use to push the changes?
Yes, you can use the HAProxy runtime API to dynamically configure and manage HAProxy at runtime, without making changes to the configuration file. This allows you to make changes to the configuration and see the results in real-time, without having to restart HAProxy.
The runtime API uses a simple text-based protocol over a Unix domain socket or a TCP socket. You can use any program that can connect to a socket and send commands to interact with the runtime API.
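The socket has to be enabled in the HAProxy configuration first; a minimal sketch, where the path, permissions and admin level are choices rather than requirements:

global
    stats socket /var/run/haproxy.sock mode 600 level admin
    stats timeout 2m

The level admin part is what allows state-changing commands such as disable server or set weight; a socket at level user or operator is more restricted.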
Here are some common tasks you can perform using the runtime API:
Show general process information:
echo "show info" | socat stdio /var/run/haproxy.sock
Show the current statistics (the output is CSV; see the tip after this list):
echo "show stat" | socat stdio /var/run/haproxy.sock
Enable or disable a backend server:
echo "disable server backend_servers/server1" | socat stdio /var/run/haproxy.sock
echo "enable server backend_servers/server1" | socat stdio /var/run/haproxy.sock
Drain a backend server:
echo "set server backend_servers/server1 state drain" | socat stdio /var/run/haproxy.sock
Change the weight of a backend server:
echo "set weight backend_servers/server1 10" | socat stdio /var/run/haproxy.sock
These are just a few examples of what you can do with the HAProxy runtime API. You can find more information and a complete list of available commands in the HAProxy documentation.
Vijaykumar Jain
What if my health checks are not working? I still want to have some server serving requests. Is there an option to fall back to a server when all health checks fail?
Yes, you can configure HAProxy to fall back to a backup server if all health checks for the regular backend servers fail. This is done by marking a server as a backup in the backend configuration.
Here's an example of how to configure a backup server in HAProxy:
backend backend_servers
    server server1 192.168.1.10:80 weight 1 check
    server server2 192.168.1.11:80 weight 1 check
    server backup 192.168.1.12:80 weight 1 backup
In this example, the backup keyword is used in the configuration of the backup server, which is identified as 192.168.1.12:80. This server will only be used if all other servers in the backend have failed their health checks.
Note that the weight of a backup server only matters relative to other backup servers: as long as at least one regular server is healthy, backup servers receive no traffic at all, and by default only the first available backup server is used once all regular servers are down.
If you define several backup servers and want HAProxy to load-balance across all of them when the regular servers are gone, add option allbackups to the backend:
backend backend_servers
    option allbackups
    server server1 192.168.1.10:80 weight 1 check
    server server2 192.168.1.11:80 weight 1 check
    server backup1 192.168.1.12:80 weight 1 check backup
    server backup2 192.168.1.13:80 weight 1 check backup
With option allbackups, once server1 and server2 have failed their health checks, traffic is balanced between backup1 and backup2 according to their weights.
In conclusion, by using one or more backup servers, you can ensure that your HAProxy load balancer will continue to serve requests even if the regular backend servers are not available. This helps to ensure high availability and minimize downtime for your services.
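Another common pattern, if you prefer to keep the last-resort server in its own backend, is to switch backends based on how many usable servers are left, using the nbsrv fetch in an ACL. A sketch, where the backend and ACL names are arbitrary:

frontend tcp_load_balancer
    bind *:80
    mode tcp
    acl primary_down nbsrv(backend_servers) lt 1
    use_backend backup_servers if primary_down
    default_backend backend_servers

backend backup_servers
    mode tcp
    server backup 192.168.1.12:80 check

Here nbsrv(backend_servers) returns the number of servers currently considered usable in backend_servers, so traffic is sent to backup_servers only when none of the primary servers pass their health checks.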
Vijaykumar Jain
How to make haproxy ha
Making HAProxy highly available typically involves setting up two or more HAProxy instances and configuring them to work together in an active-passive or active-active cluster configuration. This provides redundancy, ensuring that if one HAProxy instance fails, the other instance(s) will take over and continue to handle traffic.
Here are the steps to set up a high availability HAProxy configuration:
Configure two or more HAProxy instances: Each instance should have the same configuration and be running on separate servers.
Set up a virtual IP address: This IP address will be the virtual address that clients connect to. This IP address will float between the HAProxy instances in the event of a failover.
Distribute traffic between the HAProxy instances: in an active-passive setup the virtual IP alone is enough, while an active-active setup needs something such as DNS round robin, multiple virtual IPs, or routing techniques like ECMP to spread clients across the instances.
Keep the configuration files in sync: the HAProxy instances should have identical configurations, which you can achieve with configuration management, rsync, or a shared filesystem.
Use a tool such as keepalived to manage the virtual IP address: This tool will automatically detect if one of the HAProxy instances has failed and move the virtual IP address to another instance.
Monitor the HAProxy instances: Regularly monitor the HAProxy instances to ensure that they are functioning correctly and to detect any issues early.
By following these steps, you can create a highly available HAProxy configuration that provides redundancy and minimizes downtime for your web services.
Note: The exact steps for setting up high availability will depend on the specific environment and infrastructure you are using. The steps listed here provide a general overview and may need to be adjusted for your specific use case.
Vijaykumar Jain
What is a virtual ip. How to create one
A virtual IP (VIP) is an IP address that is not tied permanently to a single machine but is used as the shared endpoint for a group of servers. In a highly available setup, a virtual IP is assigned to a group of servers, and traffic is directed to this IP address instead of the individual server IP addresses. This provides a level of abstraction, so that if one of the servers fails, the virtual IP can be moved to another server, ensuring that traffic continues to be handled even if a server fails.
To create a virtual IP, you will typically use a tool such as keepalived, which implements VRRP (Virtual Router Redundancy Protocol). These tools allow you to assign a virtual IP address to a group of servers and manage the assignment of the virtual IP in the event of a failover.
Here's an example of how to create a virtual IP using keepalived:
Install keepalived: You will need to install keepalived on each of the servers that will be part of the virtual IP group.
Configure keepalived: Create a configuration file for keepalived, specifying the virtual IP address, the network interface to use, and the priority for each server in the virtual IP group.
Start keepalived: Start the keepalived service on each server in the virtual IP group.
Test the virtual IP: Test that the virtual IP is working by pinging the virtual IP address from a client machine and verifying that it is responding.
In conclusion, creating a virtual IP involves using a tool such as keepalived to manage the assignment of the virtual IP address to a group of servers. This provides a level of abstraction and helps to ensure high availability for your web services, even if one of the servers fails.
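A minimal keepalived.conf sketch for the master HAProxy node might look like the following; the interface name, password, priorities and addresses are placeholders, and the standby node would use state BACKUP with a lower priority:

vrrp_script chk_haproxy {
    script "pidof haproxy"      # consider the node healthy only while haproxy is running
    interval 2
}

vrrp_instance VI_1 {
    state MASTER                # BACKUP on the standby node
    interface eth0              # placeholder: the NIC that carries client traffic
    virtual_router_id 51
    priority 101                # lower value (e.g. 100) on the standby node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass examplepass   # placeholder shared secret
    }
    virtual_ipaddress {
        192.168.1.100/24        # the virtual IP that clients connect to
    }
    track_script {
        chk_haproxy
    }
}

The node holding the virtual IP advertises it over VRRP; if it stops doing so, for example because haproxy or the whole host goes down, the standby node takes over the address.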
Vijaykumar Jain
Can vip handle the same traffic as a real ip
Yes, a virtual IP (VIP) can handle the same traffic as a real IP address. In fact, from the perspective of clients, the virtual IP behaves just like a real IP address. Clients connect to the virtual IP, which is then redirected to one of the servers in the virtual IP group. The virtual IP acts as an endpoint for the group of servers, so clients are unaware of any underlying changes in the server configuration.
Therefore, a virtual IP can handle the same traffic as a real IP, and in a highly available setup, it provides a level of abstraction and helps to ensure that traffic continues to be handled even if one of the servers fails.
Vijaykumar Jain
Is keepalived the only option that uses VRRP, or are there other open source tools I can use?
No, keepalived is not the only open source option for managing a virtual IP address. Here are a few other open source tools that can be used:
Heartbeat: part of the older Linux-HA project; it does not use VRRP, but it can manage a floating IP address and fail it over between nodes.
Corosync with Pacemaker: a cluster membership engine and resource manager; a virtual IP can be defined as a cluster resource (for example with the IPaddr2 resource agent) and moved automatically on failure.
FRRouting (FRR): an open source routing suite that includes a VRRP implementation.
ucarp: an implementation of CARP, a protocol very similar to VRRP, for sharing a virtual IP address between hosts.
Note that HAProxy and NGINX themselves do not implement VRRP; they are the services that sit behind the virtual IP and are usually paired with one of the tools above.
In conclusion, there are multiple open source tools that can be used to manage virtual IP addresses and provide automatic failover. Keepalived is just one of them, and the best option will depend on your specific needs and requirements.