So, I often find myself changing the HAProxy configuration in my environment, since I'm using it as a reverse proxy for my services. To make my life easier, I've created a workflow.
I'm storing my haproxy.cfg in a GitLab repository with CI/CD enabled, and a runner running as a Docker container on the same server that hosts the rest of my services (I know, I know, not ideal, but I don't need 10 servers to run the services I need. I had to make it work somehow, you know 🤷♂️). I'm not going to explain how to configure or register a new GitLab runner; if you're reading this post, I assume you already know how.
Wondering what my .gitlab-ci.yaml file looks like? Well, fear not - below you can find my configuration:
stages:
  - deploy

deploy_haproxy_config:
  stage: deploy
  script:
    - cp haproxy.cfg /etc/haproxy/haproxy.cfg
  only:
    - main
On the GitLab runner I've mapped the /etc/haproxy location to the exact same path inside the container, so the job simply copies over (overwrites) the existing configuration and boom.
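For reference, with the Docker executor that volume mapping lives in the runner's config.toml. A minimal sketch - the runner name is hypothetical, and the paths assume my setup where host and container share /etc/haproxy:

```toml
# /etc/gitlab-runner/config.toml (excerpt)
[[runners]]
  name = "haproxy-deployer"   # hypothetical name
  executor = "docker"
  [runners.docker]
    # Map the host's /etc/haproxy to the same path inside job containers
    volumes = ["/etc/haproxy:/etc/haproxy:rw", "/cache"]
```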
Now the fun part - well, I've configured everything like specified above, but I still need to reload HAProxy, as the config is not getting picked up automatically 🤔.
For that, my friend, I've created a Python script that runs as a Linux service. What it does is basically watch the file for changes; if it detects a change, it validates the configuration, and if it's valid, it proceeds to reload the HAProxy service.
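The validate-then-reload logic really just branches on exit codes: only run the reload command if the validation command exits with 0. A minimal sketch of that idea with stand-in commands (the real ones being `haproxy -c -f …` and `systemctl reload haproxy`):

```python
import subprocess
import sys

def run_and_check(cmd):
    """Run a command and return True if it exited with code 0."""
    result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    return result.returncode == 0

def validate_and_reload(validate_cmd, reload_cmd):
    """Only run the reload command if the validation command succeeds."""
    if run_and_check(validate_cmd):
        return run_and_check(reload_cmd)
    return False

# Stand-in commands that just simulate a valid and a broken config:
ok = validate_and_reload(
    [sys.executable, "-c", "raise SystemExit(0)"],  # "valid config"
    [sys.executable, "-c", "pass"],
)
bad = validate_and_reload(
    [sys.executable, "-c", "raise SystemExit(1)"],  # "broken config"
    [sys.executable, "-c", "pass"],
)
print(ok, bad)  # True False
```

The actual script below wraps this same pattern in a watchdog event handler.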
Below you can find the code of the script:
import subprocess
import logging
import time

from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler


class HaproxyConfigHandler(FileSystemEventHandler):
    """Handler for file changes."""

    HAPROXY_CONFIG_PATH = '/etc/haproxy/haproxy.cfg'
    HAPROXY_SERVICE = 'haproxy'

    def on_modified(self, event):
        if event.src_path == self.HAPROXY_CONFIG_PATH:
            self.validate_and_reload()

    def validate_and_reload(self):
        """Validate HAProxy config and reload service if valid."""
        if self.validate_config():
            self.reload_service()

    def validate_config(self):
        """Validate HAProxy configuration."""
        result = subprocess.run(
            ['haproxy', '-c', '-f', self.HAPROXY_CONFIG_PATH],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        if result.returncode == 0:
            logging.info("HAProxy configuration is valid.")
            return True
        else:
            logging.error("HAProxy configuration validation failed:\n" + result.stderr.decode())
            return False

    def reload_service(self):
        """Reload HAProxy service."""
        result = subprocess.run(
            ['sudo', 'systemctl', 'reload', self.HAPROXY_SERVICE],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        if result.returncode == 0:
            logging.info("HAProxy service reloaded successfully.")
        else:
            logging.error("Failed to reload HAProxy service:\n" + result.stderr.decode())


def main():
    logging.basicConfig(filename='haproxy_watcher.log', level=logging.INFO,
                        format='%(asctime)s - %(message)s')
    path = '/etc/haproxy'
    event_handler = HaproxyConfigHandler()
    observer = Observer()
    observer.schedule(event_handler, path, recursive=False)
    observer.start()
    try:
        while True:
            # Sleep instead of busy-waiting so the loop doesn't burn a CPU core
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()


if __name__ == "__main__":
    main()
I've named it HAProxy Watcher since, well, as the name says, it watches for config changes. It's important to note that I'm using Python 3.6.8, and that the script depends on the third-party watchdog package (pip install watchdog).
To make this run as a Linux service, we need to actually create it. To do so, create a new file called haproxy_watcher.service in /etc/systemd/system/.
The contents of the file should be:
[Unit]
Description=HAProxy Configuration Watcher
[Service]
ExecStart=/usr/bin/python3 /etc/haproxy_watcher/watcher.py
Restart=always
User=root
[Install]
WantedBy=multi-user.target
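One thing worth knowing: systemd starts services with / as the working directory by default, so the relative haproxy_watcher.log path from the script resolves there. If you want the log to land next to the script instead, you can add a WorkingDirectory line to the [Service] section (the path below assumes the script location from my ExecStart):

```ini
[Service]
# Make relative paths (like the log file) resolve next to the script
WorkingDirectory=/etc/haproxy_watcher
```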
Once that has been done and configured according to your environment (mind the location of the script itself in ExecStart), the service is in place; we just need to make systemd pick it up and start it:
systemctl daemon-reload
systemctl start haproxy_watcher.service
If you want it to come back up after a reboot, also run systemctl enable haproxy_watcher.service.
And that should be it - the service writes to haproxy_watcher.log (relative to its working directory) whenever it validates or reloads HAProxy.
Till next time!