UPDATE (March 3, 2026): Added BGP AS-Path Prepending section to resolve traffic flapping issues when both load balancers are active. This ensures a deterministic Active/Passive failover while maintaining BGP readiness.
I've been running two HAProxy boxes with Keepalived for years. Classic active/standby setup: one does the work, one sits there waiting for disaster. Works fine, but it always bugged me that half my capacity was just... idle.
Last week I finally dug into better options. Turns out you can do proper BGP anycast with a UniFi UDM Pro Max. No expensive Cisco gear needed. Here's how I got both HAProxy instances handling traffic simultaneously.

Where I Started
The original setup was dead simple:
Internet → VIP (Keepalived) → HAProxy01 (active)
→ HAProxy02 (sitting idle)
Keepalived does its job well. HAProxy01 goes down, the VIP floats to HAProxy02 in a couple of seconds. But here's the thing: HAProxy02 has zero clue what sessions existed on HAProxy01. Users get kicked out, sticky sessions break. Not great.
First Fix: Sync Those Sessions
Before going crazy with BGP, there's a quick win. HAProxy can sync stick-tables between instances. When failover happens, the backup already knows about existing sessions.
Add this to both configs:
peers haproxy_cluster
peer haproxy01 10.0.0.10:10000
peer haproxy02 10.0.0.11:10000
Then hook it into your backends:
backend myapp_backend
balance roundrobin
stick-table type ip size 200k expire 30m peers haproxy_cluster
stick on src
server app01 10.0.0.10:8080 check
server app02 10.0.0.11:8080 check
Now sessions survive failover. Users don't get logged out. This alone made a huge difference.
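To confirm the sync is actually happening, you can compare stick-table contents on both nodes over the admin socket. A rough sketch (socket path and backend name match the config above; adjust to your setup):

```shell
#!/bin/sh
# Count stick-table entries for a backend via HAProxy's runtime API.
# `show table <name>` prints a header line plus one entry per line
# starting with a 0x address.
table_entries() {
    echo "show table $1" | socat stdio /run/haproxy/admin.sock | grep -c '^0x' || true
}

# Run on each node after some traffic has flowed; the counts should agree:
#   table_entries myapp_backend
# If the backup stays at 0 while the primary has entries, check the peer
# connection with:  echo "show peers" | socat stdio /run/haproxy/admin.sock
```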
Keeping Configs in Sync
Two HAProxy boxes means two configs to maintain. I got tired of SSH'ing into both machines every time I changed something. HAProxy Data Plane API fixes this. Push changes via REST:
# Grab current config
curl -u admin:password "http://haproxy01:5555/v3/services/haproxy/configuration/raw"
# Push updated config
curl -u admin:password -X POST \
"http://haproxy01:5555/v3/services/haproxy/configuration/raw?version=1" \
-H "Content-Type: text/plain" \
--data-binary @haproxy.cfg
I wrote a quick script that pushes to both nodes. Config drift problem solved.
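The script is just a loop over both nodes. Something like this sketch (hostnames, port, and credentials are placeholders for my setup, and I'm assuming the Data Plane API exposes the current config version at `/configuration/version` — check the docs for your API version):

```shell
#!/bin/sh
# push-config.sh -- push one haproxy.cfg to every node in NODES.
set -eu

NODES="${NODES:-haproxy01 haproxy02}"   # placeholder hostnames
DPAPI_PORT=5555
AUTH="admin:password"                   # use real credentials management

push_config() {
    node="$1"; cfg="$2"
    # The raw-config endpoint requires the current version as a query param,
    # so fetch it first instead of hardcoding ?version=1.
    version=$(curl -su "$AUTH" "http://$node:$DPAPI_PORT/v3/services/haproxy/configuration/version")
    curl -su "$AUTH" -X POST \
        "http://$node:$DPAPI_PORT/v3/services/haproxy/configuration/raw?version=$version" \
        -H "Content-Type: text/plain" \
        --data-binary @"$cfg"
}

# Usage: ./push-config.sh haproxy.cfg
if [ "$#" -eq 1 ]; then
    for n in $NODES; do
        echo "Pushing $1 to $n"
        push_config "$n" "$1"
    done
fi
```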
The Real Upgrade: Both Boxes Working
Active/standby is fine, but why waste half your hardware? The goal was getting both HAProxy instances handling traffic at the same time.
Option 1: Two VIPs, DNS Round-Robin
VIP1 (10.0.0.100) → HAProxy01 (primary)
VIP2 (10.0.0.101) → HAProxy02 (primary)
DNS: lb.example.com → both IPs
Each HAProxy owns one VIP, backs up the other. DNS returns both, clients pick randomly. If one node dies, its VIP moves over. Works, but you're depending on DNS TTLs for failover.
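In zone-file terms that's just two A records (BIND syntax; the 60-second TTL is an illustrative trade-off between failover speed and query load):

```
; lb.example.com resolves to both VIPs; clients pick one at random
lb.example.com.  60  IN  A  10.0.0.100
lb.example.com.  60  IN  A  10.0.0.101
```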
Option 2: BGP Anycast on UDM Pro Max
This is what I actually went with. Both HAProxy boxes announce the same IP to the router. The router (my UDM Pro Max) sees two paths and load-balances between them.
I figured this needed fancy network gear. Nope. Turns out UniFi added BGP support in UniFi OS 4.1.13. If you've got a UDM Pro Max, UDM Pro, UDM-SE, or UXG-Enterprise, you can do this right now.

How It Looks
                ┌─────────────────────┐
                │     UDM Pro Max     │
                │      AS 65000       │
                │      10.0.0.1       │
                └──────────┬──────────┘
                           │
           BGP (eBGP)      │      ECMP Load Balancing
                           │
                ┌──────────┴──────────┐
                │                     │
        ┌───────▼───────┐     ┌───────▼───────┐
        │   HAProxy01   │     │   HAProxy02   │
        │   AS 65010    │     │   AS 65010    │
        │      .10      │     │      .11      │
        └───────┬───────┘     └───────┬───────┘
                │                     │
                └──────────┬──────────┘
                           │
                     Anycast VIP
                     10.0.0.100
Both HAProxy nodes announce 10.0.0.100. The UDM sees two equal paths and splits traffic between them. If one node goes down, BGP withdraws its route and traffic flows to the survivor. No DNS delays.
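One gap worth closing: bgpd keeps announcing the VIP as long as the /32 exists on lo, even if the haproxy process dies while the box stays up. A small watchdog (my own sketch, not part of the stock setup; the polling mechanism and service name are assumptions) can pull the VIP when HAProxy is unhealthy, which makes FRR withdraw the route — recent FRR versions enable `bgp network import-check` by default, so only networks present in the local RIB are advertised:

```shell
#!/bin/sh
# Watchdog sketch: withdraw the anycast VIP when HAProxy is down.
VIP="10.0.0.100/32"

# Map HAProxy's service state (0 = running) to an action.
decide() {
    if [ "$1" -eq 0 ]; then echo announce; else echo withdraw; fi
}

apply() {
    case "$1" in
        announce) ip addr replace "$VIP" dev lo ;;               # idempotent add
        withdraw) ip addr del "$VIP" dev lo 2>/dev/null || true ;;
    esac
}

# Run every few seconds as root, e.g. from a systemd timer:
#   systemctl is-active --quiet haproxy; apply "$(decide $?)"
```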
Pro Tip: Fixing Traffic Flapping with AS-Path Prepend
Anycast active-active sounds great, but some stateful applications (workflow engines, streaming, documentation portals) can suffer if the UDM Pro Max moves existing flows between paths mid-session, for example when a BGP session bounces and ECMP rehashes. This is the traffic flapping mentioned in the update above.
The fix is to keep both BGP sessions up but make one HAProxy less attractive to the router. HAProxy01 becomes the primary, and HAProxy02 the hot standby.
On your backup HAProxy, update your /etc/frr/frr.conf:
router bgp 65010
neighbor 10.0.0.1 route-map PREPEND out
!
route-map PREPEND permit 10
set as-path prepend 65010 65010 65010
This makes the path via HAProxy02 look much "longer" to the UDM, forcing it to use HAProxy01 as long as it's available. If 01 fails, the route is withdrawn and 02 takes over in seconds, with zero flapping.
Setting It Up
Step 1: FRRouting on the HAProxy Boxes
# On both HAProxy servers
apt update && apt install -y frr frr-pythontools
# Turn on BGP
sed -i 's/bgpd=no/bgpd=yes/' /etc/frr/daemons
systemctl restart frr
Step 2: FRR Config (HAProxy01)
# /etc/frr/frr.conf
frr version 8.5
frr defaults traditional
hostname haproxy01
!
router bgp 65010
bgp router-id 10.0.0.10
no bgp ebgp-requires-policy
!
neighbor 10.0.0.1 remote-as 65000
neighbor 10.0.0.1 description UDM-Pro-Max
!
address-family ipv4 unicast
network 10.0.0.100/32
neighbor 10.0.0.1 activate
neighbor 10.0.0.1 soft-reconfiguration inbound
exit-address-family
!
HAProxy02's config is identical; just change the hostname and the router-id to 10.0.0.11.
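For reference, here's roughly what the full file on HAProxy02 looks like with the AS-path prepend from the Pro Tip folded in (drop the route-map lines if you prefer true active-active):

```
# /etc/frr/frr.conf (HAProxy02, with optional prepend)
frr version 8.5
frr defaults traditional
hostname haproxy02
!
router bgp 65010
 bgp router-id 10.0.0.11
 no bgp ebgp-requires-policy
 !
 neighbor 10.0.0.1 remote-as 65000
 neighbor 10.0.0.1 description UDM-Pro-Max
 neighbor 10.0.0.1 route-map PREPEND out
 !
 address-family ipv4 unicast
  network 10.0.0.100/32
  neighbor 10.0.0.1 activate
  neighbor 10.0.0.1 soft-reconfiguration inbound
 exit-address-family
!
route-map PREPEND permit 10
 set as-path prepend 65010 65010 65010
```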
Step 3: Add the VIP to Loopback
The anycast IP needs to exist on both boxes:
# /etc/network/interfaces.d/anycast
auto lo:0
iface lo:0 inet static
address 10.0.0.100/32
# Bring it up
ifup lo:0
Step 4: UDM Pro Max BGP Config
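One gotcha when the anycast /32 sits inside the same L2 subnet as the nodes (10.0.0.0/24 here): by default Linux answers ARP for any local address on any interface, so both boxes can end up replying for 10.0.0.100 directly and bypassing the UDM's ECMP. Tightening the ARP sysctls is the usual anycast-on-a-LAN fix; my understanding of the standard settings, so test in your environment:

```
# /etc/sysctl.d/99-anycast.conf
net.ipv4.conf.all.arp_ignore = 1     # only answer ARP if the target IP is on the receiving interface
net.ipv4.conf.all.arp_announce = 2   # use the interface's own address as ARP source
```

Apply with `sysctl --system`.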
Configure neighbors via UniFi Network → Settings → Routing → BGP:
router bgp 65000
bgp router-id 10.0.0.1
!
neighbor 10.0.0.10 remote-as 65010
neighbor 10.0.0.10 description HAProxy01
!
neighbor 10.0.0.11 remote-as 65010
neighbor 10.0.0.11 description HAProxy02
!
address-family ipv4 unicast
neighbor 10.0.0.10 activate
neighbor 10.0.0.10 soft-reconfiguration inbound
neighbor 10.0.0.11 activate
neighbor 10.0.0.11 soft-reconfiguration inbound
maximum-paths 2
exit-address-family
!
Step 5: Open the Firewall
BGP runs on TCP 179. In UniFi Network, add a firewall rule:
- Type: LAN In
- Source: 10.0.0.10, 10.0.0.11
- Destination: Gateway
- Port: TCP 179
- Action: Allow
Did It Work?
Check BGP status on the HAProxy boxes:
# Should show Established
sudo vtysh -c "show ip bgp summary"
# Check what you're advertising
sudo vtysh -c "show ip bgp neighbors 10.0.0.1 advertised-routes"
On the UDM Pro Max (SSH in):
# Should show two nexthops for the VIP
ip route show 10.0.0.100
# Expected output:
# 10.0.0.100 proto bgp
# nexthop via 10.0.0.10 weight 1
# nexthop via 10.0.0.11 weight 1
What I Run Now
- HAProxy Peers for session sync
- Data Plane API for config management
- BGP anycast via UDM Pro Max
Both boxes handle traffic. One dies, the other picks up everything automatically. Sessions survive because of peer sync. It's proper active-active without any DNS hacks or slow failovers.
Quick Reference
| Approach | Effort | When to Use |
|---|---|---|
| Keepalived only | Low | Simple setups, you don't mind idle hardware |
| Dual VIPs + DNS | Medium | Router doesn't support BGP |
| BGP Anycast | Medium | You want real active-active with fast failover |
Handy Commands
# HAProxy peer status
echo "show peers" | socat stdio /run/haproxy/admin.sock
# BGP summary
sudo vtysh -c "show ip bgp summary"
# What routes am I advertising?
sudo vtysh -c "show ip bgp neighbors 10.0.0.1 advertised-routes"
# Reload HAProxy without dropping connections
systemctl reload haproxy
Bonus: AI-Powered HAProxy Management
Here's something I didn't expect to love this much. My load balancers expose their configuration via REST endpoints. Paired with a local AI assistant, management is now done in plain English / French:
- "Drain the backend server for maintenance"
- "Show me the stats for all backends"
- "Add a new server to the cluster"
- "Block external access to the admin panel"
The AI translates my request into the right API calls, executes them, and confirms the result. No more digging through config files or remembering complex syntax.
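For the curious, here's a peek at what those API calls likely boil down to (backend and server names are from my config; the assistant could equally use the Data Plane API). HAProxy's runtime API handles drain/ready over the admin socket:

```shell
#!/bin/sh
# Drain a server (stop new sessions, let active ones finish) and
# put it back in rotation, via HAProxy's runtime API.
drain() { echo "set server $1 state drain" | socat stdio /run/haproxy/admin.sock; }
ready() { echo "set server $1 state ready" | socat stdio /run/haproxy/admin.sock; }

# drain myapp_backend/app01
# ready myapp_backend/app01
```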
Me: "Put the blog server in maintenance mode"
AI: Done. Drained node 01 in backends.
Active connections will finish, new requests
go to other servers. Want me to re-enable it later?
Links
- UniFi BGP docs: help.ui.com/hc/en-us/articles/16271338193559
- HAProxy Peers: haproxy.com/documentation
- FRRouting: frrouting.org