This might just be the shortest blog post in history, but I spent a disproportionate amount of time today getting this to work so I thought I’d share it (plus I’d know where to find it in future). In both production and the lab environments we use F5 LTMs to load-balance our VMware Horizon View traffic.
The initial configuration of these devices (either physical or virtual) is trivially easy. However, one step that isn’t well documented is allowing internal devices to communicate back through the F5s to other subnets.
In our shared environment, we configure our F5s with an external VLAN, and a separate internal VLAN for each load-balanced application (View, Exchange etc). In this scenario, clients would connect to a virtual server on the external VLAN (60 in the example below), which would distribute traffic to pool members on the inside.
In a VMware View example, the internal network would contain the Connection Servers. However, these servers still need access to other infrastructure (AD, vCenter), so they must be able to “escape” from this network.
To do this, we create a forwarding virtual server.
Configure an outbound Virtual Server
In the F5 LTM GUI, click Local Traffic, followed by Virtual Servers, and then finally Virtual Server List. On the right, click Create…
Fill in the Name field and change the Type to Forwarding (IP).
Set the Destination, ensuring you use 0.0.0.0/0 and not just 0.0.0.0 (this cost me about three hours today!). Change the Service Port to * All Ports:
Change the Protocol to * All Protocols and the Source Address Translation to Auto Map:
When done click Finished.
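If you prefer the CLI, the same forwarding virtual server can be sketched in tmsh. This is an example only, not a definitive configuration: the virtual server name (vs_outbound_forward) and the internal VLAN name (internal) are assumptions — substitute your own, and check the syntax against your TMOS version:

```shell
# Create an IP forwarding virtual server listening on all addresses,
# ports and protocols, with SNAT Auto Map for return traffic.
# Names "vs_outbound_forward" and VLAN "internal" are examples only.
tmsh create ltm virtual vs_outbound_forward \
    destination 0.0.0.0:any \
    mask any \
    ip-forward \
    ip-protocol any \
    source-address-translation { type automap } \
    vlans-enabled \
    vlans add { internal }

# Persist the change to the stored configuration.
tmsh save sys config
```

Restricting the virtual server to the internal VLAN (vlans-enabled) keeps the wildcard listener from catching traffic arriving on the external side.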
The above are examples only to get you up and running, and you may need to make adjustments to suit your environment.
Obviously forcing traffic back through the F5 isn’t the only solution – you could add another interface to each internal subnet that connects back to the firewall. However, in my experience that starts getting messy…