Implementing a VMware Virtual Desktop Infrastructure with Horizon View 6.2 – Part 7: Load-balancing

So far in the series we have installed our View Connection Servers and provisioned both desktops and applications. In this part we introduce load-balancing to ensure our requests for resources are equally distributed, and that in the event a Connection Server fails, we can still provision resources to our users.

Other posts in this series:

  1. Design
  2. Installing the Connection Servers and Composer
  3. Creating the templates
  4. Configuring the RDS hosts
  5. Pool configuration
  6. Application farm configuration
  7. Load-balancing
  8. Remote access

Current scenario

So far the View Connection Servers, Composer and RDSH servers have all been provisioned in the same subnet.

For users to connect to View, they enter the name of one of the Connection Servers. However, if one of these servers were to fail, there is no automatic mechanism to redirect users to another Connection Server.

If we refer to the design, in particular the requirements, we note that “the solution must be highly available with no single point of failure (SPoF)” (R2). Whilst other technologies such as vSphere HA, multiple NICs in each host and a highly available storage tier (in our case IBM XIV) are in place, currently there is no protection offered for the View Connection Servers.

To satisfy this requirement we must ensure that in the event of a Connection Server failure, users are automatically re-routed to an available server so they can still access their desktop or application.

In addition, it would be beneficial to ensure that requests are distributed evenly between the Connection Servers, preventing either one from becoming over-utilized.

One of the design constraints (see part 1, Constraints, C3) is that all load-balancing must use the existing F5 devices.

Please note: in the event you don’t have access to F5 load-balancers, NGINX offers a free load-balancer that can be used instead; a minimal example follows.
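
As a rough sketch only (outside the scope of this design), an NGINX stream configuration that passes HTTPS through to two Connection Servers could look like the one below. The IP addresses are the lab addresses used later in this post, and it assumes NGINX was built with the stream module; PCoIP (TCP/UDP 4172) and Blast (TCP 8443) would need their own listeners and are omitted here.

# Sketch: TCP pass-through of HTTPS to two View Connection Servers
stream {
    upstream view_connection_servers {
        server 172.17.95.51:443;
        server 172.17.95.52:443;
    }

    server {
        listen 443;
        proxy_pass view_connection_servers;
    }
}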

Currently the following VLANs are in place:

  • 60 – server subnet
  • 70 – client subnet
  • 90 – external load-balancer subnet

In addition to this we will make use of another VLAN:

  • 95 – internal load-balancer subnet

Physical design

Once the load-balancers have been configured and the View Connection Servers have been migrated to the new subnet, the physical design is as follows.

In the following scenario, two F5 devices have been configured with four interfaces each:

  • Management
  • HA
  • VLAN90
  • VLAN95

The Management interface is attached to VLAN60, whilst the HA interfaces are connected to each other using a crossover cable.

The following two self-IPs have been defined:

  • 172.17.90.3
  • 172.17.95.253

All hosts on VLAN95 are configured to use 172.17.95.253 as their default gateway. A forwarding (IP) virtual server called Outbound-VS has also been defined.
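
For reference, the equivalent tmsh commands are roughly as follows. This is a sketch only: the VLAN names, self-IP names and interface numbers are assumptions for this lab and will differ in your environment.

# Load-balancer VLANs (interface numbers are examples)
create net vlan VLAN90 tag 90 interfaces add { 1.1 { untagged } }
create net vlan VLAN95 tag 95 interfaces add { 1.2 { untagged } }

# Self-IPs on each VLAN
create net self selfip-vlan90 address 172.17.90.3/24 vlan VLAN90
create net self selfip-vlan95 address 172.17.95.253/24 vlan VLAN95 allow-service default

# Forwarding (IP) virtual server so hosts on VLAN95 can route outbound
create ltm virtual Outbound-VS destination 0.0.0.0:0 mask 0.0.0.0 ip-protocol any ip-forward vlans-enabled vlans add { VLAN95 }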

High Availability has also been configured between the two hosts.

Each device has been configured with an SSL private key and certificate. The root and issuing certificates from the internal Certificate Authority have also been imported.

The following load-balanced address in VLAN90 will be used:

  • 172.17.90.50

Migrate View Connection Servers

Move two of the View Connection Servers (view1 and view2) from VLAN60 to VLAN95 and give them the following IPs (a command-line sketch for re-addressing them follows the list):

  • IP: 172.17.95.51 & 52
  • Subnet mask: 255.255.255.0
  • Default gateway: 172.17.95.253
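
For example, from an elevated command prompt on each server (via the VM console, as network connectivity will drop when the address changes) the re-addressing can be done with netsh. The interface name "Ethernet0" is an assumption; substitute your own.

rem view1
netsh interface ipv4 set address name="Ethernet0" static 172.17.95.51 255.255.255.0 172.17.95.253

rem view2
netsh interface ipv4 set address name="Ethernet0" static 172.17.95.52 255.255.255.0 172.17.95.253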

Configure DNS

Configure an A record on your internal DNS servers (substitute the zone name, host name and IP address accordingly):

dnscmd . /RecordAdd nl.mdb-lab.com vdi A 172.17.90.50

This will create a host record for vdi.nl.mdb-lab.com that will point to the F5 load-balanced IP of 172.17.90.50.
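
Once the record has replicated, you can verify it resolves before moving on, for example:

nslookup vdi.nl.mdb-lab.com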

Configure an SSL certificate

Please note: the following assumes you have an internal certificate authority already in place with a template called “VMwareSSL”. Please see the section “SSL certificates” in part 2 for more information.

Create the following file and save it as vdi.cfg (substitute the common name for your own environment):

[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
encrypt_key = no
prompt = no
string_mask = nombstr
req_extensions = v3_req

[ v3_req ]
basicConstraints = CA:false
keyUsage = digitalSignature, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth, clientAuth

[ req_distinguished_name ]
commonName = vdi.nl.mdb-lab.com

No subject alternative names (SANs) are needed.

Create a certificate signing request:

openssl req -new -nodes -out vdi.csr -keyout vdi.key -config vdi.cfg

Submit the CSR to the certificate authority to generate the certificate (substitute the issuing CA and certificate template name accordingly):

certreq -submit -config "issuingca.mdb-lab.com\mdb-lab.com Issuing CA" -attrib "CertificateTemplate:VMwareSSL" vdi.csr vdi.crt

Create the PFX file (cachain.pem contains the concatenated root and issuing CA certificates):

openssl pkcs12 -export -in vdi.crt -inkey vdi.key -certfile cachain.pem -passout pass:VMware1! -out vdi.p12
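
Optionally, list the certificates in the bundle before importing it, to confirm the chain is present (the password matches the one used above):

openssl pkcs12 -in vdi.p12 -passin pass:VMware1! -info -nokeys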

Import the p12 file to the F5 devices.

Configure the F5 load-balancers

Before beginning the F5 configuration, download the latest F5 iApp templates for VMware View from https://downloads.f5.com/esd/productlines.jsp. At the time of writing the template bundle version is 1.0.0.377, and the VMware View templates are version 1.3.0.

Log on to the F5 admin page.

On the left-hand side, click iApps followed by Templates.

Verify the correct templates have been installed.

On the left-hand side, click Application Services, followed by Create…


In the Name field, type the URL corresponding to the DNS entry you created above. From the Template drop-down box, select f5.vmware_view.v1.3.0.

Scroll down to the SSL Encryption section. From the How should the BIG-IP system handle encrypted traffic? drop-down box, select Terminate SSL for clients, plaintext to View servers (SSL offload).

From the Which SSL certificate do you want to use? box, select the certificate you imported previously. Select the corresponding private key below.

Under PCoIP, select Yes, PCoIP connections should go through the BIG-IP system from the Should PCoIP connections go through the BIG-IP system? drop-down box.

From the Will PCoIP connections be proxied by the View Security Servers? field, select No, PCoIP connections are not proxied by the View Security Servers.

Enter the address and subnet mask of the network where the VDI desktops will be provisioned. Leave the remaining fields at their defaults.

Under Virtual Servers and Pools, type the load-balanced address in the What virtual server IP address do you want to use for remote, untrusted clients? field.

In the What FQDN will clients use to access the View environment? field, type the fully qualified domain name corresponding to the DNS entry created previously.

Finally, enter the IP addresses of the two View Connection Servers that have been migrated to the internal F5 VLAN (95).


Click Finish. If everything has been entered correctly, the new application service will be created and listed under Application Services.

Testing

The simplest way to test the above is to shut down each of the View Connection Servers in turn and then use the View Client to connect to a desktop or application pool.

Launch the View Client and delete the existing connection. Create a new one using the load-balanced URL defined previously.

Initiate the connection.

If you can still connect and receive a desktop (or published application) despite a View Connection Server being offline, then the configuration works as intended.

Simply running a looping ping will not work, as the F5 virtual server will always respond to ping even if both Connection Servers are down.
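
A slightly more useful command-line check is an HTTPS request to the load-balanced name, since with SSL offload the request can only be answered while at least one Connection Server in the pool is available. A quick sketch from a client machine (substitute your own FQDN):

curl -vk https://vdi.nl.mdb-lab.com/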

Coming up

In this part we configured our two F5 load-balancers to distribute traffic between the two VMware View Connection Servers. We then tested that the configuration worked by shutting down each of the Connection Servers in turn and verifying connectivity was still in place.

In the final part of this series we enable remote users to access provisioned desktops and applications in a secure manner by using the VMware View Access Point.
