Load-balancing Microsoft Exchange with nginx+ – Part 5: Tidying up

In part 4 of this series I configured Microsoft Exchange to work with nginx+.

In this final part of the series I tidy up the loose ends so it can be put live.

Other articles in the series:

  1. Installing and configuring keepalived
  2. Installing nginx+
  3. Configuring nginx+ for Microsoft Exchange
  4. Configuring Microsoft Exchange
  5. Tidying up

The first task is to synchronise the nginx+ configs between the two VMs.  To do this we will use rsync over SSH.

Create a new user on both VMs to run the rsync copy.  Insert your own password as desired:

useradd -s /bin/bash -p $(echo mysecretpassword | openssl passwd -1 -stdin) sa_copyconf
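Note that useradd stores whatever -p is given verbatim, so the openssl passwd step is what actually produces the MD5-crypt hash. A quick sketch of how that hash behaves (the password and salt below are throwaway placeholders, not values from this build):

```shell
# openssl passwd -1 produces an MD5-crypt hash; with a fixed salt the
# output is deterministic, which is how login verification recomputes it
hash1=$(echo mysecretpassword | openssl passwd -1 -salt testsalt -stdin)
hash2=$(echo mysecretpassword | openssl passwd -1 -salt testsalt -stdin)
echo "$hash1"
[ "$hash1" = "$hash2" ] && echo "hash is reproducible"
```

Without -salt, openssl picks a random salt each run, so two invocations with the same password produce different (but equally valid) hashes.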

On HA1, log in as the user and create the SSH keys:

mkdir .ssh
chmod 700 .ssh
cd .ssh
ssh-keygen -t rsa -N '' -b 2048

Accept the default file name for the private key. Add the public key to the list of authorized keys:

cat id_rsa.pub > authorized_keys2
chmod 644 authorized_keys2

Copy the public key over to HA2:

cat id_rsa.pub | ssh ha2.mail.mdb-lab.com "mkdir .ssh && chmod 700 .ssh && cat > .ssh/authorized_keys2"

On HA2, log in as sa_copyconf and set the permissions on /home/sa_copyconf/.ssh/authorized_keys2:

chmod 644 /home/sa_copyconf/.ssh/authorized_keys2

Also on HA2, copy across the id_rsa file from HA1 and place in .ssh:

sftp ha1.mail.mdb-lab.com:.ssh/id_rsa .ssh/id_rsa

On each VM, add permission to /etc/nginx/ for sa_copyconf:

setfacl -m u:sa_copyconf:rwx /etc/nginx/

Next, install rsync (if it isn’t already):

yum install rsync -y --nogpgcheck

Create the following script on each host (replace the hostname as needed – on HA1, it should reference HA2 and vice-versa):

cat <<EOF> /home/sa_copyconf/copyconf.sh
#!/bin/bash
rsync -avuz -e ssh ha2.mail.mdb-lab.com:/etc/nginx/nginx.conf /etc/nginx
EOF

Make the script executable:

chmod +x /home/sa_copyconf/copyconf.sh

Add a cron job to run the script every five minutes:

crontab -l | { cat; echo "*/5 * * * * /home/sa_copyconf/copyconf.sh"; } | crontab -
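One caveat: the one-liner above appends unconditionally, so running it twice leaves duplicate entries. A slightly more careful sketch (same job line; the command -v guard simply makes it a no-op on hosts without cron installed) adds the entry only when it is missing:

```shell
# Install the cron job only if it isn't already present (safe to re-run)
job='*/5 * * * * /home/sa_copyconf/copyconf.sh'
if command -v crontab >/dev/null; then
    if ! crontab -l 2>/dev/null | grep -qF "$job"; then
        # keep any existing entries, then append ours exactly once
        ( crontab -l 2>/dev/null || true; echo "$job" ) | crontab -
    fi
fi
```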

To test, delete the config on HA2:

rm -f /etc/nginx/nginx.conf

Wait for the next cron run (up to five minutes) and the config should reappear on HA2. To check this:

diff /etc/nginx/nginx.conf <(ssh ha1.mail.mdb-lab.com 'cat /etc/nginx/nginx.conf')
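The same check can be scripted so drift is flagged automatically. A sketch using cmp, which exits non-zero when the files differ (hostname as above):

```shell
# Compare the local config against HA1's copy; cmp -s is silent and
# returns non-zero when the two inputs differ
if cmp -s /etc/nginx/nginx.conf <(ssh ha1.mail.mdb-lab.com 'cat /etc/nginx/nginx.conf'); then
    echo "configs in sync"
else
    echo "configs differ (or HA1 unreachable)"
fi
```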

Next, restrict VRRP (the protocol keepalived uses) to the IPs of the two hosts. On HA1:

iptables -D INPUT -p 112 -j ACCEPT
iptables -I INPUT -p 112 -s 172.17.80.12 -j ACCEPT
service iptables save

On HA2:

iptables -D INPUT -p 112 -j ACCEPT
iptables -I INPUT -p 112 -s 172.17.80.11 -j ACCEPT
service iptables save

Test this by pausing the VM currently owning the cluster addresses and verifying they have transferred.

Finally, SELinux needs to be modified so nginx can run.  To see the problem first-hand, put SELinux back into enforcing mode:

setenforce 1

Then restart the nginx service:

service nginx restart

You will get the following error:

nginx: [emerg] bind() to 172.17.80.13:135 failed (13: Permission denied)
nginx: configuration file /etc/nginx/nginx.conf test failed

This is because, with SELinux enforcing, nginx is not permitted to bind to tcp/25, tcp/135 and tcp/139. To build and load a local policy module that allows this:

grep nginx /var/log/audit/audit.log | audit2allow -m nginx > nginx.te
grep nginx /var/log/audit/audit.log | audit2allow -M nginx
semodule -i nginx.pp

To test, restart the service again:

service nginx restart

nginx should now start without issue.  To make SELinux enforcing persist across reboots, run the following as root on each VM:

sudo sed -i "/SELINUX=permissive/c\SELINUX=enforcing" /etc/selinux/config

I would like to thank the technical guys at Nginx for help with the SELinux component.  More information regarding this can be found on their blog at http://nginx.com/blog/nginx-se-linux-changes-upgrading-rhel-6-6/.

Nick Shadrin at Nginx has also put together a comprehensive Exchange configuration guide on their site.  I highly recommend checking it out – http://nginx.com/blog/load-balancing-microsoft-exchange-nginx-plus-r6/.

Now that mainstream support for Microsoft Threat Management Gateway 2010 has ended (extended support is available till 14 April 2020), there is an opportunity to leverage technologies such as nginx+ to load-balance and publish Microsoft Exchange 2013 externally when the time comes.  If I get the chance, I'll be sure to document it!

In this article we have provided a method of syncing the configs, tightened security and re-enabled SELinux.

That completes the series on how to configure nginx+ to load-balance Microsoft Exchange.

Load-balancing Microsoft Exchange with nginx+ – Part 4: Configuring Microsoft Exchange

In part 3 of this series I configured nginx+ to support Microsoft Exchange.

In this part, I configure Microsoft Exchange 2010/13.

Other articles in the series:

  1. Installing and configuring keepalived
  2. Installing nginx+
  3. Configuring nginx+ for Microsoft Exchange
  4. Configuring Microsoft Exchange
  5. Tidying up

The Exchange environment consists of the following:

  1. 4 sites (2 in Amsterdam, 1 in London, 1 DR (Southport, UK))
  2. 2 Windows 2008 R2 domain controllers (core) (1 in Amsterdam, 1 in London)
  3. 11 Exchange 2010 SP3 RU9 servers
  4. 3 client access servers (2 in Amsterdam, 1 in London)
  5. 3 hub transport servers (2 in Amsterdam, 1 in London)
  6. 5 mailbox servers (3 in Amsterdam, 2 in London)
  7. 2 Forefront Threat Management Gateway 2010 servers (1 in Amsterdam, 1 in London)
  8. 1 Windows 2008 R2 landing pad (for administration)

Background information

The Exchange solution I have designed is based on the concept of a production and resource domain.  All user accounts are hosted in the production domains (nl.mdb-lab.com and uk.mdb-lab.com), and all Exchange-related objects reside in the resource domain (mail.mdb-lab.com).  A trust exists between the two forests, and accounts are linked to mailboxes.

Whilst there are many advantages to this design, it does add extra complexity and there are simpler ways to bring Exchange to the organisation.

The first disadvantage is the name I chose for the resource domain.  Ideally I wanted to use a consistent name across the estate for all services – mail.mdb-lab.com.  Unfortunately the DNS stub zone created to support the forest trust won't allow this – any request for mail.mdb-lab.com will also return the IP addresses of the two domain controllers in the resource domain.  The only way around this is to configure internal hosts to use outlook.mail.mdb-lab.com and reserve mail.mdb-lab.com for external clients.  In hindsight I wish I had named the domain exchange2010.mdb-lab.com.

At first the aim is to load-balance Exchange front-end traffic for users in Amsterdam for both Outlook Web App and the Outlook client. Exchange ActiveSync will also benefit from this additional layer of redundancy, along with using TMG to publish this to external users.

First, create an A record in DNS to point to the load-balanced address:

dnscmd dc1.mail.mdb-lab.com /RecordAdd mail.mdb-lab.com outlook A 172.17.80.13

For inbound SMTP from the internet, mail will come from the Exchange 2010 Edge server in the DMZ.  However, if you want to take advantage of the load-balanced address for sending email internally, another DNS entry is needed:

dnscmd dc1.mail.mdb-lab.com /RecordAdd mail.mdb-lab.com smtp A 172.17.80.13

Using the Exchange Management Shell, create a new client access array on your Exchange server:

New-ClientAccessArray -Name "outlook.mail.mdb-lab.com" -fqdn "outlook.mail.mdb-lab.com" -site Amsterdam

Configure the RpcClientAccessServer attribute on the mailbox database:

Set-MailboxDatabase DB1 -RpcClientAccessServer "outlook.mail.mdb-lab.com"

You can check this by using:

Get-MailboxDatabase | select name,rpcclientaccessserver | ft -auto

If done correctly, each database should list outlook.mail.mdb-lab.com under RpcClientAccessServer.
When the Outlook client communicates with the Client Access Servers, it does so by first connecting to the RPC Endpoint Mapper on tcp/135.  After that, it is allocated a port from the dynamic RPC port range (6005-59530).  For load balancing to work, we need to restrict this to as few ports as possible.

We do this by setting the ports in the registry for the Exchange RPC and Address Book services.

Create the following registry keys on each CAS in the site using:

reg add HKLM\SYSTEM\CurrentControlSet\services\MSExchangeAB\Parameters /v RpcTcpPort /t REG_SZ /d 60001
reg add HKLM\SYSTEM\CurrentControlSet\services\MSExchangeRPC\ParametersSystem /v "TCP/IP Port" /t REG_DWORD /d 60000

Reboot each CAS and verify the ports are in place using Netstat:

netstat -an -p tcp | find "60000"
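You can run an equivalent check from the nginx+ VMs using bash's built-in /dev/tcp pseudo-device. A sketch (172.17.80.21 is one of the lab CAS addresses; substitute your own):

```shell
# Attempt a TCP connect to each static RPC port; /dev/tcp is provided
# by bash itself, so no extra tools are needed on the VM
probe() {
    timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null \
        && echo "tcp/$2 open" || echo "tcp/$2 closed"
}
for port in 135 60000 60001; do
    probe 172.17.80.21 "$port"
done
```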

Finally, configure Outlook and connect to Exchange.  The connection status box should show a connection to the RPC port configured previously:

Outlook connectivity

That’s it for the Exchange configuration.  In part 5 I tidy up a few things before the solution can be put live.

Load-balancing Microsoft Exchange with nginx+ – Part 3: Configuring nginx+ for Microsoft Exchange

In part 2 of this series I installed nginx+ on both HA1 and HA2.

In this part, I configure nginx+ to support Microsoft Exchange 2010/13.

Other articles in the series:

  1. Installing and configuring keepalived
  2. Installing nginx+
  3. Configuring nginx+ for Microsoft Exchange
  4. Configuring Microsoft Exchange
  5. Tidying up

First, find your Exchange front-end SSL certificate and its serial number:

certutil -store my

Export the certificate (along with the private key) so it can be imported onto the nginx+ VMs:

certutil -exportpfx -p "password" -privatekey serialnumber mail.mdb-lab.com.pfx

Copy the PFX file to HA1 and HA2. Check the file came across okay:

openssl pkcs12 -info -in mail.mdb-lab.com.pfx

Extract the key and certificates (you will be asked for the password you specified in the preceding step):

openssl pkcs12 -in mail.mdb-lab.com.pfx -nocerts -nodes -out mail.mdb-lab.com.key.enc
openssl pkcs12 -in mail.mdb-lab.com.pfx -clcerts -nokeys -out mail.mdb-lab.com.cer
openssl pkcs12 -in mail.mdb-lab.com.pfx -out cacerts.crt -nodes -nokeys -cacerts

The first command extracts the private key, the second the certificate, and the third the CA certificate(s).  Next, strip the passphrase from the private key so nginx+ can read it without prompting:

openssl rsa -in mail.mdb-lab.com.key.enc -out mail.mdb-lab.com.key

Check the private key is correct:

openssl rsa -in mail.mdb-lab.com.key -check
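It is also worth confirming that the extracted key actually belongs to the certificate. Comparing the RSA modulus of each file is the usual way, sketched here with the filenames from above:

```shell
# A certificate and its private key share the same RSA modulus;
# hash both and compare
cert_mod=$(openssl x509 -noout -modulus -in mail.mdb-lab.com.cer | openssl md5)
key_mod=$(openssl rsa -noout -modulus -in mail.mdb-lab.com.key | openssl md5)
if [ "$cert_mod" = "$key_mod" ]; then
    echo "certificate and key match"
else
    echo "MISMATCH: wrong key for this certificate"
fi
```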

Move the certificate, private key and CA certificates to /etc/nginx/ssl/

rm -f mail.mdb-lab.com.key.enc
rm -f mail.mdb-lab.com.pfx
mkdir -p /etc/nginx/ssl
mv -f mail.mdb-lab.com.* /etc/nginx/ssl/

Edit /etc/nginx/nginx.conf and make sure the following global settings are in place:

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log info;
pid /var/run/nginx.pid;
events {
     worker_connections 1024;
}

Add the following lines to the http block in /etc/nginx/nginx.conf, replacing values for your CAS servers where necessary:

http {
     log_format main '$remote_addr - $remote_user [$time_local] '
          '"$request" $status $body_bytes_sent '
          '"$http_user_agent" "$upstream_addr"';
     #set the log
     access_log /var/log/nginx/access.log main;
     keepalive_timeout 3h;
     proxy_read_timeout 3h;
     tcp_nodelay on;

     upstream exchange {
          zone exchange-general 64k;
          server 172.17.80.21:443; # Replace with the IP address of your CAS
          server 172.17.80.22:443; # Replace with the IP address of your CAS
          sticky learn create=$remote_addr lookup=$remote_addr
                    zone=client_sessions:10m timeout=3h;
     }

     server {
          # redirect to HTTPS
          listen 80;
          location / {
               return 301 https://$host$request_uri;
               }
     }

     server {
          listen 443 ssl;
          client_max_body_size 2G;
          ssl_certificate /etc/nginx/ssl/mail.mdb-lab.com.cer;
          ssl_certificate_key /etc/nginx/ssl/mail.mdb-lab.com.key;
          ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
          status_zone exchange-combined;

          # redirect from main page to /owa/
          location = / {
               return 301 "/owa/";
          }

          location = /favicon.ico {
               empty_gif;
               access_log off;
          }

          location / {
               proxy_pass https://exchange;
               proxy_buffering off;
               proxy_http_version 1.1;
               proxy_request_buffering off;
               proxy_set_header Connection "Keep-Alive";
          }
     }
}

Add the stream block to /etc/nginx/nginx.conf also:

stream {

     upstream exchange-smtp {
          zone exchange-smtp 64k;
          server 172.17.80.31:25; # Replace with the IP address of your Hub Transport
          server 172.17.80.32:25; # Replace with the IP address of your Hub Transport
     }

     upstream exchange-smtp-ssl {
          zone exchange-smtp-ssl 64k;
          server 172.17.80.31:465; # Replace with the IP address of your Hub Transport
          server 172.17.80.32:465; # Replace with the IP address of your Hub Transport
     }

     upstream exchange-smtp-submission {
          zone exchange-smtp-submission 64k;
          server 172.17.80.31:587; # Replace with the IP address of your Hub Transport
          server 172.17.80.32:587; # Replace with the IP address of your Hub Transport
     }

     upstream exchange-imaps {
          zone exchange-imaps 64k;
          server 172.17.80.21:993; # Replace with the IP address of your CAS
          server 172.17.80.22:993; # Replace with the IP address of your CAS
     }

     server {
          listen 25; #SMTP
          status_zone exchange-smtp;
          proxy_pass exchange-smtp;
     }

     server {
          listen 465; #SMTP SSL
          status_zone exchange-smtp-ssl;
          proxy_pass exchange-smtp-ssl;
     }

     server {
          listen 587; #SMTP submission
          status_zone exchange-smtp-submission;
          proxy_pass exchange-smtp-submission;
     }

     server {
          listen 993; #IMAPS
          status_zone exchange-imaps;
          proxy_pass exchange-imaps;
     }
}

Test the configuration before putting it live:

nginx -t

If everything is correct, nginx will report that the syntax is ok and the configuration file test is successful.

Modify iptables to allow traffic through the host firewall:

for i in {25,80,135,139,443,465,587,993,60000,60001}; do iptables -I INPUT -p tcp --dport $i -m state --state NEW,ESTABLISHED -j ACCEPT; done

Save the new iptables rulebase:

service iptables save

To get nginx+ running we need to disable SELinux temporarily:

setenforce 0

Edit /etc/selinux/config:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Change SELINUX=enforcing to SELINUX=permissive

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Start the service:

service nginx start

Make sure the config is the same on both HA1 and HA2. In part 5 I’ll configure rsync to ensure the configs are kept in sync.

That’s it for configuring nginx+.  In part 4 I’ll configure Exchange to support our nginx+ configuration.

Load-balancing Microsoft Exchange with nginx+ – Part 2: Installing nginx+

In part 1 of this series I installed and configured keepalived in preparation for installing nginx+.

In this part, I install nginx+.

Other articles in the series:

  1. Installing and configuring keepalived
  2. Installing nginx+
  3. Configuring nginx+ for Microsoft Exchange
  4. Configuring Microsoft Exchange
  5. Tidying up

On each VM, create the /etc/ssl/nginx directory:

mkdir -p /etc/ssl/nginx

Download CA.crt to /etc/ssl/nginx:

wget https://cs.nginx.com/static/files/CA.crt -P /etc/ssl/nginx

As part of your nginx+ trial or when you bought the software, you will have been provided a link to nginx-repo.key and nginx-repo.crt. Download these and place them in /etc/ssl/nginx:

mv nginx-repo.key nginx-repo.crt /etc/ssl/nginx

Next, create a yum repository to install nginx+:

wget https://cs.nginx.com/static/files/nginx-plus-6.repo -P /etc/yum.repos.d

Then use yum to install:

yum install -y nginx-plus

If there is any issue with CA.crt (i.e. it is missing or the permissions are not set correctly) then yum will not install the software.  The same goes for nginx-repo.crt and nginx-repo.key.

Finally, enable the service on both VMs:

chkconfig nginx on

That’s all there is to the nginx+ installation.

In part 3, I configure nginx+ to load-balance the Microsoft Exchange environment.

Load-balancing Microsoft Exchange with nginx+ – Part 1: keepalived

A couple of weeks ago my colleagues and I came to the conclusion that a client's Microsoft Exchange platform was in need of some load-balancing.

Normally we achieve this by installing a pair of hardware load-balancers from F5.  Whilst these are excellent products and are well supported in our company, they’re certainly not cheap.  Unfortunately, one size definitely does not fit all with our customers.  Some demand the performance of the Bugatti Veyron, others only require the reliability of a Toyota Corolla.

With that in mind we decided to look at other options.

I’ve been load-balancing Exchange and VMware View for a while here in the lab using keepalived and HAProxy.  However, with our company looking at making the move to Softlayer, it was suggested this would be a good time to look at a product they support – nginx+.

Before I can get to that, I need to install and configure keepalived to support my nginx+ installation.

Other articles in the series:

  1. Installing and configuring keepalived
  2. Installing nginx+
  3. Configuring nginx+ for Microsoft Exchange
  4. Configuring Microsoft Exchange
  5. Tidying up

First, I spun up two RHEL 6.6 VMs in my lab.  Each consisted of 1 vCPU, 1 GB of RAM and a 16 GB thin-provisioned disk.  They were then patched using Spacewalk.

The IP addresses for both boxes were set to 172.17.80.11/24 and 172.17.80.12/24 and each vmnic was placed in VLAN80.

Then for both boxes, I added the following lines to /etc/sysctl.conf:

net.ipv4.ip_nonlocal_bind=1
net.ipv4.ip_forward=1

And made them take effect:

sysctl -p

Next I acquired the RPM for keepalived.  At the time of writing, v1.2.17 is the latest version available.  You can download a source tarball from the keepalived site, but for the sake of ease I decided to get the RPM from rpmfind.net.  Unfortunately the latest they have for RHEL/CentOS6 is 1.2.13, which for the lab is close enough.

Please note: for deploying in a production environment I would highly recommend obtaining the latest version direct from Red Hat’s Enterprise Load Balancer add-on.

Next I installed keepalived on both boxes:

yum localinstall -y --nogpgcheck keepalived-1.2.13-4.el6.x86_64.rpm

I then copied the default config (just in case):

cd /etc/keepalived
cp keepalived.conf keepalived.conf.old

Next I edited /etc/keepalived/keepalived.conf on the first host (HA1) and added:

! Configuration File for keepalived

global_defs {
   notification_email {
     admin@mdb-lab.com
   }
   notification_email_from keepalived@mdb-lab.com
   smtp_server 172.17.80.31
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_sync_group VG1 {
   group {
      V1
      V2
   }
}

vrrp_instance V1 {
    state MASTER
    interface eth0
    virtual_router_id 10
    priority 101
    advert_int 1
    virtual_ipaddress {
        172.17.80.13
    }
}

vrrp_instance V2 {
    state MASTER
    interface eth0
    virtual_router_id 11
    priority 101
    advert_int 1
    virtual_ipaddress {
        172.17.80.113
    }
}

The config for HA2 is nearly identical, except for two lines. The state should be BACKUP and the priority should be lower at 100:

! Configuration File for keepalived

global_defs {
   notification_email {
     admin@mdb-lab.com
   }
   notification_email_from keepalived@mdb-lab.com
   smtp_server 172.17.80.31
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_sync_group VG1 {
   group {
      V1
      V2
   }
}

vrrp_instance V1 {
    state BACKUP
    interface eth0
    virtual_router_id 10
    priority 100
    advert_int 1
    virtual_ipaddress {
        172.17.80.13
    }
}

vrrp_instance V2 {
    state BACKUP
    interface eth0
    virtual_router_id 11
    priority 100
    advert_int 1
    virtual_ipaddress {
        172.17.80.113
    }
}
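Since only those two lines differ, HA2's file can also be derived from HA1's mechanically. A sketch, using a stand-in fragment in place of the full config:

```shell
# Stand-in fragment representing HA1's keepalived config
tmp=$(mktemp -d)
cat > "$tmp/keepalived.conf.ha1" <<'EOF'
vrrp_instance V1 {
    state MASTER
    priority 101
}
EOF

# Derive HA2's config: demote the instance to BACKUP and
# drop the priority from 101 to 100
sed -e 's/state MASTER/state BACKUP/' \
    -e 's/priority 101/priority 100/' \
    "$tmp/keepalived.conf.ha1" > "$tmp/keepalived.conf.ha2"

# Show the two changed lines
grep -E 'state|priority' "$tmp/keepalived.conf.ha2"
```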

Next, I configured iptables to allow VRRP communication:

iptables -I INPUT -p 112 -j ACCEPT

I also added a few other iptables rules to allow things like ICMP.

Finally, I enabled the service on both VMs and rebooted:

chkconfig keepalived on
reboot

After I rebooted the hosts I checked to see which had the cluster addresses of 172.17.80.13 and 172.17.80.113:

ip addr sh

On HA1 this gave me:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:b2:44:ac brd ff:ff:ff:ff:ff:ff
    inet 172.17.80.11/24 brd 172.17.80.255 scope global eth0
    inet 172.17.80.13/32 scope global eth0
    inet 172.17.80.113/32 scope global eth0
    inet6 fe80::250:56ff:feb2:44ac/64 scope link
       valid_lft forever preferred_lft forever

To test failover, I set up a looping ping from another host on VLAN80 to each cluster address and then suspended the HA1 VM.  Each cluster IP failed over immediately to HA2, dropping only one ping in the process:

Ping

And that completes the keepalived installation and configuration.

In part 2, I install nginx+ on both VMs, before finally configuring it for Microsoft Exchange.