Building an advanced lab using VMware vRealize Automation – Part 4: Physical infrastructure – compute

In part 3 of this series on building a lab using VMware vRealize Automation we configured the physical networking to support our lab.  In this part we install and configure VMware vSphere 5.5 on our servers.

Before we can install vSphere, we have to analyse our requirements and source hardware to satisfy them.  Obviously we would like hardware that comes with as much compute, storage and network capacity as possible, but budgetary constraints must be taken into consideration.

Other posts in this series

  1. Intro
  2. Physical infrastructure – storage
  3. Physical infrastructure – networking
  4. Physical infrastructure – compute
  5. Authentication services
  6. Deploy and configure the vCenter Server Appliance
  7. Configure vCenter Server Appliance SSL certificates
  8. Deploy and configure the vRA Appliance
  9. Deploy and configure the IaaS platform
  10. Configure tenants
  11. Configure endpoint & fabric/business groups
  12. Configure blueprints (coming soon)
  13. Configure entitlements (coming soon)
  14. Configure policies (coming soon)
  15. Integration with vCloud Air (coming soon)
  16. Tidy up (coming soon)


As with most server designs, our requirements come down to the features we need, plus a certain level of performance, reliability and redundancy.

The first feature that must be supported is hardware virtualisation.  This means a 64-bit x86 CPU with either Intel VT-x or AMD-V.  To verify this we can use VMware’s CPU Identification Utility.
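If the servers already have a Linux install handy, the CPU flags can also be checked directly from /proc/cpuinfo.  A minimal sketch, using a hypothetical flags line (the vmx flag indicates Intel VT-x, svm indicates AMD-V, and lm indicates 64-bit long mode):

```shell
# Hypothetical flags line as found in /proc/cpuinfo on a Linux install
FLAGS="fpu vme de pse tsc msr pae lm vmx ssse3 sse4_1"

# "vmx" indicates Intel VT-x; "svm" would indicate AMD-V
echo "${FLAGS}" | grep -qw vmx && echo "Intel VT-x supported"
echo "${FLAGS}" | grep -qw svm || echo "AMD-V not reported"
```

On real hardware you would grep /proc/cpuinfo itself rather than a sample string.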

Other notable features we require from our vSphere cluster are vMotion and Fault Tolerance, both of which have specific hardware requirements.  vMotion requires servers to have CPUs not only from the same manufacturer, but also from the same family and (ideally) the same stepping level.  We can work around the latter by enabling Enhanced vMotion Compatibility (EVC), but this masks newer CPU features, can negatively impact performance, and should be avoided where possible.

Another feature we require is Fault Tolerance, as this can protect some of our critical virtual machines.  However this too has a number of hardware limitations (more about that later) which must be taken into consideration.


As with any vSphere design, performance is critical.  If there are not enough CPU and memory resources to satisfy demand, contention will occur and users will quickly become frustrated.

Normally we would identify the virtual machine workload upfront and then use a sizing calculator to determine what the CPU and memory requirements would be.  However the VMs that will be deployed in the lab will not be resource intensive and will have a pre-defined lease time, meaning detailed calculations are not necessary.
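For completeness, the kind of back-of-envelope sizing we would otherwise do can be sketched as follows.  The workload figures below are hypothetical, not measurements from this lab:

```shell
# Hypothetical workload: 40 VMs, each with 2 vCPUs and 4GB RAM
VMS=40; VCPUS=2; RAM_GB=4

# Target a 4:1 vCPU-to-physical-core overcommit ratio
RATIO=4
CORES_NEEDED=$(expr ${VMS} \* ${VCPUS} / ${RATIO})
RAM_NEEDED=$(expr ${VMS} \* ${RAM_GB})

echo "Physical cores needed: ${CORES_NEEDED}"   # 20
echo "RAM needed (GB): ${RAM_NEEDED}"           # 160
```

With 256GB of RAM per host, even a workload of this size would fit comfortably, which is why detailed sizing is skipped here.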


The server hardware that is chosen is central to our entire lab design, therefore reliability is paramount.  To ensure this, it is important to choose a vendor whose hardware is certified by VMware.  By consulting VMware’s Hardware Compatibility List (HCL), we can verify that our chosen hardware is supported.

For the lab, the servers that have been provided were spares for a project which was later cancelled.  Both servers are listed on the HCL and are identical.


Finally our design must consider redundancy.  Where possible it is important to eliminate single points of failure.  Whilst the lab will only serve internal customers, it is important that the service is continuously available.

vSphere itself provides a certain level of redundancy through its existing feature set.  High Availability (HA) ensures that any VMs which are lost due to a cluster node failure are restarted on the other node.  However it is important to ensure that we have a sufficient level of redundancy with individual server components to eliminate potential failure of the server.  An example of this is ensuring that hard disks are protected by RAID, multiple network cards are available for network traffic and each server has multiple power supplies.

For the project we are using two IBM System x3650 M4 servers with the following components:

  • 2 x 2.26GHz CPU
  • 256GB RAM
  • SD card storage
  • 2 x Intel 82580 on-board gigabit network adapters
  • 4 x Intel I350 gigabit network adapters
  • 2 x Emulex OneConnect adapters
  • 2 x power supplies

Together with the above, the first server has a 146GB hard disk installed temporarily.

VMware ESXi

For this series we will be using VMware ESXi 5.5 Update 2 Patch 5 (re-release). Whilst ESXi 6.0 is available, this project was conceived late 2014 before vSphere 6.0 went GA. However for what we are trying to achieve ESXi 5.5 works just as well.

Install ESXi on both servers. Use a password of “password” and give each host an IP address. For the latter I have chosen:

  • IP address: 192.168.201 & 202
  • Subnet mask:
  • Default gateway:

Download VMware PowerCLI and install on a laptop/workstation connected to the lab network.

Use Notepad++ (or an editor of your choice) to create a PowerCLI script called build_esxi.ps1:

# Variables

$esxi1 = ""
$esxi2 = ""
$username = "root"
$password = "password"
$rootPW = "VMware1!"
$ntpIP = ""
$dnsIP = ""
$domainname = ""
$syslog = "udp://"

ForEach ($esxi in $esxi1,$esxi2){

# Connect to the ESXi host
Connect-VIServer -Server $esxi -username $username -password $password

# Set NTP server
Get-VMHost | Add-VMHostNtpServer -NtpServer "$ntpIP"

# Set DNS server
Get-VMHostNetwork | Set-VMHostNetwork -DnsAddress $dnsIP

# Set DNS domain name
Get-VMHostNetwork | Set-VMHostNetwork -DomainName $domainname -SearchDomain $domainname -HostName $esxi

# Set the syslog server
Set-VMHostSysLogServer -SysLogServer $syslog

# Open firewall port for syslog
Get-VMHostFirewallException -Name syslog | Set-VMHostFirewallException -Enabled:$true

# Enable SSH
Get-VMHost | Foreach {Start-VMHostService -HostService ($_ | Get-VMHostService | Where { $_.Key -eq "TSM-SSH"} ) }

# Suppress shell warning
Get-VMHost | Get-AdvancedSetting -Name 'UserVars.SuppressShellWarning' | Set-AdvancedSetting -Value "1" -Confirm:$false

# Add iSCSI software adapter
Get-VMHostStorage -VMHost $esxi | Set-VMHostStorage -SoftwareIScsiEnabled $True

# Set power management policy to High Performance
$view = (Get-VMHost $esxi | Get-View)
(Get-View $view.ConfigManager.PowerSystem).ConfigurePowerPolicy(1)

# Reset root password
Set-VMHostAccount -UserAccount root -password $rootPW

Disconnect-VIServer -Server $esxi -confirm:$false
}

We will not configure the iSCSI or NFS storage at this time.

Open PowerCLI and set the execution policy:

Set-ExecutionPolicy Unrestricted

Press Y and then enter.

Execute the script:

.\build_esxi.ps1


Use PuTTY to connect to ESXi1.  Login as root and configure a temporary datastore:

esxcli storage core path list | grep -i device

We know that the first disk is used for the ESXi installation, so we want the second disk (as denoted by the 1 in mpx.vmhba1:C0:T1:L0).
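The naming convention can also be unpacked programmatically.  A small sketch, using the device identifier from the path listing above (C = channel, T = target, L = LUN):

```shell
# Device identifier from the path listing: C0 = channel, T1 = target, L0 = LUN
DEVICE_ID="mpx.vmhba1:C0:T1:L0"

# Extract the target number; target 0 holds the ESXi installation
TARGET=$(echo "${DEVICE_ID}" | sed 's/.*:T\([0-9]*\):.*/\1/')
echo "Target: ${TARGET}"   # Target: 1
```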

Enter the following exactly as written.

Create a variable for the disk:

DEVICE="/vmfs/devices/disks/mpx.vmhba1:C0:T1:L0"

Create a label:

partedUtil mklabel ${DEVICE} msdos

Find the end sector:

END_SECTOR=$(eval expr $(partedUtil getptbl ${DEVICE} | tail -1 | awk '{print $1 " \\* " $2 " \\* " $3}') - 1)
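To make the arithmetic explicit: partedUtil getptbl prints the disk geometry as cylinders, heads and sectors per track, and the last usable sector is their product minus one.  A sketch with a hypothetical geometry of 17834 cylinders, 255 heads and 63 sectors per track:

```shell
# Hypothetical geometry line from "partedUtil getptbl":
# 17834 255 63 286677120
CYL=17834; HEADS=255; SECTORS=63

# Last usable sector = cylinders * heads * sectors-per-track - 1
END_SECTOR=$(expr ${CYL} \* ${HEADS} \* ${SECTORS} - 1)
echo ${END_SECTOR}   # 286503209
```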

Create the partition table:

/sbin/partedUtil "setptbl" "${DEVICE}" "gpt" "1 2048 ${END_SECTOR} AA31E02A400F11DB9590000C2911D1B8 0"

More information on creating partitions can be found here.

Create the temporary datastore:

/sbin/vmkfstools -C vmfs5 -b 1m -S TEMP-datastore ${DEVICE}:1

That’s it!  The servers have both been given a basic configuration ready for hosting our first VMs.  At a later stage we will add these to a cluster and configure them further.

Coming up

In this part we installed and configured ESXi 5.5 on our two hosts. We also created a temporary datastore to hold our first two VMs before we can configure the distributed networking needed to support iSCSI storage.

In part 5 we install and configure a domain controller for the lab to authenticate our users.
