In part 3 of this series on building a lab using VMware vRealize Automation, we configured the physical networking to support our lab. In this part we install and configure VMware vSphere 5.5 on our servers.
Before we can install vSphere, we have to analyse our requirements and source hardware to satisfy them. Obviously we would like hardware that comes with as much compute, storage and network capacity as possible, but budgetary constraints must be taken into consideration.
Other posts in this series
- Intro
- Physical infrastructure – storage
- Physical infrastructure – networking
- Physical infrastructure – compute
- Authentication services
- Deploy and configure the vCenter Server Appliance
- Configure vCenter Server Appliance SSL certificates
- Deploy and configure the vRA Appliance
- Deploy and configure the IaaS platform
- Configure tenants
- Configure endpoint & fabric/business groups
- Configure blueprints (coming soon)
- Configure entitlements (coming soon)
- Configure policies (coming soon)
- Integration with vCloud Air (coming soon)
- Tidy up (coming soon)
Features
As with most server designs, our requirements come down to the features we need, plus a certain level of performance, reliability and redundancy.
The first feature that must be supported is hardware virtualisation. This means a 64-bit x86 CPU with either Intel VT or AMD RVI. To verify this we can use VMware’s CPU Identification Utility.
Other notable features we require from our vSphere cluster are vMotion and Fault Tolerance, both of which have specific hardware requirements. vMotion requires servers to have CPUs not only from the same manufacturer, but also from the same family and (ideally) the same stepping level. We can work around family and stepping differences by enabling Enhanced vMotion Compatibility (EVC), but this masks newer CPU features from virtual machines and can reduce performance, so it should be avoided where possible.
Fault Tolerance will allow us to protect some of our critical virtual machines, but it too has a number of hardware limitations (more about that later) which must be taken into consideration.
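Once both hosts are running ESXi (installed later in this post), a quick PowerCLI query can confirm that the CPUs match and that EVC will not be needed. This is only a sketch; the host IP addresses and root password are the ones used later in the build.

# Sketch: check both hosts report the same CPU family, so vMotion will not require EVC
Connect-VIServer -Server 192.168.146.201, 192.168.146.202 -User root -Password "VMware1!"
Get-VMHost | Select-Object Name, ProcessorType, MaxEVCMode
Disconnect-VIServer -Server * -Confirm:$false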
Performance
As with any vSphere design, performance is critical. If there are not enough CPU and memory resources to satisfy demand, contention will occur and users will quickly become frustrated.
Normally we would identify the virtual machine workload upfront and then use a sizing calculator to determine what the CPU and memory requirements would be. However the VMs that will be deployed in the lab will not be resource intensive and will have a pre-defined lease time, meaning detailed calculations are not necessary.
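For illustration only, the snippet below shows the kind of back-of-envelope check a sizing exercise would involve. The workload figures are hypothetical assumptions, not measurements from the lab; only the 256GB of host RAM comes from the hardware list later in this post.

# Hypothetical workload figures - adjust to suit
$vmCount   = 40    # expected concurrent lab VMs (assumption)
$avgRamGB  = 4     # average RAM per VM (assumption)
$avgVcpu   = 2     # average vCPUs per VM (assumption)
$hostRamGB = 256   # RAM per host (from the hardware list below)
$hostCores = 16    # physical cores per host (assumption)

"RAM required: {0} GB against {1} GB per host" -f ($vmCount * $avgRamGB), $hostRamGB
"vCPU to physical core ratio: {0}:1 per host" -f (($vmCount * $avgVcpu) / $hostCores)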
Reliability
The server hardware we choose is central to the entire lab design, so reliability is paramount. It is therefore important to choose a vendor and model that VMware supports; by consulting VMware’s Hardware Compatibility List (HCL), we can verify that our chosen hardware is certified for the ESXi release we intend to run.
For the lab, the servers that have been provided were spares for a project which was later cancelled. Both servers are listed on the HCL and are identical.
Redundancy
Finally our design must consider redundancy. Where possible it is important to eliminate single points of failure. Whilst the lab will only serve internal customers, it is important that the service is continuously available.
vSphere itself provides a certain level of redundancy through its existing feature set: High Availability (HA) ensures that any VMs lost when a cluster node fails are restarted on the surviving node. However, we also need sufficient redundancy within each server to reduce the chance of a complete host failure. Examples include protecting hard disks with RAID, providing multiple network cards for each type of traffic, and fitting each server with dual power supplies.
For the project we are using two IBM System x3650 M4 servers with the following components:
- 2 x 2.26GHz CPU
- 256GB RAM
- SD card storage
- 2 x Intel 82580 on-board gigabit network adapters
- 4 x Intel I350 gigabit network adapters
- 2 x Emulex OneConnect adapters
- 2 x power supplies
Together with the above, the first server has a 146GB hard disk installed temporarily.
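Once ESXi is installed (next section), a short PowerCLI listing can confirm that the physical NICs above are all visible to each host before we rely on them for redundant uplinks. A sketch, assuming an active connection to the hosts:

# Sketch: list the physical NICs each host presents to ESXi
Get-VMHost | Get-VMHostNetworkAdapter -Physical |
    Select-Object Name, Mac, BitRatePerSec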
VMware ESXi
For this series we will be using VMware ESXi 5.5 Update 2 Patch 5 (re-release). Whilst ESXi 6.0 is available, this project was conceived in late 2014, before vSphere 6.0 went GA. However, for what we are trying to achieve, ESXi 5.5 works just as well.
Install ESXi on both servers. Use a temporary root password of “password” (the build script below will reset it) and give each host an IP address. For the latter I have chosen:
- IP addresses: 192.168.146.201 & 192.168.146.202
- Subnet mask: 255.255.255.0
- Default gateway: 192.168.146.253
Download VMware PowerCLI and install it on a laptop/workstation connected to the lab network.
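An optional sanity check from the PowerCLI prompt confirms the installation and shows which version is loaded:

# Display the installed PowerCLI version
Get-PowerCLIVersion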
Use Notepad++ (or an editor of your choice) to create a PowerCLI script called build_esxi.ps1:
# Variables
$esxi1 = "192.168.146.201"
$esxi2 = "192.168.146.202"
$username = "root"
$password = "password"
$rootPW = "VMware1!"
$ntpIP = "192.168.146.204"
$dnsIP = "192.168.146.204"
$domainname = "lab.mdb-lab.com"
$syslog = "udp://192.168.146.205:514"

ForEach ($esxi in $esxi1,$esxi2){

    # Connect to the ESXi host
    Connect-VIServer -Server $esxi -username $username -password $password

    # Set NTP server
    Add-VmHostNtpServer -NtpServer "$ntpIP"

    # Set DNS server
    Get-VMHostNetwork | Set-VMHostNetwork -DnsAddress $dnsIP

    # Set DNS domain name
    Get-VMHostNetwork | Set-VMHostNetwork -DomainName $domainname -SearchDomain $domainname -HostName $esxi

    # Set the syslog server
    Set-VMHostSysLogServer -SysLogServer $syslog

    # Open firewall port for syslog
    Get-VMHostFirewallException -Name syslog | Set-VMHostFirewallException -Enabled:$true

    # Enable SSH
    Get-VMHost | Foreach {Start-VMHostService -HostService ($_ | Get-VMHostService | Where { $_.Key -eq "TSM-SSH"} )}

    # Suppress shell warning
    Get-VMHost | Get-AdvancedSetting -Name 'UserVars.SuppressShellWarning' | Set-AdvancedSetting -Value "1" -Confirm:$false

    # Add iSCSI software adapter
    Get-VMHostStorage -VMHost $esxi | Set-VMHostStorage -SoftwareIScsiEnabled $True

    # Set power management policy to High Performance
    $view = (Get-VMHost $esxi | Get-View)
    (Get-View $view.ConfigManager.PowerSystem).ConfigurePowerPolicy(1)

    # Reset root password
    Set-VMHostAccount -UserAccount root -password $rootPW

    Disconnect-VIServer -Server $esxi -confirm:$false
}
We will not configure the iSCSI or NFS storage at this time.
Open PowerCLI and set the execution policy:
Set-ExecutionPolicy Unrestricted
Press Y and then Enter.
Execute the script:
.\build_esxi.ps1
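Optionally, the settings can be read back to confirm the script applied them. This is only a sketch; note that the script has already reset the root password to VMware1!, so we reconnect with that.

# Sketch: reconnect to each host and read back the NTP, DNS, syslog and SSH settings
ForEach ($esxi in "192.168.146.201","192.168.146.202"){
    Connect-VIServer -Server $esxi -User root -Password "VMware1!"
    Get-VMHostNtpServer
    Get-VMHostNetwork | Select-Object HostName, DomainName, DnsAddress
    Get-VMHostSysLogServer
    Get-VMHost | Get-VMHostService | Where-Object { $_.Key -eq "TSM-SSH" } | Select-Object Key, Label, Running
    Disconnect-VIServer -Server $esxi -Confirm:$false
}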
Use PuTTY to connect to ESXi1. Login as root and configure a temporary datastore:
esxcli storage core path list | grep -i device
We know that the first disk (T0) holds the ESXi installation, so we want the second disk, identified by T1 (target 1) in mpx.vmhba1:C0:T1:L0.
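If you prefer to stay in PowerCLI for this step, the same device can be identified there too; a sketch, assuming a connection to the first host:

# Sketch: list the disks the host sees; the 146GB disk is the target for the temporary datastore
Get-VMHost 192.168.146.201 | Get-ScsiLun -LunType disk |
    Select-Object CanonicalName, CapacityGB, Vendor, Model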
Enter the following exactly as written.
Create a variable for the disk:
DEVICE=/vmfs/devices/disks/mpx.vmhba1:C0:T1:L0
Create a label:
partedUtil mklabel ${DEVICE} msdos
Find the end sector:
END_SECTOR=$(eval expr $(partedUtil getptbl ${DEVICE} | tail -1 | awk '{print $1 " \\* " $2 " \\* " $3}') - 1)
Create the partition table:
/sbin/partedUtil "setptbl" "${DEVICE}" "gpt" "1 2048 ${END_SECTOR} AA31E02A400F11DB9590000C2911D1B8 0"
More information on creating partitions can be found here.
Create the temporary datastore:
/sbin/vmkfstools -C vmfs5 -b 1m -S TEMP-datastore ${DEVICE}:1
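As an aside, the partitioning and formatting above can also be done in a single step from PowerCLI using New-Datastore; a sketch, assuming the same device:

# Sketch: create the same VMFS5 datastore from PowerCLI in one step
New-Datastore -VMHost 192.168.146.201 -Name TEMP-datastore -Vmfs `
    -FileSystemVersion 5 -Path mpx.vmhba1:C0:T1:L0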
That’s it! The servers have both been given a basic configuration ready for hosting our first VMs. At a later stage we will add these to a cluster and configure them further.
Coming up
In this part we installed and configured ESXi 5.5 on our two hosts. We also created a temporary datastore to hold our first two VMs until the distributed networking needed to support iSCSI storage is configured.
In part 5 we install and configure a domain controller for the lab to authenticate our users.