Wednesday Tidbit: Create custom ESXi ISO with ESX-tools-for-ESXi

My test cluster in the lab recently needed to be rebuilt.  As this cluster is nested and I wanted the latest version of ESXi deployed, I thought it would be good to use Image Builder to combine the latest patch release with the VMware Tools for Nested ESXi fling.

After downloading the VMware Tools for Nested ESXi fling, I followed Andreas Peetz's article on how to use it to build an offline bundle.

Once done, I used the following PowerCLI script with Image Builder to create my new ISO complete with VMware Tools for Nested ESXi:

# Add VMware depot
Add-EsxSoftwareDepot C:\Depot\ESXi550-201505002.zip

# Clone the ESXi 5.5 base image profile into a custom profile
$CloneProfile = Get-EsxImageProfile ESXi-5.5.0-20150504001-standard
$MyProfile = New-EsxImageProfile -CloneProfile $CloneProfile -Vendor $CloneProfile.Vendor -Name (($CloneProfile.Name) + "-virtualhobbit") -Description $CloneProfile.Description

# Add the Nested ESXi VMTools Offline bundle
Add-EsxSoftwareDepot C:\Depot\esx-tools-for-esxi-9.7.1-0.0.00000-offline_bundle.zip

# Add the esx-tools-for-esxi package to the custom profile
Add-EsxSoftwarePackage -SoftwarePackage esx-tools-for-esxi -ImageProfile $MyProfile

# Disable signature check
$DeployNoSignatureCheck=$true

# Export the custom profile into an ISO file
Export-EsxImageProfile -ImageProfile $MyProfile -ExportToISO -NoSignatureCheck -FilePath C:\Depot\ESXi550-201505002-virtualhobbit.iso

I also chose to export the image profile to a .zip file, so I could use it to remediate an existing cluster using VUM.

Export-EsxImageProfile -ImageProfile $MyProfile -ExportToBundle -NoSignatureCheck -FilePath C:\Depot\ESXi550-201505002-virtualhobbit.zip
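
If you want to double-check that the fling's VIB actually made it into the custom profile before deploying either export, the profile's VIB list can be inspected directly. A minimal sketch, reusing the $MyProfile variable from the script above:

# Confirm the esx-tools-for-esxi package is present in the custom profile
$MyProfile.VibList | Where-Object { $_.Name -eq "esx-tools-for-esxi" } | Select-Object Name, Version, Vendor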

Building an advanced lab using VMware vRealize Automation – Part 1: Intro

A few months back my boss came to me and asked if I could build a test lab for the company.  As we are a managed services company with a number of technical staff, it was important to give them a training ground where skills could be learned and mistakes made without consequence.

We have a lot of kit that we currently use for testing changes before putting them live, but no segregated area dedicated to this purpose.

Wednesday Tidbit: PowerCLI script to enable copy & paste

Unlike at work, when I access servers in the lab I use the vSphere Client remote console.  Unfortunately, since vSphere 4.1 copy & paste between the guest and the remote console has been disabled by default, which I find a pain (even though it's probably more secure).  It can be enabled per-VM with the following two settings:

isolation.tools.copy.disable="false"
isolation.tools.paste.disable="false"

I wanted to enable it on all my backend VMs, but not my other VMs (DMZ etc).  For this I used the following PowerCLI script:

$esxi = "lab01.mdb-lab.com"
$credential = Get-Credential
$tgtVLAN = 'VLAN70','VLAN80','VLAN120'

Connect-VIServer $esxi -Credential $credential

Get-VM |where {
   (Get-NetworkAdapter -VM $_ | %{$tgtVLAN -contains $_.NetworkName}) -contains $true} | %{
      New-AdvancedSetting $_ -Name isolation.tools.copy.disable -Value false -Confirm:$false -Force:$true
      New-AdvancedSetting $_ -Name isolation.tools.paste.disable -Value false -Confirm:$false -Force:$true
   }

Disconnect-VIServer $esxi -Confirm:$false
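
To spot-check afterwards that a VM has picked up the new values, its advanced settings can be queried in much the same way (the VM name below is just an example). Bear in mind the settings only take effect once the VM has been powered off and back on.

# Verify the isolation settings on a single VM (example VM name)
Get-VM -Name "backend-vm01" | Get-AdvancedSetting -Name "isolation.tools.copy.disable", "isolation.tools.paste.disable" | Select-Object Name, Value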

I'd like to thank Luc Dekens for helping me with that last bit.  If you get a chance, check him out at http://www.lucd.info/.

Man down… when you need to DR your DR

Last week I received the following error on my DR ESXi host:

[Screenshot: configuration issue showing lost connection to the device]

The host in question is a Dell PowerEdge R710, and this indicated a hardware failure of some sort.

mpx.vmhba32:C0:T0:L0 is an 8GB SanDisk SD card containing the ESXi 5.5 boot partitions.  With this gone, the host would keep running, but any configuration changes wouldn't be saved.  It was also unlikely that the host would boot up again.

Sure enough this turned out to be the case.  Three times out of four the server would hang at the BIOS.  To remedy this I disconnected the SD card reader and ordered a replacement.

Unfortunately this turned out to be a red herring, as the new SD card reader wasn't recognised, and neither was the internal USB stick.  After further testing, I decided it was the control panel (a daughterboard that the SD card reader, internal USB and console port plug into) that was faulty.  After replacing that, the server booted normally into ESXi.

The total downtime was about ten days, which meant when the remote vCenter and vSphere Replication servers came back up there was a lot of data which needed to be replicated.  This subsequently hammered my web connection, so much so that I couldn’t even bring up the vCenter Web Client to monitor how the replication was going.

To find out I needed to switch to the command line.

On the source host I used the following command to get the list of VMs and their associated VM IDs:

vim-cmd vmsvc/getallvms

This gives an output similar to:

[Screenshot: list of VM IDs]

With each VM ID number, I then used the following command to get the status of each replication:

vim-cmd hbrsvc/vmreplica.getState vmid

That told me exactly how much had replicated and how much was left to do.  With a total of thirteen VMs to replicate, that was going to take a while!

[Screenshot: replication state]

The BusyBox shell on ESXi is quite limited, but a loop such as:

for i in `seq 38 43`; do vim-cmd hbrsvc/vmreplica.getState $i; done | grep DiskID

gives me a basic overview of how my mail replications are going:

[Screenshot: for loop output]

In the long run this hasn't caused too much of an issue; however, it has made me think about moving my DR needs to vCloud Air.

Compress thin-provisioned VMDKs even further

Here in the lab I’m running out of resources to run all my VMs concurrently on the one host.  Rather than add another physical host at this site (or switch off some VMs), and with long-distance vMotion now available in vSphere 6.0, I decided to place another at my secondary site in the UK.

Before the move to vSphere 6.0 (I'm currently studying for my VCAP, so I want to stay on 5.5 for the time being), I decided it would be best to migrate my 14-host Exchange 2010 environment over to the secondary site and set up SRM in the process.  However, before I could do that I needed to set up vSphere Replication to get the actual VMs over.

With the VPN to the secondary site not being the quickest, it was important to compress my already thin-provisioned VMs down as much as possible.

It's an old trick, but always worth doing in this type of scenario.
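
If you want to see up front which VMs and disks you'll be working through (and the datastore paths that vmkfstools will need later), a short bit of PowerCLI can list every thin-provisioned disk. A rough sketch, assuming an existing Connect-VIServer session to the host:

# List thin-provisioned disks per VM, including the descriptor VMDK path
Get-VM | Get-HardDisk | Where-Object { $_.StorageFormat -eq "Thin" } | Select-Object @{N="VM";E={$_.Parent.Name}}, Filename, CapacityGB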

On each VM, I downloaded Mark Russinovich’s SDelete and ran it on each of the Windows drives:

sdelete -z

This balloons each VMDK up to its maximum size, but don't worry about that for now.  When it had finished, I powered down each VM, switched to the ESXi host console (using SSH) and ran:

vmkfstools -K /vmfs/volumes/path-to-VM/VM.vmdk

The important thing to remember here is to target the VM's descriptor VMDK, not the one ending in "-flat.vmdk".

After a while it finished and freed up a lot of space, ready to be replicated using vSphere Replication 5.8!


Each VM still took two days though 😦

Recovering damaged VMFS partitions

Last year a client suffered a power outage at one of their major sites.  Unfortunately the PowerChute installation I had configured on a vSphere vMA didn't work as expected, and all the hosts, storage and networking equipment died when the UPS ran out of juice.  This, however, was only the beginning of what was to become a very long day.

When power was restored, a member of the Operations team brought all the kit back online, but unfortunately the SAN controllers came up before the expansion arrays and therefore marked them as dead.  When the ESXi hosts came back up, they were missing quite a few LUNs, and got visibly upset.

Despite all the storage controllers and arrays being online, ESXi refused to recognise any of the partitions and only offered the ever-helpful option of adding new storage (and therefore destroying what was already there).  That’s when Ops decided to escalate the issue.

After calling VMware support, the prognosis was not good: the data was lost and we would have to restore from backup.  Knowing the client would not be happy with this, I decided to step in and take over.

I didn't believe the data was lost; it merely needed a little nurturing to bring it back to life.  First, everything had to be shut down so the storage could be brought back in the right order:

  1. Shutting down all ESXi servers
  2. Disconnecting controller B fibre
  3. Disconnecting controller A fibre
  4. Shutting down enclosure 2
  5. Shutting down enclosure 1
  6. Shutting down controller B
  7. Shutting down controller A

If the SAN controllers can’t see all the arrays, then the hosts have no chance of seeing the LUNs.

Then, at two-minute intervals:

  1. Reconnecting controller A and B fibre
  2. Powering up enclosure 2
  3. Powering up enclosure 1
  4. Starting controller A
  5. Starting controller B
  6. Powering on ESXi host 1

Rescanning the LUNs still gave me nothing, so I SSH’d onto the host and listed the disks it could see using:

esxcli storage core path list | grep naa

This gave me two devices:

naa.60080e50002ea8b600000318524cd170
naa.60080e50002eba84000002de524cd0b2

Then I listed the partition table (on each disk) using:

partedUtil getptbl /vmfs/devices/disks/naa.60080e50002ea8b600000318524cd170

This showed as empty (i.e. no entry starting with "1" or the word "vmfs" anywhere), which was not good.  I then checked for the beginning and end blocks on each disk (thank you VMware for the following command):

offset="128 2048"; for dev in `esxcfg-scsidevs -l | grep "Console Device:" | awk {'print $3'}`; do disk=$dev; echo $disk; partedUtil getptbl $disk; { for i in `echo $offset`; do echo "Checking offset found at $i:"; hexdump -n4 -s $((0x100000+(512*$i))) $disk; hexdump -n4 -s $((0x1300000+(512*$i))) $disk; hexdump -C -n 128 -s $((0x130001d + (512*$i))) $disk; done; } | grep -B 1 -A 5 d00d; echo "---------------------"; done

I then listed the usable sectors:

partedUtil getUsableSectors /vmfs/devices/disks/naa.60080e50002ea8b600000318524cd170

That came back with two numbers.  I needed the larger one (4684282559).  I then rebuilt the partition table using:

partedUtil setptbl /vmfs/devices/disks/naa.60080e50002ea8b600000318524cd170 gpt "1 2048 4684282559 AA31E02A400F11DB9590000C2911D1B8 0"

So, to recap:

naa.60080e50002ea8b600000318524cd170 = disk device
2048 = starting sector
4684282559 = end sector
AA31E02A400F11DB9590000C2911D1B8 = GUID code for VMFS

I then had ESXi rescan and mount the VMFS volumes:

vmkfstools -V

After repeating the above command for the second disk, ESXi host 1 then saw both partitions.  I could then bring up all the VMs and the remaining ESXi hosts.
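
As an aside, once vCenter was reachable again the same rescan could have been pushed out to the remaining hosts from PowerCLI rather than clicking through each one. A minimal sketch, assuming a connection to vCenter:

# Rescan HBAs and VMFS volumes on every host so they all pick up the recovered datastores
Get-VMHost | Get-VMHostStorage -RescanAllHba -RescanVmfs | Out-Null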

Bullet…. dodged.