Using genuine SSL certificates is good for security: it lets users verify that the device or host they’re connecting to is genuine, and it helps prevent man-in-the-middle attacks. It also removes the nasty warnings browsers display in the address bar.
As part of studying for my VCAP5-DCA (exam VDCA550), I’ve started relying more on the CLI and a lot less on the GUI. IMHO, the best tool for the job is the vSphere Management Assistant (vMA).
For ease of use, I decided to add it to my domain and then lock it down so only certain users could log on.
First, log on to the vMA and add it to the domain:
sudo domainjoin-cli join nl.mdb-lab.com email@example.com
This will prompt you for the vMA super-user password you set during installation, followed by the password for the account you’re using to add the vMA to the domain. The vMA will then require a reboot.
Once restarted, edit /etc/likewise/lsassd.conf and add the AD groups you wish to have access to the vMA:
sudo sed -i "/require-membership-of/c\require-membership-of = NL\\\vMA Access Users" /etc/likewise/lsassd.conf
In this case, I created an AD group called vMA Access Users and used that.
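To sanity-check that sed substitution before touching the live file, you can run it against a scratch copy first. The sample file below is just a stand-in for the real /etc/likewise/lsassd.conf (the commented-out line mimics the default):

```shell
# Scratch copy standing in for /etc/likewise/lsassd.conf
cat > /tmp/lsassd.conf.test <<'EOF'
[pam]
#require-membership-of =
EOF

# Same substitution as above: replace the whole matching line with the AD group
sed -i "/require-membership-of/c\require-membership-of = NL\\\vMA Access Users" /tmp/lsassd.conf.test

# Confirm the resulting line before applying the edit to the real file
grep 'require-membership-of' /tmp/lsassd.conf.test
```

Note that after editing the real file, the change only takes effect once the lsassd service is restarted (or the vMA rebooted); the exact service command varies by vMA release.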
Here in the lab I’m running out of resources to run all my VMs concurrently on the one host. Rather than add another physical host at this site (or switch off some VMs), and with long-distance vMotion now available in vSphere 6.0, I decided to place another host at my secondary site in the UK.
Before the move to vSphere 6.0 (I’m currently studying for my VCAP, so I want to stay on 5.5 for the time being), I decided it would be best to migrate my 14-host Exchange 2010 environment over to the secondary site, and set up SRM in the process. However, before I could do that I needed to set up vSphere Replication to get the actual VMs across.
With the VPN to the secondary site not being the quickest, it was important to shrink my already thin-provisioned VMs down as much as possible.
Zeroing out the free space inside each guest is an old trick, but always worth doing in this type of scenario.
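The zeroing command itself isn’t shown here; on a Linux guest, one common approach (an assumption on my part, not a step the post spells out; Windows guests typically use `sdelete -z` instead) is to fill the free space with a file of zeros and then delete it:

```shell
# Fill free space with zeros so vmkfstools -K can later reclaim those blocks.
# In real use you omit 'count=' so dd runs until the disk is full;
# count=10 here just bounds the demo to 10 MB.
WORKDIR=$(mktemp -d)
dd if=/dev/zero of="$WORKDIR/zerofill" bs=1M count=10 2>/dev/null
sync
rm "$WORKDIR/zerofill"
rmdir "$WORKDIR"
```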
This balloons each VMDK up to its maximum size, but don’t worry about that for now. Once it had finished, I powered down each VM, switched to the ESXi host’s console over SSH, and ran:
vmkfstools -K /vmfs/volumes/path-to-VM/VM.vmdk
The important thing to remember here is to target the VM’s descriptor VMDK, not the data file ending in “-flat.vmdk”.
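To see how much space the punch-zero pass actually reclaimed, compare a file’s apparent (provisioned) size with the blocks it really occupies on disk. The sparse file below is just a stand-in for a hole-punched -flat.vmdk; `truncate` and `stat` here are GNU tools, not ESXi commands:

```shell
# Sparse file standing in for a hole-punched -flat.vmdk:
# apparent size is 100M, but almost no blocks are allocated.
DEMO=$(mktemp)
truncate -s 100M "$DEMO"

APPARENT=$(stat -c %s "$DEMO")                 # provisioned size in bytes
ALLOCATED=$(( $(stat -c %b "$DEMO") * 512 ))   # bytes actually on disk
echo "apparent=$APPARENT allocated=$ALLOCATED"

rm "$DEMO"
```

On the ESXi shell, the equivalent comparison is `ls -l` (apparent size) versus `du -h` (allocated size) against the -flat.vmdk.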
After a while it finished and freed up a lot of space, ready to be replicated using vSphere Replication 5.8!
Each VM still took two days though 😦