I deployed my original ESXi 6.0 home lab server back in 2015, and while I’ve been putting off a hardware refresh, ESXi 6.7 was released in April 2018 and I’ve finally gotten around to upgrading ESXi.
I’m a Microsoft guy by trade, but wanted to build some experience around VMware, so I made the decision a number of years ago to run VMware ESXi in my home lab as my primary hypervisor instead of Hyper-V (although I have run both in my home lab at one point). The biggest challenge I’ve had with this setup is using hardware that is on the VMware HCL and making sure that I could find supported drivers for the NICs I had lying around. Through this upgrade from 6.0 to 6.7, I found out first-hand how tricky dealing with non-HCL hardware and a lack of VMware-supported drivers can be.
Once all the pieces were in place for the upgrade, the process itself took 5-10 minutes, but getting to that point took a while longer, and my lack of deeper ESXi expertise made it a little challenging.
Download the ESXi ISO and build a bootable USB thumb drive
You’ll need to download the ESXi ISO from the VMware website. If you don’t already have a free account, you’ll need to create one in order to download, and you’ll need that account to get your free license key. There are obviously restrictions on what you can do with the free version of ESXi, which I won’t get into here. The free version works just fine for me as a single-server (non-clustered) solution that runs a bunch of virtual machines.
If you’re installing ESXi on a box that has a CD/DVD drive, then you can burn the ISO to a disc and boot from that, but in my case I needed to get it onto a bootable thumb drive for the installation.
This is where Rufus comes in. Rufus is a utility that helps format and create bootable USB flash drives where you need to create installation media from bootable ISOs (think Windows installation DVD). Once you download your ESXi 6.7 ISO as well as Rufus, go into your downloads and start the Rufus application.
The options are pretty straightforward, but here are the steps:
- Choose the device (thumb drive) that you want to copy the ISO to and make bootable.
- For the “boot selection”, click on the Select button and browse to the location of the ESXi ISO that you downloaded.
- Make sure the partition scheme is “MBR”, and the target system is “BIOS or UEFI”.
- The volume label will be populated and other options set automatically. You shouldn’t have to change anything else.
- Click Start to build the bootable USB thumb drive.
This process is quick. It only took 1-2 minutes for it to create the bootable drive and copy the files from the ISO over.
ESXi 6.7 installation
NOTE: Before you start anything, make sure that you have a good backup of your system. At a very minimum, make sure that any important data, VMDKs, or other critical files are available via backup, cloud, or copied somewhere else in case things don’t work out. Take the time to do it. It’s not worth losing data due to lack of preparation.
OK. Let’s go. Wait, are you sure you backed everything up? Alright.
You’ll want to shut down the virtual machines that you have running at this point. Once your VMs are shut down, take that new bootable USB thumb drive you created with Rufus and plug it into your ESXi machine. I went to the console of my ESXi box, shut it down from there, and then powered it back on. If you need to get into a boot menu to select your device, do that, or make sure that the USB device is at the top of your boot order in the BIOS. Start the boot process and wait for the USB thumb drive to start loading the ESXi 6.7 installation.
At this point, the installer will start to scan devices and look for places where it can install ESXi or where it might already be installed.
The next step will prompt you to select a disk to install or upgrade. In my case, it sees the 2 TB hard disk I have installed in the machine as well as the 8 GB thumb drive that contains the ESXi installation media. I want to upgrade, so I selected the local hard disk and pressed Enter. It will gather additional information from the selected device before continuing.
Now it will detect that ESXi and VMFS are found on the device you selected. Again, I want to upgrade, so I left the default option to upgrade and preserve the existing VMFS datastore. (That’s important – you don’t want it to overwrite anything)
“Errors and Warnings Found During System Scan”
Things couldn’t go perfectly to plan, right? After I selected to upgrade ESXi and preserve my datastore, it threw an error. This would be where my inexperience with ESXi would shine through.
I did recall back in 2015 having some challenges with one of the network cards in my server. It was a Realtek and didn’t have a VMware-supported driver, but I did find a community-supported driver, which took me down a path of command line magic to get working. I kept a secondary card at that point because I ran a software firewall on a VM, and its “external” port was on that secondary card. The good news is I quit doing that a while ago, and that secondary card didn’t have anything plugged into it at this point…which will be good news in a few minutes.
The first thing I needed to do was search for this error (obviously). Nobody had hit the exact same error I saw (of course), but it got me moving in the right direction toward a resolution.
The “Net51” portion of the error was familiar enough that I knew it had something to do with the network drivers, but from which card I wasn’t sure yet.
How do I figure out what NICs are on the server that ESXi knows about? SSH into the ESXi host and use ESXCLI to get the list.
esxcli network nic list
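The full output of that command is wide, so pulling out just the name and driver columns with awk makes it easier to scan. Here’s a sketch using simulated output (the NIC names, PCI addresses, and drivers below are illustrative, not necessarily what my host reported):

```shell
# Simulated `esxcli network nic list` output -- illustrative values only,
# so the parsing step can be shown without an ESXi host handy.
nic_list='Name    PCI Device    Driver
vmnic0  0000:00:19.0  e1000e
vmnic1  0000:02:00.0  r8169'

# Print just the NIC name and its bound driver, skipping the header row.
printf '%s\n' "$nic_list" | awk 'NR>1 {print $1, $3}'
# vmnic0 e1000e
# vmnic1 r8169
```

On a real host you’d pipe esxcli network nic list straight into the same awk.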
Now, neither of the drivers listed said anything directly related to Net51. I was pretty confident that my Intel NIC had nothing to do with this, which left the Realtek. There wasn’t any obvious correlation between r8169 and Net51 either, but Net51 could be a generic package name that was actually driving that other card.
Next I had to figure out which VIB was causing the error in the installation process. Getting a list of VIBs using ESXCLI is easy, and I filtered the list looking for anything tied to Net51.
esxcli software vib list | grep net51
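To show what that filter is doing, here’s the same idea sketched against simulated output (the VIB entries below are made up for illustration; only the net51-drivers name matches what I actually had installed):

```shell
# Simulated `esxcli software vib list` output -- entries are illustrative,
# but they show how the grep narrows the list to the suspect package.
vib_list='Name           Version   Vendor   Acceptance Level
net-e1000e     1.1.2     VMware   VMwareCertified
net51-drivers  1.0.0     Realtek  CommunitySupported'

# grep keeps only the line(s) containing "net51".
printf '%s\n' "$vib_list" | grep net51
# net51-drivers  1.0.0     Realtek  CommunitySupported
```

Note the match is case-sensitive; the VIB names are lowercase, so grep net51 (not Net51) is what finds it.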
There was indeed a VIB with Net51 in the name, but I still couldn’t tie it back to a specific card. I made the decision to just remove the VIB and worry about installing any missing drivers later. Best case, it disables the NIC that I don’t use anymore; worst case, it whacks my primary NIC and I have to clean up the mess at that point.
Removing a VIB is easy too, although I was pretty nervous doing it. (If you want a safety net, esxcli software vib remove also accepts a --dry-run flag that previews what would change without applying it.)
esxcli software vib remove -n net51-drivers
Once the command finished, the VIB was removed and I needed to reboot the server to complete the process. I rebooted the server and waited to see if I had connectivity again.
I lucked out. It rebooted and I was able to SSH into it.
I went back and booted from the thumb drive and let the ESXi 6.7 installation start again. This time, there were no errors found, just a warning that my Intel i7 processor may not be supported in future ESXi releases and that I needed to plan accordingly. I hit Enter to continue.
The installer asked me to confirm that I wanted to upgrade from ESXi 6.0 to 6.7. I pressed F11 and the process started.
It took a bit to get over a few of the initial hurdles, but once the upgrade process started, it moved very quickly.
And before I knew it, the upgrade was complete. It was now time to reboot and hopefully things came back up as expected.
And a couple minutes later, a freshly upgraded instance of ESXi running version 6.7 was up and running. I’ll say that on that initial reboot when ESXi was loading, it paused for a minute or so after successfully loading vFlash. I got a little nervous, but it did eventually finish booting.
VMware Tools updates
Once my virtual machines came back online after the ESXi upgrade, I logged into each VM and VMware Tools started updating to the new version. Each VM did require a reboot, but the Tools installation happened without any intervention once I logged in.
The upgrade itself was pretty painless, but the errors and warnings generated during the initial upgrade process were a little unexpected. Having experienced this definitely puts a spotlight on the importance of using VMware certified and supported components as documented in the HCL. Home labs are typically made up of gear thrown together from various vendors and origins, which can make it difficult to come up with a fully supported environment. It’s important to remember there’s no guarantee from one release to the next that anything working now will still work when that next upgrade comes into play.