I wanted to test nested ESXi on vSphere 5.1. Officially it is still not supported, but it can be very useful for testing purposes in a lab environment.

A lot of HOWTOs can be found, and most of this write-up is based on the work of virtuallyghetto. I did have to take a few extra steps, since my environment runs Auto Deploy and the host profile needed to be updated.

A good place to start is to check whether nested virtualization is supported on your hardware. Browse to https://[your-esxi-host-ip-address]/mob/?moid=ha-host&doPath=capability and log in with your root credentials.

Search for nestedHVSupported; if it's set to true, your hardware is supported. Since mine is, I won't go into the options that remain when it states false. Have a look at virtuallyghetto to see what your options are.
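The MOB is fine for a one-off check, but the same flag can also be read over the vSphere API. Below is a rough pyVmomi sketch, assuming pyVmomi is available; the hostname and credentials are placeholders for your own lab values.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab hosts usually have self-signed certificates, so skip verification.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.lab.local",          # placeholder ESXi host
                  user="root", pwd="your-password", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = view.view[0]                                  # the one host we connected to
view.DestroyView()

# The same flag the MOB shows under capability.
print(host.name, "nestedHVSupported:", host.capability.nestedHVSupported)
# Call Disconnect(si) once you are done; the later snippets reuse this session.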

You will still need to enable promiscuous mode on the portgroup that your nested ESXi VM will use for network connectivity. For me that was simply a matter of changing the security setting on the portgroup and updating the host profile. That way every ESXi host that gets deployed within that cluster receives the setting, and if DRS moves the vESXi VM, network connectivity remains.
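If you only need this on a single host and don't use host profiles, the same security setting can also be flipped through the API. A minimal sketch, continuing from the connection above; it assumes a standard vSwitch portgroup, and the name "NestedESXi" is made up for the example.

# Continues from the previous sketch: "host" is the vim.HostSystem found above.
pg_name = "NestedESXi"                       # placeholder portgroup name
net_sys = host.configManager.networkSystem

for pg in host.config.network.portgroup:
    if pg.spec.name == pg_name:
        spec = pg.spec                       # keep the existing name, VLAN and vSwitch
        if spec.policy.security is None:
            spec.policy.security = vim.host.NetworkPolicy.SecurityPolicy()
        spec.policy.security.allowPromiscuous = True
        net_sys.UpdatePortGroup(pgName=pg_name, portgrp=spec)
        print("Promiscuous mode enabled on", pg_name)
        break

With the network taken care of, on to the VM itself: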

1. Create a new VM using the vSphere 5.1 Web Client.

2. Choose Hardware Version 9 (not available in the Windows vSphere Client). Select “Linux” as the guest OS family and “Other Linux (64-bit)” as the guest OS version.

3. In the Customize Hardware step of the wizard, expand the “CPU” section and tick the “Hardware Virtualization” box to enable VHV.

Note (from virtuallyghetto): If this box is grayed out, it means either that your physical CPU does not support Intel VT-x + EPT or AMD-V + RVI, which is required to run VHV, or that you are not using Virtual Hardware 9. If your CPU only supports Intel VT-x or AMD-V, you can still install nested ESXi, but you will only be able to run nested 32-bit VMs, not nested 64-bit VMs.

4. After the VM has been created, change the guest OS version to VMware ESXi 5.x to make sure all the special settings are applied. This can't be done with the Web Client; I simply used the Windows vSphere Client to edit the setting, but you could also edit the .VMX directly and set guestOS = "vmkernel5".
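Both this guest OS change and the VHV checkbox from step 3 can also be applied through the API; the vSphere 5.1 API exposes a nestedHVEnabled flag on the VM config spec for that checkbox. A rough pyVmomi sketch, reusing the connection from the earlier snippets; the VM name "nested-esxi-01" is made up.

# Continues from the earlier sketches: "si" is the connected service instance.
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "nested-esxi-01")  # placeholder name
view.DestroyView()

# The VM should be powered off before reconfiguring it.
spec = vim.vm.ConfigSpec(
    guestId="vmkernel5",        # guest OS type "VMware ESXi 5.x"
    nestedHVEnabled=True)       # the "Hardware Virtualization" box from step 3
vm.ReconfigVM_Task(spec=spec)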

In my Auto Deploy setup, simply booting the new VM was enough to get a new vESXi server up and running, configured with the default host profile.




Testing DRS with nested ESXi VMs in Workstation 9


Core i5-2550K, Windows environment, Workstation 9.

I installed one ESXi 5.1 VM using DHCP, then cloned it to get a second ESXi 5.1; each was allocated 4 cores and 2 GB of RAM.

Then I imported the vCenter OVF (a vCenter 5.1 VM) and powered it on once the import succeeded.

From the desktop I logged in to vCenter with the vSphere Client 5.1, created a new cluster and added the two ESXi 5.1 hosts; DRS showed as successfully enabled.
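If you prefer scripting these steps, roughly the same thing can be done against vCenter with pyVmomi. This is only a sketch: the cluster name, host addresses and credentials are all placeholders, it assumes a datacenter already exists, and vCenter may additionally ask for each host's SSL thumbprint.

import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local",          # placeholder vCenter address
                  user="administrator", pwd="your-password", sslContext=ctx)
content = si.RetrieveContent()
datacenter = content.rootFolder.childEntity[0]       # assumes one datacenter exists

# Create a cluster with DRS enabled.
cluster_spec = vim.cluster.ConfigSpecEx(
    drsConfig=vim.cluster.DrsConfigInfo(enabled=True))
cluster = datacenter.hostFolder.CreateClusterEx(
    name="NestedLab", spec=cluster_spec)

# Add the two nested ESXi hosts (addresses and password are placeholders).
for ip in ("192.168.1.11", "192.168.1.12"):
    host_spec = vim.host.ConnectSpec(
        hostName=ip, userName="root", password="your-password",
        force=True)  # vCenter may also require the host's sslThumbprint here
    cluster.AddHost_Task(spec=host_spec, asConnected=True)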

This requires the ESXi 5.1 hosts to report nestedHVSupported.

To check, browse to the ESXi 5.1 host's IP address at https://IP_Of_Your_Host/mob/?moid=ha-host, click capability under HostCapability to open the capability list, and see whether nestedHVSupported is true.

If it is, the lab work above comes together right away, and managing it is almost the same as managing two physical ESXi hosts plus vCenter. The cluster shows a total of 3.4 GHz * 4 cores * 2 hosts = roughly 27 GHz of CPU resources and 2 GB * 2 = 4 GB of memory.

Most reasonably recent CPUs can complete this test; you just need to enable EPT/RVI in the ESXi VM's configuration.


  • Look for Intel VT-x or AMD-V to support nested 32-bit VMs.
  • Look for Intel EPT or AMD RVI to support nested 64-bit VMs.

I skipped the HA test this time; it requires both ESXi hosts to use the same datastore, such as NFS or iSCSI, and the host doesn't have enough memory left to spin up yet another NAS VM to share out a chunk of storage.
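For reference, when I do get to it, mounting the same NFS export on both nested hosts could look something like the sketch below. It continues from the vCenter sketch above; the NAS address, export path and datastore name are all made up.

# Continues from the cluster sketch above: "cluster" is the nested-lab cluster.
nfs_spec = vim.host.NasVolume.Specification(
    remoteHost="192.168.1.20",      # placeholder NAS VM address
    remotePath="/export/nested",    # placeholder NFS export
    localPath="nested-shared",      # datastore name the hosts will see
    accessMode="readWrite")

for host in cluster.host:
    # Mount the same export on every host so HA has a shared datastore.
    host.configManager.datastoreSystem.CreateNasDatastore(spec=nfs_spec)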

I'll test that when I get the chance.