Hey all,
I haven't been able to find an answer to this one, so hopefully someone can point me in the right direction. I have six ESXi 4.1 U2+ hosts in a cluster with HA and DRS enabled. HA's admission control policy (ACP) is set to tolerate one host failure.
I recently made some NFS/networking changes that required a reboot to take effect. I put one host into Maintenance Mode and it evacuated all the VMs without issue. I rebooted the host, it came back up, and I took it out of Maintenance Mode.
When I move on to the next host after five or ten minutes and start the process again, VMs will NOT migrate to the host I just rebooted. DRS pushes them all to the hosts that have been running the longest.
I'm not seeing any affinity rule or advanced option that would prevent guest VMs from going to a freshly rebooted host that's ready to go. My das.minUptime value in the HA advanced options is set to 120 seconds, which is the default. I thought that option might be responsible, but I don't believe it is: I've let a rebooted host sit for 15 minutes and still no VMs will migrate to it when I "flush" the next host into Maintenance Mode.