This article describes why upgrading from VMware Infrastructure 3.5 to a later release, such as vSphere 4.0 or 4.1, will likely lead to higher overall memory consumption in VDI environments.
If you have been running VMware View 4.0 or later, chances are you have vSphere 4.0 U2 or vSphere 4.1 deployed in your VDI infrastructure. You may still be on VMware Infrastructure 3.5 U5, but I am hoping that at this stage you have already been through the upgrade. If you have not yet upgraded, you will experience the behavior I explain below when you move to vSphere 4.0 or later. If you have already upgraded your ESX hosts and the hosts run Nehalem-based Xeon 5500 series CPUs, you may or may not have noticed that after the upgrade vCenter and esxtop show the hosts consuming more physical memory than before, even though the number of virtual desktops has not changed.
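To compare memory consumption before and after the upgrade, you can sample the host's memory counters from the command line; a minimal sketch using esxtop batch mode (the `-b`/`-d`/`-n` flags are standard esxtop options, but the exact counter names in the output vary between esxtop versions, so inspect the header row on your build):

```shell
# Sketch: capture one batch-mode esxtop sample to a CSV file and inspect
# the header row, which lists the available memory counter names.
# In interactive esxtop, pressing 'm' switches to the memory view, where
# the PSHARE/MB line shows how much memory TPS is currently sharing.
esxtop -b -d 2 -n 1 > /tmp/esxtop-mem.csv
head -c 2000 /tmp/esxtop-mem.csv
```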
The behavior is not exclusive to VDI infrastructures; however, in VDI, especially when 32-bit Windows XP is in use, there is a significant amount of TPS (Transparent Page Sharing) across all virtual desktops. Because the TPS ratio drops across all hosts after the upgrade, the behavior is more evident in VDI infrastructures. Simply put, vSphere 4.0 introduced a few changes to the memory management techniques. By default, guest memory is now backed by 2MB memory blocks (large memory pages), which TPS does not share. Only when the host comes under memory pressure will TPS break those large pages into 4KB small pages to search for additional identical memory content. This happens because vSphere relies on hardware-assisted memory virtualization, EPT (Intel) or RVI (AMD), which performs best with large pages and eliminates the need for shadow page tables, reducing kernel overhead.
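The granularity effect is easy to see with a toy model (this is an illustration I wrote for this article, not VMware code): two memory regions that are almost entirely identical share nothing at 2MB granularity, because a single differing 4KB page makes the whole large page unique.

```python
# Toy illustration: why page sharing finds far fewer duplicates when it
# can only match whole 2MB large pages instead of 4KB small pages.
SMALL = 4 * 1024                   # 4KB small page
LARGE = 2 * 1024 * 1024            # 2MB large page
PAGES_PER_LARGE = LARGE // SMALL   # 512 small pages per large page

# Two 2MB regions, identical except for one 4KB page (e.g. one desktop
# wrote to a single page of an otherwise common memory region).
region_a = [b"\x00" * SMALL] * PAGES_PER_LARGE
region_b = list(region_a)
region_b[10] = b"\xff" * SMALL     # one differing small page

def shared_small_pages(a, b):
    """Count 4KB pages with identical content (sharing at small granularity)."""
    return sum(pa == pb for pa, pb in zip(a, b))

def shared_large_pages(a, b):
    """Count 2MB pages with identical content (sharing at large granularity)."""
    return int(b"".join(a) == b"".join(b))

print(shared_small_pages(region_a, region_b))  # 511 of 512 pages shareable
print(shared_large_pages(region_a, region_b))  # 0 large pages shareable
```

At 4KB granularity, 511 of the 512 pages could still be collapsed into shared copies; at 2MB granularity, nothing is shareable, which is why hosts report higher consumed memory until memory pressure forces the large pages to be broken up.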
VMware published KB 1020524 about this behavior – Transparent Page Sharing is not utilized under normal workloads on Nehalem-based Xeon 5500 series CPUs.
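If you prefer the old behavior of maximizing TPS at the cost of some of the large-page performance benefit, the commonly cited workaround is to disable large-page allocation for guests through the host's advanced memory settings. A hedged sketch for ESX 4.x follows; verify the setting name (`Mem.AllocGuestLargePage`) and the effect on your specific build before applying it in production:

```shell
# Sketch, assuming ESX 4.x: stop backing guest memory with 2MB large
# pages so TPS can collapse identical 4KB pages again. Existing VMs need
# a power cycle (or vMotion off and back) for the change to take effect.
esxcfg-advcfg -s 0 /Mem/AllocGuestLargePage   # 0 = disable large pages
esxcfg-advcfg -g /Mem/AllocGuestLargePage     # confirm the current value
```

Expect consumed memory to drop back toward pre-upgrade levels as TPS catches up, at the cost of losing some of the EPT/RVI large-page performance gain.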