Part 2: VMware vSphere Performance Monitoring

posted in: SDDC


This post covers various factors that can cause performance issues within the virtual infrastructure. We need to consider a range of possible causes of performance degradation: the physical CPU, memory, storage, networking, or, quite often, a poor vSphere infrastructure design. vSphere ships with a built-in tool, ESXTOP, which we use to measure various performance metrics and get an idea of what could be causing the issue. Note that since vSphere (ESXi) sits between our VMs and the physical infrastructure, it introduces a small overhead of its own.

How is CPU virtualized by vSphere?
The VMkernel uses hardware-assisted virtualization (Intel VT-x and AMD-V) for CPU virtualization
The physical CPU is virtualized by the VMkernel and shared across multiple virtual machines as vCPUs
The CPU scheduler is responsible for scheduling VM processes and threads on the physical CPU
Every VM running on vSphere has an associated VMM (virtual machine monitor) and VMX helper process

Overhead caused by virtualization:
Overhead varies from environment to environment, and it also changes under different workload conditions

  1. The CPU overhead of ESXi is very small (at most around 10%)
  2. The memory overhead of ESXi is negligible (at most around 3%)
  3. The VMM adds a further overhead of up to 2%
  4. Running many VMs on a host can overload the CPU scheduler
  5. Load elsewhere in the cluster also adds to the overhead
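The worst-case figures above can be turned into a quick back-of-the-envelope capacity estimate. A minimal sketch, using only the maxima quoted in this post (the host specs in the example are hypothetical, not from the source):

```python
# Back-of-the-envelope usable capacity after virtualization overhead,
# applying the maxima quoted above (ESXi CPU ~10%, VMM ~2%, memory ~3%).
def effective_capacity(physical: float, *overheads: float) -> float:
    """Apply each fractional overhead multiplicatively to the raw capacity."""
    for o in overheads:
        physical *= (1.0 - o)
    return physical

# Hypothetical host: 16 cores @ 2.4 GHz, 256 GB RAM
cpu_ghz = effective_capacity(2.4 * 16, 0.10, 0.02)  # ESXi + VMM CPU overhead
mem_gb = effective_capacity(256, 0.03)              # ESXi memory overhead
print(f"usable CPU ~{cpu_ghz:.1f} GHz, usable RAM ~{mem_gb:.1f} GB")
```

In practice the real overhead is usually far below these worst-case numbers; the point is simply that sizing should not assume 100% of the physical resources are available to VMs.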

To improve processing performance, VMware treats vCPU sizing as a crucial factor; the VMkernel is also NUMA-aware, scheduling a VM's vCPUs and memory on the same NUMA node where possible, which further improves performance.
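The NUMA-related sizing advice above can be illustrated with a simple rule of thumb: a VM whose vCPU count fits within a single NUMA node avoids remote-memory access. A minimal sketch (the helper name and host layout are illustrative, not a VMware API):

```python
# Illustrative NUMA-fit check for vCPU sizing (not a VMware API):
# a VM that fits within one NUMA node keeps its memory accesses local.
def fits_numa_node(vcpus: int, cores_per_numa_node: int) -> bool:
    return vcpus <= cores_per_numa_node

# Hypothetical dual-socket host, 12 cores per socket
# (assuming one NUMA node per socket).
print(fits_numa_node(8, 12))   # True  -> vCPUs stay on one node
print(fits_numa_node(16, 12))  # False -> a "wide" VM spanning NUMA nodes
```

The ESXi NUMA scheduler handles wide VMs, but keeping vCPU counts within a node where the workload allows is the simpler win.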

What causes CPU load?

  1. The guest OS running in a VM causes CPU load
  2. Agents and other services may cause CPU load
  3. The application running on the guest can also be a cause
  4. Many VMs generate interrupts to the CPU even when they are idle
  5. CPU affinity and CPU contention put extra load on the CPU

How to Monitor CPU load?

  1. We can monitor CPU usage at the host level, at the VM level, or even inside the guest OS
  2. We use the vSphere Client or ESXTOP to monitor CPU statistics and work out where the issue lies
  3. Using ESXTOP we can monitor the following metrics
    > Host CPU usage
    > VM CPU usage
    > VM CPU Ready time
  4. CPU Ready time is a crucial metric: it is the time a vCPU spent ready to execute instructions but unable to be scheduled, because the physical CPU was busy executing other vCPUs' instructions
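ESXTOP reports CPU Ready directly as a percentage (%RDY), while vCenter's real-time performance charts report it as a summation in milliseconds over a 20-second sample interval. A minimal sketch of the conversion between the two (the function name is mine; the 20,000 ms real-time interval is the standard vCenter default):

```python
# Convert a vCenter "CPU Ready" summation value (milliseconds accumulated
# over one sample interval) into the percentage figure esxtop shows.
# vCenter real-time charts use a 20-second (20,000 ms) interval, per vCPU.
def cpu_ready_percent(ready_ms: float, interval_ms: float = 20_000) -> float:
    return ready_ms / interval_ms * 100

# e.g. 1,000 ms of ready time in a 20 s sample -> 5% ready
print(f"{cpu_ready_percent(1000):.1f}%")  # 5.0%
```

Sustained ready time in the mid single digits per vCPU is commonly treated as the point where CPU contention becomes noticeable to the guest.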



Memory Management in vSphere:

  1. Physical memory on a host is shared across multiple VMs, and a cluster logically aggregates the memory of its hosts
  2. The MMU (Memory Management Unit) is virtualized using the hardware assist built into modern server CPUs (Intel EPT / AMD RVI); hardware-assisted MMU virtualization is faster and is enabled by default on all major vendors' servers
  3. Most importantly, vSphere provides memory overcommit techniques
    > Transparent Page Sharing: identical memory pages across VMs are stored only once in host memory. E.g. Windows virtual machines sharing the same boot files have those pages loaded only once in ESXi memory
    > Memory ballooning: the balloon driver (vmmemctl) is installed in the guest OS as part of VMware Tools, and triggers memory cleanup inside the guest when ESXi instructs it to
    > Memory compression: memory pages are compressed instead of being swapped to disk, keeping access relatively quick
    > Memory swapping is triggered as a last resort: a swap file in the VM's directory is used to store the VM's memory pages, significantly reducing performance
  4. To improve performance, VMware recommends using SSDs for caching swapped data on the hosts (host cache)
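The reclamation techniques above only come into play when memory is overcommitted. A minimal sketch of the overcommit ratio (the function name and VM sizes are illustrative):

```python
# Memory overcommit ratio: total configured guest RAM vs. physical host RAM.
# A ratio above 1.0 means ESXi may need the reclamation techniques above
# (TPS, ballooning, compression, swapping) when the VMs are under load.
def overcommit_ratio(vm_mem_gb: list[float], host_mem_gb: float) -> float:
    return sum(vm_mem_gb) / host_mem_gb

vms = [32, 32, 64, 48]  # configured RAM of four hypothetical VMs, in GB
print(round(overcommit_ratio(vms, 128), 2))  # 1.38 -> overcommitted
```

A modest overcommit is often harmless because idle VMs rarely touch all their configured memory; it is sustained ballooning and swapping that indicate genuine memory pressure.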

vSphere Storage Management:

  1. vSphere stores each VM's virtual disks as VMDK files, typically on a datastore formatted with VMFS
  2. VMFS is a clustered file system that gives multiple hosts shared access to the same datastore, which is the basis of vMotion
  3. External NAS and SAN arrays are accessed over protocols such as FC, iSCSI, and NFS
  4. EMC and other storage vendors support VAAI, offloading certain operations to the array and thereby increasing performance
  5. VMware recommends using SSDs and configuring multipathing for better performance
