E-mail, as we noted in last week’s blog, remains critical to business operations, and Microsoft Exchange is the most widely used e-mail platform in the world. Virtualizing Exchange servers on VMware can improve performance, let you consolidate Exchange server roles and combine mailboxes, and make your Exchange infrastructure more flexible, so you can scale up or down as your e-mail loads demand.
You’ll end up with a fifth to a tenth of the physical hardware and a more responsive Exchange, and you can design your environment for your current workload. There’s no need to guess at your resource utilization 3-5 years down the road; just provision a few more VMs when the time comes.
Virtualization can even increase performance (VMware reports that a 16-core server running vSphere delivered double the throughput of the same physical hardware), but Exchange has its own set of requirements and demands, so review these best practices before you launch the installer in your virtual environment.
Add roughly 10% to the physical CPU requirements to account for hypervisor overhead. The total number of vCPUs should be no greater than the number of physical cores on the host machine.
Enable NUMA (non-uniform memory access) so ESXi can place a VM’s vCPUs within a single node, and size each VM so its vCPU count fits inside one node. In practice this reduces memory access latency, because each NUMA node has local memory it can access quickly. For large-scale deployments, the extra latency of a single VM spanning multiple NUMA nodes may or may not be enough to warrant splitting it into smaller VMs.
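As a quick sanity check, the NUMA sizing rule above boils down to simple arithmetic. This is a hypothetical sketch; the host figures used in the example (two sockets, eight cores per node) are illustrative assumptions, not values from the original post.

```python
def fits_in_numa_node(vm_vcpus: int, cores_per_node: int) -> bool:
    """True if the VM can be scheduled entirely within a single NUMA node,
    avoiding remote-memory access latency."""
    return vm_vcpus <= cores_per_node

# Illustrative host: 2 sockets, 8 cores per socket (one NUMA node per socket)
print(fits_in_numa_node(vm_vcpus=8, cores_per_node=8))   # True: fits in one node
print(fits_in_numa_node(vm_vcpus=12, cores_per_node=8))  # False: spans two nodes
```

If a VM fails this check, the trade-off described above applies: accept the cross-node latency or split the workload into smaller VMs.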
Overcommitting vCPU resources is acceptable, but do it carefully. A single physical core running a single vCPU can handle approximately 375 users at 100% utilization.
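Putting the two CPU rules of thumb together (roughly 375 users per core, plus about 10% hypervisor overhead) gives a rough core estimate. This is a back-of-the-envelope sketch using the figures above, not a substitute for a proper sizing exercise.

```python
import math

HYPERVISOR_OVERHEAD = 0.10  # ~10% CPU headroom for the hypervisor
USERS_PER_CORE = 375        # approx. users per physical core at 100% utilization

def cores_needed(total_users: int) -> int:
    """Rough physical-core estimate for an Exchange workload,
    including hypervisor overhead."""
    raw = total_users / USERS_PER_CORE
    return math.ceil(raw * (1 + HYPERVISOR_OVERHEAD))

print(cores_needed(3000))  # 3000 / 375 = 8 cores, +10% overhead -> 9
```

Remember the earlier constraint as well: the total vCPU count you provision should not exceed the host’s physical core count.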
Do not overcommit memory. This is one area you’ll need to right-size, because Exchange caches mailbox data in RAM. If vSphere dynamically scales your memory down and demand suddenly spikes, you’re going to have some unhappy users.
Microsoft recommends a minimum of 8 GB of memory for VMs running the Mailbox role, 4 GB for VMs running Client Access, and 8 GB if the roles are combined.
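Those minimums can be captured in a simple lookup. A minimal sketch, assuming the role names used as dictionary keys here (they are illustrative labels, not official identifiers), and assuming the VM’s memory is fully reserved rather than overcommitted, per the guidance above:

```python
# Minimums stated above, in GB; keys are illustrative role labels.
ROLE_MIN_GB = {
    "mailbox": 8,
    "client_access": 4,
    "combined": 8,  # Mailbox + Client Access on one VM
}

def meets_minimum(role: str, reserved_gb: int) -> bool:
    """Check a VM's reserved memory against the role minimum."""
    return reserved_gb >= ROLE_MIN_GB[role]

print(meets_minimum("mailbox", 16))       # True
print(meets_minimum("client_access", 2))  # False
```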
Fixed disk size is required, since Exchange doesn’t support dynamically expanding disks. VMDKs stored on NFS are also unsupported; you have to use block-level storage. Thin-provisioned disks and snapshots are out too, unfortunately. Provision virtual disks as eager-zeroed thick.
Virtual SCSI (vSCSI) adapters are supported, and using multiple adapters improves performance: enable all four to allow higher IOPS, since the system can then spread VMDKs across multiple storage adapters.
vSCSI also allows more connected disks per controller than vIDE: up to 64 disks per controller, with four controllers per VM, for a total of 256 virtual disks per VM. IDE allows only three. SCSI also supports hot add/remove of disks; IDE does not.
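The vSCSI limits above lend themselves to a small layout check. A sketch using the figures from this post, with a hypothetical helper for deciding how many controllers a disk count needs:

```python
import math

# Limits cited above: 4 vSCSI controllers per VM, 64 disks per controller.
CONTROLLERS_PER_VM = 4
DISKS_PER_CONTROLLER = 64

def max_vscsi_disks() -> int:
    """Upper bound on virtual disks a single VM can attach via vSCSI."""
    return CONTROLLERS_PER_VM * DISKS_PER_CONTROLLER

def controllers_required(disk_count: int) -> int:
    """Minimum controllers needed for a given disk count; spreading disks
    across all four controllers improves IOPS, as noted above."""
    return math.ceil(disk_count / DISKS_PER_CONTROLLER)

print(max_vscsi_disks())          # 256
print(controllers_required(100))  # 2
```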
Use separate network adapters for vMotion, VMware FT logging, and ESXi console access. Use at least two network adapters for Exchange production traffic; this takes advantage of VMware NIC (network interface card) teaming, which balances traffic across the teamed physical adapters. Only public networks should be teamed, however.
Use separate network adapters for public (user access) and private (dedicated replication) traffic, and give the public network higher priority.
Even though some nice features like snapshots are unavailable with virtualized Exchange, others still come in handy. Application High Availability in vSphere 5.5 can automatically monitor and restart Exchange as needed, for example, and other disaster recovery and backup tools like Site Recovery Manager are supported.
For more in-depth information on setting up a VMware-virtualized Exchange environment, check out the following links from the vendors: