Do the virtualization math: When four CPUs aren't four CPUs

Four virtual cores or four virtual sockets: what's the difference? It could be a lot

One of the major advantages of virtualization is the ability to dynamically add CPU and RAM to running virtual machines. Have a box that gets a sudden spike? Add more RAM on the fly and let it go. It's a fantastic way to deal with certain compute issues, and it can make a tough decision disappear because downtime and reboots aren't required.
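As a rough illustration, hot-adding resources usually boils down to a single call against the hypervisor's management API. The sketch below uses the libvirt Python bindings against a hypothetical running guest named web01; it assumes the domain was defined with enough maximum vCPU and memory headroom, and the guest OS still has to bring the new resources online.

```python
import libvirt

# Connect to the local hypervisor (qemu/KVM in this sketch) and look up
# a hypothetical running guest named "web01".
conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("web01")

# Hot-add vCPUs: raise the live vCPU count to 4. This only works if the
# domain's maximum vCPU count was set to at least 4 when it was defined.
dom.setVcpusFlags(4, libvirt.VIR_DOMAIN_AFFECT_LIVE)

# Hot-add memory: raise the live allocation to 8GB (the value is in KiB).
# Again, the domain's maximum memory must already allow this.
dom.setMemoryFlags(8 * 1024 * 1024, libvirt.VIR_DOMAIN_AFFECT_LIVE)

conn.close()
```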

However, allocating CPU and RAM with the click of a mouse -- dynamically or otherwise -- can have deleterious effects on your servers in some circumstances. You really need to understand your workload and your OS.


It all comes down to the type of workload you're running, the OS scheduler, and the virtual CPU layout for the virtual machine. Virtual CPU allocations used to be simple. You specified how many virtual CPUs you wanted to assign and off you went. However, as the number of physical CPU cores increased and NUMA became the norm, that choice became trickier. Now, just about every major hypervisor presents a choice of virtual CPU types.

For instance, if you want to assign four virtual CPUs to your virtual machine, you can choose among four single-core CPUs, two dual-core CPUs, or one quad-core CPU. While all of these selections wind up presenting four virtual CPUs to the virtual machine, they do so in different ways, and the differences can impact the decisions made by the OS scheduler running on that virtual server.
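In libvirt terms, for example, that choice maps to the sockets and cores attributes of the CPU topology element in the domain XML. The snippet below is a sketch of the three equivalent four-vCPU layouts; the topology element is standard libvirt syntax, but the rest of the domain definition is assumed.

```python
# Three ways to present four vCPUs to a guest, expressed as libvirt
# <cpu><topology/> fragments. Each yields sockets * cores * threads = 4.
layouts = {
    "four single-core CPUs": (4, 1),  # 4 sockets x 1 core
    "two dual-core CPUs":    (2, 2),  # 2 sockets x 2 cores
    "one quad-core CPU":     (1, 4),  # 1 socket  x 4 cores
}

for name, (sockets, cores) in layouts.items():
    xml = (
        "<cpu>"
        f"<topology sockets='{sockets}' cores='{cores}' threads='1'/>"
        "</cpu>"
    )
    print(f"{name}: {xml}")
```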

Virtual machine alchemy
There's no hard-and-fast rule about these selections. The right choice is extremely dependent on the workload profile, the scheduler in use, and the OS version or kernel version. Older kernels less adept at dealing with multicore CPUs may have a better time with single-core CPU assignments. Newer kernels and OS versions might prefer multicore CPU presentations.

Beyond that, the nature of the workload itself can have a big impact. Single-threaded and multithreaded workloads will respond to each of these layouts differently. There may be only slight differences in some workloads, but massive differences in others.
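A quick way to see that contrast from inside a guest is to run the same CPU-bound task serially and then fanned out across every visible vCPU. This is a minimal Python sketch (process-based to sidestep the GIL); the workload function is purely illustrative.

```python
import os
import time
from concurrent.futures import ProcessPoolExecutor

def burn(n: int) -> int:
    """Purely illustrative CPU-bound work: sum of squares."""
    return sum(i * i for i in range(n))

def timed(label: str, fn) -> None:
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.2f}s")

if __name__ == "__main__":
    vcpus = os.cpu_count()  # however many vCPUs the hypervisor presents
    chunks = [5_000_000] * vcpus

    # Single-threaded: one vCPU does all the work, the rest sit idle.
    timed("serial", lambda: [burn(n) for n in chunks])

    # Multiprocess: the same work spreads across every presented vCPU.
    with ProcessPoolExecutor(max_workers=vcpus) as pool:
        timed("parallel", lambda: list(pool.map(burn, chunks)))
```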

Picture a modern OS that's well versed in NUMA. Taking advantage of NUMA permits faster memory access and can significantly speed up processor- and RAM-intensive processes. If a CPU core interacts only with memory controlled by that CPU, it will perform faster, since it doesn't have to reach across to memory controlled by another CPU to allocate and use it.
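Inside a Linux guest, you can see how the presented vCPUs and memory are grouped into NUMA nodes by reading sysfs (the same information numactl --hardware reports). A minimal sketch, assuming a Linux guest with sysfs mounted:

```python
from pathlib import Path

# Each /sys/devices/system/node/nodeN directory is one NUMA node as the
# guest OS sees it; cpulist names the CPUs local to that node's memory.
for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
    cpus = (node / "cpulist").read_text().strip()
    meminfo = (node / "meminfo").read_text()
    total_kb = next(
        line.split()[-2] for line in meminfo.splitlines() if "MemTotal" in line
    )
    print(f"{node.name}: CPUs {cpus}, local memory {total_kb} kB")
```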
