
On Network World’s Microsoft Subnet, there is a very solid article, “9 Myths of Microsoft Virtualization – Busted or Confirmed.” It’s actually an interview with Microsoft’s Mike Neil (general manager of Microsoft virtualization), and it’s a fun read.

One of the myths I was very interested in was this one:

No. 3. VMware says that its memory overcommitment feature actually makes its wares cheaper in production environments in terms of total-cost-of ownership than Microsoft’s products (and Xen Server, too). Microsoft (and several users I’ve talked to) say this is a myth … although I’ve also heard that Microsoft is working on a similar feature. Is the "memory overcommitment" a myth and if so, why?

I got even more interested when I read Mr. Neil’s response:

So first off, how many IT pros configure their production servers to overcommit anything? Customers want an SLA and they want to know what resources are being consumed by a VM. Memory costs continue to come down and the number of DIMM sockets are going up, making this argument moot. We are focused on the efficient use of resources and using those resources dynamically — pooling the memory of the whole machine and dynamically balancing the memory between all of the VMs, instead of overcommiting a resource that can lead to bottlenecks. So, you can see the caveats on using overcommit in a production environment. As to Microsoft’s plans for new memory, we don’t look at it as "overcommit" we look at it as "dynamic memory." We want to provide the same benefit without the risk. Watch for future details.
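For anyone new to the term, memory overcommit just means the hypervisor hands out more memory to VMs, in total, than the host physically has, betting that not every VM will use its full allocation at once. A minimal back-of-the-envelope sketch (the host size and VM allocations below are hypothetical numbers, not figures from the interview):

```python
# Illustrative only: hypothetical host and VM sizes.
# "Overcommit" = total memory allocated to VMs exceeds physical RAM;
# the hypervisor covers the gap with techniques such as page sharing
# and ballooning, at the risk of contention if every VM gets busy.

HOST_RAM_GB = 32                      # hypothetical physical memory
vm_allocations_gb = [8, 8, 8, 8, 8]   # five VMs, 8 GB each

total_allocated = sum(vm_allocations_gb)
overcommit_ratio = total_allocated / HOST_RAM_GB

print(f"Allocated: {total_allocated} GB on a {HOST_RAM_GB} GB host")
print(f"Overcommit ratio: {overcommit_ratio:.2f}x")
# A ratio above 1.0 means the host is overcommitted (1.25x here).
```

Mr. Neil’s point is essentially that production shops avoid ratios above 1.0 because a burst of real demand turns that bet into a bottleneck; VMware’s counterargument is that consolidation savings at modest ratios outweigh the risk.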

So – what’s the real deal? Hey, you VMware practitioners (read that as “paying customer practitioners”): how big a deal is memory overcommit to you? And (be straight with me) how much do you actually use it in production?
