Server memory is a component that’s either sufficient or insufficient. If you have enough memory for your workloads, you likely don’t think about RAM at all because other problems consume your attention. When you don’t, your servers – and your organisation’s productivity – slow to a crawl, because DRAM is what feeds your CPUs. That’s why, in a recent Spiceworks survey of over 350 IT decision-makers, 47% said they planned to add more server memory in the coming year, even though half of their servers were already running at maximum installed memory capacity.* These findings come as no surprise given how memory helps overcome five of the most pressing workload constraints.
The question: “What are the top challenges you currently face in overcoming server workload constraints?”
The response: Respondents could select up to three answers; their answers are shown below.
About the respondents: The 353 respondents selected by Spiceworks were required to have purchase influence in their organisation. They were split nearly evenly across four countries (US, UK, France, and Germany), and each had to be running at least 30 physical servers and using virtualisation software. Overall, 23 industries were represented (ranging from technology to energy to manufacturing), and 74% of respondents were running 100 or more physical servers, with 41% running over 200 boxes.
Constraint #1: Dwindling budgets

Getting the most out of a shrinking budget often comes down to comparing acquisition cost with total cost of ownership (TCO). If you increase a server’s efficiency, you decrease its TCO because you get more performance out of the same box over the same period. Since memory is what feeds your processing cores, it’s one of the most effective ways to make your CPUs more efficient and productive, letting you handle growing workloads without buying new servers.
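As a back-of-the-envelope sketch of that efficiency argument (all figures below are hypothetical, not survey data), cost per unit of work falls as a memory upgrade lifts a server’s sustained throughput:

```python
# Hypothetical figures: compare cost per transaction for a server before
# and after a memory upgrade that lifts its sustained throughput.
def cost_per_txn(acquisition_cost, annual_opex, years, txn_per_sec):
    """Total cost of ownership divided by lifetime transactions."""
    tco = acquisition_cost + annual_opex * years
    lifetime_txns = txn_per_sec * 60 * 60 * 24 * 365 * years
    return tco / lifetime_txns

base = cost_per_txn(8_000, 2_000, 5, txn_per_sec=1_000)
upgraded = cost_per_txn(8_000 + 1_200, 2_000, 5, txn_per_sec=1_600)  # +RAM
print(f"cost per million transactions: {base * 1e6:.2f} vs {upgraded * 1e6:.2f}")
```

Even after adding the upgrade cost to the TCO, the extra throughput drives the per-transaction cost down – the same arithmetic that makes a memory upgrade cheaper than a new server.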
Specifically, more memory gives your system more of its fastest resource for getting data to the CPU. The faster data reaches the CPU, the less time the CPU spends idling – consuming power while doing little to no work. Because memory resides closer to the CPU, data takes less time to travel from DRAM to the CPU than it does from storage. Hard drives typically deliver data to the CPU in milliseconds; enterprise SSDs deliver it in microseconds. That’s a vast improvement, but it’s still higher latency than DRAM, which delivers data in nanoseconds (with latency, lower is of course better). When you consider the millions of instructions fed to the processor each day, serving data from memory makes a significant performance difference. Time is money, and more memory helps deliver the best possible return on your CPU investment.
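The gap is easiest to see side by side. A quick sketch using rough, order-of-magnitude latency figures (illustrative, not measured values):

```python
# Rough order-of-magnitude access latencies, in nanoseconds (illustrative).
LATENCY_NS = {
    "hard drive": 5_000_000,    # ~5 ms: seek + rotational delay
    "enterprise SSD": 100_000,  # ~100 us: NAND read + controller overhead
    "DRAM": 100,                # ~100 ns: memory-bus access
}

for medium, ns in LATENCY_NS.items():
    slowdown = ns // LATENCY_NS["DRAM"]
    print(f"{medium:>14}: {ns:>9,} ns ({slowdown:,}x DRAM)")
```

Even with generous assumptions for the SSD, DRAM sits roughly three orders of magnitude closer to the CPU in latency terms.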
Constraint #2: Virtualised workload variance

Virtualised workloads are all about maintaining consistent quality of service (QoS) and eliminating on-again/off-again variance (see our MySQL study for a deep dive on this). More RAM helps eliminate service variance because it gives virtualised applications extra room to store and use active data (which lives in memory). When an unpredictable workload spike exhausts available memory, the system scrambles to find resources, performance drops, and disk thrash is the typical result. More memory solves this problem by giving your applications the headroom to absorb workloads that spike unpredictably.
Constraint #3: Limited floor space

Think of floor space limits as a constructive problem to solve: what’s the minimum number of servers you need to run your workload? This kind of thinking helps lighten your enterprise footprint, because every underutilised server costs you more. For example, if you used 5 maxed-out servers to do the work of 10 half-full or ageing servers, you’d save on power, cooling, and software licenses – the big killer. When floor space is at a premium, there’s really only one thing to do: scale up. Scaling up almost always means increasing a server’s installed memory capacity to get as much out of the box and feed as many VMs as possible.
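To make the consolidation maths concrete, here’s a minimal sketch with hypothetical per-server annual costs (your licence and power figures will differ):

```python
# Hypothetical annual per-server costs for a consolidation scenario:
# 10 half-utilised servers vs 5 memory-maxed servers doing the same work.
POWER_AND_COOLING = 1_500  # per server per year (assumed)
SOFTWARE_LICENSE = 3_000   # per server per year (assumed) - "the big killer"

def annual_run_cost(server_count):
    """Yearly cost of keeping this many servers powered and licensed."""
    return server_count * (POWER_AND_COOLING + SOFTWARE_LICENSE)

before = annual_run_cost(10)  # 10 half-full servers
after = annual_run_cost(5)    # 5 maxed-out servers
print(f"annual savings from scaling up: {before - after}")  # → 22500
```

Note that the savings scale linearly with every server you retire, which is why per-server licensing usually dominates the calculation.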
Constraint #4: A growing user base

Hosting more users requires more system resources (read: RAM) to maintain the same QoS – a challenge very similar to Constraint #2 above. By giving the system more RAM, you increase its flexibility and its ability to handle the unpredictable workload demands that come with sudden growth in your user base.
Constraint #5: Power consumption

Although fully populating a server’s memory raises its total power consumption, the total energy consumed is often less than that of multiple partially populated servers delivering comparable performance. More DRAM helps your servers use power in the most efficient way possible from a workload perspective (feeding and running the CPU). And if you’re using fewer physical servers, your total power and cooling costs will likely drop as well.
Memory is like fuel for your CPUs – as long as they have enough, they’re fine. But there’s a significant difference between having just enough RAM and truly improving workload efficiency. With just enough RAM, you can certainly run your applications; with the maximum installed memory capacity, you can often use fewer servers to get more done at a lower TCO. Don’t starve your CPUs. Know your workload – whether it’s CPU- or memory-bound – and improve efficiency for less with more RAM, not more servers.