If you’ve worked with on-prem or cloud infrastructure before, you’re probably familiar with the idea of overprovisioning. Overprovisioning resources is often required to ensure you have enough capacity to support peak demand. It’s inefficient on platforms where capacity and compute cannot be scaled independently, yet a quick Google search shows that many organizations still consider it beneficial. In this post, we’ll look at why overprovisioning is both costly and unnecessary, and discuss a better way to get the results you want while paying only for the resources you actually use.
The “Benefits” of Overprovisioning
Current public cloud architectures couple resources fairly tightly to one another, which limits clients’ ability to scale those resources in a granular fashion. For example, the type and size of the compute instance (its vCPU count and DRAM) also determines how much network bandwidth and storage performance you can provision. It’s a three-legged stool with little room to adjust the amount, or the ratio, of one resource relative to the others. As a result, clients often need to provision far more of one resource than the application actually requires just to get enough of another resource the application does need. This overprovisioning is costly and quite common, especially for applications that are “hungry” for one particular element, whether that’s storage performance, vCPU cores, or something else. The downside is that the client pays for 100% of every allocated resource even if they only use 10% of it; resources are billed from the moment they are turned on, whether or not they are actively used.
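To make the bundling problem concrete, here is a minimal sketch of the arithmetic. All instance specs and prices are made-up illustrative numbers, not real cloud pricing; the coupled IOPS-per-vCPU ratio is an assumption chosen only to show the shape of the problem.

```python
import math

# Hypothetical illustration of coupled-resource overprovisioning.
# Suppose an app needs only 8 vCPUs of compute but 40,000 storage IOPS.
NEEDED_VCPUS = 8
NEEDED_IOPS = 40_000

# In a coupled model, storage performance scales with instance size:
# each vCPU "unlocks" a fixed amount of IOPS (assumed ratio below).
IOPS_PER_VCPU = 500          # assumption: coupled ratio
PRICE_PER_VCPU_HOUR = 0.05   # assumption: bundled hourly price

# To reach the needed IOPS, you must provision this many vCPUs...
vcpus_provisioned = max(NEEDED_VCPUS, math.ceil(NEEDED_IOPS / IOPS_PER_VCPU))
coupled_cost = vcpus_provisioned * PRICE_PER_VCPU_HOUR

# ...even though only 8 are actually used.
cpu_utilization = NEEDED_VCPUS / vcpus_provisioned

print(f"vCPUs provisioned: {vcpus_provisioned}")    # 80
print(f"CPU utilization:   {cpu_utilization:.0%}")  # 10%
print(f"Hourly cost:       ${coupled_cost:.2f}")    # $4.00
```

Under these example numbers, the client provisions ten times the compute the application needs just to unlock enough storage performance, which is exactly the “paying for 100%, using 10%” situation described above.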
This is a very common source of frustration among cloud customers, because cloud-native resources do not let clients pick and choose exactly the resources they need and use them as flexibly as they would like. It’s a bundled, pre-packaged approach with some flexibility within small ranges, but as requirements scale, the imbalance between resource allocations grows significant and costs become unreasonable. No one wants to pay for an 18-wheel truck when all they need is a minivan. Yet the public cloud often requires clients to pay for that 18-wheel big rig in order to get enough of something else… even if the minivan would have sufficed. Now imagine the client has to provision 1,000 18-wheelers instead of 100 minivans; this is where cloud costs become problematic. While most clients accept some overprovisioning waste and cost burden, it becomes a hard limiting factor when significant expansion, scaling, or the need to bend cost curves enters the picture, which is almost always the case for real Tier 1 production applications (not just test/dev environments). Clients want to benefit from economies of scale, not be penalized for them.
Break It Up: Decoupling Performance and Capacity
There is a more cost-efficient way to get a high level of performance while paying only for the capacity you need. By choosing a platform that decouples performance from capacity, you can scale one independently of the other – eliminating the need to overprovision. In this scenario, you only need to buy – and pay for – the capacity you use, while still getting the high performance you need.
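Continuing the earlier made-up numbers, a quick sketch shows why decoupling changes the bill. The per-IOPS price below is an assumption invented for illustration; no real vendor pricing is implied.

```python
# Hypothetical comparison: coupled vs. decoupled pricing for the same
# workload (8 vCPUs of compute, 40,000 IOPS of storage performance).
PRICE_PER_VCPU_HOUR = 0.05   # assumption: compute price
IOPS_PER_VCPU = 500          # assumption: coupled ratio

# Coupled: must buy enough vCPUs to unlock the required IOPS.
coupled_vcpus = 40_000 // IOPS_PER_VCPU          # 80 vCPUs
coupled_cost = coupled_vcpus * PRICE_PER_VCPU_HOUR

# Decoupled: buy compute and storage performance separately,
# each sized to what the application actually uses.
PRICE_PER_IOPS_HOUR = 0.00002                    # assumption
decoupled_cost = 8 * PRICE_PER_VCPU_HOUR + 40_000 * PRICE_PER_IOPS_HOUR

print(f"coupled:   ${coupled_cost:.2f}/hr")      # $4.00/hr
print(f"decoupled: ${decoupled_cost:.2f}/hr")    # $1.20/hr
```

The exact savings depend entirely on the assumed prices and ratios; the point is structural: when the two resources are priced and scaled independently, neither one drags the other up.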
Having the flexibility to scale up, down, in, and out to meet changing business needs ensures that you never overpay for your infrastructure. If you’re a retail company heading into the holiday season, you can expand capacity to meet the needs of the busiest time of the year and then scale back in January. Or, to use a timelier example: if you’re a web conferencing software company, you would have had the flexibility to scale performance at the start of the COVID-19 pandemic, when more workers were staying home and telecommuting.
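The retail example above can be sketched with some simple annual math. The capacity sizes and the per-TB price are made-up illustrative numbers.

```python
# Hypothetical seasonal-scaling illustration: a retailer provisions for
# peak demand year-round vs. scaling capacity up only for the holidays.
PRICE_PER_TB_MONTH = 20.0   # assumption: monthly capacity price
BASELINE_TB = 50            # assumption: normal-month capacity
PEAK_TB = 200               # assumption: holiday-season capacity

# Fixed provisioning: pay for peak capacity all 12 months.
fixed_annual = 12 * PEAK_TB * PRICE_PER_TB_MONTH

# Elastic provisioning: pay for peak during 2 holiday months,
# baseline for the other 10.
elastic_annual = (10 * BASELINE_TB + 2 * PEAK_TB) * PRICE_PER_TB_MONTH

print(f"fixed:   ${fixed_annual:,.0f}/yr")    # $48,000/yr
print(f"elastic: ${elastic_annual:,.0f}/yr")  # $18,000/yr
```

With these example numbers, paying for peak capacity year-round costs well over twice what seasonal scaling does, and the gap widens the shorter the peak period is relative to the year.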
How TPG Software Achieved High Performance Without Overprovisioning
TPG Software, which creates investment accounting software solutions, was trying to move from its legacy business model to a SaaS delivery model. SaaS would allow TPG to deliver a high-quality and consistent application experience to customers, simplify support operations, and speed up the development and QA processes… but it also presented its own challenges.
TPG’s customers generate hundreds of reports with peak activity at the middle and end of each month. TPG’s cloud-based infrastructure would need to have the flexibility to add more resources to accommodate these peak workloads… and then scale back down for the rest of the month.
Silk’s Cloud Data Platform gave TPG the agility to run at full efficiency while minimizing the costs of managing its infrastructure. Ultimately, by leveraging Silk, TPG achieved 30x cost savings with its Silk-enabled SaaS solution as well as 20x faster reporting for its customers.