When it’s time to begin designing and building your high-performance SaaS application, every design decision needs to be carefully evaluated. What are the unique requirements, risks, and trade-offs for your application? One of the first questions you will face is “what is our scaling strategy?” The answer heavily influences your hosting infrastructure, which can feel like a cart-before-the-horse problem: you need infrastructure to scale, but knowing how you plan to scale is what lets you select the ideal infrastructure and application software.
Should You Scale Horizontally or Vertically?
The truth is that you will want to employ both techniques, depending on which part of the application you are working with. Horizontal scaling and vertical scaling affect compute resources differently: horizontal scaling distributes the workload across multiple instances, while vertical scaling increases the capacity of a single instance. In either case, your goal is to make optimal use of the compute and data resources of each instance to handle increased demand.
Both approaches have distinct advantages, limitations, and trade-offs. The choice between them depends on your application’s specific requirements, workload patterns, and cost-efficiency considerations. So, why would you choose horizontal, vertical, or perhaps both patterns in your SaaS scaling strategy?
Advantages of Horizontal Scaling
Horizontal scaling refers to adding more instances or replicas of a resource to handle increased workload or demand. In the context of Azure, this typically involves adding more virtual machines (VMs) or containers to distribute the workload across multiple instances.
When horizontally scaling, the compute resources in Azure are distributed across multiple instances, allowing for increased parallel processing and improved performance. Each additional instance contributes its own compute resources, such as CPU, memory, and storage, to handle a portion of the workload. This enables better resource utilization and the ability to handle higher traffic volumes or more complex computational tasks. However, it’s important to note that the resources available per individual instance remain the same, and the workload is divided among the instances.
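To make that concrete, here is a minimal sketch in plain Python (not an Azure SDK) of the core idea: spreading incoming requests across a pool of identical instances, round-robin style. The instance names and workload are hypothetical placeholders.

```python
from itertools import cycle

# Hypothetical pool of identical app instances (e.g., VMs or containers).
instances = ["app-vm-1", "app-vm-2", "app-vm-3"]

def round_robin(requests, pool):
    """Distribute requests evenly across the pool, round-robin style."""
    assignment = {name: [] for name in pool}
    targets = cycle(pool)
    for request in requests:
        assignment[next(targets)].append(request)
    return assignment

# Each instance ends up handling roughly 1/N of the workload.
workload = [f"req-{i}" for i in range(10)]
for name, handled in round_robin(workload, instances).items():
    print(name, len(handled), handled)
```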
Risks and Challenges with Horizontal Scaling
There are specific technical risks associated with choosing horizontal scaling for compute and storage resources.
Complexity of Distributed Systems
Horizontal scaling often involves managing multiple instances or replicas of resources. This introduces complexities related to synchronization, data consistency, and distributed system coordination. Implementing and maintaining a distributed architecture can be more challenging, requiring careful consideration of factors like data partitioning, load balancing, and fault tolerance.
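As a simple illustration of the data-partitioning piece, the sketch below hashes a tenant identifier to pick one of N database shards. The tenant IDs and shard count are hypothetical, and a real partitioning scheme would also need to handle rebalancing when shards are added or removed.

```python
import hashlib

SHARD_COUNT = 4  # hypothetical number of database shards

def shard_for_tenant(tenant_id: str, shard_count: int = SHARD_COUNT) -> int:
    """Map a tenant to a shard using a stable hash of its ID."""
    digest = hashlib.sha256(tenant_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % shard_count

for tenant in ["contoso", "fabrikam", "tailwind"]:
    print(tenant, "->", f"shard-{shard_for_tenant(tenant)}")
```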
Network Latency and Communication Overhead
When horizontally scaling, instances need to communicate and share data or state information. This can introduce network latency and communication overhead. Depending on the design and configuration, the increased network traffic between instances can impact overall system performance and response times.
Scalability Limits
While horizontal scaling allows for increased scalability, there are limits to how much an application or system can scale horizontally. Factors such as data consistency requirements, inter-instance communication, and application design can impose practical limits on the scalability achievable through horizontal scaling alone.
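One way to reason about those limits is Amdahl’s law: if some fraction of the work cannot be parallelized (coordination, serialized writes, and so on), adding instances yields diminishing returns. The sketch below is a back-of-the-envelope calculation, not a benchmark.

```python
def amdahl_speedup(parallel_fraction: float, instances: int) -> float:
    """Theoretical speedup when only part of the workload parallelizes."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / instances)

# Even with 90% of the work parallelizable, 32 instances give ~7.8x, not 32x.
for n in (2, 8, 32):
    print(n, "instances:", round(amdahl_speedup(0.90, n), 2), "x speedup")
```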
Advantages of Vertical Scaling
Vertical scaling, also known as scaling up, involves increasing the capacity or capabilities of individual resources, such as virtual machines or databases, to handle increased workload or demand. A key advantage of vertical scaling is that it usually has lower latency because compute processes are located closer to the data. Being able to do more work in memory, or with less distance between compute and data, can mean faster database operations overall.
When vertically scaling, the compute resources of a single instance are enhanced by increasing its capacity. This typically involves upgrading the VM size to a higher tier with more CPU cores, memory, and storage.
As an example, vertical scaling in Azure often involves upgrading the VM size or adjusting the resource capacity of a service. This can lead to improved performance for applications that require higher computational power or have increased resource requirements.
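For instance, a VM resize can be scripted with the azure-identity and azure-mgmt-compute Python packages roughly as sketched below. The subscription, resource group, VM name, and target size are placeholders, the exact SDK surface can vary between package versions, and resizing typically restarts the VM, so treat this as a sketch rather than production code.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Placeholders: substitute your own subscription, resource group, VM, and size.
subscription_id = "<subscription-id>"
resource_group = "my-rg"
vm_name = "app-vm-1"
target_size = "Standard_D8s_v3"

client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# Fetch the VM, change only its size, and push the update back (a resize
# usually reboots the machine).
vm = client.virtual_machines.get(resource_group, vm_name)
vm.hardware_profile.vm_size = target_size
client.virtual_machines.begin_create_or_update(resource_group, vm_name, vm).result()
```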
Risks and Challenges with Vertical Scaling
Vertical scaling is still the more common scaling model for traditional applications and infrastructure. Adding compute, storage, and network capacity to existing nodes and connected resources increases processing power with limited disruption. That said, let’s review some of the risks and trade-offs that come with vertical scaling.
Resource Contention
Vertical scaling focuses on increasing the capacity of individual resources. However, there are limits on the maximum capacity of a single instance. As resources are increased, there is a risk of resource contention, where multiple components within the instance compete for the same resources, leading to performance degradation or bottlenecks. This shows up as query delays, write stalls, hung processes, and slow updates, and can turn into application failures or slowdowns for the client.
Single Point of Failure
Vertical scaling involves working with a single, larger instance. If that instance fails, the entire workload or system can be affected. The risk of a single point of failure increases when relying on a single, highly scaled instance. Implementing high availability and fault-tolerant measures, such as clustering or redundancy, becomes crucial to mitigate this risk.
Cost-Effectiveness
Vertical scaling can become expensive, especially as the resource capacity requirements increase. Higher-tier VMs or larger storage capacities come at a higher cost. Evaluating the cost-effectiveness of vertical scaling compared to horizontal scaling is essential to ensure optimal resource utilization and budget allocation.
Risk Mitigation with Scaling Decisions
It is important to carefully analyze the requirements of the application, workload characteristics, and growth projections. A well-designed architecture, proper monitoring, and performance testing are critical to identifying and addressing scalability limitations, resource contention, and potential points of failure.
For example, on Microsoft Azure you can take advantage of built-in scaling capabilities such as Azure Load Balancer, Azure Autoscale, or Azure SQL Elastic Pools. These tools can simplify the management and optimization of horizontal and vertical scaling in Azure environments, but they also have their own risks and limitations.
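Under the hood, autoscaling services encode rules like the one sketched below. This is deliberately simplified and is not the Azure Autoscale API: scale out when average CPU stays above a threshold, scale in when it stays below a lower one, and always respect minimum and maximum instance counts.

```python
def desired_instance_count(current: int, avg_cpu: float,
                           scale_out_at: float = 75.0,
                           scale_in_at: float = 25.0,
                           minimum: int = 2, maximum: int = 10) -> int:
    """Toy autoscale rule: one instance out/in per evaluation, within bounds."""
    if avg_cpu > scale_out_at:
        return min(current + 1, maximum)
    if avg_cpu < scale_in_at:
        return max(current - 1, minimum)
    return current

print(desired_instance_count(current=3, avg_cpu=82.0))  # -> 4 (scale out)
print(desired_instance_count(current=3, avg_cpu=12.0))  # -> 2 (scale in)
```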
Reasons to Scale Compute Resources
Compute resources are more important to database performance than you may realize. Primarily, CPU and memory define how large the working set can be, and this will vary dramatically based on your application usage patterns.
Performance Optimization
Scaling compute resources, such as CPU and memory, helps optimize database performance. Increasing compute capacity allows for faster query processing, improved transaction throughput, and reduced response times. Scaling compute is beneficial when the database workload requires more processing power to handle concurrent user requests or complex data operations efficiently.
Workload Variations
Database workloads can exhibit variations in demand over time. Scaling compute resources enables you to dynamically adjust the processing capacity to match the workload patterns. During peak periods or high-demand events, scaling compute resources ensures that the database can handle increased transaction volumes or analytical queries without performance degradation.
Parallel Processing
Scaling compute resources supports parallel processing capabilities, which can accelerate data-intensive operations and analytical workloads. By distributing computational tasks across multiple CPU cores or instances, you can achieve faster data processing, complex query execution, and improved overall performance.
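The same idea applies whether the cores live in one scaled-up instance or across a scaled-out tier: split the data, process the chunks concurrently, and combine the results. Here is a minimal sketch using Python’s standard multiprocessing module; the aggregation function is a stand-in for real query or ETL work.

```python
from multiprocessing import Pool

def summarize_chunk(rows):
    """Stand-in for real work: aggregate one partition of the data."""
    return sum(rows)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunk_size = 250_000
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    # Each chunk is processed on its own CPU core, then results are combined.
    with Pool(processes=4) as pool:
        partials = pool.map(summarize_chunk, chunks)
    print(sum(partials))
```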
Reasons to Scale Storage Resources
It’s probably obvious that storage will have the most significant impact on performance, and potentially on cost, for your data-driven applications. SaaS puts this on a whole new scale because of its operational patterns and growth potential.
Capacity Expansion
Scaling storage is necessary when the database requires additional capacity to accommodate growing data volumes. As the database size increases, scaling storage ensures sufficient space for data, log files, backups, and indexes, and allows for expansion without disrupting database operations or running into storage capacity constraints. Capacity also comes in various tiers, each with its own performance advantages and challenges, and cloud storage tightly couples these capabilities to cost. While you may be able to scale capacity, there are limits that can create problems if not accounted for early in the design process.
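A quick projection like the one below can surface those limits early: given a current database size, a monthly growth rate, and a tier’s maximum capacity (all hypothetical numbers here), it estimates how many months of headroom remain.

```python
import math

def months_until_full(current_gb: float, monthly_growth: float, max_gb: float) -> int:
    """Months before compounding growth exhausts the storage tier."""
    if current_gb >= max_gb:
        return 0
    return math.ceil(math.log(max_gb / current_gb, 1 + monthly_growth))

# e.g., 500 GB today, growing 8% per month, against a 4 TB tier limit.
print(months_until_full(current_gb=500, monthly_growth=0.08, max_gb=4096))
```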
I/O Performance
Scaling storage is crucial for optimizing I/O performance. As database workloads grow, the storage system needs to handle increased read and write operations efficiently. Scaling storage resources, such as disk throughput or IOPS (Input/Output Operations Per Second), helps prevent storage bottlenecks and ensures faster data retrieval and updates. Some data solutions require scaling capacity to scale performance, while others allow performance to scale independently.
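A rough sizing exercise helps here: estimate the IOPS and throughput your workload generates and compare them to what the storage tier actually provides. The workload numbers below are hypothetical, and the headroom factor is just a cushion for bursts.

```python
def required_iops_and_throughput(reads_per_sec: float, writes_per_sec: float,
                                 avg_io_kb: float, headroom: float = 1.5):
    """Estimate peak IOPS and MB/s, padded with headroom for bursts."""
    iops = (reads_per_sec + writes_per_sec) * headroom
    throughput_mb = iops * avg_io_kb / 1024
    return round(iops), round(throughput_mb, 1)

# e.g., 3,000 reads/s and 800 writes/s at an average 16 KB per operation.
print(required_iops_and_throughput(3000, 800, avg_io_kb=16))  # ~(5700, 89.1)
```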
Redundancy and High Availability
Scaling storage resources enables the implementation of redundancy and high availability mechanisms. By deploying redundant storage configurations, such as RAID (Redundant Array of Independent Disks) or data replication, you can ensure data durability, fault tolerance, and continuous database availability even in the event of storage failures. Cloud storage will likely advertise many “9s” of availability and redundancy (e.g., 99.999% availability). Remember that availability is about access to the data, and the advertised figure is a target, not a guarantee. If you lose access to cloud storage resources, the only recourse is usually a bill credit after the outage. You need to design for availability beyond what your cloud provider offers.
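To put those “9s” in perspective, the sketch below converts an availability percentage into the downtime it still permits per year. Even 99.999% allows roughly five minutes of unavailability annually, which is why you still need your own availability design.

```python
def downtime_minutes_per_year(availability_percent: float) -> float:
    """Unavailable minutes per year implied by an availability percentage."""
    minutes_per_year = 365.25 * 24 * 60
    return (1 - availability_percent / 100) * minutes_per_year

for nines in (99.9, 99.99, 99.999):
    print(f"{nines}% ->", round(downtime_minutes_per_year(nines), 1), "min/year")
```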
Data Access Patterns
Different storage technologies offer varying performance characteristics. Scaling storage allows you to align the storage infrastructure with the specific data access patterns of the database workload. For example, utilizing solid-state drives (SSDs) or caching mechanisms can enhance the performance of frequently accessed data, while leveraging archival storage or tiered storage solutions can optimize cost-effectiveness for less frequently accessed data.
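A simple policy like the sketch below captures the idea: route data to hot, cool, or archive storage based on how recently it was accessed. The tier names mirror common cloud storage tiers, but the thresholds are hypothetical and would be tuned to your actual access patterns and pricing.

```python
from datetime import datetime, timedelta, timezone

def choose_tier(last_accessed: datetime, now=None) -> str:
    """Pick a storage tier from the age of the last access."""
    now = now or datetime.now(timezone.utc)
    age = now - last_accessed
    if age < timedelta(days=30):
        return "hot"      # frequently accessed: prioritize latency (e.g., SSD)
    if age < timedelta(days=180):
        return "cool"     # infrequent access: cheaper, slower storage
    return "archive"      # rarely accessed: optimize purely for cost

print(choose_tier(datetime.now(timezone.utc) - timedelta(days=90)))  # cool
```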
Ready To Get Started Scaling Your SaaS Application?
Download our whitepaper, “Building and Optimizing High-Performance SaaS Applications”, today!
Let's Get Building!