Speed up the performance of your cloud-based system.

Sujit Udhane
5 min read · Oct 7, 2020
Source : https://tr.pinterest.com/pin/570409109051442099/

Oh god… one more article (or series) on the cloud. A plethora of articles on the cloud is already available. So why do we need one more?

The objective of this article is to provide a simple reference checklist for improving one of the most important characteristics of a system, its performance, by leveraging offerings from the cloud. Not all of these offerings will suit your system, or even be required for it, so choose carefully which ones will help you. The article also lists the key terms from the primary cloud providers, which you can explore for further details.

What is system performance?

The total effectiveness of a computer system, including throughput, individual response time (latency), and availability.

Throughput is the average amount of data that actually passes through a network, interface, or channel over a given period of time.

Latency is the time that passes between an action (a user action or a system action) and the resulting response.
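The two definitions above can be made concrete with a small sketch. This is a minimal, illustrative helper (the function name and the 10 MB example are my own, not from any cloud SDK) that computes both metrics for a single transfer:

```python
def measure(transfer_bytes: int, start: float, end: float) -> dict:
    """Compute latency (seconds) and throughput (bytes/second) for one transfer."""
    latency = end - start
    throughput = transfer_bytes / latency if latency > 0 else float("inf")
    return {"latency_s": latency, "throughput_bps": throughput}

# Example: 10 MB transferred between t=0s and t=2s
stats = measure(10 * 1024 * 1024, start=0.0, end=2.0)
print(stats)  # latency of 2 s, throughput of 5 MB/s (in bytes per second)
```

Note the tension the rest of this article navigates: you can improve throughput by batching more data per round trip, while improving latency usually means shortening or eliminating round trips.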

Which elements can help you to achieve optimal system performance in the cloud?

1. Virtual Machines (VM)

Right-size the instance with the best allotment of virtual CPUs (vCPUs), memory, and specialized characteristics.

Choose the correct CPU family (number of vCPUs), memory, disk IOPS, and network bandwidth based on the use case. Different criteria are available: number of vCPU cores, number of vCPU threads, RAM size, RAM type, disk size, and disk type.

Memory-intensive programs (such as Spark-based workloads) require high memory.

CPU-intensive programs (multi-threaded or multi-process workloads) require more virtual CPU cores.

High-IOPS programs (such as large file reads/writes) require a VM with local SSD storage.

ML/analytics programs require GPUs.
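The rules of thumb above can be sketched as a small lookup. This is a hypothetical helper, and the family names are illustrative labels, not real provider SKUs (each provider has its own naming, e.g. memory-optimized vs. compute-optimized families):

```python
def pick_instance_family(profile: str) -> str:
    """Toy right-sizing helper: map a workload profile to an instance family."""
    families = {
        "memory": "high-memory",          # e.g. Spark-based workloads
        "cpu": "compute-optimized",       # multi-threaded / multi-process
        "iops": "local-ssd",              # large file reads/writes
        "ml": "gpu-accelerated",          # ML / analytics
    }
    # Anything without a pronounced profile fits a general-purpose family.
    return families.get(profile, "general-purpose")

print(pick_instance_family("memory"))  # high-memory
```

In practice you would validate the choice with load tests, since right-sizing is an iterative exercise rather than a one-time lookup.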

2. Storage

Choose the right storage option (HDD, SSD, or premium SSD) and the right storage class (hot, nearline, coldline) for optimized disk performance and cloud cost. Different criteria are available: HDD, SSD, SATA.

High-IOPS applications require SSD. For less frequent reads, cold storage (or HDD) should work.
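The same decision can be sketched as a tiny chooser. The thresholds and tier names here are illustrative assumptions, not real provider limits; actual IOPS ceilings and storage-class pricing vary by provider and disk size:

```python
def pick_storage(iops_required: int, reads_per_day: int) -> str:
    """Toy storage-tier chooser; thresholds are illustrative, not real limits."""
    if iops_required > 10_000:
        # Sustained high IOPS calls for premium/local SSD.
        return "premium-ssd"
    if reads_per_day < 1:
        # Rarely-read archival data suits a cold storage class.
        return "cold-storage"
    return "standard-hdd"

print(pick_storage(iops_required=20_000, reads_per_day=100))  # premium-ssd
```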

3. Load Balancer & Global load balancer/Traffic manager

A load balancer acts as the “traffic cop” sitting in front of your servers and routing client requests across all servers capable of fulfilling those requests in a manner that maximizes speed and capacity utilization and ensures that no one server is overworked, which could degrade performance.

Provision redundant cloud resources, such as VMs and microservice deployments, based on the capacity plan, and use load balancer services to improve workload handling.

For a distributed application, use a global load balancer to route user traffic to the nearest data center, reducing network latency and improving overall performance.

AWS — Elastic Load Balancing (Application Load Balancer, Network Load Balancer, Classic Load Balancer)

Azure — A Public Load Balancer, An Internal (or Private) Load Balancer

GCP — HTTP(S) Load Balancing, Network Load Balancing, Autoscaling Instances.
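The "traffic cop" behavior can be sketched with the simplest balancing policy, round robin (real load balancers also support least-connections, weighted, and health-checked routing; the class and server addresses here are hypothetical):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal round-robin balancer: each request goes to the next server
    in turn, so no single server is overworked."""

    def __init__(self, servers: list[str]):
        self._servers = cycle(servers)

    def route(self, request: str) -> str:
        return next(self._servers)

lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print([lb.route(f"req-{i}") for i in range(4)])
# ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1']
```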

4. Content Delivery Network (CDN)

Virtually everyone on the Internet has experienced the benefits of a Content Delivery Network (CDN). The majority of technology companies, including Google, Apple, and Microsoft, use CDNs to reduce latency in loading web page content. A CDN will typically place servers at the exchange points between different networks. These internet exchange points (IXPs) are the primary locations where different internet providers link to each other in order to provide each other access to resources on their respective networks.

In addition to IXPs, a CDN will place servers in data centers in locations across the globe in high traffic areas and strategic locations to be able to move traffic as quickly as possible.

A primary benefit of a CDN is its ability to deliver content quickly and efficiently. CDN performance optimizations can be broken into three categories:

Distance reduction — reduce the physical distance between a client and the requested data

Hardware/software optimizations — improve performance of server-side infrastructure, such as by using solid-state hard drives and efficient load balancing

Reduced data transfer — employ techniques to reduce file sizes so that initial page loads occur quickly

To understand the benefit, consider a normal client/server transfer without a CDN in place: every request travels all the way to the origin server and pays the full round-trip time, whereas with a CDN most requests are answered from a nearby edge server.

AWS — Amazon CloudFront

Azure — Azure CDN

GCP — Cloud CDN
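The "distance reduction" idea above can be sketched as an edge cache: content is fetched from the origin once, then served locally on every subsequent request. This is a toy model (the class name and origin stub are my own), not how any of the listed CDN products are implemented:

```python
class EdgeCache:
    """Toy CDN edge: serve cached content locally (fast) and fall back
    to the origin only on a miss (slow)."""

    def __init__(self, origin_fetch):
        self._origin_fetch = origin_fetch  # callable simulating the origin server
        self._cache: dict[str, str] = {}
        self.hits = 0
        self.misses = 0

    def get(self, url: str) -> str:
        if url in self._cache:
            self.hits += 1          # served from the edge, short round trip
            return self._cache[url]
        self.misses += 1            # full trip to the origin
        content = self._origin_fetch(url)
        self._cache[url] = content  # populate the edge for future requests
        return content

edge = EdgeCache(origin_fetch=lambda url: f"<body of {url}>")
edge.get("/index.html")  # miss: travels to the origin
edge.get("/index.html")  # hit: served from the edge
print(edge.hits, edge.misses)  # 1 1
```

Real CDNs add expiry (TTLs), invalidation, and compression ("reduced data transfer") on top of this basic hit/miss behavior.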

5. Domain Name System (DNS)

Cloud DNS services are in most cases better able to ensure redundancy and fault tolerance in the infrastructure that they offer. Geographic dispersal of their servers allows for greater scope in DNS resolution between locations, which for the customer provides reduced latency and faster access to websites and online applications.

Cloud providers can improve on the performance possible with in-house DNS servers, by using their resources to ensure advanced traffic routing. The load-balancing capabilities and geographic spread of their servers allows for the deployment of routing policies such as simple failover, latency-based routing, round-robin, geographic DNS and geo-proximity routing.

AWS — Amazon Route 53

Azure — Azure DNS

GCP — Cloud DNS
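One of the routing policies mentioned above, latency-based routing, can be sketched in a few lines: answer each DNS query with the endpoint that has the lowest measured latency for that client. The region names and latency figures below are illustrative assumptions:

```python
def resolve_latency_based(measurements: dict[str, float]) -> str:
    """Toy latency-based routing policy: pick the region with the
    lowest measured latency (milliseconds) for this client."""
    return min(measurements, key=measurements.get)

# Hypothetical latency measurements from one client's vantage point
client_latencies_ms = {"us-east": 120.0, "eu-west": 35.0, "ap-south": 210.0}
print(resolve_latency_based(client_latencies_ms))  # eu-west
```

Managed DNS services gather these measurements continuously across their global footprint, which is what makes the policy practical at scale.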

6. Asynchronous Programming

It refers to splitting a process into discrete steps that do not need to wait for the previous step to complete before executing.

For example, a user can be shown a “sent!” notification while the email is still technically processing. Asynchronous processing removes some of the bottlenecks that affect performance for large-scale software.
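The "sent!" example can be sketched with Python's `asyncio`: the request handler schedules the slow email delivery as a background task and responds immediately. The function names and the 0.1 s sleep (standing in for a slow SMTP or API call) are illustrative:

```python
import asyncio

async def send_email(to: str) -> None:
    # Stand-in for a slow SMTP/API call.
    await asyncio.sleep(0.1)
    print(f"email to {to} delivered")

async def handle_request(to: str) -> str:
    # Fire off the slow work in the background and respond immediately,
    # so the user sees "sent!" while the email is still processing.
    asyncio.create_task(send_email(to))
    return "sent!"

async def main() -> None:
    status = await handle_request("user@example.com")
    print(status)             # the user sees this first
    await asyncio.sleep(0.2)  # give the background task time to finish

asyncio.run(main())
```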

Use serverless cloud services for event-driven microservices

AWS — Lambda

Azure — Serverless Functions

GCP — Cloud Functions

7. Cache

Cache wherever possible. Caching is a good way to keep from having to perform the same task over and over.

Improve application performance by integrating the caching services based on data access patterns. By using caching services, one can reduce the need to query the database or storage for each request.

Server-side cache solutions such as Redis, Memcached, Hazelcast, and GemFire are available.

AWS — ElastiCache (Redis, Memcached)

Azure — Azure Cache for Redis

GCP — Memorystore (Compatible with Redis & Memcached)

8.Managed Services

Go for the correct plan for any of the managed services, such as managed compute, databases, middleware, ETL job schedulers, cache instances, and the like. If you are experiencing a performance issue, consider upgrading to a better service plan, which generally offers a better configuration.

9. Autoscaling Services

Use VM scale sets to auto-scale application resources based on the given workload.
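The scaling decision itself can be sketched as target tracking, similar in spirit to the formula used by Kubernetes' Horizontal Pod Autoscaler: scale the replica count in proportion to observed versus target utilization. The 60% target and the 1–10 bounds below are illustrative assumptions:

```python
import math

def desired_replicas(current: int, cpu_utilization: float,
                     target: float = 0.6, min_n: int = 1, max_n: int = 10) -> int:
    """Toy target-tracking autoscaler: keep average CPU near `target`
    by scaling replicas proportionally, clamped to [min_n, max_n]."""
    wanted = math.ceil(current * cpu_utilization / target)
    return max(min_n, min(max_n, wanted))

# 4 replicas running at 90% CPU against a 60% target -> scale out to 6
print(desired_replicas(current=4, cpu_utilization=0.9))  # 6
```

Managed autoscalers layer cooldown periods and health checks on top of this rule so the fleet does not thrash between sizes.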

That’s all for now. In future articles, we will talk about other system characteristics (Availability, Reliability, Security, etc.) in the cloud ecosystem.

If you like the article, please clap for it. Also, share the article with your friends.

References:

https://www.oreilly.com/library/view/load-balancing-in/9781492038009/ch01.html

https://www.nginx.com/resources/glossary/load-balancing/


Sujit Udhane

I am a Lead Platform Architect working in Pune, India. I have 20+ years of experience in technology, and for the last 10+ years I have been working as an Architect.