In this blog, we are going to look at the objectives of system performance. System performance objectives are the goals we use to define what proper performance means for your application.
What is a System Performance Objective?
A system performance objective is a goal that is set for the performance of a system.
This objective can be based on a number of different factors, including idle time, response time,
throughput, and availability. By setting performance objectives, businesses can ensure that their computer systems meet the needs of the organization.
There are two primary objectives:
1. Minimize Request-Response Latency
2. Maximize Throughput
Minimize Request-Response Latency
Latency is measured in time units.
Latency depends on two things: wait/idle time and processing time.
Latency is a measure of how much time a request-response spends within a system. As a request flows through the system, it spends time at various points being processed, and it also spends time waiting to be processed.
So the total latency of a request-response is the sum of wait time and processing time.
Whenever we are trying to minimize latency, we are trying to minimize the processing time and the wait time.
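As a rough illustration, here is a minimal Python sketch (with made-up timings, not from any real system) that splits a request's latency into the time it waits in a queue and the time it spends being processed:

```python
import queue
import threading
import time

requests = queue.Queue()

def worker():
    while True:
        enqueued_at = requests.get()
        wait_time = time.perf_counter() - enqueued_at  # time spent waiting to be processed
        start = time.perf_counter()
        time.sleep(0.05)                               # simulate 50 ms of processing
        processing_time = time.perf_counter() - start
        latency = wait_time + processing_time          # total latency = wait time + processing time
        print(f"wait={wait_time:.3f}s processing={processing_time:.3f}s latency={latency:.3f}s")
        requests.task_done()

threading.Thread(target=worker, daemon=True).start()

# Enqueue a burst of requests; later requests wait longer, so their total latency grows.
for _ in range(5):
    requests.put(time.perf_counter())
requests.join()
```

Notice that minimizing latency means attacking both terms: doing the work faster, and making requests wait less.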
Maximize Throughput
So let’s first see what throughput is.
Throughput is a measure of how many requests a system can process in a given time.
It is a rate, so it is measured as the rate of request processing. Throughput depends on latency, but there is one more important factor that determines it: capacity. If we minimize the latency of our system, that increases throughput.
But to increase throughput further, there is one more factor we need to increase, and that is capacity.
So latency and capacity together decide the throughput of our system, and our goal is always to maximize throughput.
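A back-of-the-envelope way to see this relationship is the simplified model below (it assumes every request takes the same latency and all workers stay fully busy, which real systems rarely do):

```python
def estimated_throughput(avg_latency_seconds: float, capacity: int) -> float:
    """Rough upper bound on requests per second: each of `capacity`
    parallel workers completes one request every `avg_latency_seconds`."""
    return capacity / avg_latency_seconds

# Halving latency doubles throughput at the same capacity...
print(estimated_throughput(0.200, 10))  # 50.0 requests/sec
print(estimated_throughput(0.100, 10))  # 100.0 requests/sec
# ...and doubling capacity at the same latency does the same.
print(estimated_throughput(0.200, 20))  # 100.0 requests/sec
```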
When we are talking about request-response latency, we are generally talking about those components or processes which take a request and give back a response.
E.g. System Performance Objective: web apps vs. batch processing
So let’s say a web application, a business service, or a database. They all take a request, and they give back a response. But there are certain components which are not request-response based: the components that do batch processing.
For example, if we generate a report that reads data from a database, processes that data, and writes the result back to a database or to a file, that kind of process is doing batch processing.
Here we do not talk about request-response latency; we only talk about throughput. We may talk about total batch processing time, but there is no concept of a request-response.
So for the request-response model, say a web application, a business service, or a database, we are interested in both request-response latency and throughput. For batch processes, we are generally interested in throughput only.
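For a batch job, the natural measurement is throughput, i.e. records processed per unit time. A minimal sketch (the `process_record` step is a placeholder for real report logic):

```python
import time

def process_record(record: dict) -> dict:
    # Placeholder for the real work: read, transform, write.
    return {**record, "processed": True}

records = [{"id": i} for i in range(10_000)]

start = time.perf_counter()
results = [process_record(r) for r in records]
elapsed = time.perf_counter() - start

# Throughput of the batch job: records per second; no request-response involved.
print(f"processed {len(results)} records in {elapsed:.2f}s "
      f"({len(results) / elapsed:.0f} records/sec)")
```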
Conclusion
So we notice that throughput depends on latency and capacity. Capacity augmentation is not something that we need to learn; it is something that can be done as needed. What we need to learn as architects, or as developers, is how to bring down the latency of a system. That meets our first goal, and it also meets our second goal, provided we have enough capacity and can deploy it. So from a learning perspective, our focus will be on how to minimize latency, which will automatically maximize throughput, assuming we always have the capacity we need.
Check out Performance Principles here.
FAQ
How do you measure performance of a system?
It can be measured in multiple ways, depending on the specific performance objective. For example, network response time can be measured using tools like ping or traceroute, while throughput can be measured using tools like iPerf. It’s important to choose the right tool for the job in order to get accurate performance measurements.
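At the application level you can also time a request yourself. Here is a small sketch using only the Python standard library (the URL is a placeholder; substitute your own endpoint):

```python
import time
import urllib.request

URL = "https://example.com/"  # placeholder endpoint

start = time.perf_counter()
with urllib.request.urlopen(URL, timeout=10) as response:
    response.read()  # read the full body so the measurement covers the whole response
elapsed = time.perf_counter() - start

print(f"response time: {elapsed * 1000:.1f} ms")
```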
What are some common system performance objectives?
Common objectives include reducing response time, increasing throughput, and improving availability. The specific objectives will depend on the needs of the organization and the capabilities of the system.
How do you improve system performance?
Improving system performance can involve a number of different strategies, depending on the specific performance issue we are targeting. Common strategies include optimizing software configurations, upgrading hardware components, and applying performance tuning techniques. Most importantly, before improving the performance of a system we need to find the root cause of the performance problem, and then apply the right strategies with that root cause in mind.
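As one illustration of a tuning technique, caching repeated work can cut processing time and hence latency. Here is a minimal sketch using Python's built-in `functools.lru_cache` (the `expensive_lookup` function is hypothetical, standing in for a slow database or API call):

```python
import time
from functools import lru_cache

@lru_cache(maxsize=1024)
def expensive_lookup(key: str) -> str:
    time.sleep(0.1)            # stand-in for a slow database or API call (~100 ms)
    return key.upper()

start = time.perf_counter()
expensive_lookup("report")     # first call pays the full cost
expensive_lookup("report")     # repeated call is served from the in-memory cache
print(f"two calls took {time.perf_counter() - start:.2f}s")
```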