Laws of performance engineering


Making this a placeholder for definitions of all the laws of performance engineering! Start a fresh thread for in-depth discussion.
 
Little’s Law

Little’s law was named after John Dutton Conant Little, an Institute Professor at the Massachusetts Institute of Technology. It is a simple, generic law that can be applied not only to software systems but to any closed system.

Little’s law states that the average number of requests in a (closed) system is equal to the product of the average number of requests serviced per unit time and the average time each request spends in the system.



Little’s law holds valid only under the following conditions:

  • Requests are not created or destroyed within the system.
  • The system is stable (the rate of arrival equals the rate of departure).
It applies to any "black box", which may contain an arbitrary set of components, whether the box holds a single resource (e.g., a single CPU or a single system) or a complex system (e.g., the internet, or a city full of systems).

Thus, Little’s law can be applied to software systems and rephrased as: the average number of users in a queuing system, N, is equal to the average throughput of the system, X, multiplied by the average response time of the system, R.

Numerically,

N = R * X

Where,

N = average number of users in a system

X = average throughput or departure rate of users

R = average time spent in the system or response time





Modifying the above law for performance engineering by adding think time (TT):

N = (R + TT)* X
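
A minimal Python sketch of the two relations above (the function and variable names are mine, not from the original post):

```python
def users_in_system(response_time_s, throughput_tps, think_time_s=0.0):
    """Little's law: N = (R + TT) * X; with TT = 0 this is the basic N = R * X."""
    return (response_time_s + think_time_s) * throughput_tps

# Example: 5 s response time, 10 s think time, 1000 transactions/hour
print(users_in_system(5, 1000 / 3600, think_time_s=10))  # ~4.17 concurrent users
```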



Uses of Little’s law in performance engineering and testing

  1. Used for designing the test (modelling the workload) – Little’s law can be used while designing a test to achieve a desired throughput and to calculate the appropriate think times (wait times) to place inside the test script (see the sketch after this list).
  2. Used for validating the correctness of a test – every performance test should be checked for correctness by applying Little’s law to its results.
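
As a sketch of point 1, the law can be rearranged to work out the think time needed to hit a target throughput with a given number of virtual users (the names and example values below are assumptions for illustration):

```python
def required_think_time(users, target_tps, avg_response_time_s):
    """Rearrange N = (R + TT) * X  =>  TT = N / X - R."""
    return users / target_tps - avg_response_time_s

# Example: 100 virtual users, target throughput 5 TPS, expected 2 s response time
print(required_think_time(100, 5.0, 2.0))  # 18.0 s of think time per iteration
```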


Some practice questions –

Problem 1: Customers exit a bank at an average rate of 2/min. They spend an average of 10 min in the bank. What is the average number of customers in the bank at any time? Assume a stable system.

Solution: Departure rate (equal to the arrival rate in a stable system) = X = 2/min

Time spent in the bank = R = 10 min

By Little’s law: N = R * X

N = number of customers inside the bank = 10 * 2 = 20 customers!

Problem 2 – Determine the number of concurrent users in a performance test based on the following data:

A system processes 1000 transactions/hour with an average response time of 5 s per transaction. The average think time per user is 10 s. What is the number of concurrent users on the system?



Throughput X = 1000/3600 ≈ 0.28 TPS (transactions per second)

From Little’s law: N = [R + TT] * X



Applying the equation: N = [5 + 10] * 1000/3600 ≈ 4.17

The above system has approximately 4 concurrent users inside it at any given point in time.



Problem 3 – How to validate software Performance Test Results:

The details below were collected during a constant-load test. Determine whether the test was executed correctly.

Peak User Concurrency = 400

Average Transactional Response Time = 10s

Average User Think Time = 10s

Peak Transactional Throughput (as recorded by the tool) = 10 TPS

Solution: Apply Little’s law to the above results.

N = [R + TT] * X

Right-hand side = [10 + 10] * 10 = 200 users

This is not equal to the left-hand side (400 users). Hence it can be concluded that something went terribly wrong with this test.
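
The same check can be scripted as part of result analysis; a rough sketch (the 10% tolerance is my assumption, not from the post):

```python
def concurrency_matches_littles_law(measured_users, response_time_s,
                                    think_time_s, throughput_tps,
                                    tolerance=0.10):
    """Compare measured concurrency with (R + TT) * X within a relative tolerance."""
    expected_users = (response_time_s + think_time_s) * throughput_tps
    return abs(measured_users - expected_users) <= tolerance * expected_users

# Problem 3 data: 400 users reported, but (10 + 10) * 10 TPS = 200 expected
print(concurrency_matches_littles_law(400, 10, 10, 10))  # False -> test is suspect
```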
 
Adding Amdahl's Law:

Consider a system that involves multiple components with different processing speeds or service times. To optimize the overall system performance, it is advisable to optimize the component that accounts for the largest percentage of the processing time.

Similarly, if a piece of code is executed multiple times, it should be considered for optimization to yield more performance benefit or speedup, compared to a piece of code which would execute only once.

Amdahl's Law: For any application, the maximum speedup we can obtain by optimizing any component of a program or system depends on the percentage of time the application spends using that component.

Speedup: Speedup refers to the capacity or ability of the system to perform a particular task/work in a shorter time when its processing power is increased. In a system with linear scalability and speedup, an increase in processing power generates a proportional improvement in the throughput or in the response time.

In other words, with linear speedup, twice as much of the required hardware would perform the same task in half the elapsed time. If 2 processing units can complete a task in 10 minutes, 4 processing units should do it in 5 minutes, and 8 processing units should take 2.5 minutes.
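
In its usual quantitative form, Amdahl's law says that if a component accounts for a fraction f of total time and that component is sped up by a factor k, the overall speedup is 1 / ((1 - f) + f / k). A small sketch (the example numbers are illustrative):

```python
def amdahl_speedup(time_fraction, component_speedup):
    """Overall speedup: S = 1 / ((1 - f) + f / k)."""
    f, k = time_fraction, component_speedup
    return 1.0 / ((1.0 - f) + f / k)

# A component taking 60% of total time, made 3x faster, gives only ~1.67x overall
print(amdahl_speedup(0.6, 3.0))
```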
 
UTILIZATION LAW

Before looking at the law itself, you should understand what service demand is. Consider that you are in a grocery queue behind 10 people and estimate that the grocer takes around 5 minutes per customer to weigh the items. At the billing counter, it takes another 3 minutes to bill each customer. Here 5 minutes and 3 minutes are the respective service times at each counter. In all, a customer spends on average 8 minutes of counter time to get an order completed at the shop. This total time spent at the counters to serve a customer's request is called the service demand per customer. It is the sum of all the service times at the sequential hops a request passes through. If a single counter processes the whole request, the service demand and the service time are the same.

Service demand:

Similarly, in software performance engineering, the service demand is the amount of resource time required to execute a request. The time spent by a request on a given server or component, such as a CPU or a disk, is the service time of that component or resource for the given request. The sum of all the service times for a given request is defined as the service demand of the request.
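
For example, a rough sketch of how service demand accumulates across hops (the resource names and times below are purely illustrative):

```python
# Hypothetical per-resource service times for one request, in seconds
service_times_s = {"web_cpu": 0.010, "app_cpu": 0.030, "db_cpu": 0.015, "disk": 0.0225}

# Service demand is the sum of the service times across all resources the request visits
service_demand_s = sum(service_times_s.values())
print(service_demand_s)  # 0.0775 s per request
```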


Utilization Law:

The service demand and throughput of a resource can be used to determine whether the resource is being utilized to its potential.
Consider a disk that is serving 40 requests/second, each of which requires 0.0225 seconds of disk service time. The utilization law tells us that the utilization of this disk is the product of its throughput and service demand.

Utilization = Throughput * Service demand
= 40 * 0.0225 = 0.9 = 90%

That is, the utilization of a resource is the product of the throughput of that resource and the average service demand at that resource.

From utilization law,
Service demand = Utilization/Throughput

where the service demand is expressed in seconds, the utilization as a fraction (example: 30% utilization is taken as 0.3), and the throughput in TPS (transactions per second).
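
A small sketch of both forms of the law, reusing the disk example above (the function names are mine):

```python
def utilization(throughput_tps, service_demand_s):
    """Utilization law: U = X * D (as a fraction; multiply by 100 for percent)."""
    return throughput_tps * service_demand_s

def service_demand(utilization_fraction, throughput_tps):
    """Rearranged form: D = U / X."""
    return utilization_fraction / throughput_tps

# Disk example: 40 requests/s, 0.0225 s of service time per request
print(utilization(40, 0.0225))   # 0.9 -> the disk is 90% busy
print(service_demand(0.9, 40))   # 0.0225 s per request
```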

 
Some other laws that are important for software performance engineers: the Law of Measurements, the Response Time Law, and Nagraj's Law of tuning choices.

 
Let me know the Little's Law modification from a performance testing perspective if pacing is to be considered.
Will it be N = [R + TT + Pacing] * X?
 