


Igor Markov

Guest
The Protocol Complexity Matrix and what it means for your load testing
 
I have always found it an alien concept to want to "max out" your load generators. This is very problematic every time I run across it: the load generators are so overloaded that it is the load generator being slow that appears as the application being slow. The virtual user code is not marked as non-swappable, so as soon as memory is full the load generators will thrash themselves to death swapping (m)mdrv processes to and from disk, slowing the actions of the virtual users. Add logging to this (as many do) and only one load generator (as many also do), and you have a recipe for "blame the tool."

I propose a different model. Once you know how many virtual users your load generators can hypothetically support, deliberately underload them by 50%. Then, when the developers want to "blame the test/tool" because you have just called out their code baby as being really ugly, you can point to all the precautions you have taken to minimize the influence of the test design on the test output.

I would also recommend a minimum of three load generators, with one of the load generator pool acting as a "control" generator: the same hardware configuration as the rest of the pool, but running only a single virtual user. You can leverage this control set to understand the health of your load generator pool during the test:

* Control set degrades and the global pool degrades at the same rate: you have a common issue, most likely the application.
* Control set does not degrade (or perhaps gets faster) while your global non-control set slows: you have a load generator issue. Rebalance your load and bring in additional load generators.

It is very difficult to have a control set in place with only a single load generator. Not impossible, but difficult.
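As a rough sketch of the control-set comparison described above (the function name, thresholds, and numbers are illustrative, not part of LoadRunner or any tool), the decision logic might look like this:

```python
def diagnose(control_baseline, control_now, pool_baseline, pool_now,
             tolerance=0.15):
    """Compare degradation of the control LG against the global pool.

    Each argument is a mean transaction response time in seconds.
    Returns a rough diagnosis string. Thresholds are illustrative only.
    """
    # Fractional slowdown relative to each baseline
    control_deg = (control_now - control_baseline) / control_baseline
    pool_deg = (pool_now - pool_baseline) / pool_baseline

    if pool_deg <= tolerance:
        # No meaningful slowdown anywhere
        return "healthy"
    if control_deg >= pool_deg - tolerance:
        # Control and pool degrade together: common cause
        return "common issue (likely application)"
    # Pool degrades while the single-vuser control does not
    return "load generator issue: rebalance / add generators"

# Example: pool slows 80% while the control barely moves
print(diagnose(0.50, 0.52, 0.50, 0.90))
```

The key point is that the control generator carries almost no load, so any slowdown it shows cannot be caused by generator saturation.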
Also, if you are going to budget for a skilled user of the tool and for the purchase and maintenance of the tool, then you should at least budget for an appropriate infrastructure for the tool to run on. Otherwise you are putting Michael Schumacher in an F1 car powered by a motorcycle engine. Michael is known as a miracle man behind the wheel, but even he could not overcome the limitations of the physical infrastructure in such a case.
 
This is also a reason we try to use the same load generator configuration for every test environment. They are custom-optimized to handle more threads than normal and have hand-tuned TCP/IP stack settings across the board. Once that is all done, we run several baselining tests to ensure they are all working in a calibrated fashion. This applies to physical AND virtual machine LGs, though a few more options apply to the virtuals. The golden limits we found, 60% CPU on the host and 80% on the guest, should also be heeded, or your testing stability goes completely out the window. Beautiful infographic!
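The rules of thumb above (run each generator at 50% of its proven capacity; for virtual LGs stay under 60% host CPU and 80% guest CPU) can be sketched as simple checks. These function names and the sample numbers are hypothetical, chosen only to illustrate the thresholds:

```python
def vusers_to_schedule(measured_capacity: int) -> int:
    """Deliberately underload each LG to 50% of its proven capacity."""
    return measured_capacity // 2

def vm_lg_within_limits(host_cpu_pct: float, guest_cpu_pct: float) -> bool:
    """Golden limits for virtual LGs: under 60% host CPU, 80% guest CPU."""
    return host_cpu_pct < 60.0 and guest_cpu_pct < 80.0

# If baselining showed an LG can sustain 500 vusers, schedule only 250
print(vusers_to_schedule(500))

# A virtual LG at 55% host / 75% guest CPU is inside the golden limits
print(vm_lg_within_limits(55.0, 75.0))

# At 65% host CPU it is outside them, even though the guest looks fine
print(vm_lg_within_limits(65.0, 75.0))
```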