loadrunner what are the trade-offs if we increase the...

  • Thread starter Pankaj Chandan Mohapatra

Pankaj Chandan Mohapatra

Guest
What are the trade-offs if we increase the transaction throughput by a factor of 10 per user, in order to accommodate 20,000 users within a 2,000-vuser license?
 
You will be missing the session object load for 18,000 users. This includes the number of IP connections (potentially as many as six per host), the load on your load balancers, firewalls, etc. Anything that is directly related to session load will not be caught.

The client-server model is predicated upon a delay between requests: request a.n from client A is processed, the response is handled by the client (either by software or human action), and then the next request, a.n+1, arrives. As you collapse the interval between requests to a level no human is capable of, your test becomes less and less a predictor of what the user will experience in production. You will file more defects, which will either be rejected as unrealistic or will result in thousands of man-hours chasing engineering ghosts that would never occur in production. This will impact your value and reputation as a performance tester.

Consider instead going to a single-stack model: a single architectural node of each type of service in your architecture. Use the license you have to find the point of exhaustion under natural load for each node type. As you move back in load, for instance moving load from the web server to the app server, you may need to increase the number of nodes on the front end to generate an appropriate load on the next upstream architectural component. Once you get to the database, you will most likely be looking at a finite number of connections in the connection pool, which is well within your license limit. In such a case you would need to shift your model for load production to the database tier directly (or, alternatively, make the appropriate number of app server calls directly for ~n~ app servers to load up the database).

Usually this request comes from a manager who doesn't understand the technical issues of "turning the volume to eleven," and what is and is not appropriately stressed by skipping the virtual users. Can you? Yes. Should you?
Would you ever consider driving a brand new Corvette off the dealer showroom floor and then testing whether the top speed is 198 on public roads?
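To make the scale of the missing session load concrete, here is a minimal sketch. The six-connections-per-host figure comes from the answer above; the session object size is purely an illustrative assumption, as is the `session_footprint` helper:

```python
# Sketch of the session-load gap when compressing 20,000 users into 2,000 vusers.
# CONNECTIONS_PER_HOST follows the "as many as six per host" figure above;
# SESSION_BYTES is an assumed, illustrative average session object size.

CONNECTIONS_PER_HOST = 6
SESSION_BYTES = 50 * 1024

def session_footprint(concurrent_users):
    """Rough footprint that scales with *sessions*, not with request rate."""
    return {
        "sessions": concurrent_users,
        "tcp_connections": concurrent_users * CONNECTIONS_PER_HOST,
        "session_memory_mb": concurrent_users * SESSION_BYTES / (1024 * 1024),
    }

production = session_footprint(20_000)   # what production will actually carry
accelerated = session_footprint(2_000)   # what a 10x-throughput test exercises

for key in production:
    print(f"{key}: test={accelerated[key]:,.0f} vs production={production[key]:,.0f}")
```

However you tune pacing, everything in this dictionary stays a factor of ten short of production in the accelerated test, because it scales with open sessions rather than with request rate.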
 
James Pulley... even if he decides to test on a scaled-down environment, there is a possibility that he may not be able to scale down each tier, as there may be clustering only at the web or application layer and not at the DB layer. How do we address that? Also, scaling down minute configs like heap sizes, thread pools, DB connection pools, and countless other settings is difficult, and then extrapolating is another risk; the test might get totally skewed and might not represent reality!
 
Ah. I see your single-stack model approach now; that will require triple the effort and the associated risks, and then the effort of explaining the complex test scenarios to the stakeholders involved.
 
I scale down on a host basis, not settings within the host. So, you find the limits of a host of a particular type, such as web, app server A, app server B, etc., and then you know (less 5-12% overhead on the front-end load balancing/firewall) how you can scale horizontally. The number of calls naturally drops/aggregates from one service tier to the next. This is where you may need to increase your number of web servers to achieve an overload condition on an app server behind the web tier, as an example. Or, you make the app server calls directly, capturing and modeling what would be a representative call for an individual user to the app server and then scaling that load up to the point of exhaustion/break. This is a higher-value engineering approach than reducing iteration and think times to a tenth of their value, which represents the transactional load but not the session load on a particular system.
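The host-level extrapolation described above can be sketched roughly as follows. The per-host user limits are hypothetical numbers you would measure by loading a single node of each type to exhaustion, and the 12% figure is the worst case of the 5-12% front-end overhead mentioned:

```python
import math

# Extrapolating horizontal capacity from single-host exhaustion points.
# measured_limits holds hypothetical per-host user capacities, as found
# by loading one node of each type to its point of exhaustion.

FRONT_END_OVERHEAD = 0.12   # worst case of the 5-12% LB/firewall overhead

measured_limits = {
    "web": 1_500,
    "app_server_a": 4_000,
    "app_server_b": 3_500,
}

def hosts_needed(target_users, per_host_limit, overhead=FRONT_END_OVERHEAD):
    """Hosts required once front-end overhead eats into per-host capacity."""
    effective = per_host_limit * (1 - overhead)
    return math.ceil(target_users / effective)

for tier, limit in measured_limits.items():
    print(f"{tier}: {hosts_needed(20_000, limit)} hosts for 20,000 users")
```

The point of the exercise is that each per-host limit is measured under natural, human-paced load, so the horizontal projection carries the real session behavior with it.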
 
James Pulley, bottom line... only 20K virtual users can emulate a 20K-user session load... am I right? :)
 
Yes. If you scale down to what one front end will support, you're still missing the big picture for any downstream components. Front-end servers will eventually compete for resources on the backend. The trick is finding out when.
 
Yes. If you want the full session load of 20K users, and you should want the full session load, then you need 20K users.
 
James Pulley, could you dig a little deeper into the "collapsing the interval between requests" problem? Apart from the session load problem, as I see it the server will receive the same number of requests during the same period regardless of the number of vusers.
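The premise in the question can be checked with a small sketch: the aggregate request rate can indeed be held constant, but everything scoped to a session differs by the compression factor. The user counts and think times below are illustrative assumptions chosen to match the 10x scenario in this thread:

```python
# Same aggregate request rate, different session concurrency.
# 20,000 users with 60 s think time vs 2,000 vusers with 6 s think time:
# both produce ~333 requests/second, but session-scoped load differs 10x.

def load_profile(users, think_time_s):
    return {
        "requests_per_sec": users / think_time_s,  # steady-state arrival rate
        "open_sessions": users,                    # session objects held open
    }

real = load_profile(20_000, 60)       # human-paced production traffic
compressed = load_profile(2_000, 6)   # think time collapsed by 10x

# Throughput matches, which is exactly why the test *looks* equivalent...
assert real["requests_per_sec"] == compressed["requests_per_sec"]

# ...but the server holds 10x fewer sessions, connections, etc.
print(real["open_sessions"] / compressed["open_sessions"])
```

So "same requests per period" is true at the transaction level; the divergence is in the per-session state (sessions, connections, pool slots) that the earlier answers describe, plus the non-human inter-request intervals each individual vuser now exhibits.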