Not all applications require software performance testing, and the same application may not require repeated performance testing for every release. The risk factors you should use to evaluate your application include:
1. User population
The people who use your application are critical to the decision. Questions may include: Who are the users of your application, how many concurrent users are there, and is the number of users increasing? Are the users purchasing products or services from your business? Are they external or internal users? How easy is it for your user base to switch to a competitor if your web site is not performing well?
2. Application type
The application type itself can dominate the risk factors. For instance, questions to ask may include: Is the application an online retail web site? Typically a revenue-generating web site requires a performance test for every release. Is the application a key component in the enterprise architecture that other applications use? If so, it is deemed a critical application and may require performance testing for every release. Is the application a batch process with a strict processing window? How critical is this application to the business, and how is it rated?
3. Application technology
The state of the application technology stack can be a significant risk factor. Generally, the technology platform does not change from release to release. If the underlying technology is stable and well known to the application development team, a performance test might not be required. However, if a new technology is being introduced or is replacing one of the tiers, there may be greater risk, and thus testing is required. Likewise, a significant upgrade to a vendor product could warrant performance testing. It is important to consider the scope and impact of changes to any key components.
4. Application features and functions
The amount of modified code or new code in an application can create new performance risks. Understanding the impact of the changes is critical to determining if performance testing is required. Potential analysis questions may include: How has the new or modified business feature changed the behavior of the application? Were the changes extensive and across the client, application services, and database? What percentage of the code was impacted by the new or modified services?
5. Software development process
Analysts may consider questions such as: Does the SDLC track non-functional requirements during the lifecycle? How are those non-functional requirements communicated among the requirements, design, development, testing, and deployment teams? Have key business transactions or services been identified with stringent response time requirements or strict throughput requirements? What architectural risk analysis, prototyping, or other types of testing have been done throughout the lifecycle that may mitigate the need for formal performance and scalability testing efforts?
6. Production issues with the last release
Recent history can be an indicator for the future. If the last release went into production with performance, scalability, or stability issues, the application may require a closer look to determine whether those issues have truly been mitigated; if they have not, performance testing is required. Similarly, resource utilization patterns and trends may be used to assess the need for further risk mitigation.
7. The schedule of performance tests
Applications will undergo performance testing at different times during their lifetime. An application can be tested before it is ever released into production; performance testing can be scheduled for every major release; or performance testing can be scheduled based on the extent of the application's changes.
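The seven factors above can be treated as a release-by-release checklist. As an illustrative sketch only (the factor names, weights, and threshold below are assumptions for demonstration, not values prescribed by this text), a team might score each factor and use the total to decide whether a given release warrants a performance test:

```python
# Hypothetical risk-scoring sketch: weights and threshold are
# illustrative assumptions, not values prescribed by the text.
RISK_FACTORS = {
    "user_population": 3,     # large or growing external user base
    "application_type": 3,    # revenue-generating or shared enterprise component
    "technology_change": 2,   # new tier, platform, or major vendor upgrade
    "feature_change": 2,      # extensive new or modified code across tiers
    "sdlc_gaps": 1,           # non-functional requirements not tracked
    "production_issues": 3,   # performance problems in the last release
}

def needs_performance_test(scores, threshold=5):
    """Return True if the weighted risk score meets the threshold.

    `scores` maps each factor name to 0 (risk absent) or 1 (risk present).
    """
    total = sum(RISK_FACTORS[name] * present for name, present in scores.items())
    return total >= threshold

# Example release: large user base plus a technology change (3 + 2 = 5).
release = {
    "user_population": 1,
    "application_type": 0,
    "technology_change": 1,
    "feature_change": 0,
    "sdlc_gaps": 0,
    "production_issues": 0,
}
print(needs_performance_test(release))  # prints True
```

In practice the weights would come from the organization's own risk tolerance; the point is simply that the schedule of testing can be driven by an explicit assessment of the factors rather than by habit.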