
Monthly Archives: July 2012

Software Performance modeling – Part I
Software performance modeling is a complex topic, so where do you start? A complete approach must start with the current production business volumes. In this post I will review the steps involved in the modeling effort and the types of models available. One caution up front: do not use a linear model for a complex system.
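To see why a linear model misleads, consider the textbook M/M/1 queue, where response time R = S / (1 − U) grows nonlinearly as utilization U approaches saturation. The service time below is an illustrative assumption, not a measured value:

```python
# Response time in an M/M/1 queue: R = S / (1 - U).
# A linear model would predict response time doubling when load doubles;
# the queueing model shows it exploding near saturation instead.

def mm1_response_time(service_time: float, utilization: float) -> float:
    """Response time for an M/M/1 queue at a given utilization (0 <= U < 1)."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1 - utilization)

S = 0.1  # seconds of service time per transaction (illustrative)
for u in (0.5, 0.8, 0.9, 0.95):
    print(f"U={u:.2f}  R={mm1_response_time(S, u):.2f}s")
```

Going from 50% to 95% busy is less than a 2x increase in load, yet response time grows tenfold, from 0.2s to 2.0s. That is the behavior a linear model cannot capture.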

Know your workload profile
If you have a web site where customers transact business, you must know how many customers there are and how often they visit. What are they doing on your web site? What are the key business transactions they use? Beyond the login, are they reviewing orders or searching for new products? If you run an online retail web site, you must be fully aware of the workload profile your customers place on your systems. If you provide healthcare benefits or prescription benefits to a large number of clients and members, you must understand the business volumes from each prior year, particularly during the open season when members and beneficiaries visited the member portal.
The business and IT must estimate the future workload profile together. Will there be a steady increase in volume each quarter, say 20% growth? Is a new client coming onboard with one million new members? How will this business growth impact the existing systems?
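The estimate above can be sketched as simple compound-growth arithmetic. This is a minimal illustration using the 20% quarterly growth figure from the text; the current transaction rate and the onboarding bump are assumed numbers, not data from any real system:

```python
# Sketch: projecting future workload from current production volumes,
# assuming steady quarterly growth plus a one-time client onboarding.
# All rates below are illustrative assumptions.

def project_volumes(current_tps: float, quarterly_growth: float,
                    quarters: int, onboarding_tps: float = 0.0) -> list:
    """Projected peak transactions-per-second for each future quarter."""
    volumes = []
    tps = current_tps + onboarding_tps  # new client lands before quarter 1
    for _ in range(quarters):
        tps *= 1 + quarterly_growth     # compound growth each quarter
        volumes.append(round(tps, 1))
    return volumes

# 500 TPS today, 20% growth per quarter, a new client adding ~50 TPS
print(project_volumes(500, 0.20, 4, onboarding_tps=50))
# -> [660.0, 792.0, 950.4, 1140.5]
```

Projections like this are the input to the capacity question that follows: can the existing systems absorb that growth?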

Know your system utilization
Each workload profile consumes resources of the system, and when I mention system, I mean a complex Enterprise system. When a user logs into the member portal or brokerage account, they execute business transactions. If a member reviews their beneficiary, or a retail trader reviews their account positions, those business transactions turn into system transactions; the proverbial rubber hitting the road.
The system transactions span all the components: web servers, application servers, middleware, CICS gateways, and databases. These transactions consume memory, CPU, network, and disk. The business workload can be measured in transactions per second (or per minute).
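The mapping from transaction rate to resource consumption can be sketched with the Utilization Law from operational analysis: U = X × D, where X is throughput and D is the service demand each transaction places on a resource. The per-tier service demands below are illustrative assumptions, not measured values:

```python
# Utilization Law: U = X * D
# (utilization = throughput x service demand per transaction).
# Service demands here are assumed, illustrative CPU-seconds per transaction.

service_demand_sec = {
    "web server": 0.004,
    "app server": 0.012,
    "database":   0.008,
}

def utilization(throughput_tps: float) -> dict:
    """Per-tier CPU utilization (0..1) at a given transaction rate."""
    return {tier: throughput_tps * d for tier, d in service_demand_sec.items()}

for tier, u in utilization(60).items():   # 60 transactions per second
    print(f"{tier}: {u:.0%} busy")
```

At 60 TPS the app server tier is already 72% busy in this sketch, which identifies it as the likely bottleneck as volumes grow.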
Both the system and the end-user experience must be monitored. There are a number of tools in the market to help collect this information; they fall into the Application Performance Monitoring (APM) market. These tools are critical for measuring resource utilization in complex systems, and the information they collect is critical to the forecasting and capacity planning disciplines.
The next post will be a review of a simple business model mapping the system transactions.


Part II

Not all applications require software performance testing, and the same application may not require repeated performance testing for every release.  The risk factors you should use to evaluate your application include:
1. User Population
The people who use your application are critical to the decision. Questions may include: Who are the users of your application, how many concurrent users are there, and is the number of users increasing? Are the users purchasing products or services from your business? Are they external or internal users? How easy is it for your user base to switch to a competitor if your web site is not performing well?
2. Application type
The application type itself can dominate the risk factors.  For instance, questions to ask may include: is the application an online retail web site?  Typically a revenue generating web site requires a performance test for every release.  Is the application a key component in the Enterprise Architecture that other applications use?  If so, this is deemed a critical application and may require performance testing for every release.  Is the application a batch process with a strict window of processing? How critical is this application to the business, and how is it rated?
3. Application technology
The state of the application technology stack can be a significant risk factor. Generally, the technology platform does not change from release to release. If the underlying technology is stable and is well known to the application development team, a performance test might not be required. However, if a new technology is being introduced or is replacing one of the tiers, there may be greater risk and thus testing is required. Likewise, a significant upgrade to a vendor product could warrant performance testing. It is important to consider the scope and impact of changes to any key components.
4. Application features and functions
The amount of modified code or new code in an application can create new performance risks. Understanding the impact of the changes is critical to determining if performance testing is required. Potential analysis questions may include: How has the new or modified business feature changed the behavior of the application?  Were the changes extensive and across the client, application services, and database? What percentage of the code was impacted by the new or modified services?
5. Software Development process
Analysts may consider questions such as: Does the SDLC track non-functional requirements during the lifecycle?  How are those non-functional requirements communicated from the requirements, design, development, testing and deployment teams? Have key business transactions or services been identified with stringent response time requirements, or strict throughput requirements? What architectural risk analysis, prototyping, or other types of testing have been done throughout the lifecycle that may mitigate the need for formal performance and scalability testing efforts?
6. Production issues with the last release
Recent history can be an indicator for the future. If the application went into production and the last release had performance, scalability, or stability issues, then it may require a closer look at the application to determine if the issues have been truly mitigated. Otherwise, performance testing is required. Similarly, resource utilization patterns and trends may be used to assess the need for further risk mitigation.
7. The schedule of performance tests
Applications will undergo performance testing at different times during their lifetime. The application can be tested before it is ever released into production; performance testing can be scheduled for every major release; or performance testing can be scheduled based on the extent of the application’s changes.
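The risk factors above can be combined into a simple screening score to decide whether a given release warrants a performance test. This is a hedged sketch only: the factor weights, ratings, and threshold are illustrative assumptions, not an established methodology:

```python
# Illustrative risk screen for scheduling a performance test.
# Weights and threshold are assumptions; tune them to your Enterprise.

RISK_WEIGHTS = {
    "user_population":    3,
    "application_type":   3,
    "technology_change":  2,
    "feature_change":     2,
    "sdlc_nfr_gaps":      1,
    "recent_prod_issues": 3,
}

def needs_performance_test(ratings: dict, threshold: int = 12) -> bool:
    """ratings maps each factor to 0 (low risk) .. 3 (high risk)."""
    score = sum(RISK_WEIGHTS[f] * ratings.get(f, 0) for f in RISK_WEIGHTS)
    return score >= threshold

# A revenue-generating site with moderate feature changes and a recent
# production incident; no technology change this release.
release = {"user_population": 2, "application_type": 3,
           "technology_change": 0, "feature_change": 1,
           "sdlc_nfr_gaps": 1, "recent_prod_issues": 2}
print(needs_performance_test(release))  # score 24 >= 12 -> True
```

A scoring scheme like this makes the selection decision repeatable across a portfolio of hundreds of applications, rather than leaving it to case-by-case judgment.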

I am creating a theme for the next few posts, on the topic of selecting an application for performance engineering and performance testing.

Under what conditions will an application require a performance test? Not all applications require performance testing, and the same application may not require repeated performance testing for every release. The selection process for performance testing frequency must consider user population, application type, technology, changes to features and function, and how non-functional requirements are monitored in the Software Development Life Cycle (SDLC).

A typical Enterprise has thousands of applications.  In a business unit, there can be several hundred applications. As the business continues to evolve and change, so will the applications evolve and change to support the business needs.  The changes made to the application may increase the risk to performance, scalability, or stability. Depending on the business’s tolerance for disruption, these changes may require some level of performance and scalability testing to verify the application can still process the accepted business volumes while staying within service level agreements.

These application changes are typically scheduled on a release calendar. Customer facing applications should be considered critical to the business, while internal facing applications may be less critical. However, what if the internal application is supporting the executive level of the company and provides key information for decision making?  Some Enterprises believe inherently that performance and scalability testing are required, while others may leave the decision to the business units. Enabling the Enterprise for performance engineering requires an investment. This investment is considered to be an indirect investment, as it may not be immediately linked to a revenue generating process.

The value of the investment in Software Performance Engineering can be quantified. For example, when performance engineering recommends design changes allowing a revenue-generating Web site to process more orders, the return on investment is clear. In a large Enterprise, performance engineering resources are often in short supply, and budgets are generally under constant pressure. This leads IT and business decision-makers to ask questions such as: What guidelines can you use to allocate these resources across the project portfolio? How can you make sure you have not missed an application that required performance testing, and how can you make sure you are not testing the wrong applications? Are you sure the results from performance testing are accurate and allow management to make informed decisions?