In our software performance engineering practice here at Collaborative Consulting, my team and I are fortunate to work across many different clients and industries: financial services, insurance, retail, life sciences, energy, and more. We are called in for many different reasons: a production performance and stability issue, designing and executing a custom benchmark, or reviewing the results of performance tests that didn’t quite deliver the value or answer the question the business needed. I am a big proponent of clearly describing the value that a performance engineering team provides to the business. In some cases the production system still experiences performance and stability issues, even after performance testing was done. So what happened?
When we look at each project and try to uncover where the disconnect is, we look across the following organizations: architecture and development, the performance engineering or testing team, enterprise architecture, perhaps a stage gate process (how projects are approved to move forward), performance testing, capacity planning, and production monitoring. Each has a performance concern. Oh, and don’t forget the business viewpoint. One of the key questions I ask: can the performance engineering team stop a release? Under what conditions will a performance defect prevent code from going in? Do people ask, “Have they run the performance tests?”, or do they ask what the results of the performance tests were?
What is the connection between the development team and the performance testing team? Certainly, the development team is a consumer of performance engineering and testing services. How siloed are these two organizations? Is the development team involved in the performance test, and how eager are they for the results? Now, not every development change needs a performance test. But the high-risk, high-value transactions must be tested at volume, with a workload analysis done as well, so there are no surprises when the new code changes consume more system resources than planned.
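One way to make that workload analysis concrete is to compare per-transaction resource cost between a baseline build and the candidate build, and flag the high-risk transactions that got noticeably more expensive. The sketch below is a minimal illustration of the idea; the transaction names, numbers, and the 10% tolerance are all hypothetical assumptions, not a standard tool.

```python
# Hedged sketch: flag transactions whose per-transaction resource cost
# grew beyond a tolerance between two builds. Illustrative only.

def flag_regressions(baseline, candidate, tolerance=0.10):
    """Return transactions whose per-txn cost grew more than `tolerance` (default 10%).

    `baseline` and `candidate` map transaction name -> average resource
    cost per transaction (e.g., CPU-seconds) measured in a test run.
    """
    flagged = {}
    for txn, base_cost in baseline.items():
        new_cost = candidate.get(txn)
        if new_cost is not None and new_cost > base_cost * (1 + tolerance):
            flagged[txn] = (base_cost, new_cost)
    return flagged

# Illustrative numbers: CPU-seconds per transaction from two test runs.
baseline  = {"login": 0.020, "checkout": 0.150, "search": 0.045}
candidate = {"login": 0.021, "checkout": 0.190, "search": 0.046}

print(flag_regressions(baseline, candidate))
# "checkout" grew ~27%, past the 10% tolerance, so it gets flagged.
```

Even a low-fidelity comparison like this surfaces the "new code consumes more than planned" surprise before it reaches production.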
Looking at the performance testing environment or lab, we often find a shared environment, with infrastructure changes occurring without notice. Or testing can only occur at off-hours, and runs are frequently cancelled. Performance and scalability testing require significant resources: large databases, load testing tools, metrics collection, a few people, and hardware. Companies often go only halfway; some components are close to production size, while others are a fraction of production computing resources. Then the business or IT management is surprised when the performance tests are not indicative of production performance. If you really cannot run a large-scale performance or scalability test that adds real value, you must clearly articulate that.
What else can you do? Component testing, where you rate each component in terms of transactions per second. You can do workload profiling, with a few comparably configured components, alongside low-volume end-to-end performance testing. You need some additional tools to monitor the workload changes driven by the code modifications. The performance engineering team can work more closely with the development team, you can introduce code profiling tools into development, and you can make use of service virtualization.
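Rating a component in transactions per second can be as simple as driving it in a tight loop for a fixed window and dividing. A minimal sketch, assuming `do_transaction` stands in for whatever call exercises the component under test; the warm-up count and duration are illustrative choices, not a prescribed method.

```python
import time

def rate_component(do_transaction, duration_seconds=10, warmup=100):
    """Drive one component in a loop and report its sustained transactions/second."""
    for _ in range(warmup):          # let caches and connection pools settle first
        do_transaction()
    count = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration_seconds:
        do_transaction()
        count += 1
    elapsed = time.perf_counter() - start
    return count / elapsed

# Example: a dummy stand-in component that does a little work per "transaction".
tps = rate_component(lambda: sum(range(1000)), duration_seconds=1)
print(f"component rated at {tps:.0f} transactions/second")
```

A single-threaded loop like this understates what the component can do under concurrency, so treat the number as a relative rating for comparing builds or components, not a capacity figure.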
Too often, the release process simply wants to know: have the tests been run? Not what the quality and findings of those tests were. In many of the stage gate processes out there, performance and scalability aren’t even a requirement. Or there is simply a checkbox.
Have you clearly defined the business value of the software performance engineering process to your business? You should be able to tell quickly by looking at your budgeting process.