Selling the value of your APM investment to your business


The digital business is focused on the customer journey, and real-time performance monitoring is a critical success factor on that journey. The digital business connects more and more internal systems and third-party partner systems to deliver the customer journey, creating a complex environment.

Application performance management (APM) practices and tools provide great value to the digital business: they accelerate root-cause analysis, proactively measure the customer experience across the mobile and web channels, and provide business transaction dashboards. When the full set of APM tools is used properly, it provides the alerting and trending information you need to know whether a change to the application or system did no harm, did some harm, or caused a real problem, so you can react quickly to the latest digital marketing event.
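As a minimal illustration of that "did the change do harm?" check, the sketch below compares 95th-percentile response times before and after a release. The sample timings and thresholds are hypothetical; in practice an APM tool supplies the measurements.

```python
# Minimal sketch: classify a release as "no harm", "some harm", or a real problem
# by comparing p95 response times. Sample data and thresholds are hypothetical.
def p95(samples):
    """95th-percentile response time (seconds), nearest-rank method."""
    ordered = sorted(samples)
    rank = max(int(round(0.95 * len(ordered))) - 1, 0)
    return ordered[rank]

def release_verdict(before, after, warn_pct=10, fail_pct=25):
    """Percent change in p95 after the release, mapped to a verdict."""
    delta_pct = (p95(after) - p95(before)) / p95(before) * 100
    if delta_pct <= warn_pct:
        return f"no harm ({delta_pct:+.1f}% p95)"
    if delta_pct <= fail_pct:
        return f"some harm ({delta_pct:+.1f}% p95)"
    return f"real problem ({delta_pct:+.1f}% p95)"

before = [0.80, 0.85, 0.90, 0.95, 1.00, 1.05, 1.10, 1.20]   # seconds
after  = [1.00, 1.10, 1.15, 1.20, 1.30, 1.35, 1.40, 1.50]   # seconds
print(release_verdict(before, after))
```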

In my positions as a CTO and a senior performance engineer, I have used many different APM and diagnostic tools. Unfortunately, these tools are usually brought into a company as a reaction to a problem rather than before one occurs, typically to help solve an immediate production performance or stability problem. It starts with the developer, engineer, or operations team who has to solve it: the system is slow (what, where, when, who, how?). To get an APM tool in place quickly for such a scenario, the free download option is used for a short period, or a vendor runs a proof of concept, and the problem is solved. However, to keep the tool and switch from a reactive to a proactive position, there is no budget, because the tool was not identified in last year's planning. So the people on the ground are stuck with no APM tool.

No one really knows why the system is slow; in fact, they usually can't even tell you what they mean by slow. Is it 5 seconds, 10, or 42? It is amazing that this is still the rule in most Fortune 1000 companies. Often the corporate desktop or laptop is locked down, with no changes allowed unless approved, so an enterprise end user cannot download and install software. With web performance tools (AppDynamics, New Relic, YSlow, Dynatrace, HttpWatch, et al.) that can be downloaded and instantly measure the response times of web transactions, "slow" can be quantified quickly.
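When nothing can be installed and no tool is approved yet, even a few lines of standard-library Python can put a first number on "slow". This is only a rough sketch for a single transaction, not a replacement for the tools above; the URL is a placeholder.

```python
# Rough sketch: time one web transaction end to end, standard library only.
# The URL is a placeholder for the page people are calling "slow".
import time
import urllib.request

def time_request(url, runs=5):
    """Fetch the URL several times; return (min, avg, max) wall-clock seconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=30) as resp:
            resp.read()                       # include the full download in the timing
        timings.append(time.perf_counter() - start)
    return min(timings), sum(timings) / len(timings), max(timings)

fastest, average, slowest = time_request("https://example.com/")
print(f"min {fastest:.2f}s  avg {average:.2f}s  max {slowest:.2f}s")
```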


Figure 1 – Industry Benchmark – Dynatrace


Know your business

Let's talk about business value and business benefit. What are your business's goals this year? If you are in the insurance business, is it to capture more Independent Agents by improving their customer experience? Wouldn't there be value in knowing your digital customer experience? If you sell on the web, you already know that faster response times mean increased revenue. What if you run a distribution center? The business goal is to increase revenue, so the distribution center must move more orders, and that requires a highly available, responsive environment. Don't forget call centers.

Understand the digital business's motivation for performance and user-experience dashboards: they help proactively monitor the customer journey. A number of third-party companies monitor the performance of web sites across industries to establish benchmarks. The business may be motivated to move up in an industry benchmark and understand that some investment is needed to move from 20th position into the top five.

Your homework: Go find your company's business goals; there should be four to six of them.

Payback period, direct benefit and indirect benefit

Technology often gets caught in the budgeting process when you cannot clearly link the investment (purchase) to a business goal and benefit. Many technologies are far removed from revenue or cost reduction, and how do you quantify flexibility? The ability and willingness to invest in technology vary greatly by industry and by the type of application. Online web retailers relentlessly work to improve the user experience and performance; the business understands the connection between performance and revenue, so it can quickly justify the investment in APM solutions.

Direct benefit: The APM investment you are making will improve application performance and the customer experience for Independent Agents. This aligns with the company goal of attracting more Independent Agents and increasing revenue. If the business can attract 15% more Independent Agents, then revenue increases by XXX%.

Indirect benefit: The APM investment will defer technology (server) purchases for two quarters. If the business has a cost-containment goal, this aligns with it.

Payback period: There will be an initial investment ($100K), recurring charges ($15K), and one-time training expenses. Based on your direct and indirect benefits, how long is the payback period?
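As a sketch of the arithmetic, using the figures above plus hypothetical benefit and training numbers (and assuming the $15K recurring charge is per quarter):

```python
# Payback-period sketch. The $100K license and $15K recurring charge come from
# the text; the training cost and quarterly benefits are hypothetical.
initial_investment = 100_000      # one-time license
training_expense   = 20_000       # one-time training (hypothetical)
recurring_cost_q   = 15_000       # recurring charges, assumed per quarter

direct_benefit_q   = 60_000       # added revenue per quarter (hypothetical)
indirect_benefit_q = 25_000       # deferred server spend per quarter (hypothetical)

net_benefit_q = direct_benefit_q + indirect_benefit_q - recurring_cost_q
payback_quarters = (initial_investment + training_expense) / net_benefit_q
print(f"Payback period: {payback_quarters:.1f} quarters")   # about 1.7 quarters here
```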

Digital business and the customer journey: An agile digital business introduces new features and functions to the marketplace frequently. Are all your customer channels providing the same experience? A real-time dashboard of the channels and the critical business transactions provides value to the business by giving leading indicators of the customer experience.

Your homework: Does your APM investment provide a direct benefit or an indirect benefit? How long will the payback period be?






Software Performance Engineering

Agile and PE: You are not Agile

Integrating software performance engineering with Agile software development methods should be easy, right? In practice, the two are not naturally suited to each other.

Performance engineering has rigorous, defined methods for capturing non-functional requirements in the development cycle, and performance testing requires production-like test systems with a proper transaction workload mix and a large database. Performance testing can add time to a release schedule when the application was designed and developed without well-defined performance and scalability requirements. The value of performance engineering methods is to manage risk and to design and build highly scalable systems that support business growth.
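Part of making a test "production-like" is reproducing the transaction workload mix at the target throughput. Here is a minimal sketch with hypothetical transaction names, shares, and peak rate:

```python
# Sketch: turn an overall peak-throughput target into per-transaction rates
# for a production-like workload mix. Names, shares, and rate are hypothetical.
target_tps = 200                  # overall transactions per second at peak

workload_mix = {                  # share of total traffic per transaction type
    "login":        0.10,
    "search":       0.45,
    "view_detail":  0.30,
    "checkout":     0.10,
    "admin_report": 0.05,
}

assert abs(sum(workload_mix.values()) - 1.0) < 1e-9, "mix must sum to 100%"

for name, share in workload_mix.items():
    print(f"{name:12s} {share:5.0%} -> {target_tps * share:6.1f} TPS")
```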

Agile methods start with stories and themes, a defined system architecture, and a partial list of required features. The team leader defines a series of releases, each composed of a series of scrums. Each release will have partial features and functions. Each scrum may…


Net Neutrality and Netflix


How is your Internet experience? Is it throttled or blocked? Are you in the fast lane? Do you pay more for speed, or are you simply not prioritized? How would you know if you were being throttled? Information Service providers fall under Title I and are loosely regulated, while telecommunications providers fall under Title II (Common Carrier) and are tightly regulated.

Ten years ago, Internet traffic came from thousands of companies. Then, in 2009, half the traffic started coming from 150 large content providers. Now, only 30 companies control half the traffic. The Internet is a very different place today than it was ten years ago, and certainly different from 1996, when the Telecommunications Act classified ISPs as Information Service Providers under Title I.

Even though it has been a hot topic since 2002, the Open Internet, or Net Neutrality, debate gained momentum in January 2014. Verizon Communications had brought a lawsuit challenging the 2010 FCC ruling on the Open Internet, which prohibits paid prioritization and the blocking or throttling of traffic. The ruling also requires transparency and Internet access for all, though it applied only to wired broadband providers, not wireless ones. The US Court of Appeals finally heard the case and ruled in favor of Verizon, finding that the FCC could not regulate the Internet Service Providers. It took four years to work through the courts.

The emerging ISPs were classified under Title I by the FCC in 1996; therefore, the FCC could not impose rules on them. To do so, the court said, the FCC would need to reclassify ISPs as Common Carriers under Title II of the Communications Act of 1934. Did you know the FCC was created by Congress in that act to oversee the telecommunications industry? Being classified under Title II would mean the FCC could regulate prices and require approval before providers could offer new services and products. It also means the FCC could force ISPs to lease capacity on their lines to competing ISPs, and don't forget increased taxes and tariffs.

This is where forbearance and the hybrid approach come in. Most advocates do not want the FCC as the gateway to new products in the marketplace. However, they also don't want ISPs prioritizing their own content over competing content. Meanwhile, the number of ISPs continues to shrink.

How will the FCC implement forbearance? The FCC wants to enable the open access rules; however, it does not want to slow down innovation by having to approve new products or services. The FCC Chairman, Tom Wheeler, has proposed a way forward and will present it to the FCC Commission at the end of February; it covers both wired and wireless broadband providers. In addition to the FCC's direction, many people and groups are adding their comments. President Obama, for example, is advocating for full Title II classification, while many others are taking the "if it isn't broke, don't fix it" position. Congress is also weighing in by proposing bills that would ban classifying ISPs as public utilities.

1996 Telecommunications Act: Summary

Section 706 of the act requires the FCC to determine whether "advanced telecommunications capability (broadband or high-speed access) is being deployed to all Americans in a reasonable and timely manner." Universal Service was also mandated in the 1996 Act. An extension of the Act is the E-Rate program, which provides grants and discounts for libraries, schools, and others to connect to the Internet. The E-Rate program is paid for with taxes from the Title II participants.

Trigger event – So who is blocking whose traffic?

To move traffic across the Internet, the Internet access providers (Comcast, Verizon FiOS, Time Warner Cable, etc.) created contractual peering exchange agreements with the Internet transit providers (Level 3, Cogent Communications, XO Communications, etc.) to allow traffic to flow back and forth across the various networks. The intent is that they will share roughly the same volume of traffic to keep the peering agreement balanced. So, what happens when one party increases its traffic significantly?

In May 2014, Level 3, an Internet backbone provider, said that Comcast had allowed traffic to back up (slow down) at the peering points between the two networks while Level 3 and Comcast were still working through their peering agreement. Level 3 has Netflix as a customer and was delivering much more data through Comcast, resulting in a lopsided peering agreement. When carriers exchange roughly the same traffic volume, each one is happy; when there is an imbalance for a significant period of time, the agreement needs to be brought back into balance. Netflix had moved from Akamai to Level 3. Previously, Akamai paid Comcast for the Netflix traffic; now Level 3 wants to transit the exchange without paying more, and Comcast wants Level 3 to pay for the increased traffic. So, is this throttling?
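To make the "balanced peering" idea concrete, here is a toy ratio check; the traffic volumes and the allowed ratio are invented for illustration, not actual Level 3 or Comcast figures.

```python
# Toy illustration of a peering-balance check; the numbers are invented.
sent_to_peer_tb   = 180.0   # traffic sent to the peer this month (TB), hypothetical
recv_from_peer_tb = 60.0    # traffic received from the peer this month (TB), hypothetical
allowed_ratio     = 2.0     # agreements typically tolerate some imbalance

ratio = sent_to_peer_tb / recv_from_peer_tb
if ratio > allowed_ratio or ratio < 1 / allowed_ratio:
    print(f"Peering is lopsided ({ratio:.1f}:1) - time to renegotiate")
else:
    print(f"Peering roughly balanced ({ratio:.1f}:1)")
```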

A brief timeline



Year – Event
1887 – Interstate Commerce Act: Railroads had to be regulated due to their anti-competitive practices. This is where the term "common carrier" started.
1934 – Communications Act: The FCC is created; Title II defines Common Carriers for phone companies.
1984 – AT&T broken up: Seven Regional Bell Operating Companies are formed. Before this, you needed AT&T's permission to connect a device to its network. The breakup opened the door for modems; without it you would not have had AOL or Earthlink.
1992 – FCC re-regulates cable TV: Cable's business practices required it to be re-regulated.
1996 – Telecommunications Act: Universal access is mandated, and ISPs are classified under Title I as Information Service Providers, not Title II.
2002 – Tim Wu coins the term Net Neutrality: Wu had worked for Riverstone Networks and then with a Chinese ISP, where providers blocked each other's traffic.
2005 – FCC puts principles in place for ISPs: Then-Chairman Michael Powell sets the foundation for the concepts of transparency, nondiscrimination, and reasonable network management.
2007 – Comcast throttles BitTorrent traffic: This creates a big debate about large ISPs controlling and prioritizing traffic.
2010 – FCC ruling, following up on the 2005 principles: The ruling forces ISPs to be transparent about how they handle network congestion, prohibits them from blocking traffic on wired networks, and outlaws discrimination on their networks (you cannot prioritize your service over someone else's). Verizon Communications files a lawsuit challenging the FCC's ability to regulate ISPs.
2014 (January) – US Court of Appeals rules the FCC cannot regulate ISPs: The court agreed with the FCC's argument that Section 706 of the Communications Act gives it authority to regulate broadband networks. However, it found the FCC could not regulate broadband under common carrier rules, because it had not classified the service as a telecommunications service.
2014 – FCC reviews how to handle and classify ISPs and solicits public comments. (John Oliver's rant crashes the FCC web site.)
2014 (May) – Level 3, Comcast, and Netflix: Level 3 wins Netflix as a customer and starts sending huge volumes to its peering ISPs. Comcast allows a backup to occur at the interchange, and the peering agreement becomes lopsided. Level 3 thinks the increased volume should be covered under the existing peering agreement; Comcast wants to renegotiate.
2014 – The President comments on Net Neutrality and regulation, stating his opinion on the need for regulation under Title II.
2014 – FCC bandwidth: The FCC considers raising the minimum broadband download speed from 4 Mbps to 25 Mbps.
2015 – The President comments on Net Neutrality in Iowa, where Internet speed is 1 Gbps, and makes the case for inner cities and rural areas.
2015 – The Republican Congress takes up the Net Neutrality issue and raises the possibility of new legislation banning throttling and blocking.
2015 – The FCC Commission will vote on new Net Neutrality rules.


Where do we go from here?

It's clearly a complex issue; even federal, state, and local governments are involved. There are fewer and fewer ISPs, which limits consumer choice and decreases competition. Comcast is trying to buy Time Warner Cable, which would create a giant business serving almost 40% of the households using the Internet.

Fewer and fewer content providers are generating larger amounts of traffic. Netflix and YouTube account for 45% of Internet traffic, and just wait until Facebook and Twitter really get their video services running; then add Amazon Prime. Traffic continues to grow significantly. Today, many large content providers place their servers at the access ISP's facility, essentially co-location. This allows them to bypass the transit ISPs and deliver content faster to the consumer. Is this paid prioritization, or just a reality when you generate 30% of the traffic?

Is there really a problem? Certainly Level 3 and Comcast need to work out their issues, but that seems like an outlier. Has anyone had their content blocked or throttled at large? The debate seems more focused on what could go wrong.

Google is becoming an ISP with its fiber initiative; it is both a content provider and an ISP. It is up to local governments to allow more than one ISP in the community.

We need to encourage innovation, so the FCC must not be the approver of new services or products.

We need the FCC to keep the ISPs honest, with clear visibility into the traffic flow across the Internet. It must be able to intercede if the ISPs' contracts are getting in the way of providing service.

A key component of the Open Internet is continued access for libraries, schools, and rural regions.

The FCC must react quickly to violations of the Open Internet policy.

Stay tuned for the FCC Commission meeting on February 26th.

Architecture and technology decision making

Making decisions


Let's hide behind the chain saws

In the spirit of Halloween: chain saws. What a great commercial from Geico to highlight the subject of poor decision making; when you're in a horror movie, you make poor decisions. There is a running car ready to go, but they decide not to take it. How many of us realize, after the architecture has been defined and development starts, that in hindsight we chose to hide behind the chain saws? Another familiar scenario: the business selects a software product that meets the functional requirements but isn't even close to meeting the performance and scalability requirements. The product was selected, and later in the SDLC it became very apparent that it was not going to meet the performance goals without a significant amount of rework from the vendor and three times the original capacity plan.

What is the decision-making process? How do we make the right or wrong decision? The more difficult it is to undo a decision, the more information we need. To make the choice we have our personal experience, industry best practices, mentors and peers to consult, and whatever situational information is available, which is often not enough. How much information do you need to be comfortable committing? One quote comes to mind, from General George Patton: "Perfect is the enemy of the good." His approach was that the most important thing you can do is make the decision quickly and move on, then adjust when new information becomes available. Some decisions, though, are tough to undo, like committing to a new software product without the proper information.

When selecting a product that will be part of a key business function, make sure these items are part of your decision-making process (a simple scoring sketch follows the list):

1. Understand the risks from the user population.
   - Who is using the solution?
   - Review the user population, the business growth plans, and volume peaking factors.
2. Understand the risk associated with the type of application.
   - What type of application supports the business function? For instance: messaging, ERP module(s), reporting and analysis, or a business or consumer portal.
3. Understand the risk associated with the technology the application or solution depends on.
   - Is this a new technology platform or solution for the enterprise?
   - Has the technology been demonstrated to work at the expected load?
   - Is a critical technology required from a third party?
   - Will part or all of the solution be hosted externally?
4. Understand the risks associated with the application release strategy.
   - Expected frequency of releases.
   - Degree of change per release.
   - Business units added at each release, increasing volume.
   - Will the application be on a brand-new release of the product?
5. Understand the risks associated with the entire solution architecture (logical and physical layers).
   - Is a critical new component required for the application?
   - What is the configuration of the testing environment?
   - Which references can the vendor introduce you to, and are they using the version and features you will be using?
   - What is the depth of the vendor's team?
   - Who will be making changes for you, the consulting organization or the development organization?
6. Understand the risks associated with the team organization and structure.
   - Has the solution architect done this before?
   - How distributed is the team?
   - Does the team understand the development methodology?
   - Is the technology new, requiring a scarce skill set?
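One way to use the checklist is to score each risk area for every candidate product and compare the totals. A minimal sketch, with hypothetical scores (1 = low risk, 5 = high risk) and equal weights:

```python
# Sketch: compare candidate products by scoring the six risk areas above.
# Scores (1 = low risk, 5 = high risk) and weights are hypothetical.
risk_areas = [
    "user population",
    "application type",
    "technology dependencies",
    "release strategy",
    "solution architecture",
    "team organization",
]

def total_risk(scores, weights=None):
    """Weighted risk total; a higher total calls for more information before committing."""
    weights = weights or {area: 1.0 for area in risk_areas}
    return sum(scores[area] * weights[area] for area in risk_areas)

product_a = {"user population": 2, "application type": 3, "technology dependencies": 4,
             "release strategy": 2, "solution architecture": 4, "team organization": 3}
product_b = {"user population": 2, "application type": 3, "technology dependencies": 2,
             "release strategy": 3, "solution architecture": 2, "team organization": 2}

print("Product A risk:", total_risk(product_a))   # 18.0
print("Product B risk:", total_risk(product_b))   # 14.0
```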

Hopefully this will help you get to the running car and not the chain saws.



DevOps is a popular trend in IT and has been gathering steam. The challenges with DevOps for large companies are plentiful, and one of the key roadblocks comes from the vendor management and vendor contracting groups, nothing to do with the actual task at hand. Large companies often outsource development to one or more large systems integrators (SIs), who develop the code or assemble the solution and hand it off to another company that provides operations support. The contracts these companies write are usually very specific about who is responsible for what. If you want one integrator to support your performance testing, that must be in the contract and part of the price or bid.

How do we get two or more SIs to work toward a DevOps model? The customer must adjust its contracting terms to accommodate this and provide a path.

Starting a DevOps culture in a large Fortune 500 company is no small task. You can define and start a pilot within your performance testing team, since a meaningful performance test requires multiple disciplines to execute, analyze results, and perform root-cause analysis. The performance engineer drives the process; the application architect sees how the design behaves under load or stress; and the operations team sees the installation and configuration process and can help with the automation and deployment scripts or approach. All of them benefit from the APM monitoring required for critical transactions. It may be a smaller challenge to get SI buy-in for this type of approach, and then move on to a production application.

The SPE Body of Knowledge


How can we benefit from using a Body of Knowledge (BoK) approach for Software Performance Engineering? The goal is to address the career path, the organization, and the industry practices, and to enable you to build a BoK for the performance engineer within your company. There are five knowledge areas for performance engineering.

This is an overview presentation I gave at the Greater Boston Computer Measurement Group and at the national meeting. There are two documents here, the PowerPoint and the detailed document. It is a work in progress.

Presentation: SPEBoK CMG National V4

Paper with the details of the SPEBoK: The Guide to the Software Performance engineering body of knowledge V4

Please send me your comments.






Bring Your Own Response Time.

Consumers' expectations have greatly influenced the demands on enterprise IT departments. The consumer, and the IT customer, brought their own devices and expected more self-service at a much faster pace. One of the key tasks a performance engineer must do is help the business and IT set expectations for the response times of corporate systems. The history of performance requirements for corporate-facing systems, and even call centers, has been problematic: often ignored and certainly deferred. The typical approach is to see just how slow the system can be before the users completely revolt. That tends to be the case because it is not a revenue-generating system; however, many corporate IT systems directly touch the customer or business partner after the sale is made or the contract is signed.

Response-time and performance goals for Internet retailers are well defined and measured; there are many industry-specific benchmarks that compare the response times of web pages against competitors in the industry. The Internet business model demands faster and faster transaction response times. Benchmarks can be found at Compuware and Keynote, among others. However, there is no benchmark for corporate systems. The users of corporate systems are starting to voice their concerns and displeasure more loudly, and they expect speeds comparable to Internet retailer speeds: less than five seconds, and often two seconds, for simple transactions.

Our studies align with the usability research done by Jakob Nielsen. A guide to setting user expectations must consider three barriers (a small sketch applying these thresholds follows the list):

1) 0.1 seconds: The user perceives the system as responding in real time, without any noticeable delay.

2) 1.0 second: The user starts to perceive a slight delay but is still happy with the response time.

3) 10.0 seconds: The user greatly notices the delay, becomes distracted, and starts to do other things while waiting.
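As a small sketch of applying these thresholds, the function below maps a measured response time to the perception band it falls in; the sample timings are hypothetical.

```python
# Sketch: classify measured response times against the three perception barriers above.
def perception(seconds):
    """Map a response time (seconds) to the user-perception band it falls in."""
    if seconds <= 0.1:
        return "instant - no noticeable delay"
    if seconds <= 1.0:
        return "slight delay - user still feels in control"
    if seconds <= 10.0:
        return "noticeable delay - user starts to lose focus"
    return "user has likely moved on to something else"

for t in (0.08, 0.6, 4.2, 15.0):     # hypothetical sample timings
    print(f"{t:5.2f}s -> {perception(t)}")
```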

So, just as consumers have brought their own devices, they are bringing their own response times to corporate systems.