
Monthly Archives: December 2012

In October this year the SEC sponsored a technology roundtable to discuss how to promote stability in today’s markets. This came after a year of headline-grabbing stock market failures:


Disrupted trading in the markets

In March the BATS Exchange IPO was halted and the BATS exchange stopped trading for hours, after the IPO price of BATS went from $16 to zero in minutes. In May the Facebook IPO on NASDAQ had issues due to extreme volume coupled with the effects of high-frequency trading. The rate of order changes and cancellations prevented the exchange from establishing a price for the IPO; orders were changing that fast, in under five milliseconds. Then Knight Capital suffered a software issue. They installed new software to work with the NYSE’s new Retail Liquidity Provider program. The software began sending buy orders out from the market open, and the problem continued unnoticed for 30 minutes. Knight Capital was not aware of the impact it was having on the marketplace.

In between all of this, you can find many examples of micro flash crashes in single stocks. These can be seen on the web site of Nanex (www.nanex.net).

The SEC Roundtable review

The morning session lasted 2½ hours and was kicked off by SEC Chairman Mary Schapiro. The success of the market is tied to its technology, and when that technology fails, the consequences are extreme. These events continue to erode confidence in the market. The industry needs to address the high volume of cancellations. There were also more basic, technology-101 issues in the two IPO cases and the Knight Capital case. We need to balance the need for rapid innovation and competition with proper and diligent testing methods.

My key takeaways from the morning session of the roundtable:

  • There is a need for firms to have a better understanding of their impact on the overall market. This would involve using drop copies, where the exchanges send real-time trading records back to the broker/dealers so they understand their own order flow. The broker/dealer could then run real-time reports to check their orders.
  • Improved testing strategies within the firms and an elevation of the software quality and performance profession. While QA people are independent of the development teams, they must be integrated into those teams. The QA role at the firms must change to attract the best and the brightest, covering functional testing as well as performance testing.
  • Testing in production. This is always a controversial topic in any industry. The firms and exchanges would agree on a number of test symbols for use when testing new features. This would require significant cooperation across the marketplace.
  • A focus on internal software testing for stability, performance, and scalability, with earlier involvement of software quality resources in the SDLC. The perspective an outside organization can bring to the processes and test cases could be helpful. However, the roundtable participants discussed how difficult it is to bring new people into the teams, due to the complex and technical nature of their systems.
  • Order-flow kill switches for the firms and exchanges. The exchanges would provide this capability and allow each firm to set its own parameters or limits that would trigger the kill switch. This would allow brokers to manage specific order types, control the size or value of an order, limit prices, or restrict the stocks they trade in. This came out of a working group that was established after the Knight Capital meltdown. A minimal sketch of what such a check might look like follows this list.
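
As a thought experiment on how that might work, here is a minimal Python sketch of a firm-side kill-switch check, assuming per-firm limits on order size, notional value, session-wide gross notional, and a restricted symbol list. The class names, limit values, and symbols are illustrative assumptions only, not anything the working group specified.

```python
# Minimal sketch of a firm-configured order-flow kill switch.
# The limit names, thresholds, and symbols are illustrative assumptions;
# actual parameters would be agreed between each firm and the exchange.
from dataclasses import dataclass

@dataclass
class KillSwitchLimits:
    max_order_qty: int = 10_000               # largest single-order size allowed
    max_notional: float = 1_000_000.0         # largest single-order value allowed
    max_gross_notional: float = 50_000_000.0  # running total across the session
    allowed_symbols: frozenset = frozenset({"IBM", "MSFT", "GE"})

class OrderFlowKillSwitch:
    def __init__(self, limits: KillSwitchLimits):
        self.limits = limits
        self.gross_notional = 0.0
        self.tripped = False

    def check(self, symbol: str, qty: int, price: float) -> bool:
        """Return True if the order may go out; trip the switch otherwise."""
        notional = qty * price
        if (self.tripped
                or symbol not in self.limits.allowed_symbols
                or qty > self.limits.max_order_qty
                or notional > self.limits.max_notional
                or self.gross_notional + notional > self.limits.max_gross_notional):
            self.tripped = True   # block this and all further order flow
            return False
        self.gross_notional += notional
        return True

switch = OrderFlowKillSwitch(KillSwitchLimits())
print(switch.check("IBM", 500, 190.0))     # True: within limits
print(switch.check("IBM", 50_000, 190.0))  # False: size breach trips the switch
print(switch.check("GE", 100, 21.0))       # False: switch stays tripped
```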

The issues this year occurred at both the exchanges and the broker/dealers; no one was immune from disruption. The key items that caused disruption were related to new or changed software and large volumes of orders. An enterprise-wide software performance engineering strategy would help mitigate these software issues, both for the brokerages and the exchanges. The market is facing a significant challenge: the need to innovate and introduce better features before the competition does, in a very complex and interconnected marketplace, must be balanced against the need for rigor and increased testing. In addition, production monitoring (really, marketplace monitoring) is a critical component. The devil is in the details and the compensation models.


The Vegas strip and Software Performance Engineering

I am reflecting on a week of conversation, presentations, and discussion from the Computer Measurement Group’s annual conference, held December 3rd through the 7th in Las Vegas.

I attended a number of presentations on web performance and capacity planning. The topics ranged from performance and capacity in the cloud, to VMware deep dives on CPU behavior, a panel on how disruptive technologies still need performance engineering, load testing tips and techniques, the performance challenges of Big Data, and of course the Software Performance Engineering Body of Knowledge, among others.

These are typically deep-dive discussions with solid examples you can take back with you, presented by practitioners, the people who solved the problem and made it happen. Each year hundreds of people send in papers and presentations for consideration for the conference. A wonderful group of volunteers for each of the subject areas reviews the papers and decides who will present. They then provide coaches and mentors (all volunteers) to the presenters to help refine their presentations.

There was a “how-to” presentation on finding the critical path of the nightly batch process for a large application. There were over 1,000 jobs that ran during the cycle, and the cycle was starting to run longer than its window. So Chris (our presenter) walked us through using Microsoft Excel to help solve the problem. He created a process and a set of scripts to evaluate the run log and import it into Excel to find the jobs that were on the critical path.
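
Chris’s actual Excel process and scripts were specific to his shop, but the underlying idea is a longest-path calculation over the job dependency graph. A rough Python sketch of that idea, assuming a made-up run-log format of job name, duration in minutes, and a semicolon-separated list of predecessor jobs, might look like this:

```python
# Rough sketch of finding the critical path in a batch cycle.
# This is NOT the presenter's Excel-based process; it only illustrates the idea.
# Assumed (hypothetical) run-log format: job_name,duration_minutes,dep1;dep2;...
# It also assumes every predecessor named in the log is itself listed as a job.
import csv

def load_jobs(path):
    jobs = {}
    with open(path, newline="") as f:
        for name, duration, deps in csv.reader(f):
            jobs[name] = (float(duration), [d for d in deps.split(";") if d])
    return jobs

def critical_path(jobs):
    memo = {}

    def longest(name):
        # Longest total elapsed time ending at this job, plus the chain itself.
        if name in memo:
            return memo[name]
        duration, deps = jobs[name]
        best_time, best_chain = 0.0, []
        for dep in deps:
            t, chain = longest(dep)
            if t > best_time:
                best_time, best_chain = t, chain
        memo[name] = (best_time + duration, best_chain + [name])
        return memo[name]

    return max((longest(name) for name in jobs), key=lambda r: r[0])

# total, chain = critical_path(load_jobs("run_log.csv"))
# print(f"Critical path ({total:.0f} min): {' -> '.join(chain)}")
```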

There was another presentation on how to measure the performance of the browser by creating a waterfall chart, which identifies the critical path of the browser’s processing. One conference and two very different technologies. Very nice.
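
The presentation’s exact tooling is not something I captured in my notes, but one simple way to build this kind of waterfall yourself is to export a HAR file from the browser’s developer tools and plot each request’s start offset and duration. The file name and the crude text rendering below are my own assumptions for illustration:

```python
# Sketch: print a crude text waterfall from a HAR file exported by the browser.
# The HAR file name and the 50-column scale are assumptions for illustration.
import json
from datetime import datetime

def load_entries(har_path):
    with open(har_path) as f:
        har = json.load(f)
    entries = []
    for e in har["log"]["entries"]:
        start = datetime.fromisoformat(e["startedDateTime"].replace("Z", "+00:00"))
        entries.append((start, e["time"], e["request"]["url"]))
    return sorted(entries, key=lambda x: x[0])

def print_waterfall(entries, width=50):
    t0 = entries[0][0]
    total_ms = max((s - t0).total_seconds() * 1000 + d for s, d, _ in entries)
    for start, duration, url in entries:
        offset_ms = (start - t0).total_seconds() * 1000
        pad = int(offset_ms / total_ms * width)       # indent by start offset
        bar = max(1, int(duration / total_ms * width))  # bar length by duration
        print(f"{' ' * pad}{'#' * bar}  {url[:60]} ({duration:.0f} ms)")

# print_waterfall(load_entries("page.har"))
```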

The CMG has been around for over 25 years and continues to adjust and adapt to the marketplace. There are many competing organizations today in the web performance and capacity planning market, and each has a focus area. I think the one strength the CMG has is its end-to-end focus on software performance. It is more than the browser, more than the database or the CICS region. The CMG is the one user-driven organization that considers performance across the tiers. I think the one thing they could use right now is a solid marketing campaign to help get the word out.

Web page size is increasing – From the HTTP Archive


Day one of the CMG Conference was kicked off with a keynote presentation by Pat Meenan of Google and Webpagetest.org, where Pat discussed the user experience and the browser. He discussed performance monitoring and tuning of a few well-known web pages. When is a page really loaded and ready? That keeps getting fuzzier by the day. During his presentation he mentioned a site called the HTTP Archive, which has shown that web page size is increasing at a rapid rate. So, I thought I would take a quick look at the HTTP Archive.

From their web site: the HTTP Archive records the content of web pages and how they are constructed and served. It is a permanent repository of web performance information such as the size of pages, failed requests, and the technologies utilized. They use this information to identify trends in how the Web is built and to provide a common data set for conducting web performance research. Starting in November 2011, they began using the web sites listed in the Alexa Top 1,000,000 sites. From November 2010 to October 2011 they analyzed 18,026 URLs.

They produce a number of trending graphs, and almost all of them are increasing over time. The Total Transfer Size is at 1.27 MB and Total Requests is at 87. They track HTML transfer size and HTML requests, JavaScript size and JavaScript requests, and many more. You can download the data for your own detailed analysis of performance trends; a rough sketch of that kind of analysis is below.
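
As an example of what you could do with the download, here is a small Python sketch that averages page weight and request counts per crawl. The file name and the column names (label, bytesTotal, reqTotal) are assumptions on my part; adjust them to match whatever the downloaded file actually contains.

```python
# Quick sketch: summarize page-weight trends from a downloaded HTTP Archive export.
# The file name and column names (label, bytesTotal, reqTotal) are assumptions;
# adjust them to match the actual download format.
import csv
from collections import defaultdict

def load_trends(path):
    totals = defaultdict(lambda: [0, 0, 0])  # crawl label -> [bytes, requests, pages]
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            t = totals[row["label"]]         # e.g. a "Dec 1 2012" crawl label
            t[0] += int(row["bytesTotal"])
            t[1] += int(row["reqTotal"])
            t[2] += 1
    return totals

def report(totals):
    for label, (b, r, n) in sorted(totals.items()):
        print(f"{label}: avg transfer {b / n / 1024:.0f} KB, avg requests {r / n:.0f}")

# report(load_trends("httparchive_pages.csv"))
```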

One thing is clear: web pages continue to increase in size. As bandwidth increases, the ability to consume that bandwidth increases.

Take a look at httparchive.org.

The national CMG conference starts today with a series of workshops on top performance metrics for capacity management of virtualization; SAN/Ethernet fabric network performance; and z/OS enterprise storage performance and architecture. Then the keynote will be given by Patrick Meenan of Google, discussing Web Performance, the Big Picture. And when Google says big, they mean big.