Phillip Odom’s background is in Electrical Engineering/Computer Engineering. Phillip grew up in Nashville, TN in the late 1970s and 1980s with access to “home computers” like the Tandy TRS-80, the Commodore 64, the Apple II, and the Amiga. So his childhood was spent making these quirky and somewhat crude machines do things, sometimes anything at all.
When and why did you get into this industry?
Professionally, I have been working in technology for almost 20 years. I spent the first ten years working in the hardware and IT services sectors (Integration/Product Development/R&D), almost exclusively focused on the healthcare vertical. In 2004 I chose to take my career in a different direction. My friend Scott Moore ran the QA/Performance practice for Deloitte in Nashville. He told me that he was leaving and planned to start a new company focused on something I was only vaguely familiar with called load testing. He sold me on the idea, took me under his wing, and I joined the startup team at LoadTester, Inc. in 2004.
There I began to learn how to perform (Mercury) LoadRunner consulting and the finer points of the exquisite art of performance testing. I had opportunities to test web apps, SAP, Citrix, Oracle, custom .NET and J2EE portals, WebLogic, and all sorts of application technologies in between. Eventually (at the beginning of 2007) I started my own Application Performance Management (APM) practice called AppDiagnostics, where we performed integration and implementation services for Borland, CA, HP, OPNET, Microsoft, and SOASTA testing and diagnostic products. In September of 2013, AppDiagnostics was acquired by another firm, and I elected to join LoadStorm in November.
Do you consider yourself more of a software developer or QA professional?
I would consider myself a QA professional, for sure. I am more in tune with process development and execution. I can generally script with ANSI C and VBScript. I can build web objects with HTML and jQuery/JavaScript. I have even dabbled with Java for the purposes of Arduino prototyping, but code definitely does NOT drip from my fingertips.
What is your specialty? Why?
My specialty is process and service development. I generally subscribe to the notion: Show me the “why”, and I will show you the “what and how”. I enjoy the challenge of solving problems with repeatable processes.
Is there anything commonly overlooked in web application testing?
The number one mistake in Performance Testing is NOT proactively testing your application. All too often, people only consider performance testing AFTER a problem occurs. Then, they usually want to throw more hardware at the problem.
How much involvement do you have with load and performance testing?
I have been an active practitioner of performance testing for the last nine years. Generally I am involved with almost all aspects of performance testing, including initial project scoping, project management, requirements gathering, objective setting, test planning, business process design, test script development, test product configuration, test execution, analysis, reporting, and final documentation. Once (no kidding) I even had a client ask me to make guacamole… And I did.
What would you say is the difference between load testing and performance testing?
Well, first I see Load Testing and Stress Testing as subsets of Performance Testing. Beyond that, I was taught, and currently teach my clients, that there are distinct differences between a Stress Test and a Load Test.
The objective of a Stress Test is to find the “First Point of Failure”. The objective of a Load Test is to simulate a “Peak Hour” of “Real-World” usage.
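To make that distinction concrete, here is a minimal sketch in Python of the two test shapes. The target URL, user counts, and the 5% error-rate failure threshold are all hypothetical placeholders for illustration, not values from any particular engagement or tool:

```python
import concurrent.futures
import time
import urllib.request

# Hypothetical system under test; swap in your own endpoint.
TARGET_URL = "http://localhost:8080/"

def hit(url: str) -> bool:
    """Issue one request; return True on an HTTP 2xx response."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return 200 <= resp.status < 300
    except Exception:
        return False

def run_wave(users: int, requests_per_user: int = 10) -> float:
    """Drive `users` concurrent virtual users; return the error rate."""
    total = users * requests_per_user
    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(lambda _: hit(TARGET_URL), range(total)))
    return results.count(False) / total

def load_test(peak_users: int = 50, duration_s: int = 3600) -> None:
    """Load Test: hold a simulated real-world peak for the whole window
    and confirm the error rate stays acceptable."""
    deadline = time.time() + duration_s
    while time.time() < deadline:
        err = run_wave(peak_users)
        print(f"{peak_users} users -> {err:.1%} errors")

def stress_test(step: int = 25, max_users: int = 500) -> int:
    """Stress Test: step the user count upward until the first point of
    failure (here, an assumed error-rate threshold of 5%)."""
    for users in range(step, max_users + 1, step):
        err = run_wave(users)
        print(f"{users} users -> {err:.1%} errors")
        if err > 0.05:
            return users  # first point of failure
    return max_users
```

Real tools (LoadRunner, LoadStorm, and the like) add think time, ramp schedules, and far richer metrics, but the shapes are the same: a steady plateau for a Load Test, a ramp to breakage for a Stress Test.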
Who are the top 3 testing experts that you know?
I know way more than three, but the most significant to me would be Scott Moore, the Founder and President of Northway Solutions Group.
Do you feel like performance testing is an accepted critical part of the development life cycle?
No, but things are getting better. In an ideal world, we all want every customer to “engineer” performance into the SDLC. Many times, this still doesn’t happen. For example, my wife is a software developer for a publicly traded, Fortune 500 healthcare company with 50,000 employees. While they have implemented formal QA from a functional perspective, they STILL do not performance test anything. ANYTHING, EVER! When her team experiences a performance issue, their first reaction is to throw more hardware or bandwidth at the problem. This usually ends up costing more money only to achieve mixed results.
I am under the impression that some customers still equate performance testing to some form of Voodoo/Black Magic that they don’t understand, never realizing the rigorous processes behind most performance delivery frameworks. To that end, I have written testing methodology documents in excess of a couple hundred pages extolling the principles and benefits of repeatable processes. However, I have never sacrificed a chicken to improve transactions per second.