This is the second installment of an email interview with James Christie, independent software testing consultant and owner of Claro Testing Ltd. In the first part, James shared his views on usability testing; in this second part he discusses leaders in usability testing, KPIs, test automation tools, cloud computing, and testing blogs.
Living in Scotland, James has worked with large global organizations such as IBM, and he provides consulting services as leader of Claro Testing Ltd. His consulting includes setting testing strategy and budget, writing test plans, supervising test execution, and creating testing processes.
Who are the top 3 usability testing experts that you know? No need to list yourself, that is a given.
I’m definitely not an expert. My background is strictly conventional IT. I’m trying to operate on the boundary between conventional development and HCI. Anyway, the three most influential HCI experts as far as I’m concerned have been Alan Cooper, Larry Constantine and Steve Krug. Cooper and Constantine have done interesting work and written very good books about integrating HCI and software development. Krug is more of a straight usability guy and has produced some very readable stuff on how to get websites right.
However, I don’t actually know any of these people personally. Of those I’ve met or dealt with, I’d go for:
- Jhumkee Iyengar – an Indian usability engineer
- Tom McEwan – an academic at Napier University in Edinburgh
- Euan Dempster – my MSc supervisor at Abertay University.
Each of these has been helpful in various ways.
Is there anything commonly overlooked in usability testing?
Yes. Usability! It’s still often, or usually, not done. There’s still a huge amount of ignorance amongst developers, and HCI has yet to get itself properly involved in the development process. Usability testing adds extra cost, and even though the investment more than pays for itself, the temptation to cut costs is overwhelming. I think it’s usually sold as an additional premium service. That’s understandable, but it does create the dangerous impression that it’s an optional extra.
What are the KPIs you track for usability testing?
Wow, that’s a big question. Firstly, I’m not a usability expert, I’m a tester. Ideally I’d prefer a usability expert to sort that out with the client so that the testers know what they’ve got to test for.
KPIs are going to vary depending on the client, their business and, most importantly, the purpose of the application. I don’t really think in terms of KPIs for an internal application for company employees to use. They’re something I associate more with web applications.
There is a range of possible KPIs for e-commerce applications, but most of them aren’t relevant to a website that’s just for information. These sites would have a different set. My website is to promote me and the service I provide, and the figures I’m most interested in are the number of visitors (new and returning), bounce rate (i.e. the percentage leaving the site after viewing only the page they landed on), the number of pages viewed, and the time spent on the site. If people bounce in and out quickly then they obviously didn’t really want my site. If they spend a fair time on a single article then they’re obviously interested, but I know it won’t come to much unless they’re looking at other pages. I’m interested in the KPI of “time spent per page”, but it’s meaningful only on certain pages, and the average for all users across all pages doesn’t really mean much.
However, that’s me speaking as a website owner not as a test consultant. But that’s the point. The KPIs depend on the client, not the consultant.
Also, KPIs are really relevant to monitoring the website once it’s gone live. They’d guide the testing, but I don’t think there’s a neat direct relationship between KPIs and tests in the same way that there would be between requirements and tests.
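To make the arithmetic behind those figures concrete, here is a minimal Python sketch that computes bounce rate and average time per page from session data. The records and field names are hypothetical, invented purely for illustration; a real analytics tool would report these figures directly.

```python
# Minimal sketch of the KPIs discussed above, computed from hypothetical
# session records. The data layout is invented for illustration.

sessions = [
    # Each session: pages viewed in order, and seconds spent on each page.
    {"pages": ["/articles/testing"], "seconds": [8]},             # quick bounce
    {"pages": ["/", "/services", "/contact"], "seconds": [30, 95, 20]},
    {"pages": ["/articles/testing"], "seconds": [240]},           # read one article, then left
]

# Bounce rate: percentage of sessions that viewed only the landing page.
# Note the 240-second session still counts as a bounce, echoing the point
# that time on a single article alone won't come to much.
bounces = sum(1 for s in sessions if len(s["pages"]) == 1)
bounce_rate = 100.0 * bounces / len(sessions)

# Time spent per page, averaged per page rather than across all pages,
# since a site-wide average hides which pages actually hold attention.
page_times = {}
for s in sessions:
    for page, secs in zip(s["pages"], s["seconds"]):
        page_times.setdefault(page, []).append(secs)

print(f"Bounce rate: {bounce_rate:.0f}%")
for page, times in page_times.items():
    print(f"{page}: {sum(times) / len(times):.0f}s average")
```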
How do you use them?
I’d check the KPIs against the requirements to ensure they are consistent. If there are KPIs for which there are no relevant requirements, then how do the client and developers expect to get the results they want? Magic? Are they merely aspirational? Has anyone actually sat down and worked out how the KPIs would be achieved in practice and how they should influence the design?
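That consistency check can be pictured as a simple traceability mapping from each KPI to the requirements meant to achieve it, flagging any KPI left to “magic”. In this sketch all KPI names and requirement IDs are invented for illustration:

```python
# Hypothetical traceability check: every KPI should trace to at least one
# requirement explaining how it will be achieved in practice.

requirements = {
    "R-12": "Checkout completes in three steps or fewer",
    "R-17": "Every landing page links to related content",
}

kpi_traceability = {
    "cart abandonment rate": ["R-12"],
    "bounce rate": ["R-17"],
    "returning visitor rate": [],  # nobody has said how this will be achieved
}

for kpi, req_ids in kpi_traceability.items():
    covered = [r for r in req_ids if r in requirements]
    if not covered:
        print(f"KPI '{kpi}' has no supporting requirement - merely aspirational?")
```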
The KPIs should shape the test scenarios. Wherever possible I’d want to be confident that if the application wasn’t going to get acceptable results for a KPI in live running, then there would be a test it could fail. That’s not easy or always possible. How can you test for bounce rates when you’re talking about people appearing out of the blue with some completely unforeseeable search argument then vanishing? That’s more of a challenge for the designers and the information architects who try to ensure the website can keep people when they arrive. The testers would feed off their work and try to create scenarios that would reveal whether test users want to bail out quickly.
What do you think is the most important aspect of web application testing?
Early and frequent user involvement, for the reasons I’ve covered above.
In what ways have you seen test automation tools evolve in the last few years?
When I started in testing there were really just tools for test management, capture/replay and performance testing. The explosion in the last few years has been incredible and keeping up with what’s available now is a career in its own right.
I suppose the main change is that tools have moved earlier in the development process; Agile has probably driven this. Tools are now seen as an integral part of development, helping programmers unit test for instance, rather than as an option at the end of development.
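As a small illustration of tooling sitting inside development rather than bolted on at the end, here is a unit test written with Python’s standard unittest module and run on every build. The function under test is invented for the example:

```python
# A unit test living alongside the code it checks, run continuously
# during development. The function under test is hypothetical.

import unittest

def normalise_postcode(raw: str) -> str:
    """Strip whitespace and uppercase a UK-style postcode."""
    return "".join(raw.split()).upper()

class NormalisePostcodeTest(unittest.TestCase):
    def test_strips_whitespace_and_uppercases(self):
        self.assertEqual(normalise_postcode(" eh1 1aa "), "EH11AA")

    def test_leaves_clean_input_alone(self):
        self.assertEqual(normalise_postcode("EH11AA"), "EH11AA")

if __name__ == "__main__":
    unittest.main()
```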
How does cloud computing affect the future of automated testing?
You’re taking me right out of my comfort zone now. I really don’t feel qualified to pontificate on this. I guess, though, that cloud computing is going to give automated testing a big boost. Automated testing will become essential because the cloud supplier will need to be careful about regression testing existing applications and about operational acceptance when new apps come on board, and they could be coming on board maybe every day. Automated testing will probably also be cheaper and easier because it could be part of the cloud service. Also, I suppose cloud computing is going to be associated with very rapid development cycles, which again increases the demand for automation. I will research this issue more and try to determine the implications of cloud computing for usability testing.
What are your favorite testing blogs? Should we subscribe?
I don’t subscribe to any blogs. I do like to keep an eye on Brian Marick’s (http://www.exampler.com), Elisabeth Hendrickson’s (http://testobsessed.com/) and Sticky Minds (http://www.stickyminds.com). I’m most regularly on the Software Testing Club, however. I have had some good discussions there, and that’s the one I’d recommend.
And here concludes the second portion of the interview. Our thanks to James for his time, his candor, and his insight. It has been a pleasure to work with him on this interview. The third installment will be posted soon.
More information about James, his company, his services, and his expertise can be found at http://www.clarotesting.com/.
Twitter: james_christie