by John Schrag on October 16, 2009
Design Values: Validated Data over Expert Opinion
A while ago, I posted an article (Values in Software Design Practice) describing the values my co-workers and I believe contribute to a healthy software design practice. At that time, I promised to go into more detail on the individual items of that list. This posting describes why we value Validated Data over Expert Opinion.
Some people took my earlier post to be a criticism of experts, which was not intended at all. I value expert opinion highly, and my employer values the software design expertise of my team. The question we were looking at is this: when you are making design choices that require information, is design and domain expertise enough to arrive at the right answers?
A few years ago, some of my colleagues decided to run a first-experience study on one of our software packages. The purpose of such a study is to understand what our users go through in their first hour of use. What do they experience? Where do they get stuck? How far can they get in the software? What are their learning strategies? As a side experiment, my colleagues asked several experts in the company to predict what problems users would run into, and compiled their predictions into a list. (These experts included the software designers, domain experts, and the people who trained users on the software.) Then my colleagues ran their study, observing sixteen people using this software for the first time, and made a list of the problems that users actually ran into. The result?
There was not one common item on the two lists.
We repeated this experiment a year or two later on a different product, and got exactly the same result. While our experts knew a great deal, they were not good at predicting the behaviour of novice users. Perhaps this should surprise no one; experts cannot ‘unlearn’ what they know to get into a beginner’s mindset. (But ask yourself this: when you design training materials for your software, how do you decide what goes in them?)
Here’s another story: while working at a previous company, I had the opportunity to visit one of our engineering customers and learn about their workflows. We arrived early, and spent the first hour in a meeting room with a lead engineer who described her workflow in great detail and explained the problem areas as she saw them. After that, we had the opportunity to shadow her for a day as she worked. When we actually observed her process, we saw that it was markedly different from what she had described. She was spending about half her time on long tasks that had never been mentioned in her workflow description, because to her they were not part of her job – just things she had to do in order to be able to do her job. But those tasks (which were repetitive make-work) represented a huge opportunity for us to improve her workflow. This is a common problem – people are very bad at self-report. If you need to collect behavioural information, you cannot rely on people to be experts even on themselves.
These stories provide two examples where validated data – observation of users at work – proved clearly better than expert opinion. But what about design expertise? Does being a ‘design expert’ give you what you need to make the right choices? And what about things you just can’t get validated data for?
While good design practices are valuable, the fact remains that every design decision we make is a guess.
In most cases, it is an informed guess. If we’ve done our research and taken the time to think through the problems we are trying to solve, then we can probably make pretty good guesses, and get our interface designs right much of the time. (Certainly an expert will do a better job than a non-expert!) But we’re still guessing. Will our users be able to find the functions they need? Will they be able to use what they find? Will it satisfy their needs? We can’t know for sure whether our designs work until they’ve been tested by real users doing real tasks. Once your design has met the goals you set for it in a usability test, you are no longer guessing. You know you have a good design – or, if not, you know what you need to change.
And that is the short answer. We value validated data over expert opinion because we value knowledge over guesswork – even good guesswork. Sometimes the data surprises us – and that’s usually where opportunity lies.
Shorter version: research identifies problems and evaluates the success of the solutions. Designers ('experts') find the solutions.
Posted by: setmajer | November 01, 2009 at 02:59 PM
Nice article, John. I particularly appreciated your mention of why getting users to self-report doesn't give us the information we as PDs need to design a UI.
Talking about doing something and actually doing it are not the same thing, because of the details users omit in a traditional-style interview as compared with a contextual interview, where users are "doing what they do" rather than just talking about it.
We all get through our days using tacit knowledge. This often includes the minutiae of our activities that we are unconscious of because we do them so often, or would not think of interest or importance when "reporting" what we are doing in traditional interviews. The only way to get at this sometimes-critical data is to observe users at work.
Posted by: Jean Foster | October 19, 2009 at 11:16 AM