by Desirée Sy
John Schrag wrote earlier about Design Values, and others have expanded on some of these ideas. This follow-up post looks at why we designers value Achieving Results over Writing Reports.
This isn’t so much a duality as it is two points on the continuum of design doneness. Writing reports is a part of what we do as designers, but it’s not an end in itself, just as an architect’s job isn’t complete when the blueprints are finalized. The exciting part is realizing that, just as an architect’s job doesn’t start when the blueprints are being drafted, there is far more to an interaction designer’s job than writing specifications.
What can we do as designers to achieve results that are better—faster, more vivid, more malleable, more effective—than pounding out a 279-page specification?
Is there a better way?
Let’s conduct some thought experiments with a few typical design activities and their deliverables.
Usability test
You’re conducting a usability test on a low-fidelity prototype, and the design idea fails to meet the primary design criteria set for it. What do you do to get this problem fixed?
- (a) Present the problem as part of the data in a PowerPoint scatter plot during a design review session.
- (b) Pull some ideas out of an email thread with the whole team, after sending a link to your report on the whole test series.
- (c) Make changes to the paper prototype, in consultation with the developer who observed the session, and see if the next user can complete the task.
Answer (c) lets the team move through design iterations more rapidly than writing comprehensive usability test reports and then reacting to the results. It also illustrates one of the concepts underlying the rapid iterative testing and evaluation (RITE) usability test methodology. From personal experience, I can say that developers are more engaged, and come away with a more complete understanding of the underlying problems (rather than just your proposed solution), when they participate in rapid design iterations.
Site visit
You’re at a customer site, watching users push real data through the latest release of the software. You identify a critical issue that affects the version currently in development. How do you convey this problem back to the team?
- (a) Three weeks later, as part of the “Problems” section of a field visit report that also incorporates data from the next two weeks of visits.
- (b) A month later, as part of a PowerPoint presentation to the team about all the field visits.
- (c) The next day, as an FTP upload of a screen recording that re-creates the workflow in the current build on your laptop, with a voice-over describing the issue, supplemented by an example of the customer data.
The pros of producing a stream of “re-created highlights” for your team as the site visits progress are that you surface the most important issues for the team (and therefore, for yourself) in a timely way, and that you keep everyone engaged in the narrative of data collection for the full two weeks. Adding customer data also provides context and data-driven requirements, helping the whole team frame the problem more accurately.
The cons of such informal reporting are that you are paraphrasing the problem, and that something is lost in translation when you use (for example) a different version of the software to demonstrate it. Also, some issues will only be revealed as part of a pattern of observations over time.
And yet…haven’t we all been at a site where a single instance of a critical problem was enough to know it needed addressing, with no further examples required? Shouldn’t we find more timely (and less boring) ways of engaging team members in fixing these problems than writing a report and insisting that everyone read it and then do the right thing?
Conveying a UI design
You’ve finished part of a larger design, and the low-fidelity prototype has met all its design criteria in usability testing. The next step is to explain the design and the user experience acceptance criteria so that development can build it and QA can test it. Which of these actions will most effectively help the team build the product?
- (a) Get a head start on section XXVI.34.ii (b) of the specification, which won’t be distributed until the whole design is complete.
- (b) Conduct a detailed design walkthrough of the entire UI for developers, QA, and technical writers. Answer questions about the design solution, and capture the minutes of the meeting.
- (c) Write a short design brief for the part of the design that’s done, and link a series of these briefs together later in a higher-level document.
- (d) Demo the smaller design piece for developers, QA, and technical writers, using the final version of the low-fidelity prototype to complete a few common and critical tasks. Answer questions about the design problem. Capture the meeting, including the questions and answers, in brief design history notes, and keep a running series of these notes for all the design pieces.
I’ve found that a combination of (c) and (d) works much more effectively than (a) and (b). What makes the continuous (vs. complete) demonstration of workflow (vs. UI walkthrough) approach more effective is:
- It breaks the implementation down into more manageable, understandable pieces of work.
- It emphasizes the desired workflow as the outcome, rather than the minutiae of the design (except where the visual or interaction design details have an important effect on the workflow), and so puts the UI elements in the context of the user’s work and tasks for the whole team.
- It helps you understand where you failed to communicate design details to other team members, so that you can fix the design brief. Inevitably, when the team has a better understanding of the problem that you’re solving, someone improves your design.
- Most importantly, you’ve distributed the responsibility for achieving results to the entire team.
Making things happen
Designers are often frustrated because even after doing all the “right” design activities, and then distributing research reports, usability test reports, wireframes, and specifications, somehow the “right” solutions don’t end up in the product.
We’ve sometimes reacted by trying to fix the reports, or to enforce their authority: by forcing sign-offs; by presenting reports instead of getting team members to read them; by holding up coding until documents are complete; and, in the case of specifications, by pushing more and more detail into more and more pages.
These fixes are symptoms of mistrust. Forcing people to read or act on reports reflects a fear that the findings aren’t being taken seriously; pushing more content into reports reflects doubt about the validity or completeness of the findings. I think most of us can recall times when doing either of these things didn’t produce the “right” result, even though, collectively, the team knew better.
Both approaches are based on the myth that a designer uncovers the user’s problems and creates the One True Solution, and that developers then implement that vision (either successfully or incompletely).
I think this is the wrong way to infuse design thinking into products. Instead of forcing “their” solution on a team, designers should get as many team members as possible to understand users’ problems, and then get everyone engaged in solving them. Ultimately, the likelihood of getting results into a product increases exponentially with each team member who is committed to solving problems and owning the solution. Sometimes, writing reports isn’t the best way to do this.