
Self-referential design (aka why usability testing is important)

By William Jeffrey Rankin, Sat May 19 2018

Some notes on why self-referential design is a problem, and why usability testing is important.

Notes:

  • The team will have changes to the product design in mind: enhancements or new features.
  • There will be overlap in the design changes desired among the team members.
  • Usability testing will confirm that some of the changes suggested by the team are necessary.
  • There will likely be many unanticipated discoveries: completely new issues, designs thought well-vetted that present difficulties for users, and designs thought clunky that prove intuitive. This is why we test!
  • Takeaway: Self-referential design will cause you to fix problems that aren't there, and miss real problems that would have been discovered through usability testing.

How do you know what to build? How do you know it's right?

By William Jeffrey Rankin, Mon May 7 2018

I had a conversation recently with analysts at a Columbus-based consulting agency regarding user research and usability evaluation. I was disappointed, but not surprised, to learn the agency didn't engage in either despite having a team of designers on hand. This isn't the first time I've encountered this phenomenon, and I wanted to put some questions and comments together.

Some Basic Questions

So, assuming a lack of user research and/or usability evaluation for a project, these questions occur to me:

  • How do you know what to build?
  • How do you know it's the right thing?
  • How do you know whether people will be able to effectively use the thing?

The client and/or product owner hopefully (but not always!) has a vision in mind, but how well-defined and realistic is this vision? How much research has been done? How many people have they spoken to about the product or service? Who are the competitors? If the amount of research has been minimal, the consultant needs to use their expertise to help refine the vision by conducting some user research. It doesn't have to take a long time or be expensive (check out our user research cheat sheet for more information).

What if there's significant ambiguity about what's being built (there's always some)? Yes, the team can get together, ideate, and generate some stories. But are these stories anything other than assumptions if there's no data to back them up? So, take time to gather some data to help inform ideation/storytelling sessions.

In the midst of design, the team should take the initiative and conduct informal usability evaluation during the design sprints (or at some point that makes sense for the project). Just like gathering data for ideation/storytelling sessions, it doesn't have to be expensive in terms of time and resources. Formative sessions with 4 to 6 users (actual users strongly preferred), held over a day or two, should provide enough data to ensure that the design is on course. Employ techniques with some level of rigor: internal tests, five-second tests, and similar techniques return questionable results in my experience.
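Why are 4 to 6 users usually enough? One rough rule of thumb (my gloss, not a claim from the original post) comes from Nielsen and Landauer's problem-discovery model: the proportion of usability problems found with n test users is approximately 1 - (1 - L)^n, where L is the average probability that a single user encounters a given problem (they estimated L ≈ 0.31 across the projects they studied). With L = 0.31 and n = 5, that works out to 1 - 0.69^5 ≈ 0.84, so a handful of formative sessions can be expected to surface most of the significant problems.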

A Bigger Point

We're the experts and shouldn't assume the client knows exactly what needs to be built. An interesting product/service concept needs to be developed, and we owe it to our clients to use the tools and techniques at our command. Part of this may involve educating the client (and perhaps internal people who manage the client relationship) about what needs to be done and why it will benefit the project. Instances of great ambiguity may call for forging ahead and doing the work you know needs to be done (sometimes it's better to beg forgiveness than ask permission)!

Changing a design while it's being tested: good idea?

By William Jeffrey Rankin, Mon Jul 17 2017

A while back I got into an exchange with another designer on Twitter regarding his conduct of a usability test. It started when he tweeted this (these aren't the exact words, but I've captured the gist): "I'm testing with users and updating the design as issues are uncovered."

This surprised me: it didn't seem like a good idea from a methodological perspective, yet here was a fairly well-known designer (who'd written a book or two by the time of our exchange) talking about it like it was business as usual.

I replied something to the effect of "Shouldn't you change the design after you've run all the sessions (and therefore collected all the data)?" We had a brief friendly exchange following this: he didn't see the harm in changing the design as he was running the sessions. I left it at that, but wanted to put down my thoughts on why this is not a good idea.

Is the problem really a problem?

If you change the design immediately after a session, how do you know the issue is a problem? And to what extent is it a problem? For example, pretend that you ran a series of 12 user-testing sessions and observed several issues:

  • Issue 1: 7 users experienced this issue
  • Issue 2: 6 users experienced this issue
  • Issue 3: 4 users experienced this issue
  • Issue 4: 3 users experienced this issue
  • Issue 5: 1 user experienced this issue

Is issue 5 really a problem? Maybe, but certainly not as big a problem as issues 1 and 2. Had issue 5 been "fixed" early in the test sessions, you couldn't have known whether it was truly a problem (perhaps it was a methodological or other test anomaly), and you wouldn't know the magnitude of the problem (data necessary for prioritization).
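To put some numbers on that (the percentages are mine, not from the original post): issue 1 affected 7 of 12 users, roughly 58%, while issue 5 affected 1 of 12, roughly 8%. A problem observed in a majority of sessions is almost certainly real and worth prioritizing; a single occurrence could just as easily be an artifact of one participant or one session.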

A Missed Opportunity

Usability testing is, in large part, about understanding why users are having problems with a product's design. For a given issue, you want as much data as you can get so you can understand the nature of the problem. If you're changing the design (ostensibly to fix an observed issue) as you're testing, you've lost the opportunity to learn more about the problem. If you only have 1 data point for an issue, can you really address it with a high level of confidence (and again, is it really a problem)?

A side question: what happens when the mid-test "fix" introduces new issues? It seems like the design and testing sessions could go off the rails pretty quickly with this methodology.

The Bottom Line

Testing and updating in this way, the designer may well be "fixing" issues that aren't really problems, or that are relatively trivial. Or, because the designer doesn't fully understand the nature of an actual problem, it isn't addressed as well as it could have been with more data.

What are your thoughts? Is this a common testing methodology? Is there a context in which it would make sense?