
The biggest mistake teams make when running user interviews

The biggest mistake in user interviews is not asking the wrong question. It’s accepting the first answer and moving on.

Why interviews often stay too close to the surface, and why the real insight usually appears only when you stay with an answer for longer than planned.

20 January 2025 · 6 min read

When interviews look strong but reveal very little

From the outside, it felt like a strong set of interviews. The kind you’d expect to produce useful insight.

And to be fair, they did produce output.

There were notes, themes, quotes, clips, all the usual artefacts that signal we’ve done the work.

But when we stepped back and looked at what had actually been learned, it was surprisingly thin.

Not because the users didn’t have anything to say.

But because the conversations never really got past the surface.

Every question was answered.

Very few things were actually understood.

Every question can be answered and you can still fail to understand what is actually going on.

Why the first answer is rarely the real answer

That’s a pattern I’ve seen repeat itself over and over again.

The biggest mistake teams make in user interviews isn’t asking the wrong questions.

It’s accepting the first answer they get.

Most users will give you an answer quickly.

They’ll tell you what they think, what they expect, what they usually do. And it often sounds reasonable enough to move on. The conversation keeps flowing, the script keeps moving, and you cover everything you planned to cover.

It feels productive.

But the first answer is rarely where the insight is.

Key takeaway

The first answer is often the cleanest answer, not the most revealing one.

What happens when you stay with the response

I remember a session where a user was going through a fairly standard journey and described part of it as fine. No hesitation, no visible frustration, nothing that would flag it as an issue if you were just taking the answer at face value.

It would have been very easy to move on at that point.

But we didn’t.

We stayed there a bit longer.

Asked them to walk through what they were thinking. Why they chose that option. What they expected to happen next.

What came out wasn’t fine at all.

They didn’t fully understand what they were doing. They weren’t confident in the decision they’d just made. They were moving forward because it seemed like the only available option, not because it felt right.

That’s a completely different picture.

And it only surfaced because we didn’t accept the first answer.

How this shows up in complex systems

I’ve seen the same thing in more complex systems as well.

Working across the NHS, users would often describe problems in very general terms. It’s a bit confusing, there’s a lot going on, it’s not very clear. Useful signals, but not particularly actionable on their own.

If you stop there, you end up with vague problems and equally vague solutions.

But when you start digging into those answers, asking them to show you where that confusion comes from, what they expected instead, what they were trying to achieve in that moment, things start to sharpen.

You move from this is confusing to this specific part breaks because the structure doesn’t match the user’s mental model.

That’s where decisions can actually be made.

Why most interview guides work against depth

The challenge is that it requires a different mindset.

Most interview guides are designed to cover ground. To make sure certain topics are touched, certain questions are asked, certain areas are explored. There’s a natural pressure to keep moving, to get through everything, to make the most of the time you have.

And that pressure works against depth.

In my experience, the most valuable interviews are rarely the ones where you cover everything.

They’re the ones where you follow something unexpected and stay with it longer than planned.

Where you notice a small hesitation and choose to explore it. Where an answer doesn’t quite add up and you gently push on it. Where you’re willing to abandon the script because something more interesting is happening in front of you.

That’s where the real insight tends to sit.

What makes interviews actually useful

I’ve worked on projects where the difference between average research and genuinely useful research came down to that one thing. Whether the person running the interview was comfortable going off-script and sitting in the uncomfortable space of not immediately knowing where a conversation would lead.

Because that’s where users stop giving you rehearsed answers and start revealing how they actually think.

And that’s ultimately the point.

Interviews aren’t about collecting opinions.

They’re about understanding behaviour.

If you treat them like a questionnaire, you’ll get answers.

If you treat them like a conversation, and you’re willing to go deeper when something doesn’t quite make sense, you’ll start to get insight.

That’s the difference between research that sounds useful and research that actually changes something.
