Research

User research is easy to do badly and hard to do well

It’s easy to run sessions and produce output. It’s much harder to uncover something that actually changes the product.

Why research often looks credible without going deep enough, and what separates useful output from insight that actually changes decisions.

26 April 2025 · 6 min read

When research looks right on paper

The research had been presented internally, stakeholders were aligned, and from their point of view, the team understood exactly where the problems were.

On paper, everything looked right.

But when you looked at the product, nothing had really improved.

Users were still dropping off in the same places. The same friction points were still there. The same hesitation in key parts of the journey hadn’t gone away.

So we went back through the research.

What stood out wasn’t that it was wrong.

It was that it never went deep enough.

The problem with weak research is not always that it is wrong. It is often that it never went deep enough to change anything.

Why surface-level research feels convincing

Every session followed the same script.

Users were taken through the journey step by step, asked what they thought at each stage, and prompted to explain their decisions. It all sounded reasonable. It felt structured. It felt like progress.

But it kept everything on the surface.

Users reacted to what they were shown. They answered the questions in front of them. They described things as confusing, or clear, or a bit long, or fine.

All valid feedback.

None of it explained why the product wasn’t working.

I’ve seen this play out more times than I can count.

Research that looks credible, sounds useful, and creates output, but doesn’t actually move anything forward.

Why bad research is hard to spot

On another project, it was even more subtle.

The team were convinced they already knew where the issues were. Research was brought in to validate that, and without anyone really meaning to, the sessions were shaped around the assumption. The questions, the tasks, even the way things were introduced all pointed in the same direction.

The findings came back clean. Too clean.

Everything lined up with what the team already believed.

It felt like confirmation. It felt like progress.

But again, nothing changed.

Because nothing new had been uncovered.

That’s what makes bad research difficult to spot.

It doesn’t look bad.

It produces output. It fills a gap. It gives people something to point at and say, we’ve spoken to users.

But it rarely challenges anything.

Key takeaway

Research that only confirms what the team already believes may feel useful, but it rarely changes the product in a meaningful way.

Where the real insight usually sits

The difference, in my experience, comes from what you pay attention to.

I’ve been in sessions where the most useful moment wasn’t an answer, it was a pause.

A user hovering slightly longer than expected.

Re-reading something they’d already seen.

Hesitating before committing to the next step.

None of that gets called out explicitly. No one says, this is where I’m losing confidence.

But you can see it happening.

On work like Travelbag, those moments were critical. Users would move through the early part of the journey comfortably. Browsing, exploring, engaging. But as soon as the decisions started to carry weight (pricing, options, committing to a booking), something shifted.

They didn’t always say it.

But you could see the drop.

That’s not something you get from asking, does this make sense?

How structural problems show up in behaviour

And I saw a different version of the same thing working across the NHS.

Users weren’t struggling with a single page or a single step. They were struggling with inconsistency. Different structures, different language, different expectations depending on where they entered the service.

When you asked them directly, the answers were vague.

It’s a bit confusing.

It’s hard to find things.

But when you watched them, you could see the real issue.

They were constantly recalibrating.

Working out how this part of the service behaved compared to the last.

That’s not a UI problem. That’s a structural one.

That’s the part that’s easy to miss if you’re only listening for answers.

What good research actually does

Good research doesn’t just capture what users say.

It looks for where what they say and what they do don’t quite line up.

That gap is where the insight usually is.

I’ve found that the most valuable findings are rarely the neat ones.

They’re the ones where something doesn’t quite add up.

Where a user says something is fine, but behaves as if it isn’t.

Where they take a route no one expected.

Where they get stuck somewhere that wasn’t even considered a problem.

Those are the moments that change how you see the product.

What separates output from impact

And then there’s what happens afterwards.

I’ve seen strong research completely lose its value because it gets reduced to a deck of quotes and observations.

Everyone nods. Everyone agrees. It gets filed away as useful.

And nothing changes.

I’ve also seen less polished research lead to real impact because someone took the time to interpret it properly.

To connect the dots. To challenge what it meant. To push it into decisions instead of just documenting it.

That’s where the real difference sits.

Not in how the output looks, but in what it changes.

In my experience, good research is slightly uncomfortable.

It challenges assumptions that felt safe. It exposes gaps that weren’t obvious. It creates questions where there used to be certainty.

That’s when you know it’s doing something.

Because it’s easy to run sessions.

It’s easy to produce output.

It’s easy to say we’ve spoken to users.

It’s much harder to uncover something that actually changes how the product works.

And that’s the difference between research that fills time and research that creates impact.
