Chad Zollinger

UX Fight Club: Are User Interviews Overrated?

Updated: Feb 24, 2020


Today’s matchup is between Daniel Falabella and… well… the 3,000+ members of Product Hive’s Product Management slack channel.

Big props to Daniel for rolling with the punches, swinging hard, and looking for that KO.

Daniel’s arguments may be unpopular to some, but he puts up a fight, and provides ground for some important insights.

The contest resulted in an in-depth discussion about the usefulness of user interviews, whether they’re being used for their true purpose, and some common PM/UX mistakes concerning user research.

Our Contestants (in order of appearance)

Daniel Falabella — Lead Product Manager at Duolingo.

Grady Kelly — Senior UX Designer at Impartner.

Ty Clement — Director of Operations at Stashify.

Ty Hatch — Principal UX Designer at O.C. Tanner.

Nathan Nannenga — Senior Product Manager at Inside Real Estate.

Alex Hackney — Associate Product Manager at Weave HQ.

Aaron Airmet — Product Manager at O.C. Tanner.

Randy Hoffman — Product Manager at FamilySearch.

Kyle Clements — Senior Learning Experience Designer at WGU Labs.


It is UX Fight Club’s policy that there are no ties. No holds barred either, whatever that means. Eye gouging is encouraged. Extra points for verbally acrobatic insults.

The First Punch: User Interviews Are Overused

Every fight has a first punch and every design debate has a catalyst comment. 

Here’s how this bout got underway:

In short: user interviews kinda suck.

This is going to be fun. And Grady knows it.

Seeing Grady’s anticipation of drama, Daniel feels the need to clarify his point.

Should You Have a Go-To Method?

No offense to Daniel, but 99% of the time, an argument that begins with “99% of the time…” is an exaggeration (he admits this later on).

The number he should have chosen is 99.9%. The additional “.9” conveys a sense of unknowable expertise.

If he had added the .9, we’d be sitting here thinking “where did the .9 come from?” and “what does he know that we don’t?” and “how do I learn this math?”

Daniel does make a good point here though: you should never really have a “go-to method” for conducting user research.

It makes sense right? If you have a go-to method then you’re not tailoring your research to the needs of the user or the end product. 

Ty (the first) was kind enough to give the discussion a nudge forward.

Note: There are two fighters named Ty. Yes, that does make them Ty Fighters (relevant Star Wars reference).

The only issue I have with this statement is “that’s so wrong.” I don’t think the issue is that black and white.

I know this is an impossible ask, but I’d love to see the end product of both methods.

Can Daniel honestly say that speaking to the end users would have resulted in drastically different data — and a different product — than categorizing support tickets?

Ty (the second) makes this same counter point.

Thousands of Interviews with One Question

I don’t know, Ty.

That’s a great question.

Here are my three thoughts on the difference:

1. Even if you could ask just a single question to thousands of interviewees, the mode of answering affects the answers themselves. Support tickets are not fun to submit as a user. In fact, they suck. So negative moods will dramatically skew the results.

2. Given my first point, something as trivial as a negative mood is actually very important when working with a large output of data.

3. Interviews can be misleading in their own way. Because they’re face-to-face, users may feel more inclined to tell you what they think you want to hear (or lie in the name of politeness). People submitting tickets usually don’t care about your feelings.

What do you think, Daniel?

Good point. 

Let’s let Daniel handle this one.

Nuff said. But you’re not in the clear yet, Daniel.

Taking the Hard Line

So far, Daniel has fended off the criticism pretty well, but he still has to defend the polarity of his argument.

Shots fired by Nathan.

I would argue that “it depends” actually opens up an argument as long as it leads to “on what?” 

This is especially true in this case.

However, I do feel like picking an extreme is a very good way to start an argument.

After all, conflicts never start from places of understanding. And we’re all for conflict here.

This is the UX Fight Club, damn it.

This isn’t true in every case and I think that’s Ty’s point. There are negative and positive sides to every feedback/research method.

Daniel admits here that he chose the extreme in order to strike up a constructive argument.

He’s all about that fight life.

We’re Only Good at Interviewing

At this point, the discussion opens up even more and we get varied insights into how research should be conducted.

Alex shares an interesting tweet from Nate Sanders that I believe actually strengthens Daniel’s position.

This is definitely true in my case.

I feel the most comfortable with user interviews because they’re generally face-to-face.

I think that at some subconscious level, I like user interviews because I want the option to control the results or at least defend my assumptions (an intuition we should obviously all fight against).

Just like most stubborn people, I love my assumptions when I’m starting a project. One purpose of research is to challenge those assumptions.

With any data collection method, you’re going to hear feedback that challenges your assumptions.

And you can’t really defend your assumptions against a support ticket.

That’s my roundabout way of reasoning out why I might prefer interviews. I definitely don’t prefer them for any strategic reason, which is a big red flag. If anything, Daniel, Alex, and Nate have helped me identify that flaw in my own method.

Here’s a more exhaustive case in point (skip to Aaron’s TL;DR if you prefer the short version):

User Interviews: Useful or Overrated?

There it is.


User interviews are not useless. They’re just overrated.

Personally, I wouldn’t go so far as to say that they’re overrated. But it makes sense that they’re overused.

I stood pretty staunchly against this point when I first read Daniel’s “user interviews kinda suck” argument. But, I do see his point more clearly. As I pointed out above, I preferred interviews for no apparent reason (and I’ve seen the light).

Here’s the lesson: Preference should never outweigh precision. Make sure your research preferences are backed by strong user-oriented reasoning.

Research Questions Are Not Interview Questions

We’ve straightened out the fact that interviews can often be used in the wrong circumstances.

But for those who want more resources to flex their PM/UX muscles, Kyle shared this article concerning bad interview practices.

Erika Hall goes further and makes the point that even when user interviews are the correct method of choice, how we interview can negatively impact our results.

“Once you have identified a good research question and decided that interviewing people is the best way to answer it, you need to figure out what to ask the people you are interviewing. Again, research questions are not interview questions. If you have a strong research question in mind, you might only need a couple of prepared interview questions.” — Erika Hall

Here’s another awesome resource from Product Talk, shared by Aaron Airmet and written by Teresa Torres.

This article examines the gap between what users think their motivations are and what their motivations actually are.

“You can’t simply ask your customers about their behavior and expect to get an accurate answer. Most will obligingly give you what sounds like a reasonable answer.”

This means that user interviews are trickier than they first appear.

You shouldn’t rely on the results of a user interview without putting thought into the interview process.

“If you build a product based on your customer’s ideal self, you might get the initial sale, but you’ll struggle to engage them, and you’ll churn through customer after customer.”

Conclusion: Who Won the Fight?

Daniel is the self-proclaimed winner of this fight.

But who is the Chad-proclaimed winner?

I’d love to say it was a tie, but for the sake of closure, I’ll make a firm decision.

The world won this one, Daniel, though I think you had a Rocky-Balboa-style loss (the one where he lost, but fought hard and everyone applauded him in the end).

You made your point in the end, but the initial argument was too unclear and most of the contention was a result of the initial polarity.

In the end, I believe most people came to an agreement that your methodology should reflect your user and product needs.

All-in-all, I thought it was a great discussion, and we have Daniel to thank for that.

My Contribution to the Contest

Classic Aaron.

That’s all I said. I think it was a vital contribution to the debate.

Can’t wait for the next fight. Hopefully, the next one is less civil and a little more insult-filled — the people want drama.

If you liked this article, please leave a comment below, highlight your favorite section, or just give us some claps so we know to keep writing these articles in the future.

The UX Fight Club is a Medium publication dedicated to the increase of learning through mental brawling. If you’d like to submit an article to the UX Fight Club, please email me at
