
Question: Can I Put Numbers on My UXR Resume If I Do Not Have Numbers?

Stop making up metrics for your UXR resume. "Drove 20% lift" from interviews is fiction and senior reviewers know it. Write bullets that show judgment and influence instead: what you learned, what decision it drove, how you moved the needle. Real impact doesn't need fake numbers to sound impressive.
Really? 20% lift? From 12 interviews?

Every few weeks I see the same panic post in a UX research community somewhere. "Help! I'm updating my resume and I don't have any metrics! What do I do?!" And then the replies roll in with the worst advice imaginable. "Just estimate the impact!" "Back-calculate the lift from the feature launch!" "Put a dollar sign in front of something!"

No. Stop. I am begging you.

Here's the answer you actually need: if you would be inventing the number, do not put the number on your resume. The moment you find yourself reverse engineering what the lift "probably was" based on some Mixpanel dashboard you glanced at six months after your study wrapped, you have already lost the plot. Put down the calculator. Step away from the spreadsheet.

Let me explain why most metrics on UXR resumes are complete nonsense. Product outcomes are multi-causal. That conversion lift you want to claim? It came from your research, sure, but also from the PM who scoped it, the designer who nailed the interaction, the engineer who optimized load time, the marketing team who ran the campaign, the timing of the launch, and honestly a decent amount of luck. Research influence is shared across half a dozen functions and about forty variables you will never isolate. The time lag between when you delivered insights and when impact showed up in the data makes attribution laughably weak. And here's the thing that should terrify you: senior reviewers spot this instantly. We have been doing this long enough to know what a researcher can and cannot actually own.

Welcome to what I call the Fake Metrics Hall of Fame. "Drove 20% lift in conversion" from a set of interviews. Really? Your interviews drove that? Not the entire product team and their months of execution? "Generated $2M in revenue" from a study. Did you personally generate that revenue, or did you talk to twelve people and then a hundred other humans built and shipped something? "Saved $500K in development costs" with absolutely no measurement ownership whatsoever. Here's the punchline: if the only way you can defend a bullet point is with a long, winding explanation full of caveats and assumptions, it does not belong on a resume. Resumes are not the place for creative writing.

And while we're at it, counting activities is not impact either. "Conducted 23 interviews" tells me nothing. "Led 3 usability studies" tells me nothing. "Survey with N=400" still tells me nothing. You know what I want to know? What decision got made. What insight actually mattered. How you influenced the outcome. Twenty-three interviews that led nowhere is not an achievement. It's a time sheet.

So what do hiring managers actually want to see instead of your made-up metrics? Judgment and problem framing. Can you figure out what question actually needs answering, or do you just execute whatever gets thrown at you? We want to see that you can find the real issue, not just the loud one. Half of senior research is figuring out that the question everyone is panicking about is not the question that matters. Then there's evidence of influence: how you actually moved decisions, not just how you presented findings to a room that nodded politely and then ignored you. Rigor. Why should I trust your evidence? What made your methods appropriate? And we want practicality. You can ship, not just study. You know how to work within constraints, timelines, and the glorious mess of real product development.

Here's what to actually write on your resume instead of fake metrics. I've got three patterns that work.

Pattern A is insight plus influence. You did the research, you learned something specific, and it drove a real decision. For example: "Led generative research with churned subscribers that revealed billing confusion as the primary driver, leading product to prioritize transparent pricing flows for Q3." See how there's no made-up percentage? Just a clear thread from research to insight to decision.

Pattern B is decision support. You answered a specific question for a specific team using specific methods, and it reduced real risk. Something like: "Designed and ran concept tests for three onboarding approaches, giving the team confidence to ship the guided setup flow without a longer beta period." You supported a decision. You reduced risk. That's the job.

Pattern C is system-level impact. You built something that changed how teams operate. "Created a lightweight research intake process that product managers actually use, reducing ad-hoc requests by shifting teams toward quarterly planning cycles." You made the org better at using research. That's senior work.

A few more examples in this vein: "Synthesized field research across 8 markets to identify a localization gap that became the top priority for the international expansion team." Or: "Ran evaluative research on checkout redesign, identified a critical trust barrier, and partnered with design to iterate before launch." Or even: "Established a quarterly competitive UX review that product leadership now uses in roadmap planning." No fake numbers. No invented lift. Just clear descriptions of what you did, what you learned, and what happened because of it.

Now, when are numbers actually legitimate? Only when you owned the measurement or the metric was genuinely part of your research program. Benchmarks you established. Baselines you tracked. Post-launch outcomes you actually monitored. A/B tests you helped design and interpret where you were in the room making sense of the data, not just cheering from the sidelines. Here's a good test: if you cannot explain the data source, the timeframe, and what you personally owned in one sentence, skip the number. Just skip it.

Before you submit that resume, run every bullet through what I'm calling the Anti-BS Checklist. Can I attribute this without lying? Can I defend it in a one-minute probe during an interview? Does it describe a decision, not just an activity? Does it show what I uniquely contributed? Would a senior researcher actually believe it? If you are answering "no" or "maybe" or "well, technically, if you squint," delete the bullet and try again.

Here's the fire closing you need to hear. Fake numbers do not make you look senior. They make you look insecure about your actual contributions, or naive about how products actually ship. Every experienced research leader has seen enough resumes to know that "drove 40% improvement" from an interview study is fiction. And when we see it, we do not think "wow, impressive impact." We think "this person either doesn't understand attribution or is willing to fudge the truth, and neither of those is great."

Write bullets that show your judgment and your influence. Then bring the real evidence to the interview. Bring the actual story of what happened, who was in the room, what you pushed for, and what changed because of your work. That's the stuff that gets you hired. Not a made-up percentage you'll have to awkwardly defend when someone asks a follow-up question.

Your research mattered. You don't need to lie about how much.

If this was useful, subscribe to The Voice of User. I write short, opinionated pieces on UXR and how this work actually gets done.