How to Do UXR in a Domain You Barely Understand
You get dropped into a specialized product and everyone speaks in acronyms, edge cases, and absolute certainty. You do not know the domain yet, but you still have to run research that the team trusts. The answer is not to cosplay expertise. The answer is to build a system that converts expert knowledge into testable assumptions, then converts user evidence into decisions.
This topic came up recently in a conversation with a colleague, and it got me thinking about how researchers navigate this problem. It seemed worth writing my thoughts down.
The moment you realize you are out of your depth
Picture your first week on a specialized product team. You are sitting in a planning meeting and someone says "We are seeing drop-off in the onboarding flow for users trying to configure their CI pipeline with our new agentic workflow."
Replace that sentence with whatever terrifying acronym soup your domain uses. The feeling is the same.
You nod. You take notes. You underline something twice, like that will help.
Later, you Google it in the bathroom. You learn just enough to nod more convincingly in the next meeting.
This is the fear that haunts every UX researcher dropped into a specialized domain: if you admit you do not know, you lose credibility. So you nod. You write things down. You hope context clues will save you.
They will not.
Pretending to know is how you end up running research that validates the wrong things, asks the wrong questions, and produces findings that make subject matter experts smile politely while ignoring everything you said.
The good news is that you do not need to become an expert. You need to make the experts usable.
The real risk is untested expertise
In complex domains, the danger is not that you lack expertise. The danger is that your team has too much of it, and none of it has been tested against actual user behavior.
Internal experts are often correct about constraints. They know what the system can and cannot do. They know the regulatory limits, the infrastructure bottlenecks, the architectural decisions made three years ago that everyone regrets but nobody will fix.
What they are often wrong about is behavior. They think they know what users want because they used to do that job. They think they know how people use the product because they built the product. They have opinions shaped by the five power users who send detailed feedback and the support tickets that made them angry.
Expert confidence is not evidence. It is just loud.
The organization punishes uncertainty, so assumptions become facts by repetition. Someone said "clinicians will never adopt a workflow with more than three clicks" in a meeting once, and now it is carved into the product strategy like scripture. Nobody has tested it. Nobody has asked a representative sample of users. But everyone believes it.
Field rule: If someone says "obviously," write it down. That is an assumption trying to escape into the roadmap.
Your job is to figure out which of these beliefs are true, which are outdated, and which were never true in the first place.
The Informed Outsider Framework
Here is a model that structures how you operate in a domain you do not fully understand yet. It separates what you are learning into four things: constraints, preferences, workflows, and mental models.
Constraints are the fixed realities: technical limitations, regulations, platform rules, compliance mandates. HIPAA exists. The trading window closes at 4pm. The API has rate limits. These are not negotiable and your research cannot change them.
Preferences are what users actually want and will tolerate. How much friction is acceptable before they create workarounds? How much latency is acceptable before they switch tools? How do they feel about AI-generated suggestions appearing in their workflow?
Workflows are what users actually do under time pressure. Not what they say they do. Not what your onboarding tutorial assumes they do. What they actually do when the deadline is in an hour and something is broken.
Mental models are how users conceptualize what is happening. A nurse might think of an alert system as "the thing that cries wolf" rather than "clinical decision support." A developer might think of an AI assistant as "fancy autocomplete" rather than "an intelligent agent." These mental models shape expectations and behavior in ways that matter enormously for product design.
Experts are best at constraints. Users are best at preferences, workflows, and revealing their mental models. Your job is to connect them.
I call this stance the Informed Outsider. You show up with high rigor, strong process, and genuine respect for domain depth, while being honest that your domain understanding is under construction.
This means asking precise questions, not performative ones. "Can you walk me through what happens when an order fails validation?" is useful. "So tell me about your experience" is not.
It means documenting assumptions publicly. Write them down. Put them in a shared doc. Make people look at the things they believe but have not tested.
It means never bluffing vocabulary. If someone mentions "reconciliation breaks" in a finance product or "webhooks" in a developer tool and you do not fully understand the implications, ask. If you fake it, you will ask the wrong questions later and nobody will know why your findings are useless.
First 10 days: ship the domain map
Your first deliverable is not a research plan. It is a domain map. This is the artifact that proves you are taking the domain seriously and gives experts something concrete to correct.
Here is what goes in it:
A glossary that includes plain language meaning and decision relevance. Do not just define "exception queue" or "pull request." Explain why each term matters for users and what product decisions it affects. Your first version will be wrong. That is the point. Wrong glossaries get corrected. Blank glossaries get ignored.
A workflow map from the user's point of view, not the system's. Your engineering team has architecture diagrams. Those are not workflow maps. A workflow map shows what a user actually does when they try to accomplish a task, including all the weird workarounds they invent because the happy path does not work. A nurse might document vitals in a completely different order than the EHR assumes. A financial analyst might export data to Excel to do calculations your product claims to handle. A developer might copy code into a separate file to test something because the in-product preview does not give them confidence. That is workflow reality.
A stakeholder map that identifies who owns truth, who owns incentives, and who owns risk. This means figuring out who can actually tell you whether something is technically possible, who decides what gets prioritized, and who will say no for compliance or security reasons.
An assumption backlog listing claims that need user evidence. Every time someone in a meeting says "users want" or "customers expect," write it down. That is an assumption. It goes on the list.
A risk register noting where a wrong assumption causes harm or rework. If the team is betting the roadmap on the belief that users will trust AI-generated suggestions without review, and that belief is untested, that is a significant risk.
Field rule: If the only evidence for a claim is "I have seen this once," it belongs in the risk register, not the strategy doc.
How do you build this in ten days? Read what users see. Not internal docs, but the actual UI, the user-facing documentation, the onboarding flows, and the error messages. Shadow internal operations by reading support tickets, community forum posts, and incident postmortems.
Ask experts for "three common failure modes" and "three edge cases that break everything." This question works because domain experts love talking about edge cases. Let them. The edge cases reveal the constraints that actually matter.
Force every claim into one of three buckets: observed, inferred, or assumed. If nobody can point to actual evidence, it is assumed. Assumed does not mean wrong. It means untested.
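If it helps to make the backlog concrete, here is a minimal sketch of how an entry might be structured, in Python only because it is a convenient notation; a spreadsheet or shared doc works just as well. The field names (claim, source, decision_at_stake, risk_if_wrong) are my own illustration, not a standard template.

```python
from dataclasses import dataclass
from enum import Enum


class Evidence(Enum):
    OBSERVED = "observed"   # someone actually watched it happen
    INFERRED = "inferred"   # derived from indirect signals like tickets or analytics
    ASSUMED = "assumed"     # no evidence yet; untested, not necessarily wrong


@dataclass
class Assumption:
    claim: str              # the belief, written down verbatim
    source: str             # who said it and where
    evidence: Evidence      # which bucket it falls into today
    decision_at_stake: str  # what the roadmap does differently if this is false
    risk_if_wrong: str      # harm or rework caused by betting on it untested
    status: str = "open"    # open -> validated, invalidated, or "more nuanced"


backlog = [
    Assumption(
        claim="Clinicians will never adopt a workflow with more than three clicks",
        source="Said once in planning, repeated ever since",
        evidence=Evidence.ASSUMED,
        decision_at_stake="Whether the review step ships as a separate screen",
        risk_if_wrong="We strip out a check users actually rely on",
    ),
]

# Anything still ASSUMED and attached to a roadmap bet is your research priority.
todo = [a for a in backlog if a.evidence is Evidence.ASSUMED]
```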
Now you are still underlining acronyms in meetings, but you are also turning them into hypotheses.
Borrowing expertise without getting captured by it
You need champions. These are the two to four people inside the organization who will help you interpret findings, catch mistakes, and translate between user evidence and domain reality.
But you need to pick them carefully. If all your champions think the same way, you will just end up validating the dominant opinion.
Get a frontline expert who deals with user reality every day. Support engineers, implementation specialists, customer success managers, solutions architects. They know what actually breaks.
Get a product expert who knows the tradeoffs. They can tell you why the product works the way it does and what constraints shaped those decisions.
Get a risk or compliance person who says no for a living. In healthcare this might be clinical safety. In finance it is compliance. In developer tools it is platform policy or security. They will tell you what you cannot do, which is often more useful than what you can do.
And if possible, get a contrarian who enjoys pointing out flaws. Every team has one. They are annoying in meetings, nobody invites them to happy hour, and they are invaluable for research.
Set rules of engagement. Champions review artifacts, not raw opinions. You show them the workflow map and ask what is wrong. You do not ask them to speculate about what users want.
Never let one champion become the single source of truth. If you only talk to one subject matter expert, you are not doing research. You are just agreeing with that expert.
Time box their involvement. Make requests specific. "Please mark what is wrong in this workflow map" takes fifteen minutes. "Tell me about the user experience" takes two hours and produces nothing useful.
The co-discovery loop
Once you have your domain map and your champions, you run a weekly loop:
Prebrief with champions to extract assumptions and constraints. Before you talk to users, you need to know what the team believes and what is actually fixed.
User sessions to test workflows and language. Watch people actually use the product. Listen to how they describe their problems. Note where their language differs from the team's language. A financial analyst might call something a "breaks report" while your product calls it an "exception summary." A developer might describe AI suggestions as "autocomplete" when your team thinks of it as "an agent."
Co-synthesis with champions before you present anything to the broader team. This is where you sit down with your champions and walk through what you observed. They correct your technical misunderstandings, help you interpret domain-specific behavior, and flag anywhere you might be about to say something inaccurate. This is not them telling you what users meant. This is them telling you whether your understanding of the system and constraints is correct. You do not want to walk into a readout and have someone say "that is not how the system works" in front of leadership.
Readout to the broader team where you present findings with confidence because you have already pressure-tested them. Champions have seen this material. The technical errors are fixed. Now you can focus on the implications and recommendations instead of defending basic accuracy.
Decision log documenting what changed, what did not, and why.
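For the decision log, a shared doc is enough, but if a concrete shape helps, here is a sketch of one entry in the same notation as before. Everything in it is an invented illustration built from the untested belief mentioned earlier in the post, not a prescribed format.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class DecisionLogEntry:
    when: date
    assumption_tested: str   # which backlog item this loop addressed
    evidence_summary: str    # what the user sessions actually showed
    decision: str            # what changed, or deliberately did not change
    rationale: str           # why, so the reasoning outlives team turnover


# A made-up example entry, reusing the untested belief from the risk register.
entry = DecisionLogEntry(
    when=date(2025, 3, 14),
    assumption_tested="Users will trust AI-generated suggestions without review",
    evidence_summary="Most participants double-checked suggestions before accepting them",
    decision="Keep the review step; drop the auto-apply toggle from the next release",
    rationale="Trust is granted per suggestion, not to the feature as a whole",
)
```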
The important thing: experts are collaborators in interpretation, not gatekeepers of what users said. They can tell you that your technical understanding is wrong. They cannot tell you that users did not say what users said.
The traps that ruin research in complex domains
There are five common ways to fail at this.
Expert capture is when you end up validating the org's mental model instead of the user's reality. You spend so much time learning from internal experts that you start thinking like them. You use their acronyms unironically. Your research confirms what everyone already believed. Congratulations, you are now useless.
False precision is when you produce fancy metrics with no behavioral grounding. "User satisfaction score increased by 7%" means nothing if you do not know what users are actually doing differently.
Over-rotating on edge cases happens because experts love edge cases. They will tell you about the one enterprise customer who had a bizarre requirement, and suddenly your roadmap is optimized for that one customer. Meanwhile, the common path is broken for everyone else.
Confusing constraints with preferences is the classic mistake. "Users cannot export to PDF because the feature does not exist" is a constraint. "Users do not want to export to PDF because they prefer Excel" is a preference. One requires engineering. One requires validation. Do not mix them up.
Studying the UI instead of the job means you are testing whether users can complete tasks in your product instead of understanding what job they are trying to accomplish. A nurse does not wake up wanting to document vitals. They wake up wanting to keep patients safe. A developer does not wake up wanting to configure pipelines. They wake up wanting to ship reliable software. Your product is a means to an end.
What success looks like
You know this is working when the team stops arguing about what users do. Not because they agree with each other, but because there is evidence and everyone has seen it.
Assumptions turn into a visible backlog and get retired by evidence. "Clinicians will never adopt a workflow with more than three clicks" gets validated, invalidated, or revealed to be more nuanced than anyone thought. Whichever it is, you know.
Requirements become simpler because you clarified the job and the constraints. Turns out half the features on the roadmap were solving problems users do not actually have or building features users do not actually want.
Product language becomes closer to user language. The error message that said "transaction failed due to insufficient margin requirements" now says "you do not have enough funds to complete this trade."
Risk and compliance partners trust you because you surfaced constraints early, instead of surprising them at launch.
If you are still underlining terms in meetings six months in, you have found your next assumption to test.
The mindset shift
You do not earn credibility by knowing everything. You earn it by making uncertainty visible, reducing it fast, and protecting the team from confident nonsense.
Your job is not to become the domain expert. Your job is to make the experts usable. In complex domains, that skill is worth more than any amount of pretending.
🎯 You don't need to be the expert. You need to make them usable. For more on navigating research in messy orgs, subscribe.