The Strategic Case for UXR as an Intelligence Function

There is a version of this argument that research leaders have been making for fifteen years. Research should be at the table. Research should influence strategy. Research should be treated as a strategic function, not a service.

That argument has not worked. Not because it is wrong. Because it is made in the wrong register, to the wrong audience, at the wrong moment in the product development cycle. By the time a research leader is making the case for strategic relevance, the strategy has usually already been set.

The case I want to make is different. It is not about influence or a seat at the table or any of the other phrases that sound good in a conference talk and dissolve on contact with an actual reorg. It is about organizational classification. When engineering restructures around AI, every function gets sorted: infrastructure or overhead. The research functions that survive this sorting are not the ones with the most rigorous methods or the most compelling readouts. They are the ones that own something the organization cannot generate from within its own velocity.

Most research functions do not own that thing right now. They run studies when people ask them to. That is the entirety of the value proposition, and it does not survive the structural shift.

The Sorting Has Already Started

Engineering teams are not getting faster at the same work. They are doing different work. Agents handle implementation. Engineers direct, review, and refine. The spec gets written, the agent builds, the engineer checks execution quality. The gap between a decision and a shipped product has compressed to days in some organizations.

That compression is not evenly distributed yet. But the trajectory is clear and the organizations furthest along offer a view of where the rest are heading. When a pod can ship in days, the decision gets made with whatever is in the room at the moment the engineer opens the task. Not because anyone decided to exclude research. Because waiting for a study has a cost that iterating does not.

Research has not been removed from those environments. It has been outrun.

And here is the part that should actually concern research leaders: the org chart has not changed. No reorg memo. No new job titles. The sprint still runs. The planning meetings still happen. The structural shift is invisible in the official documentation and visible in every product decision that gets made before research could contribute to it.

The sorting happens inside that invisibility. When budgets get pressure-tested, functions get classified. The ones with clear organizational ownership of something essential survive. The ones that are essentially a queue of studies to be run when requested do not, or they survive in a much reduced form that nobody is happy about.

What the Service Model Actually Produces

The service model research function sits downstream of decisions and responds to them. A request arrives. A study gets scoped. Findings get delivered. The value is real when the translation layer works, when a PM absorbs the finding and carries it forward into the next decision at the right moment.

That translation layer is what the compression is eliminating.

I have watched this play out in practice more times than I want to count. A research team does excellent work. The findings are solid. The readout is well-attended. And then the product team is in a planning meeting two weeks later building a spec, and the finding is sort of there, approximately, in the version the PM remembered. Maybe the nuance survived. Maybe it did not. The researcher never sees the spec. The spec gets built.

When the build cycle was measured in weeks, this was lossy but functional. When it is measured in days, the loss becomes fatal.

The service model has three failure modes that tend to show up together and get treated as separate problems. The first is framing failure, where building starts before anyone has properly understood the problem space, and research arrives too late to correct the frame. The second is evidence quality failure, where teams generate their own AI synthesis of customer feedback or LLM-produced journey maps that have the shape of research without the substance, and nobody goes looking for real evidence when they think they already have it. The third is fragmentation, where each embedded researcher builds their own methodology and judgment; the individual work might be good, but the organizational knowledge does not accumulate.

More researchers without a governance model does not fix this. It multiplies it.

The Intelligence Function Is Not a Rebrand

When I describe moving from a service model to an intelligence function, the first response I usually get is that this is just new language for the same aspiration. Strategic research. Embedded influence. We have heard this before.

It is not the same argument. The intelligence function is a structural repositioning, not a tone shift.

The service model waits for requests. The intelligence function maintains organizational knowledge continuously and encodes it where decisions happen. The service model produces findings that travel through humans who may or may not carry them forward. The intelligence function maintains a current model of what the organization believes about its users, how confident it is in each belief, and where the gaps are, and makes that model accessible where specifications are being written.

The practical difference is this: a product team working from a current, accurate organizational understanding of users does not need to ask research a question before making a decision. The understanding is already there. Research is not bypassed. Research is the reason the frame is right before the decision happens.

That is a different kind of influence than anything the readout model ever produced. It is also harder to measure, which is one of the reasons it does not get funded until something goes visibly wrong.

The Frame Is the Asset

The organizational asset that makes this possible is what I call the Frame: the organization's accumulated, actively maintained model of its users. Not a persona deck from last year. Not a repository of studies. Not a Confluence page that exists because someone filed it there eighteen months ago and forgot.

The Frame is an operational asset with explicit ownership, confidence levels on each belief, freshness indicators that track when the underlying evidence was generated, and a process for retiring outdated knowledge rather than just accumulating it. It lives where product decisions get made, not in a research repository that requires a field trip. It gets updated when a study closes, not when someone remembers to link the readout.
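To make the shape of such an asset concrete, here is a minimal sketch in Python of what a Frame entry might track. Every name and field here is an illustrative assumption, not a prescribed schema; the point is only that ownership, confidence, freshness, and retirement are explicit properties of the data, not habits someone has to remember.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Belief:
    """One actively maintained claim about users in the Frame."""
    statement: str
    owner: str                  # named ownership, not a team alias
    confidence: float           # 0.0-1.0, set when the evidence is reviewed
    evidence_date: date         # when the underlying evidence was generated
    sources: list[str] = field(default_factory=list)

    def is_stale(self, max_age_days: int = 365) -> bool:
        """Freshness indicator: flag beliefs whose evidence has aged out."""
        return date.today() - self.evidence_date > timedelta(days=max_age_days)

class Frame:
    """The organization's current, actively maintained model of its users."""

    def __init__(self) -> None:
        self.beliefs: dict[str, Belief] = {}

    def update(self, key: str, belief: Belief) -> None:
        """Called when a study closes, not when someone remembers to link it."""
        self.beliefs[key] = belief

    def retire_stale(self, max_age_days: int = 365) -> list[str]:
        """Retire outdated knowledge instead of silently accumulating it."""
        stale = [k for k, b in self.beliefs.items() if b.is_stale(max_age_days)]
        for k in stale:
            del self.beliefs[k]
        return stale
```

However it is actually implemented, the design choice that matters is that staleness and retirement are queryable operations, so "do we still believe this, and on what evidence" has an answer at spec-writing time rather than at readout time.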

The function that owns the Frame owns something the organization cannot generate from within its own velocity. An agent can execute a specification. An agent cannot tell you whether the specification is built on a correct model of the user. That judgment requires human research and organizational memory that someone has been maintaining.

This is also what makes the intelligence function relevant to the engineering restructuring specifically. When agents handle execution from specifications, the cost of a wrong model of the user is not one bad feature. It is a systematically wrong product, built faithfully, at agent speed, at agent scale. The Frame is the thing that prevents that. The research function that owns the Frame is providing something that sits upstream of everything engineering produces.

The Costs Nobody Is Counting

The pushback I always get at this point is about investment. The intelligence function requires protected time, named ownership, governance infrastructure. That costs money the organization may not want to spend.

The service model's costs are already enormous. They are just invisible.

Three separate sprint studies in a single quarter reconstructing the same foundational context from scratch because nobody knew what the organization already knew. That is a waste that can be calculated in researcher hours. Features that tested well in micro studies and failed in adoption because the micro studies were testing within a frame that no longer matched how users actually worked. Product decisions made on two-year-old assumptions that current research would have corrected, and the launch underperformed in a way that was entirely predictable in retrospect.

These costs do not appear on any budget line. They appear in roadmaps that keep getting surprised, in launches that underperform, in the slow drift of research from a function that shapes decisions to a function that documents them after the fact.

The intelligence function does not add cost. It makes the existing cost visible and then, over time, reduces it. That argument lands differently with leaders who have budget authority than "research should be strategic." One is a philosophy. The other is an operational risk conversation.

The Window

The sorting I described at the beginning is not a future event. It is happening now, inside organizations that look like they have not changed, inside planning meetings where the function's value is being assessed against what it can actually produce at the speed decisions are being made.

The research functions that will be classified as infrastructure are the ones that started building toward that classification before anyone asked them to justify it. Not because they predicted the future correctly. Because they understood that you do not get to choose your positioning after the org chart is already drawn.

I do not know exactly how long the window is. I have a suspicion it is shorter than most research leaders think. The ones I have talked to who are furthest along in this transition did not wait for permission or for a visible failure or for the perfect moment to make the case. They started building the Frame. They changed what their studies produced. They connected their output to the specification process instead of the readout calendar.

They will probably be fine. I am less certain about the ones still running the same request queue they ran three years ago and calling it good research practice.

It is good research practice. That is not the question anymore.

🎯 If this piece resonated, subscribe to The Voice of User. I write about UXR, organizational dysfunction, and the things the conference circuit is too polite to say.