On the Concept of the "Frame": A Definition and Some Thoughts on Operationalization

In a recent piece on AI-enabled UXR teams, I introduced a concept I called the frame. I defined it briefly, used it to make a broader argument about organizational design, and moved on. Since then, the responses I've gotten have mostly been about that concept specifically. What is it exactly? How is it different from what we already have? How do you actually build one? How do you get an org to maintain it?

Good questions. I didn't fully answer them in that piece because the argument didn't require it. This piece is the attempt to do that properly.

Fair warning: this is working thinking, not a finished framework. I'm writing it in public because I think the concept is useful and I want to find out where it breaks. If you read this and think I've missed something or gotten something wrong, I genuinely want to hear it.

What the Frame Is Not

It's easier to start here because the frame is easy to confuse with things that already exist and already have names.

The frame is not a repository. A repository is an archive of what research has been done. It stores studies, findings, decks, recordings. It tells you what happened. The frame is about what the organization currently believes. Those aren't the same thing and treating them as interchangeable is how orgs end up with enormous Dovetail workspaces and no shared understanding of their users.

The frame is not a persona set. Personas are representational artifacts. They're a way of communicating user archetypes, usually at a point in time, usually to a specific audience. They can express parts of the frame. They're not the frame. Most persona sets are also, if we're being honest, at least partially fictional and entirely static. The frame is supposed to be neither.

The frame is not a journey map. Journey maps are useful for specific things. Visualizing a process, identifying friction points, aligning a team on how an experience unfolds. They're a slice. The frame is the whole thing.

The frame is not a research strategy. A research strategy is about what the team plans to study. The frame is about what the organization currently knows. Related, not the same.

I'm spending time on this because the most common response when I introduce the concept is "oh, we have that, it's our [insert existing artifact]." Usually they don't. Usually what they have is one of the above, which is useful for what it's designed to do and insufficient for what the frame is supposed to do.

A Working Definition

The frame is the organization's accumulated, actively maintained model of its users.

Not what it has studied. What it currently believes, based on the best available evidence, about who its users are, what they're trying to do, what motivates them, what creates friction for them, how they make decisions, and where the product fits or doesn't fit into their lives.

Three terms in that definition are doing heavy lifting.

Accumulated means it's built over time from multiple sources, not generated by a single study or a single team. It incorporates qual, quant, behavioral data, support signals, market research, and product intuition when those are grounded in real observation. It's not owned by UXR exclusively. UXR stewards it. There's a difference.

Actively maintained means someone is responsible for keeping it current. Not passively stored. Not allowed to sit until it's convenient to revisit. Maintained on a cadence, with explicit attention to where it's getting stale, where coverage is thin, and where the organization is operating on assumptions that haven't been tested in a while.

Model means it's a structured representation, not a pile of findings. It has internal coherence. You can use it to make predictions. If your model of the user is accurate, it should help you anticipate how they'll respond to something new, not just describe how they responded to something old.

The last part is what makes it genuinely useful and genuinely hard to build. Most research functions produce descriptive knowledge. The frame, when it's working, produces predictive capacity. That's a different thing.

The Four Properties

I find it useful to think about the frame as having four properties that can be assessed explicitly. Not perfectly. But specifically enough that you can have a real conversation about the state of your frame rather than a vague one.

Coverage is which parts of the user population and problem space the org understands well versus which parts it's operating on assumption. Most orgs have deep coverage in a few areas, usually wherever the founding team had personal experience or wherever the last major research effort landed, and almost nothing elsewhere. The gaps don't announce themselves. They just quietly shape every decision made in their vicinity and nobody notices until something ships into one and fails in a way that feels inexplicable.

Freshness is when each part of the frame was last meaningfully updated. Freshness isn't uniform, and this is important. Some parts of a user model age in months. Some are stable for years. A study on how users navigate a specific checkout flow might still be accurate two years later if the flow hasn't changed. A study on what motivates users to try a new product category might be stale in six months if the market moved. Treating all past research as equally current or equally suspect is both wrong and expensive.

Confidence is the degree to which each belief in the frame is based on strong direct evidence versus inference, extrapolation, or organizational folklore. Every org has beliefs about its users that have achieved the status of common knowledge without anyone being quite sure where they came from. Someone said it in a meeting once. It appeared in a deck. It got repeated. Now it's true. Confidence assessment makes those visible before they drive a major product bet.

Ownership is who is accountable for knowing whether the frame still holds. Not who did the last study on a given topic. Who is responsible, right now, for knowing whether a specific part of the frame is current and accurate, and for calling for updated work when it isn't.

Most orgs can answer coverage and freshness imperfectly if they dig around. Almost none of them have a clear answer to ownership. That's the one that matters most and gets sorted out last, usually after something goes wrong.
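To make the four properties a little more concrete, here's a minimal sketch of what one belief in the frame might look like if you tracked it explicitly. This is purely illustrative: the field names, the confidence labels, and the example belief are all hypothetical, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class FrameEntry:
    """One belief in the frame, assessed on the four properties."""
    belief: str            # the claim the org currently holds
    area: str              # coverage: which part of the user/problem space it describes
    last_validated: date   # freshness: when it was last meaningfully updated
    shelf_life: timedelta  # freshness is not uniform: some beliefs age in months, some in years
    confidence: str        # e.g. "direct-evidence" | "inference" | "folklore"
    owner: str             # ownership: who is accountable for knowing whether it still holds

    def is_stale(self, today: date) -> bool:
        # A belief is stale when it has outlived its own shelf life, not
        # when it crosses some org-wide freshness threshold.
        return today - self.last_validated > self.shelf_life

entry = FrameEntry(
    belief="New users abandon checkout mainly over shipping-cost surprise",
    area="checkout / first purchase",
    last_validated=date(2023, 4, 1),
    shelf_life=timedelta(days=365),
    confidence="direct-evidence",
    owner="jane.doe",
)
print(entry.is_stale(date(2025, 1, 1)))  # True: overdue for revalidation
```

The point isn't the data structure; it's that each property becomes a question you can ask of a specific belief rather than of the frame in the abstract.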

Why Frames Degrade

Most organizations had a version of a frame once. There was a foundational study. A segmentation initiative. A generative research effort that a previous leader championed and that actually changed how the org thought about its users. For a while the org operated from it. Decisions were sharper. Research felt purposeful in a way it usually doesn't, because there was something substantial to connect findings to.

Then the team got busy.

The frame from that effort is still technically present. It's just not current. The users changed. The product expanded into territory the original frame never covered. The competitive landscape shifted. New segments emerged that nobody mapped. But the org is still operating within the old frame implicitly, because nobody's had the bandwidth or the organizational permission to rebuild it.

Every study runs on assumptions nobody has tested in two years. Every recommendation is filtered through a user model that may or may not reflect who's actually using the product today. New researchers get hired and spend months reconstructing context that should have been maintained and handed to them on day one.

This is how products drift without anyone being able to explain why. Research that's technically solid produces findings that somehow never quite land. Teams argue about what users want because there's no shared model to anchor the argument in.

Nobody names it as a frame problem because the frame was never named in the first place.

On Operationalization: Working Thoughts

This is the part I'm least certain about and most interested in getting wrong in public.

The question I've gotten most often since the last piece is: how do you actually get an org to treat the frame as something that requires active ownership? Not in theory. In a real org with real constraints and a PM who needs something by Thursday.

Here's what I currently think, with the caveat that this is provisional.

You can't build it through persuasion alone.

Explaining why the frame matters produces agreement and inaction in roughly equal measure. Every research leader I've talked to about this concept has said yes, absolutely, we need this. Almost none of them have built it. The gap between agreeing it's important and doing the organizational work to make it real is enormous, and the gap doesn't close through better articulation of why it's important.

What actually moves organizations is making the absence of the frame visible and expensive at a moment when someone with authority is paying attention. A product decision gets made on two-year-old assumptions. A launch underperforms in a segment the team thought it understood. A quarter goes by where three separate studies reconstruct the same foundational context from scratch because nobody knew what the org already knew. When that happens in front of the right person at the right time, the frame conversation shifts from research philosophy to risk management. That's when it becomes real.

The tactical implication: don't try to build the whole frame at once. Pick the most obviously stale part of it. Make the staleness concrete. Show what decisions are being made against it. Use that one case to establish the precedent that the frame is something the org actively maintains. One win that makes the absence of governance feel like a real cost is worth more than a dozen strategy presentations.

Ownership without structural protection doesn't hold.

The frame almost always ends up owned by whoever cares most. Usually one researcher with strong opinions about organizational knowledge infrastructure who maintains an informal version of it through personal investment. This works until that person leaves, gets promoted into something that consumes all their time, or burns out from maintaining it without organizational support. Then it's owned by nobody, and within two quarters the org is back to archive-plus-implicit-assumptions.

Real ownership requires the frame to be in someone's actual job description with protected time attached. Not "part of their responsibilities alongside everything else." Protected time. The kind that doesn't get cannibalized when a PM needs something by Thursday, which is always.

It also requires the frame steward to have the organizational permission to call for discovery work without filing a request and waiting for prioritization approval. Discovery work that isn't tied to an immediate product question looks like overhead from the roadmap perspective. Getting permission for it means framing it as risk management: the org is about to make a significant investment in a space it has no foundational understanding of, and that's a risk with a cost. That framing works better than "we should invest in foundational research."

The frame is not a document.

Documents go stale by definition. A maintained frame is a program, not an artifact.

It requires some regular, lightweight form of access to users that isn't tied to any product question. Not a lot. Enough to track how the population is shifting, what new tensions are emerging, where the existing model is starting to describe people who no longer quite exist. This doesn't require a dedicated team. It requires protected capacity and a commitment that it happens on a cadence rather than when someone has bandwidth.

It requires periodic synthesis that asks not what did we learn this cycle but how has our understanding of the user changed and what in the frame needs to be updated. That's a different question than standard research synthesis and it produces different outputs.

It requires a process for retiring outdated knowledge. This is the least glamorous part and probably the most operationally important. Repositories grow in one direction. Nobody ever removes anything. The result is that a researcher trying to understand what the org knows has to archaeologically distinguish current beliefs from historical artifacts, usually under time pressure with no guidance. Treating knowledge expiration as a real attribute that requires active management is the thing that separates a maintained frame from an ever-growing pile.
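One way to make expiration manageable is to treat each belief as carrying an explicit review date, set when it enters the frame, and to periodically triage the frame against it. The sketch below is hypothetical, assuming nothing about any particular tool; the record fields and example beliefs are invented for illustration.

```python
from datetime import date

# Each record is a belief with an explicit review-by date, assigned when
# the belief is added. Nothing lives in the frame without one.
frame = [
    {"belief": "Power users rely on keyboard shortcuts", "review_by": date(2024, 6, 1)},
    {"belief": "Price is the top switching driver",      "review_by": date(2026, 1, 1)},
]

def triage(records, today):
    """Split the frame into current beliefs and ones due for revalidation or retirement."""
    current = [r for r in records if r["review_by"] > today]
    expired = [r for r in records if r["review_by"] <= today]
    return current, expired

current, expired = triage(frame, date(2025, 3, 1))
print([r["belief"] for r in expired])  # the shortcuts belief is past its review date
```

The mechanism is trivial; the discipline is not. The hard part is the organizational commitment that an expired belief gets revalidated or removed rather than silently carried forward.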

The frame can become dogma.

One risk I want to name because it's real and I didn't address it properly in the last piece. If the people who steward the frame treat ownership as a monopoly on interpretation rather than accountability for accuracy, the frame stops being a living model and becomes an orthodoxy. The thing that was supposed to reduce uncertainty starts producing it in a different form, because now people can't update the organizational belief even when the evidence has shifted.

A healthy frame is stable enough to guide work and contestable enough to revise. The governance structures around it need to include a real process for surfacing disagreements, a real standard of evidence for changing what the org believes, and explicit acknowledgment that the frame steward's job is to keep the model honest, not to defend it.

Where I'm Still Working This Out

A few open questions I don't have good answers to yet.

How do you handle the frame across a very large organization where different product areas have legitimately different user populations? A single frame is probably the wrong model at scale. Some kind of federated structure with shared governance principles seems right, but I haven't seen it done well and I'm not sure what it looks like in practice.

How do you measure frame quality in a way that's concrete enough to be useful without creating a bureaucratic assessment process that consumes more energy than the frame itself? The four properties give you a structure for conversation. I'm not sure they give you a metric.

How do you handle the transition when the frame is badly degraded and needs to be rebuilt rather than just maintained? The rebuild is a different kind of work than maintenance and it requires a different kind of organizational investment. I've seen it done once. I'm not sure how generalizable it is.

These are the things I'm still thinking about. If you've worked on any of them in a real org, I want to hear how it went.

Closing

The frame concept is my current best attempt to name something I keep observing in research organizations and haven't heard named elsewhere. It may be wrong. It may need significant refinement. The operationalization section above is provisional in a way the definition section isn't.

What I'm confident about is the underlying problem. Most UXR organizations have an archive and an implicit belief system and no governance over whether the belief system is current. That gap produces the reactive, starts-from-scratch, never-compounding quality that most research leaders recognize and most don't have a vocabulary for addressing.

The frame is the vocabulary. The operationalization is still being worked out.

If any of this resonates, or if you think I've gotten it wrong, reach out. The more specific the disagreement, the more useful the conversation.

🎯 The frame isn't a deliverable. It's the thing that makes your deliverables mean something. If you want unfiltered writing on how UXR organizations actually work and why most of them are structured to fail quietly, subscribe.