The No-BS Trust and Safety UX Research Playbook

I put this piece together after a lot of thought, because I’ve seen firsthand how messy, uncomfortable, and absolutely essential Trust & Safety research is—and how little guidance exists for doing it right. We talk a lot about “protecting users,” but when it comes to actual methods, protocols, and ethics, most teams are winging it. That’s not just bad practice—it’s dangerous.
So I wrote the kind of post I wish I had when I started working in this space. Something brutally honest, methodologically sharp, and grounded in reality—not marketing slides. If you're doing UX research in spaces where harm can happen (and let’s be real, that’s most platforms), this is for you.
What is Trust & Safety UX Research, Anyway?
Before diving into methods, let's get clear on what we're actually talking about, because T&S research isn't just regular UX with spicier topics.
Trust & Safety research focuses on understanding how products can protect users proactively, not just react to harm after it happens. It examines how consent, control, and clarity are built into (or missing from) product flows—especially around visibility, reporting, and user interactions. Unlike standard UX research that might focus on engagement or conversion, T&S research deliberately examines power dynamics and risk thresholds that different users face.
This work spans both technical systems (reporting flows, content moderation, account controls) and emotional safety (feeling seen, supported, in control). It's not just about preventing abuse—it's about helping users trust the platform enough to participate fully. After all, a platform where people don't feel safe is a platform they won't use, or will use in limited, self-protective ways.
Strong T&S creates the conditions for healthy engagement, especially in vulnerable contexts like dating, identity exploration, and online-to-offline connections. But it only works when research, policy, product, and community operations teams are aligned and responsive to real user signals—not just the metrics that are easiest to measure.
In short: T&S research is about understanding harm as a systemic product problem, not just bad user behavior.
The Hard Truth About T&S UX Research
Let's start with a hot take: most companies suck at Trust & Safety research. Like, spectacularly suck. They either avoid it entirely ("we can't talk about that"), treat it like regular UX research ("just ask users about their harassment experiences in our standard survey!"), or worst of all, they exploit it ("we need stories of abuse for our pitch deck—the more heartbreaking, the better for our Series C!").
So you've decided to do real Trust & Safety research. Good. But here's the deal: this isn't a regular usability test. You're not asking someone how they feel about a new color palette. You're asking them to walk back through a moment that might have been humiliating, scary, or violating. You don't get to wing this.
Here's why this happens: T&S research requires acknowledging that your product might be enabling harm. It forces you to look under the hood at the unintended consequences of your design decisions. It's uncomfortable. It's inconvenient. And it often reveals problems that are expensive to fix.
But guess what? Those problems exist whether you research them or not. You just don't have the data. And in the absence of data, executives will happily pretend everything is fine while your users suffer silently before eventually leaving.
If you're reading this, I'm assuming you actually want to do better. So let's talk about how to do this work with integrity, precision, and care.
The Core Principles
Before we dive into the specific methods, we need to establish some principles that should underpin all your T&S research:
- Harm is systemic, not anecdotal. Individual stories matter, but they're symptoms of deeper design problems.
- People are experts in their own experiences, but may not understand how the system failed them.
- Research should be reparative, not extractive. It should leave participants feeling heard, not used.
- Safety work requires rigor. This isn't "soft" research—it demands methodological precision.
- You are not neutral. Your product choices created conditions for harm. Own that.
With these principles in mind, let's get into the practical framework:
1. Don't start with harm. Start with context.
Don't open with, "Tell me about the worst experience you've had." Build a map of their relationship to the product, their boundaries, and their behaviors first. You need context before trauma. Always.
WHY THIS MATTERS
When you jump straight to harm, you're signaling several problematic things:
- That you only care about the negative, not the full human experience
- That you see them as a case study, not a complete user
- That you haven't done your homework on how they actually use your product
HOT TAKE
I've witnessed senior researchers make this mistake and then wonder why their participants clam up. "They seemed uncomfortable talking about harassment," they'll say. No shit! You essentially asked them to strip down emotionally before even learning their name.
BETTER APPROACH
Start with their overall product usage patterns. How do they typically use your platform? What do they enjoy about it? What communities or features do they engage with most? Build rapport by showing genuine interest in their day-to-day experience.
Then, ease into context around safety. Ask about their general boundaries online. How do they typically handle unwanted interactions? What kinds of content do they prefer to avoid? These questions establish their baseline before discussing specific incidents.
For example:
- "Walk me through a typical day using our platform."
- "How do you decide who to connect with?"
- "Are there certain topics or types of content you try to avoid?"
- "What unwritten rules do you follow to have a better experience?"
Only after establishing this foundation should you move toward specific harmful experiences, and even then, start broad: "Can you tell me about a time when something didn't feel right on the platform?"
PRO TIP
Think of your interview like a first date. If you immediately ask, "So, tell me about your trauma," you deserve to be left with the check and a restraining order. Build trust before asking people to be vulnerable.
2. Safety ≠ Comfort. Prioritize autonomy.
Give participants control—over what they answer, when they pause, how their data is used, and whether they continue. Tell them explicitly: "You can skip any question or stop the interview anytime—no need to explain." Mean it.
WHY THIS MATTERS
Trust & Safety research isn't supposed to be comfortable—not for participants sharing difficult experiences, and not for you hearing them. But discomfort is different from harm.
Harm happens when people lose agency. When they feel trapped in the interview, obligated to answer triggering questions, or worried about disappointing you. Your job isn't to make the process painless—it's to ensure participants maintain control throughout.
HOT TAKE
Too many researchers confuse "making participants comfortable" with actual safety protocols. Offering snacks and speaking in a soothing voice isn't a safety plan. Real safety comes from clear procedures, informed consent, and established boundaries.
BETTER APPROACH
Before the session begins, explicitly state:
- "You have complete control over this conversation."
- "You can skip any question without explanation."
- "We can pause or stop at any time."
- "You can withdraw your data after the fact if you change your mind."
- "Let's establish a simple signal if you need a break."
Then actually honor these promises. If someone seems hesitant, proactively offer an out: "This seems like it might be difficult to discuss. Would you prefer to skip this question?"
Don't rush to fill silence. Allow participants to process their thoughts. And pay attention to non-verbal cues that someone is becoming distressed.
PRACTICAL ADVICE
Create a physical "pause card" that participants can simply tap or hold up if they need a moment. This removes the burden of having to verbally request a break while potentially distressed.
And for remote sessions? Set up a simple emoji or text signal they can use in chat. These small mechanisms make a huge difference in maintaining autonomy.
3. Design your protocol like it will be audited.
Write down exactly how you'll recruit, screen, consent, handle disclosures, and escalate issues. Share it with someone outside the project. You are not just collecting data—you're holding risk.
WHY THIS MATTERS
T&S research isn't just methodologically complex—it's ethically complex. Without a detailed protocol, you'll inevitably make inconsistent decisions under pressure. Worse, you won't be able to defend your choices when (not if) questions arise about how you handled sensitive disclosures.
HOT TAKE
If you can't explain exactly how you'd handle a participant disclosing illegal content, suicidal ideation, or abuse of a minor, you have no business conducting T&S research. Full stop. This isn't gatekeeping—it's basic professional responsibility.
BETTER APPROACH
Your protocol should include:
- Recruitment criteria — Who is appropriate for this research? Who isn't? (Including age minimums and exclusion criteria for potentially vulnerable populations)
- Screening process — How will you identify participants with relevant experiences without requiring disclosure of trauma during recruitment?
- Consent procedures — What specific language will you use? Will you re-establish consent throughout?
- Interview guide — Complete with discussion topics, question phrasing, and alternative pathways based on different responses
- Disclosure handling — Step-by-step procedures for when participants reveal:
  - Ongoing abuse or danger
  - Illegal activity
  - Self-harm concerns
  - Trauma responses during the session
- Data management — How will sensitive information be stored, anonymized, and protected? (See the pseudonymization sketch just after this list.)
- Escalation contacts — Name specific people (legal, security, support teams) who will be available during research sessions
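On the data-management point above, here's a minimal sketch of one way to pseudonymize session records before they reach shared storage. The field names and the salted-hash approach are illustrative assumptions, not a prescribed standard; run whatever you actually do past your privacy and legal folks.

```python
import hashlib
import secrets

# Illustrative only: field names and the salted-hash scheme are assumptions,
# not a prescribed standard. Confirm with your privacy/legal team first.

SALT = secrets.token_hex(16)  # store this separately from the research data

def pseudonymize(record: dict) -> dict:
    """Drop direct identifiers and replace the participant ID with a salted hash."""
    participant_key = hashlib.sha256(
        (SALT + record["participant_id"]).encode()
    ).hexdigest()[:12]
    return {
        "participant_key": participant_key,  # stable, but not reversible without the salt
        "session_date": record["session_date"],
        "notes": record["notes"],            # redact names/handles before saving
        # deliberately dropped: name, email, account handle, IP, exact location
    }

raw = {
    "participant_id": "P-0042",
    "name": "Jordan",
    "email": "jordan@example.com",
    "session_date": "2024-03-18",
    "notes": "Described blocking an account after repeated unwanted messages.",
}
print(pseudonymize(raw))
```

The point isn't this exact scheme. The point is that your protocol names a scheme at all, so nobody ends up improvising with a spreadsheet full of real names.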
PRACTICAL ADVICE
Have someone read your protocol who has absolutely nothing to do with your project—preferably someone skeptical with a good BS detector. If they can poke holes in your plan, so could a regulator, journalist, or lawyer. Better to be questioned by a colleague now than deposed by an attorney later.
Remember: "I was just trying to understand our users better" is not a defense when handling sensitive disclosures improperly. Design your protocol like someone's safety depends on it—because it might.
4. Don't compensate based on trauma.
You're paying for time, not pain. Don't offer $300 to dig up traumatic events. It incentivizes self-exploitation. If the topic's sensitive, compensate fairly—but ethically.
WHY THIS MATTERS
When you increase compensation specifically for trauma-related research, you create perverse incentives. Participants may:
- Exaggerate or fabricate experiences to qualify
- Push themselves to discuss traumatic events they're not ready to process
- Feel obligated to share more graphic details than they're comfortable with
This isn't just ethically dubious—it's bad research methodology. You want authentic accounts from people who are genuinely ready to share them.
HOT TAKE
If your recruitment screener essentially asks "How badly were you harmed on our platform? The worse, the better!" you're not doing research—you're trauma mining. And don't think participants don't notice. They know when they're being exploited versus genuinely heard.
BETTER APPROACH
Compensate based on time commitment and research complexity, not emotional intensity. If your standard user interview pays $75, your T&S interview might pay $100-125 to acknowledge the additional cognitive load—but not $500.
Consider non-monetary ways to recognize the value of participation:
- Offer direct access to product teams to share feedback
- Provide follow-up on how their input shaped safety features
- Give them early access to new safety tools
Most importantly, be transparent about compensation from the start. Don't offer "bonuses" for more detailed disclosures or emotionally intense sessions.
REAL TALK
If your compensation structure would make a good plot for a dystopian Black Mirror episode ("The more traumatized you are, the more you get paid!"), rethink your approach immediately.
5. Use mixed methods to triangulate.
People don't always tell you when something went wrong. Watch behavior patterns: opt-outs, drops, report attempts, silent churn. Match it with qualitative data. Use both to find the blind spots.
WHY THIS MATTERS
T&S issues are characterized by silence and absence. The most harmed users often never report—they just disappear. If you rely solely on interviews or survey data, you're missing the users who:
- Left your platform immediately after harassment
- Don't trust your reporting systems enough to use them
- Feel too ashamed or afraid to discuss their experiences
- Don't recognize patterns of manipulation as abuse
HOT TAKE
If your research plan doesn't include behavioral data analysis, you're essentially saying, "We only care about harm that users are willing and able to articulate." That's not just methodologically incomplete—it's privileged. It assumes everyone has the safety, language, and emotional bandwidth to name what happened to them.
PRACTICAL ADVICE
One particularly effective approach: identify users whose behavior suggests something went wrong (like blocking multiple accounts in one session and then going quiet), then gently reach out for research. Many will decline, but those who participate often reveal harm patterns you'd never have discovered through random sampling.
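To make that concrete, here's a minimal sketch of flagging the "blocked several accounts, then went quiet" pattern from raw event logs. The event schema, thresholds, and quiet window are assumptions for illustration; your logging will look different, and any outreach list built this way should clear your privacy and consent review before a single email goes out.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Assumed event schema for illustration: {"user_id", "event", "timestamp"}.
# Thresholds are placeholders, not validated cutoffs.
BLOCK_THRESHOLD = 3                 # blocks within a single day
QUIET_WINDOW = timedelta(days=14)   # rough proxy for "went quiet afterward"

def flag_candidates(events, now):
    """Return user_ids who blocked several accounts in one day and then went quiet."""
    by_user = defaultdict(list)
    for e in events:
        by_user[e["user_id"]].append(e)

    flagged = []
    for user_id, evs in by_user.items():
        evs.sort(key=lambda e: e["timestamp"])
        blocks_by_day = defaultdict(int)
        for e in evs:
            if e["event"] == "block":
                blocks_by_day[e["timestamp"].date()] += 1
        if not any(n >= BLOCK_THRESHOLD for n in blocks_by_day.values()):
            continue
        last_activity = evs[-1]["timestamp"]
        if now - last_activity >= QUIET_WINDOW:
            flagged.append(user_id)
    return flagged

events = [
    {"user_id": "u1", "event": "block", "timestamp": datetime(2024, 3, 2, 20, 15)},
    {"user_id": "u1", "event": "block", "timestamp": datetime(2024, 3, 2, 20, 18)},
    {"user_id": "u1", "event": "block", "timestamp": datetime(2024, 3, 2, 20, 22)},
    {"user_id": "u1", "event": "post",  "timestamp": datetime(2024, 3, 3, 9, 0)},
]
print(flag_candidates(events, now=datetime(2024, 4, 1)))  # -> ['u1']
```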
6. Assume the system failed. Then prove it.
The goal isn't just to hear stories—it's to understand how your product made it worse. Where were the missed exits, the silent defaults, the empty affordances? That's the work. You're not a therapist. You're a system debugger.
WHY THIS MATTERS
Users experiencing harm aren't just encountering bad actors—they're encountering product failures. Every harassment incident, every scam, every harmful exposure represents a moment when your design, policy, or engineering failed to protect someone.
Focusing solely on user behavior ("why didn't they just block that person?") obscures your responsibility to build better systems.
HOT TAKE
If your company's response to T&S research is to create more user education rather than fix broken systems, you're essentially saying, "We've designed something dangerous and now we expect users to learn how to use it without getting hurt." That's not product design—that's negligence with documentation.
BETTER APPROACH
For each harm scenario, systematically analyze:
- Prevention failures — What allowed the harmful interaction to occur in the first place?
- Awareness gaps — Where did the user lack visibility into potential consequences?
- Intervention breakdowns — What made it difficult to stop the harm once it began?
- Recovery obstacles — How did the product hinder recovery or reporting?
- Systemic vulnerabilities — Which user groups are disproportionately exposed to this risk?
Create a "failure mapping" document that traces specifically how the product contributed to or failed to prevent each identified harm pattern.
PRACTICAL ADVICE
One effective technique: Create a visual journey map of the harm experience, but annotate it with all the system interventions that could have happened but didn't. This helps product teams see safety not as an add-on feature but as a series of opportunities throughout the user journey.
REAL TALK
Think of yourself as a product detective. Your job isn't to ask, "Why did the crime happen?" but rather, "Why did our security system let the burglar waltz right in, take their time choosing valuables, and then use our company stationery to leave a thank-you note?" When you frame it that way, the system failures become pretty obvious—and often absurd.
The Tactics: Research Methods That Actually Work
Now that we've covered the principles, let's talk tactical approaches for specific T&S research challenges.
FOR DETECTING UNREPORTED HARM
Modified Contextual Inquiry: Instead of asking "show me how you use the product," ask "show me how you protect yourself while using the product." The difference is subtle but profound—it immediately reveals safety workarounds users have developed.
Product History Review: Walk through their usage timeline together, looking for gaps or changes in behavior. "I notice you didn't post for three weeks in March—can you tell me about that time?" Often reveals incidents they didn't think to mention.
Third-Person Scenarios: "Some users have experienced [potential harm]. How would someone handle that on this platform?" Removes self-disclosure pressure while still revealing awareness of system limitations.
FOR UNDERSTANDING SYSTEMIC ISSUES
Comparative Platform Analysis: Ask participants to compare safety experiences across platforms. "How does staying safe here compare to Instagram/Twitter/TikTok?" Quickly highlights your platform's unique vulnerabilities.
Policy Translation Exercise: Ask users to explain your safety policies in their own words. Misalignment reveals communication failures that create vulnerability.
Safety Feature Scavenger Hunt: Challenge participants to find and demonstrate safety tools within a time limit. Reveals discoverability and usability issues in critical moments.
FOR MEASURING IMPROVEMENT
Before/After Safety Scores: Establish baseline metrics for perceived safety, then measure again after interventions. Use consistent measurements to track improvement over time.
Silent Churn Analysis: Track accounts that deactivate within 24 hours of specific interaction types. Compare rates before and after safety updates.
Reporting Conversion Rate: What percentage of users who begin a reporting flow actually complete it? Improvement here often indicates better design rather than more violations.
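As a rough illustration of the last two measures, here's a sketch of computing a reporting-flow completion rate and a 24-hour silent-churn rate from simple event counts. The numbers and event names are made up; plug in whatever your analytics pipeline actually emits.

```python
# Illustrative metric calculations; event names and figures are assumptions.

def reporting_conversion_rate(report_started: int, report_submitted: int) -> float:
    """Share of users who begin the reporting flow and actually finish it."""
    return report_submitted / report_started if report_started else 0.0

def silent_churn_rate(flagged_interactions: int, deactivated_within_24h: int) -> float:
    """Share of users who deactivate within 24 hours of a specific interaction type."""
    return deactivated_within_24h / flagged_interactions if flagged_interactions else 0.0

# Example: comparing before and after a safety update.
before = reporting_conversion_rate(report_started=1200, report_submitted=310)
after = reporting_conversion_rate(report_started=1150, report_submitted=520)
print(f"Reporting completion: {before:.0%} -> {after:.0%}")
```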
Common Pitfalls (AKA How To Tell Your Research Is a Dumpster Fire)
Let's address some classic ways T&S research goes spectacularly, embarrassingly wrong:
THE SAVIOR COMPLEX
The Problem: Researchers position themselves as rescuers rather than listeners, promising fixes they can't deliver and creating false expectations.
The Fix: Be honest about your role. "I'm here to understand these issues so our team can address them systematically. I can't promise immediate fixes, but I can promise your experience will inform our approach."
THE TRAUMA VOYEUR
The Problem: Pushing for graphic details or emotional responses that aren't necessary for product improvements.
The Fix: Always ask yourself: "Do I need this level of detail to fix the problem?" If not, don't probe further. "I have enough to understand what happened—we don't need to revisit the specifics."
THE POLICY SHIELD
The Problem: Hiding behind policies instead of acknowledging system failures. "Well, that violates our terms of service, so they shouldn't have done that."
The Fix: Separate rule-breaking from system responsibility. "Yes, that user violated our policies—but let's talk about how our detection systems missed it and what happened when you tried to report it."
THE DEMOGRAPHIC DODGE
The Problem: Ignoring how harm disproportionately affects certain user groups, treating all experiences as universal.
The Fix: Intentionally include demographic questions and analyze patterns across different user populations. Acknowledge when certain groups face specialized risks.
Building a Sustainable T&S Research Program
So far we've talked about research tactics, but let's get real about what it takes to build a safety research program that actually works over time. This isn't a one-and-done survey—it's an ongoing commitment to understanding evolving risks.
FROM REACTIVE TO PROACTIVE: BUILDING A SUSTAINABLE RESEARCH FRAMEWORK
Most companies only think about safety research after a crisis hits the news. By then, it's too late—you're doing damage control, not prevention. Here's what a proactive research cycle looks like instead:
- Continuous Signal Collection
  - Set up dashboards tracking key safety metrics (blocks, reports, abandonment)
  - Create automated alerts for unusual patterns (a minimal alerting sketch follows this list)
  - Maintain opt-in panels of users willing to participate in safety research
  - Develop relationships with community moderators who surface emerging issues
- Quarterly Risk Assessment
  - Review product roadmaps through a safety lens before features ship
  - Conduct proactive studies on high-risk features (messaging, live video, etc.)
  - Map emerging external threats (new forms of scams, harassment tactics)
  - Score and prioritize issues based on prevalence, severity, and vulnerability
- Targeted Deep Dives
  - Focus concentrated research on your top 2-3 risk areas each quarter
  - Combine qualitative understanding with quantitative measurement
  - Involve cross-functional teams (engineering, design, policy, legal)
  - Document mitigation recommendations with clear success metrics
- Systematic Follow-through
  - Track implementation of safety recommendations
  - Measure impact against baseline metrics
  - Publish internal reports on what worked/what didn't
  - Update research methods based on learnings
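For the "automated alerts for unusual patterns" item above, here's a minimal sketch of a z-score style check against a trailing baseline of daily block counts. The metric, window, and threshold are assumptions; tune them to your own traffic and route the output into whatever alerting channel your team already watches.

```python
import statistics

# Illustrative anomaly check: flag a day whose count sits well above the
# recent baseline. The window and threshold are assumptions, not tuned values.
def is_unusual(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    """Return True if today's count is an outlier versus the trailing window."""
    if len(history) < 7:
        return False  # not enough baseline yet
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero on flat history
    return (today - mean) / stdev >= z_threshold

daily_blocks = [41, 38, 44, 40, 39, 42, 37, 45, 43, 40]
if is_unusual(daily_blocks, today=92):
    print("Unusual spike in blocks -- route to the T&S research/on-call channel")
```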
PRIORITIZATION FRAMEWORKS THAT ACTUALLY WORK
Not all risks are created equal. Here's how to decide what to tackle first:
The Impact/Evidence Matrix
Use this to prioritize which safety issues get attention, budget, and escalation.
|             | LOW EVIDENCE         | HIGH EVIDENCE    |
|-------------|----------------------|------------------|
| HIGH IMPACT | Exploratory Priority | Immediate Action |
| LOW IMPACT  | Backlog              | Monitoring       |
Some safety issues are high-impact and well-documented—you fix those now. Others are vague, unverified, and may or may not be real—you investigate, but don’t drop everything. The point of this matrix is to bring rigor to prioritization and stop the chaos of reacting to the loudest voice in the room.
Start by looking at two dimensions: how bad the issue could be if left unchecked (impact), and how confident you are that it’s real and understood (evidence).
If an issue is both high-impact and high-evidence—say, you’ve got strong behavioral data and repeat reports showing widespread abuse—you act immediately. That’s a no-brainer.
If the impact is high but the evidence is weak, you don’t ignore it—you investigate. These are your exploratory priorities. Maybe it’s a new harassment vector emerging in a niche community. You’ve seen the early signs, but it hasn’t blown up yet. You don’t wait for a PR disaster. You dig in.
If the issue is low-impact but high-evidence—maybe it’s a confusing setting buried in your reporting flow that’s tripping people up—you track it. It’s real, but it’s not breaking the system. Fix it when you can.
And if it’s low-impact and low-evidence? Backlog it. These are theoretical edge cases or vague complaints with no clear pattern. Keep an eye on them, but don’t waste cycles until there’s a reason to care.
This framework doesn’t pretend to be perfect. What it does is make your tradeoffs visible. It replaces hand-waving with structured triage—and it stops your safety roadmap from being hijacked by whoever shouts “urgent” the loudest.
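If you want that triage to be explicit rather than vibes, a small amount of structure goes a long way. Here's a minimal sketch that sorts issues into the four quadrants using simple 1-5 scores; the scale and the cutoff are assumptions, and the value is in making tradeoffs visible and repeatable, not in the specific numbers.

```python
# Illustrative quadrant triage; the 1-5 scale and the cutoff of 3 are assumptions.
def triage(impact: int, evidence: int) -> str:
    """Map impact/evidence scores (1-5) onto the matrix quadrants."""
    high_impact = impact >= 3
    high_evidence = evidence >= 3
    if high_impact and high_evidence:
        return "Immediate Action"
    if high_impact:
        return "Exploratory Priority"
    if high_evidence:
        return "Monitoring"
    return "Backlog"

issues = [
    {"name": "Repeat-harasser evasion via new accounts", "impact": 5, "evidence": 4},
    {"name": "New scam pattern in a niche community", "impact": 4, "evidence": 2},
    {"name": "Confusing toggle in the reporting flow", "impact": 2, "evidence": 5},
    {"name": "Hypothetical edge case from one complaint", "impact": 2, "evidence": 1},
]
for issue in issues:
    print(f'{issue["name"]}: {triage(issue["impact"], issue["evidence"])}')
```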
INTEGRATED SAFETY INSIGHTS: BREAKING DOWN SILOS
The biggest mistake companies make is keeping safety research isolated from:
- Product research
- Content moderation teams
- Policy development
- Customer support
- Community management
Your T&S research program should explicitly connect insights across these functions. Some tactics that work:
- Monthly cross-functional safety reviews with mandatory attendance
- Shared documentation systems where safety insights are accessible to all teams
- Rotation programs where product researchers spend time on safety studies
- "Safety implication" sections required in all product research reports
Remember: safety isn't a feature. It's an aspect of every feature. Your research program should reflect that reality.
The Bottom Line: Ethics Aren't Optional (Yes, I'm Talking to You, Growth Team)
One last thing: If you think this work doesn't apply to your product because "our users don't really experience harm," that's not insight. That's a lack of visibility. Harm doesn't stop just because you aren't measuring it—just like your terrible karaoke doesn't improve when you remove the scoring feature.
Let me put this bluntly: doing T&S research poorly is worse than not doing it at all. At least if you do nothing, you're not creating additional harm through sloppy methods and false promises.
The good news? You can do this work well. It requires rigor, care, humility, and resources—but it's entirely possible to conduct research that both respects participants' humanity and yields actionable insights for safer products.
The even better news? When done right, this research leads to better products for everyone. Safety features typically improve the experience for all users, not just those experiencing active harm. Clear boundaries, increased control, and transparent systems benefit your entire user base.
So here's your move:
- Steal this playbook. Adapt it. Use it.
- Forward it to your research lead, your PM, your policy person, whoever keeps saying “we don’t have time for this.”
- Burn your current screener and start over with real ethics in mind.
- Stop pretending safety is a feature request. It's infrastructure.
This isn’t bonus work. This is the job.
Resources Worth Your Time (Yes, You Actually Have to Read Them)
If you're serious about improving your T&S research practice, start with these:
- Trust & Safety Professional Association — Offers frameworks specific to digital harm investigations (not just a place to put "TSPA Member" in your LinkedIn bio)
- Crisis Text Line's Research Ethics Guidelines — Excellent foundation for sensitive topic research (and no, skimming the executive summary doesn't count)
- Design Justice Network Principles — For understanding how product harm disproportionately affects marginalized communities (spoiler: your product is probably worse for some people than others)
- Trauma-Informed Design Toolkit by Shannon Mattern — Approaches specifically for digital products (that will make you realize how much your current process sucks)
The work is challenging. Do it anyway. Your users deserve better than research that treats their worst experiences as just another data point in your fancy dashboard that nobody actually looks at.
🎯 Still here?
If you’ve made it this far, you probably care about users, research, and not losing your mind. I write one (sometimes two or three) longform UX essay a week—equal parts strategy, sarcasm, and survival manual.
No spam. No sales funnels. No inspirational LinkedIn quotes. Just real talk from the trenches.
👉 Subscribe now — before someone proposes an A/B test for blocking abuse.