Short Story – The Curious Case of George the UX Researcher, Part One: The Optimization

A Modern Fable of Professional Extinction
It began with a layoff, a chatbot, and a coffee cup that said 'User-Centric Since 2012'.

Author's note: I like telling stories—especially the ones that hit a little too close to home. So I’m writing a modern fable about what it feels like to be a UX researcher (or honestly, any human with a conscience) in the age of AI, optimization, and disappearing job titles. It's not a rant. It's a story. About us. I’ll do my best to serve up a new chapter every Sunday—just in time for your morning coffee and mild existential crisis. Might skip a week now and then, but George isn’t going anywhere. Thanks for reading. Let’s see where he ends up.

Prologue: On the Nature of Superpowers and Corporate Delusions

George Pemberton had always believed that asking good questions was a superpower. The company believed ChatGPT was cheaper.

This fundamental disagreement in worldview would prove to be George's professional undoing, though he wouldn't realize it until he was sitting in Conference Room B—that graveyard of corporate dreams where dying plants went to contemplate their mortality alongside broken printers and middle managers' abandoned ambitions.

It should be noted that Conference Room B had achieved a kind of legendary status within the company, not unlike the Bermuda Triangle, but with worse lighting and the lingering smell of microwaved fish lunches. Employees spoke of it in hushed tones, the way ancient mariners might discuss particularly treacherous reefs. "Did you hear? Jenkins got called to Conference Room B." "Oh no. What did he do?" "Asked for a raise, apparently." "Poor bastard."

But we're getting ahead of ourselves. Let us begin at the beginning, which is always the most sensible place to start, even when everything that follows defies all sense entirely.

On the morning of what would later be enshrined in UX researcher folklore as "The Great Purge of 2025" (historians would note the irony that no one bothered to research whether purges were effective), George arrived at his standing desk with the optimism of a man who still believed in the fundamental rationality of the universe.

This was George's first mistake of the day, though hardly his last. The universe, as any seasoned corporate employee could have told him, had abandoned rationality sometime around 1987 and had been operating on pure chaos theory ever since. It was a bit like expecting a cat to file quarterly reports—technically possible, but likely to result in disappointment and scattered paperwork.

He clutched his coffee—ethically sourced, naturally, because George was the sort of person who worried about the moral implications of his caffeine consumption—and contemplated the day's great mystery: why 73% of users abandoned their shopping carts on the checkout page.

Somewhere in the digital ether, George mused while adjusting his ergonomic monitor to the precise angle recommended by the Occupational Health and Safety guidelines, there are humans clicking "Buy Now" and then fleeing like startled deer. What stories could they tell? What fears drive them away at the moment of purchase? What dark UX patterns lurk in the shadows of our checkout flow?

It was the sort of question that made George's pulse quicken with the thrill of impending discovery. He lived for these mysteries, these glimpses into the beautiful, chaotic, utterly illogical world of human behavior.

He would never get to solve this particular puzzle.

At 9:47 AM, a Slack message materialized on his screen with the ominous inevitability of a telegram in a war movie. It read: "Please join us in Conference Room B at 10 AM. —Brenda, People Operations."

The message sat there, glowing with the soft menace of corporate bureaucracy. George stared at it with the growing unease of a man who had just realized he'd been using the wrong statistical significance threshold for six months.

Conference Room B was where dreams went to die. Everyone knew this. It was an unspoken law of office dynamics, like how the coffee in the break room would always be either scorching lava or lukewarm disappointment, never anything in between.

The room itself seemed to have been designed by someone who had read about human comfort in a technical manual but had never actually experienced it firsthand. The chairs were precisely the wrong height for the table, the temperature was regulated by what appeared to be a thermostat with commitment issues, and the windows offered a spectacular view of the parking lot's dumpster area. It was, in short, the architectural equivalent of corporate middle management—functional in theory, soul-crushing in practice.

People Operations, George noted with the detached interest of a researcher observing an unusual specimen. Not Human Resources. Not Personnel. People Operations. As if humans were machinery to be optimized, tuned, and occasionally... discontinued.

Chapter One: In Which George Encounters the Machinery of Modern Employment

At precisely 10 AM—punctuality being one of George's many professional virtues, along with asking inconvenient questions and actually reading user feedback—he found himself seated across from two specimens of corporate evolution who had clearly adapted to survive in the hostile environment of quarterly earnings calls.

Brenda from People Operations possessed the kind of smile that had been carefully calibrated in management training seminars. It was a smile that said, "I care about your wellbeing," while her eyes transmitted the subliminal message, "I am about to do something terrible and we're all going to pretend it's for your own good."

Brenda had, in fact, achieved a kind of corporate enlightenment. She had transcended the need for genuine human emotion and had reached a zen-like state of perpetual professional pleasantness. It was rumored that she practiced her facial expressions in the bathroom mirror each morning, running through a complete range of "supportive" looks like a method actor preparing for the role of Compassionate Authority Figure #3.

She consulted her tablet with the reverence of a high priestess reading ancient prophecies, though George suspected the screen displayed nothing more mystical than a standardized termination script, complete with bullet points and recommended emotional responses.

Beside her sat Marcus from Legal, a man who had clearly chosen his profession based on his natural ability to look profoundly uncomfortable in all social situations. Marcus clutched a manila folder with the desperate intensity of someone holding the last life preserver on a sinking ship. His expression suggested he would rather be literally anywhere else—a root canal appointment, a timeshare presentation, perhaps even a meeting about synergizing cross-platform stakeholder engagement strategies.

Marcus had the particular look of someone who had spent his entire career translating human emotions into legal liability assessments. He approached personal interactions the way a bomb disposal expert might approach an unexploded device—with extreme caution, protective equipment, and a strong desire to delegate the task to someone else. The manila folder in his hands contained what HR euphemistically called "transition documentation," though it was really more of a legal spell designed to prevent the company from being sued by the recently optimized.

"George," Brenda began, her voice carrying the artificial warmth of a customer service chatbot, "thank you for joining us. I hope you're having a good morning?"

Ah, thought George, the ritual pleasantries. Like offering a condemned man his choice of last meal. Technically thoughtful, practically irrelevant.

"I was," George replied, settling into his chair with the resignation of someone who had seen enough corporate theater to recognize the opening act of a tragedy. "Though I suspect that's about to change."

Brenda's smile flickered—just for a moment—like a fluorescent bulb experiencing an electrical hiccup. She recovered quickly, consulting her tablet as if it contained the wisdom of Solomon rather than a script written by someone who had clearly never met an actual human being.

The script had, in fact, been written by the company's Chief People Strategy Officer, a man named Brad who had once described empathy as "a scalable soft skill with measurable ROI potential." Brad had never actually fired anyone himself—that was what People Operations was for—but he had strong opinions about the optimal way to deliver bad news efficiently. The script reflected his philosophy that terminated employees were basically customers receiving a different type of service experience, and should be managed accordingly.

"George, we're going through a period of organizational optimization."

Optimization. The word hung in the air like incense at a funeral. George felt something cold and analytical click into place in his mind—his researcher's instinct to examine language, to decode the euphemisms that powerful people used to make terrible things sound inevitable.

"Optimization," George repeated slowly, as if tasting a new and unpleasant flavor. "That's interesting terminology. Are we optimizing for efficiency? Cost reduction? Or perhaps we're optimizing for the appearance of doing something decisive in the face of market uncertainty?"

Marcus shifted uncomfortably, his manila folder rustling like autumn leaves. Brenda's smile tightened imperceptibly.

"Due to shifting market dynamics and our renewed focus on AI-driven decision making," she continued, reading from her script with the emotional range of a particularly sophisticated text-to-speech program, "we're eliminating the User Experience Research role."

The words hit George with the surreal impact of being told that gravity had been discontinued due to budget constraints. He blinked slowly, running the sentence through his mental processor again to ensure he hadn't somehow misheard.

"Eliminating the... I'm sorry, what exactly are we eliminating?"

"The UX Research position," Brenda repeated helpfully, as if George might have temporarily forgotten his own job title. "We've determined that user feedback can be more efficiently gathered through automated analytics and machine learning algorithms."

Efficiently gathered. George felt a laugh building in his chest—not the kind of laugh that indicated amusement, but the sort that emerged when the universe revealed a joke so cosmic in its absurdity that the only appropriate response was hysteria.

"I see," George said, his voice carefully neutral. "And who, exactly, made this determination? Was there perhaps a study? Some research into the effectiveness of replacing human insight with... algorithmic efficiency?"

Marcus and Brenda exchanged a look—the kind of look typically reserved for parents whose child has just asked where babies come from during Thanksgiving dinner, or why Grandpa talks to the television.

"Well," Marcus ventured, speaking for the first time with the cautious tone of someone testing whether ice is thick enough to support their weight, "the data clearly shows that AI can process user feedback at scale much faster than traditional research methods."

The data clearly shows. George felt his left eye begin to twitch—a stress response he'd developed during his third year of trying to explain the difference between correlation and causation to executives who thought statistics were just numbers that supported whatever they wanted to believe.

It was a peculiar feature of corporate life that the phrase "the data clearly shows" was invariably deployed by people who had never actually looked at any data, much like how "studies have proven" was typically used by individuals whose most recent encounter with scientific literature had been a headline glimpsed while scrolling through social media. George had long suspected that somewhere in the company there existed a secret manual titled "Authoritative Phrases to Use When You Have No Idea What You're Talking About," and Marcus had clearly memorized it cover to cover.

"Fascinating," George said. "And this data—was it gathered through user research, by any chance? Or did the AI research itself?"

The silence that followed was the kind of silence that occurs when a philosophical paradox meets corporate logic and both simultaneously combust.

"But who's going to moderate the usability tests?" George pressed on, his researcher's instincts kicking in like muscle memory. "Who's going to conduct the user interviews? Who's going to ask follow-up questions when someone says something confusing? Who's going to notice when participants are lying, or confused, or just telling you what they think you want to hear?"

Brenda and Marcus exchanged another look—this one longer, more fraught, like a silent argument conducted entirely through eyebrow movements.

"We have ChatGPT now," Brenda said finally, as if this explained everything.

This was rather like saying "We have a calculator now" when asked who would be composing the company's poetry, or "We have a GPS now" when questioned about who would be providing marriage counseling. But Brenda delivered the line with the confidence of someone who had been assured by people in expensive suits that artificial intelligence was basically magic, and magic could solve any problem, including the inconvenient problem of having to understand what humans actually wanted.

George stared at her. "ChatGPT."

"Yes."

"ChatGPT is going to conduct user interviews."

"Well, not directly," Marcus interjected, sensing dangerous waters ahead. "But ChatGPT can analyze user feedback and generate insights much faster than—"

"Have you ever tried asking ChatGPT why someone abandoned their shopping cart?" George interrupted, his voice rising slightly above its usual measured tone. "Because I have. Want to know what it said?"

Brenda and Marcus waited with the trapped expression of people who very much did not want to know but were afraid not to ask.

"It said, and I quote: 'Users may abandon shopping carts due to various factors including price sensitivity, security concerns, or suboptimal user experience design. Consider implementing exit-intent surveys to gather actionable insights.'" George paused for effect. "Do you know what an actual user told me when I asked them the same question?"

More silence.

"She said she abandoned her cart because the checkout page reminded her of her ex-husband's dating profile, and she couldn't figure out why until she realized they were both asking for too much personal information upfront." George leaned forward. "How exactly is ChatGPT going to uncover that particular insight?"

Brenda's smile had now achieved the rigidity of a death mask. "George, I understand this is difficult, but we need to adapt to changing times. AI is the future of customer insights."

AI is the future of customer insights. George filed this phrase away in the growing mental folder he kept labeled "Things People Say When They've Never Actually Talked to a Customer."

This folder had grown considerably over the years and included such classics as "Users will adapt to our design," "We don't need to test this, it's intuitive," and the perennial favorite, "Our customers are simple—they just want it to work." George suspected that if he ever published the contents of this folder, it would either become a bestselling comedy book or serve as evidence in a class-action lawsuit brought by users against the technology industry as a whole.

"Right," George said slowly. "And when this AI-driven approach results in products that nobody wants to use, who's going to figure out why? When conversion rates plummet because nobody bothered to ask users what they actually need, who's going to investigate? When the customer satisfaction scores crater because the AI optimized for engagement metrics instead of human satisfaction, who's going to—"

"George," Brenda interrupted, her voice taking on the firm tone of someone who had clearly been trained to handle "difficult conversations" (another euphemism that George mentally catalogued), "I need you to understand that this decision has already been made. We're not here to debate the merits of the strategy."

Of course not, George thought. That would require research.

"So what exactly are we here to debate?" he asked.

"Your transition plan," Marcus said, finally finding his voice again. "We've prepared a very generous severance package, and we'll provide excellent references for your job search."

Job search. The phrase landed with the weight of reality. George suddenly became acutely aware that he was sitting in Conference Room B, surrounded by dying plants and broken printers, being told that his professional identity was being eliminated in favor of a computer program that thought user research was a matter of processing text and generating statistically probable responses.

"How generous?" George asked, because even in existential crisis, practical concerns had a way of asserting themselves.

"Three months' salary, plus extended benefits," Brenda said, brightening slightly as they moved into territory where she felt more confident. "And of course, we'll provide a reference letter emphasizing your... traditional research skills."

Traditional research skills. As if asking humans about their experiences was some quaint folk practice, like churning butter or navigating by the stars.

"And when do I need to... transition?" George asked.

"Today, actually," Marcus said apologetically. "Security will escort you out after we finish here. Standard procedure."

Standard procedure. George wondered if there was a manual somewhere titled "How to Eliminate Human Insight: A Step-by-Step Guide to Corporate Optimization."

There was, in fact, such a manual, though it was called "Workforce Modernization Through Strategic Resource Reallocation" and had been written by a consulting firm whose primary expertise lay in making terrible decisions sound like inevitable business evolution. The manual had chapters with titles like "Maximizing Efficiency Through Human Capital Optimization" and "AI Integration as a Pathway to Sustainable Growth," which were fancy ways of saying "fire people and replace them with computers." The manual had cost the company $2.3 million and had been used to justify eliminating roughly $47 million worth of institutional knowledge, which was considered an excellent return on investment by people who measured success in spreadsheet cells rather than actual outcomes.

He looked around Conference Room B—at the wilting ficus that nobody had watered in months, at the printer that displayed a permanent error message like a monument to technological dysfunction, at Brenda and Marcus who were clearly eager to conclude this interaction and return to their respective kingdoms of People Operations and Legal Liability Management.

"You know what the funny thing is?" George said, standing up slowly. "In six months, when your conversion rates are in the toilet and nobody can figure out why, you're going to hire a consultant to ask users what went wrong. It'll cost you three times my annual salary, and they'll tell you exactly what I could have told you today."

Brenda's smile had now achieved the brittleness of ice in spring. "George, I'm sure that won't be—"

"But here's the thing," George continued, warming to his theme with the passion of a man who had nothing left to lose. "The consultant won't know your users like I do. They won't understand the quirks of your customer base, or the edge cases that matter, or the reasons behind the reasons that people do what they do. They'll give you a report full of bullet points and recommendations, and you'll implement them blindly because that's what consultants are for, right? Absolution through expensive expertise."

He moved toward the door, then paused and turned back.

"And when that doesn't work either, you'll blame the market, or the competition, or economic uncertainty. Anything except the possibility that maybe—just maybe—eliminating the people who actually understood your users wasn't the most strategically sound decision you've ever made."

Marcus cleared his throat. "George, I understand you're upset, but—"

"I'm not upset," George said, and realized with surprise that it was true. "I'm fascinated. This is the most interesting case study in organizational decision-making I've ever witnessed. You're optimizing away the very capability that would help you understand whether your optimization was successful."

He paused at the door, struck by a final thought.

"You should really research the effectiveness of this approach," he said. "Oh wait. You just fired the person who would do that research."

And with that, George Pemberton, formerly Senior UX Researcher, currently unemployed Professional Question Asker, walked out of Conference Room B and into the uncertain landscape of a world that had apparently decided that artificial intelligence was an adequate substitute for human curiosity.

Behind him, Brenda and Marcus sat in the silence of two people who were beginning to suspect they might have made a terrible mistake, but were too committed to the script to acknowledge it.

It was the kind of silence that settles over a crime scene, except instead of investigating a murder, they were contemplating the systematic elimination of institutional knowledge. Marcus was already mentally drafting the memo he would need to send to Legal about "potential knowledge gaps in user experience decision-making," while Brenda was wondering if there was a way to add "Change Management Excellence" to her LinkedIn profile without technically lying.

The wilting ficus in the corner seemed to nod in agreement.

The ficus, it should be noted, had been a witness to seventeen different layoffs, twelve reorganizations, and one particularly memorable incident involving the former CFO and a stress ball. If plants could file incident reports, this particular ficus would have produced documentation that could have prevented half the company's strategic blunders. But plants, unlike AI, were not considered a reliable source of business intelligence, despite being considerably more perceptive than most of the C-suite.

To be continued in Part Two: "The Five Stages of Professional Grief (With Footnotes)"

If you enjoyed watching George’s professional dignity get quietly sunsetted, stick around.

👉 Subscribe for more corporate tragedies, UX research horror stories, and the occasional moment of existential clarity (or absurdity—it’s hard to tell these days).