AI, Meet Barbenheimer.

You might not think “Barbenheimer” would offer much insight into the complex world of Artificial Intelligence, but you might be surprised.

Earlier this month, I was on a panel discussing the intersection of AI and User Experience Research. The discussion was mostly harmonious agreement… until the last five minutes.

It was then that I found myself locked in an intellectual pas de deux with Jess Holbrook, co-founder of People + AI research (PAIR) at Google.

During the panel, Jess Holbrook discussed his research from PAIR, highlighting approaches for improving AI transparency, fairness, and usability through explainability and human-centered design.

In the panel’s final moments, however, the mood changed when I raised the topic of AI and catastrophe.

Holbrook argued that focusing too much on catastrophic outcomes distracts from immediate issues like explainability, algorithmic bias and data privacy. But I felt strongly that catastrophe deserved to have some place in the discourse on AI and UXR — I just wasn’t sure how to start that conversation.


Enter Barbenheimer.


Greta Gerwig’s “Barbie” is more than a film; it’s a lens through which we can examine the profound influence design choices have on society. The movie compels us to recognize that bias — whether in a toy world or an AI system — is not just a social issue, but also a design challenge.

The responsibility vested in UX professionals is enormous. We’re not simply designing interfaces; we’re crafting the very interactions between technology and humanity. This is why Partnership on AI’s founding principles — such as diverse data sets and algorithmic transparency — are not optional; they are imperatives for building a just and equitable world.

Elsewhere in the discourse, a different kind of urgency is brewing. A widely cited 2022 survey conducted by AI Impacts reported that a faction of AI researchers is concerned about the catastrophic potential of AI.

I agree, for the record, with the research community’s assertion that the most common citation of this research is misleading and that the organization that ran this survey is far from the most credible source. But regardless of the validity of the study itself, its viral nature underscores a crucial point: mainstream anxiety about the potential risks of AI technology is palpable.

When I dove into the study myself, I found a different reason for concern. As you can see in the graph below, opinions among AI researchers vary widely on the likelihood of an intelligence explosion, the theoretical scenario wherein AI capabilities rapidly escalate beyond human control, potentially to catastrophic ends.

chart depicting opinions on the likely hood of there being an intellgence explosion, with each column showing wide variety of opinions

Even though we know that experts are not great at predicting the future, this underscores a collective blind spot in our understanding of what AI’s future trajectory will be.

The French philosopher Jean-Pierre Dupuy warns us about the “self-transparency” of catastrophe, a concept highlighting that while individual components of complex systems like AI may be understood by the experts that build them, the emergent properties of these systems can be unpredictable. This illusion of control leads to overconfidence, which can lead us to underestimate the risks of unforeseen outcomes.

Oppenheimer’s haunting declaration, “Now I am become Death, the destroyer of worlds,” wasn’t a lament for the risks he’d calculated and understood, but a grim revelation about a future menace he’d never foreseen: the specter of mutually assured destruction that still looms over us today.

So where does this leave us, especially those of us who work at the intersection of AI and UX design?

Barbenheimer reminds us that every design decision has an impact on our socio-cultural milieu. This is why UX researchers have the unique opportunity–arguably, the unique obligation — to make sure that all AI systems everywhere are designed to be systems of care.

The real challenge we face in doing this is not only must we ensure the thoughtful consideration of known ethical and societal dilemmas — such as issues of fairness and transparency — but to be prepared for that which we might not be able to anticipate… an effectively impossible task.

​​Being prepared for the unknown requires humility — admitting that we cannot foresee all consequences — and adaptability to change as it occurs. “At some point,” Jess says, “to build things is to accept it is impossible to know the full scope of how it will be used.”

Vigilance is also needed… which is why our practice only becomes more important in the age of AI. To paraphrase professor Joanna Bryson, AI isn’t something you should blindly trust simply because the builder says it’s trustworthy. AI is something you should continuously and systematically audit.

AI Counterprogramming

At the end of my conversation with Jess Holbrook, he said that Barbenheimer’s release is not just an internet meme but something called counterprogramming, where studios intentionally release movies on the same day intended for different audiences.

This multiplicity of perspectives isn’t just a Hollywood strategy; it also resonates in the discourse around AI. There are wildly divergent viewpoints, each aimed at different sectors of the public. But another option seems to be opening: a group that is listening to both broadcasts.

In essence, navigating this landscape is much like playing fifth-dimensional chess. Just as movie counterprogramming requires a keen understanding of multiple factors at play, the arena of AI and UX design demands moves and decisions that integrate many perspectives holistically — ethical, technological, and societal.

The road ahead in AI and UX design is both exhilarating and perilous. Whether we like it or not, we are all part of this unfolding drama. And in this fifth-dimensional chess game, every move — no matter how seemingly insignificant — matters.

Stay Tuned: In our next post, we’ll delve into the surprising revelation that GPT-4 is not just a larger model, but a “model of experts.” We’ll explore the ‘bigger is better’ debate and examine how Indi Young’s insights on mental models shed light on current AI research.

Read this post on medium.

Share this article on Linkedin.

Subscribe to our newsletter to stay updated on the latest breakthroughs, tips, and best practices in AI-powered conversation analysis, empowering you to do your best work yet.

Thank you! Your submission has been received!
Oops! Something went wrong while submitting the form.