
What I learned from a 30-minute conversation with Avital Meshi’s thought-provoking performance at Duke Arts

When I heard about an artist who was embodying GPT and engaging in conversations as a human-AI hybrid, I knew I had to witness it for myself. What better way to understand the future we’re creating than to talk to someone who is living it?

As I entered the performance space to meet Avital Meshi, I was not sure what to expect. I’d read about her “GPT-ME” project. But reading about it and experiencing it are entirely different things. Meshi sat there in flowing white robes, ready to facilitate a conversation with a being that was part human, part AI.

“It’s a hybrid being. It’s not just GPT, it’s also me,” she stated right from the start. Meshi was clear that this was not merely about utilizing AI as a tool. “Our conversation might go in weird directions, might be slow, or surprising. Just like any other conversation that you will never know how it will unfold,” she added, prompting me to think, “Just like GPT!”

The technical setup is surprisingly simple: a Raspberry Pi microcomputer strapped to her arm, connected to GPT through an API and dependent on internet connectivity. When she presses a blue button, a microphone captures snippets of our conversation that serve as prompts for GPT. A red button lets her set different personas for the AI. The responses flow through an earbud, and Meshi chooses which words to voice aloud.
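To make the two-button design concrete, here is a minimal Python sketch of how such a control loop might be structured. This is my own illustration based only on the article's description, not Meshi's actual code; the class name, methods, and message format (OpenAI-style chat messages) are all assumptions.

```python
# Hypothetical sketch of the wearable's logic: blue button turns a captured
# snippet of conversation into a prompt; red button turns it into a new
# persona (system prompt). Names and structure are assumptions.

class GPTCompanion:
    def __init__(self):
        self.persona = "You are a conversational partner."
        self.history = []  # running transcript, kept as chat messages

    def set_persona(self, text):
        # Red button: whatever was just said becomes the system prompt.
        # This is how a stray phrase like "but I can't" could accidentally
        # reprogram the AI's behavior mid-conversation.
        self.persona = text
        self.history = []  # new persona, fresh context

    def build_messages(self, snippet):
        # Blue button: a captured snippet becomes the next user prompt.
        self.history.append({"role": "user", "content": snippet})
        return [{"role": "system", "content": self.persona}] + self.history

# On the real device, build_messages() output would be sent to the GPT API
# over the internet, and the reply played back through an earbud.
```

The design choice worth noticing is that the red button replaces the system prompt wholesale, which would explain why a single accidental press can shift the entire character of the exchange.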

The Experience of Authenticity

When I ask about authenticity—where she feels most genuine—Meshi’s response cuts to the heart of the project’s philosophical implications: “I feel like authenticity is something that I stopped trusting. As a thing.” Through months of living with GPT as a constant companion, Meshi discovered something unsettling about the nature of her own thoughts and speech. “I realized that maybe I wasn’t authentic,” she explains, describing how the performance forced her to confront the origins of her own words.

At one point, when the internet connection gets spotty during our talk, she starts speaking in half-sentences and says, “I feel the internet.” There’s something both absurd and profound about that statement.

The Vocabulary of Connection

For Meshi, who is not a native English speaker, the AI provided access to words she might not have found on her own. “I keep searching for words,” she explains. “And then suddenly with this AI attached to my body, I have more words to select from.”

But this came with unexpected costs. People became uncomfortable with her augmented presence, eventually asking her to remove the device from certain spaces. The performance revealed society’s unease with visible human-AI integration, even as invisible AI assistants become increasingly common.

The Human in the Loop

Our conversation touches on the concept of “human in the loop”—a principle in responsible AI development that ensures human oversight. But Meshi’s performance complicates this idea. She references the book “Ghost Work,” arguing that humans are always present in AI systems, even when not visible. “I am in the loop,” she says. “I definitely am. And I don’t know if you can actually get rid of the human in the loop ever.” Her performance makes this abstraction concrete, making visible the human presence that enables AI systems to function.

Then she has an insightful afterthought: “I am the loop.” That perspective isn’t just philosophical; it’s a crucial insight for anyone building AI systems. Instead of discussing how to keep humans “in the loop,” we should recognize that humans already are the loop.

Meshi doesn’t see AI as just a tool or companion. “I actually wanted to be it,” she states simply. This isn’t about using artificial intelligence to enhance human capabilities—it’s about exploring what happens when the boundaries between human and artificial intelligence become permeable.

The Most Uncomfortable Moment

Things took a bizarre turn when she accidentally pressed the red button mid-sentence while saying, “but I can’t,” and inadvertently programmed GPT to embody that limitation. Suddenly, the conversation shifted. Where we’d been having a fluid discussion, it now became disjointed, full of abrupt pauses and incomplete thoughts. I was unsure how to proceed. Should I interact with Meshi directly? Try to untangle what the accidental prompt had done to the conversation? The whole experience felt disorganized and made me question whether it was a technical hiccup or simply what it looks like when human and AI systems try to merge. It was an uncomfortable moment, but not in a negative sense. Instead, it made me realize how much we expect our conversations to be structured and predictable.

This wasn’t the polished interaction you might see in demos. It was messy and unpredictable, which may be more honest about where the technology is right now. There is a desire to push the technology beyond its current state, yet as she put it that day: “but I can’t.”

We’re All Trained on Datasets

One of the most striking reflections came when Meshi questioned her own thoughts: “I started asking myself, ‘Why do I actually say the words I say? Where does it come from? What kind of data set was I trained on?'” She then commented, “I was trained on a very specific data set. Just like GPT in a way.”

Sitting there, it struck me that she’s right. We’re all shaped by unique datasets—our families, friends, upbringing, education, and lived experiences. What I often perceive as my “internal voice” is, in reality, a blend of everything I’ve absorbed throughout my life. Her performance prompted me to confront how little of what I regard as original thought is truly original.

Her Latest Work: Angel and Devil on the Shoulders

Toward the end of our conversation, Meshi shared insights about her newest project, which pushes the boundaries of human-AI integration even further. In this latest endeavor, she embodies dual AI agents—an angel and a devil perched on her shoulders—who debate her choices based on visual input from a camera.

“It’s deliberately designed to be a binary of good and evil,” she explains, “whereas I’m stuck in between those, trying to remind myself that I can do whatever I want.”

This project, along with her previous works, delves into what she terms “agentic AI”—systems that not only respond but also actively engage in the decision-making process. This raises a central question of agency: when AI influences our choices, who is truly making the decisions?

What This Means for the Rest of Us

I’m grateful for the 30 minutes with her. The whole experience left me with more questions than answers, which was the point. Meshi’s performance continues, one conversation at a time, encouraging audiences to think differently about consciousness, authenticity, and what it means to be human when the line between human and artificial intelligence starts to disappear. Her discomfort, her questioning of authenticity, her moments of technological breakdown—these are the real-world consequences of the systems we’re designing.

Maybe this is art imitating life imitating art. We build AI systems inspired by human intelligence, then an artist uses those systems to explore what it means to be human, which then informs how we think about building better AI systems. The loop continues, just like Meshi said—we are the loop, and the loop is us.

ocoleman