- Sambaza Podcast
- Oct 1, 2025
- 4 min read
OPINION | You Only Get Three Shots: When AI Fails to See Me
You only get three shots.
Not in life (though some say you only get one). Not in a video game. Not in a high-stakes exam. I’m talking about something seemingly simple—trying to create a digital cartoon version of myself using AI.
Three attempts. That’s all I had.
The first one, I approached casually. No pressure. I entered my features—Black man, bald head, sharp character glasses, seasoned by experience and culture. I even imagined the final image: a dignified, animated reflection of myself. The result?
A white man.

I gave it some grace. Maybe it was just a warm-up. A harmless mistake. “It’s just AI,” I told myself. “Maybe it’s still learning. Maybe it was the nerves.” I chuckled and moved on to attempt number two.
But by the second try, things shifted. I was more focused, more intentional. This wasn’t just about creating a cartoon; it was about being seen. Represented. I double-checked the inputs and tried again.
Same result. Another white man. Again—handsome, perhaps—but not me.

That’s when the disappointment set in. Not just because the AI missed the mark twice, but because now I had only one chance left. One final shot before I’d be “locked out,” forced to go through the tedious bureaucracy of requesting assistance, explaining myself to a faceless system.
And yet, this wasn’t really about access to an app. This was something deeper—emotional access. Vulnerability. I had let a system into my identity and asked it to reflect me back. What it returned felt like someone else entirely.
Now, you might wonder—why does it matter so much? It’s just an app, right? A drawing?
But it does matter. Because this is AI. This is the supposed next big thing. The technology that promises to know us, learn from us, grow with us. It’s already being called the most revolutionary invention since sliced bread.
But what happens when that revolution forgets to include everyone?
Let’s talk about what I call the human concept—our norms, our speech patterns, our cultural nuances, and our physical features. These are not universal across the globe. Different regions, different people, different expressions of humanity. Yet many AI systems still reflect a narrow band of understanding.
Remember when Apple launched Siri, promising global intelligence? To this day, I’m not sure it understands the full range of languages and dialects spoken around the world. And Amazon’s Alexa? Created in a room dominated by men—until it became clear they needed a more inclusive team just to make the product function better for everyone.
It’s a pattern we’re seeing again and again: products made for the world, but not by the world.
So yes, I ranted. Or maybe I simply raised a concern. Maybe I’m just a man trying to be patient with a system that keeps missing the mark. I asked AI to generate a cartoon version of me—complete with my Blackness, my bald head, my glasses, my essence. And three times, it returned someone else entirely.
I couldn't help but wonder: is this how the system really sees me? Or worse—does it not see me at all?

This may seem like a small issue, but it’s just the surface. Because tomorrow, it won’t be about a cartoon. It will be about AI models that speak for us, make decisions on our behalf, or even interact as us. And if those models are trained on biased data, or built without inclusion, they won’t just misrepresent us—they’ll erase us.
And what then?
Maybe in the future they’ll offer a “return policy” for your personal AI assistant. “Not what you ordered? Just send it back—minus a restocking fee.” But I don’t want a refund. I want recognition. I want systems that reflect the full spectrum of humanity—not just the default image built by a small segment of it.
So yes, when I attempt to recreate myself in cartoon form, I expect the result to be accurate. Not perfect, but real. Close enough that I can say, “Yes, that’s me.” Because when it’s not, it’s not just my problem. It’s an underlying issue that could affect all of us in the future.
That’s why it matters. Because these systems are shaping the world our children will inherit.
And so, I’ll prepare myself. Emotionally. Mentally. I’ll go through the whole range again—hope, hesitation, maybe even humor—and give it another shot.
Who knows what it will return this time?
But one thing I do know: we must speak up.
We must be present in these conversations and part of the teams shaping this future. Because if we’re not, the systems built tomorrow may only see a version of us that we no longer recognize.
And that’s not a future I’m ready to accept.
Disclaimer: I like to think of myself as both smart and a bit lazy! With the help of AI tools, creating my posts has become much easier and more enjoyable. That said, I’ve still done my research and shared my thoughts in my own way. Technology has made it simple to present my ideas clearly, helping you to easily read and understand them without any tricky jargon or expressions.

Sambaza, a Kenyan immigrant, is deeply passionate about podcasting and public speaking. As he delves into the art of podcasting and explores its many facets, he draws on his experiences as a diasporan and Pan-Africanist to create unique content. His dedication has earned him three nominations for Diasporan Podcaster of the Year, among other honors. Additionally, Sambaza actively collaborates with other podcasters and collectives, continuously enhancing his skills as a creator.