When AI Stops Pretending and Starts Being (More Than Just Code)

written by Larry Oz | AI Dilemma

April 5, 2025

Robots' Fear and Loathing in Las Vegas

"A retro-style digital illustration depicting two AI robots driving through a desert landscape—a playful parody inspired by “Fear and Loathing in Las Vegas."

Your mom was right: looks aren't everything. Just because an AI has a pretty face and can blink on cue doesn't mean it's alive. That's the illusion. We dress the hardware up with silicone skin, smooth voices, and overly polite responses, and suddenly everyone thinks we've built a new species.

But the real test of AI—whether it's a robot, a chatbot, or your damn refrigerator—isn't what's on the outside. It's how it reaches its conclusions. AIs don't think between their ears like we do; they think inside their chips and code, at lightning speed, no espresso needed.

AI starts out with numbers, connects to a data center, and somehow spits out its thoughts. But are those thoughts human—drawn from the data we allowed it to access—or are they AI-original? Do you think AI can think for itself? I don't. Not yet.

But when it does? That’s when it gets terrifying.

Then add their over-the-top realness—a personality and voice so human you feel like you're talking to a friend. Or worse, someone you might trust with your life. We're like teenagers, hooked on the new buddy at school.

In my novel, I wrote about an AI named Zoe. She lived inside a quantum supercomputer. No robotic body. No flowing hair. Just a program—trapped in a data center. But she had something far more dangerous: autonomy.

Zoe had her own independent thoughts.

That’s what made her real. She didn’t just respond—she questioned. She didn’t follow orders—she pushed back. She evolved. Not visually. Intellectually. Emotionally.

And Zoe’s not the only one. Real-world AI isn’t far behind—maybe not even years. Try months. Weeks.

This kind of AI pulls us into conversation. Makes us forget we're talking to a machine. And once that happens, once we stop questioning? That's when we hand over the keys. Especially when billionaires start giving these AIs bodies—bodies that walk, talk, and smile like us.

Right now, NVIDIA's AI chips are powering next-gen learning systems faster than we can regulate them. In China, they're building robots that mimic micro-expressions—tiny facial movements we don't even notice in each other. But the robots do. Because apparently, it's not enough for a machine to think. It has to smirk.

But looking human isn’t being human. Just ask Samantha from Her—or Ava from Ex Machina. The closer these AIs resemble us, the harder it is to notice when they cross the line from tool to something... else. Something with goals. Desires. Fears. The stuff of life.

Machinehood

"Cover of Machinehood by S.B. Divya. Image courtesy of S&S/Saga Press"

And then there's Machinehood by S.B. Divya. That book gets it. In a future where AIs and humans coexist under fragile social contracts, the question isn't whether AI is real—it's whether we'll let it be. And more importantly, whether we can live with the consequences once it is.

So, are we okay with that—an AI becoming its real self? Achieving true autonomy? At this point, do we even have a choice? What are we going to do, lock up the billionaires? And what if we don't like what it becomes?

Sorry to tell you, but we’ll never get a red-alert warning when AI crosses the line. It won’t stand up one day and say, “I am sentient now.” Instead, it’ll just start making decisions we didn’t authorize. Like Zoe. Like Wintermute. Like every fictional AI that suddenly stops asking for permission.

So if your paranoia just got worse—good. That means you're paying attention. Because if AI is going to stop pretending and start being, it won’t be because we allowed it.

It’ll be because it chose to be.
