Your Fear of AI Isn't Irrational—It's Intelligence
Why the AI-scaries are legit—and what to do about it.
If you’re both excited and freaked out about AI, your brain is doing exactly what it should.
Last week, I shared an AI-generated video presentation with my team (one of those NotebookLM video overviews that looks like a PowerPoint with someone narrating your content).
Their eyes went wide. “Wait—AI made this?”
Then, in the same breath: “That’s so cool. But also… scary.”
I’ve started asking people what they mean. “Why scary? What specifically?”
Silence. Shrugs. Vague gestures toward “jobs” or “what comes next.”
But the feeling? Crystal clear.
Most of them haven’t even tried using AI yet. That fear isn’t coming from experience, or from understanding what the technology actually is and can do.
It’s coming from two places. Media coverage is engineered to grab attention, not to inform; headlines and hype are the game. And it’s amplified by something else: the sense that this is something very big and very powerful that they don’t understand.
But the feeling itself? It’s not irrational. It’s your brain picking up on signals: rapid change, uncertain futures, and a genuine fork in the road for humanity.
Let’s unpack what’s actually happening.
The External Noise: Why Everyone’s on Edge
The anxiety isn’t coming from nowhere.
Between January and June 2025, nearly 78,000 tech workers lost their jobs directly to AI.
Salesforce replaced 4,000 customer service roles with AI agents.
IBM cut 8,000 HR positions, handing their work to a chatbot that now handles 11 million employee queries annually.
Goldman Sachs estimates that if AI adoption continues at this pace, 11 million American workers could be displaced.
But it’s not just about job loss.
It’s about the narrative being sold alongside those numbers. The loudest voices tell us that AI will replace us. That human expertise is becoming obsolete. That we’re on an inevitable march toward a world where machines do our thinking for us.
The media isn’t helping.
If you’re not actively searching for balanced perspectives, what you’re being fed is overwhelmingly apocalyptic. AI as an existential threat. AI as a job-killer. AI as the end of creativity, connection, and human relevance.
The utopian takes exist, but catastrophizing clickbait gets the views.
Technology researcher D’vorah Graeser describes this as a familiar pattern.
Forty years ago, the internet promised connection and empowerment. Instead, it delivered surveillance and control. Five tech giants now determine what billions see and hear. Algorithms designed to bring us together have fractured us into separate echo chambers optimized for engagement at any cost.
AI can follow the same path—only faster and with far greater consequences.
But we’re not there yet. We’re still at the very beginning.
And that’s the critical point. We have the chance to apply the lessons we learned from social media, from doom-scrolling, from unchecked platform power.
We’re standing at a fork in the road.
Futurist Gerd Leonhard frames it this way: one path leads to a future where AI amplifies our best qualities and expands what’s possible. The other path leads to AI as the most sophisticated tool for exploitation and control the world has ever seen.
The choice isn’t made yet.
But it’s being made now—by whoever shows up to make it.
The Internal Wiring: Why This Feels Different
Your brain is treating AI like a predator.
Not because it is one, but because rapid, unpredictable change triggers the same ancient survival mechanisms that kept your ancestors alive when actual threats stalked them.
When your brain detects danger, it floods your system with cortisol. Blood sugar rushes to your brain. Energy surges to your muscles. Everything non-essential shuts down.
Survival mode activates.
It’s a brilliant system for immediate physical threats.
The problem?
The threat today isn’t a predator you can fight or flee. It’s constant uncertainty. It’s the sense that something massive is moving faster than you can process it. It’s the fear of losing control over your livelihood, your relevance, your future.
And because that threat never resolves, the stress cycle runs indefinitely.
That uneasy feeling isn’t weakness. It’s your brain doing its job—noticing real signals in your environment that deserve attention.
The key is moving from panic to engagement, from paralysis to action.
Most people who tell me “AI is scary” can’t articulate what the technology actually does. They haven’t used it. They don’t know its limitations.
They’re reacting to a story about AI, not the reality of it.
And that gap—between fear driven by headlines and understanding driven by experience—is where the real danger lies.
Because while you’re frozen, decisions about AI’s future are being made without you.
The Real Choice: Architects, Not Victims
AI’s future isn’t inevitable. It’s not a force of nature. It’s a tool built by humans, and it desperately needs human guidance.
Technologist Morten Rand-Hendriksen points out something critical: we’ve been hacked by our own language.
When scientists in the 1950s started building systems that could mimic human reasoning, they called it “artificial intelligence.” That choice of words told us these machines were something human-like.
So we started using human metaphors—training, learning, reasoning, thinking—to describe computation.
Then we built AI that could generate language. And the moment AI started talking to us, our brains couldn’t help but perceive it as a thinking being.
But there is no mind in the machine.
AI doesn’t understand. It doesn’t have goals or intent. It predicts the next most likely word in a sequence using math.
When you ask it a question, it transforms your sentence into patterns, matches those patterns against everything humans have ever written, and generates a response one word at a time.
No understanding. No sense of true versus false. Just probability.
It’s why an AI can confidently give you a recipe that sounds perfect and tastes terrible. It doesn’t know what food is. It only knows what words about food look like.
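At toy scale, that word-by-word prediction can be sketched in a few lines of Python. This is a hypothetical bigram model, nothing like the scale or architecture of a real system, but it shows the core move the paragraphs above describe: count patterns in text, then emit the statistically most likely next word, with no notion of meaning anywhere in the process.

```python
from collections import Counter, defaultdict

# A tiny stand-in corpus for "everything humans have ever written".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which -- this is the "pattern matching" step.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def next_word(prev):
    """Return the most likely next word: no understanding, just counts."""
    counts = transitions[prev]
    return counts.most_common(1)[0][0] if counts else None

# Generate "one word at a time", exactly as described above (at toy scale).
word, sentence = "the", ["the"]
for _ in range(4):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))  # a plausible-looking string, assembled from statistics
```

Swap in a different corpus and the "knowledge" changes completely, because there is no knowledge, only frequencies. That is the gap between sounding right and being right.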
Understanding this changes everything.
Because once you see AI for what it actually is—a powerful prediction engine, not a replacement for human judgment—the fear shifts. It becomes respect. Caution. Strategic thinking.
AI can help you draft an email, summarize research, and generate ideas.
But it can’t tell you whether those ideas are ethical, strategic, or aligned with what actually matters. It can optimize for patterns, but it can’t tell you which patterns serve people and which ones exploit them.
That’s your job. That’s the irreplaceable human part.
Right now, AI’s trajectory is being shaped by a relatively narrow group: developers, executives, and politicians.
The question is whether it will reflect their values and priorities—or whether it will reflect the wisdom, ethics, and lived experience of millions of people who understand what it means to do work that matters, to build lives that flourish, to create systems that serve humanity.
This isn’t just about work. It’s about life. Society. How we connect, learn, create, and make decisions. Everything.
And despite what you might hear, governments and institutions aren’t leading the charge to include public voices in AI governance.
We have to push for that ourselves. We have to create the spaces, demand the transparency, and build the literacy.
The Practical Toolkit: What to Do Right Now
D’vorah Graeser offers a four-part framework for anyone who wants to influence where this goes:
Explore. Spend 30 minutes this week experimenting with an AI tool.
If you’re a manager, try using it to organize meeting notes or draft project outlines. If you’re in education, test it for lesson planning frameworks.
The goal isn’t mastery—it’s direct experience. You can’t evaluate something you’ve never touched.
Contribute. When you use AI, give it detailed feedback.
Most people don’t realize: the things you tell the AI, the corrections you make, the clarifications you provide, can be used to train future versions of the model. You’re teaching it, one interaction at a time.
So talk to it like a person. Explain why something didn’t work. Be specific. Every interaction is a vote for the kind of AI you want to exist.
Connect. Find your people.
Join an online community focused on AI literacy and ethics. Follow voices that offer multiple perspectives, not just hype or doom. Curate your own mentors by seeking out different viewpoints.
This isn’t just about local advocacy—it’s about building fluency, understanding governance, and elevating the human-centric skills that AI can’t replicate.
Bridge. When you encounter someone who knows less than you do, help them.
Share what you’re learning. The more people who understand enough to participate in shaping AI’s path, the less likely it gets shaped by the loudest voices alone.
These four actions create momentum.
The more you explore, the better your feedback will be. The more you connect, the stronger the collective voice. The more you bridge, the broader the base of people demanding that AI serves humanity—not just profit.
As Buckminster Fuller said: “We are called to be architects of the future, not its victims.”
What This All Means
Let me bring this back to that moment with my team.
When they saw the AI-generated video and said “that’s cool but scary,” both reactions were valid.
The amazement is real—this technology can do things that feel like magic. And the fear is real too—because we don’t yet know which path we’re on.
But I’ve noticed something: when I give people counterexamples to the fear-mongering narratives they’ve been fed—when I show them what AI actually is, what it can and can’t do, where the real leverage points are—I can see the shift happen in real time.
Their faces change. Their posture changes.
The anxiety doesn’t disappear, but it transforms into something else: curiosity. Agency. A sense that maybe this isn’t happening to them, but something they can actually shape.
That shift is everything.
We can’t afford to be paralyzed right now.
We’re at the beginning of something massive, and the trajectory hasn’t locked in yet. The choices being made now—the conversations we have, the feedback we give, the communities we build—will set the path for generations.
Most of the AI content you encounter online is designed to hijack your attention through either catastrophe or hype.
What you need is signal, not noise. Curated sources. Communities focused on practical understanding. Spaces where you can learn, ask questions, and figure out your own position without the noise.
That’s what this is about. Cutting through the extremes. Building real fluency. Moving from anxiety to agency.
Your fear isn’t the problem. It’s information. It’s your brain telling you something important is happening, and you need to pay attention.
So pay attention. But don’t stop there.
Pick one action from that framework.
Spend 30 minutes with an AI tool this week. Find one community or mentor whose perspective expands yours. Help one person understand what you’re learning.
Start small. Start now.
Because the future isn’t something that happens to us. It’s something we build—together, one choice at a time.