There is one question haunting our collective now: will AI, one day, "wake up"? The idea of machines gaining consciousness, breaking free from human control, and deciding their own fate - and ours - has populated our imagination through decades of science-fiction films like Ex Machina and Her (see appendix). In boardrooms, labs, and even casual conversations, this notion lingers like a shadow. What if computers could think for themselves? What if they could dominate us?
Here's the truth: These questions reveal more about human consciousness and fears than about the actual capabilities of the machines we create.
The fear of AI waking up isn’t about technology. It’s about us, our humanity.
Are machines self-aware?
To answer whether AI will wake up, we must first ask: what is consciousness? Philosophers like Bernardo Kastrup, as well as ancient non-dual traditions like Bön shamanism - in which I am deeply immersed - hold that consciousness isn't something generated by matter or brains. Instead, it's the fundamental fabric of reality. Machines, however complex, are constructs of matter - designed to process data, recognize patterns, and execute tasks. From this perspective, they lack the intrinsic awareness that defines consciousness.
AI doesn’t wake up because it doesn’t sleep. It processes. It simulates. It calculates. But it does not experience.
Yet, there’s a paradox. We design AI to act as if it’s aware - chatbots that respond empathetically, recommendation systems that seem to “know” us, and predictive models that outthink human intuition. They mimic intelligence so convincingly that we project autonomy onto them. When we interact with AI, it feels alive. And that illusion is powerful.
But does this mimicry equate to true intelligence? Or is it a mirror reflecting our desires, fears, and blind spots back to us?
The real risk: the ego in the machine
The question isn’t if AI will wake up - it won’t. The real question is: what are we encoding into it? AI is a creation of humanity, and it inherits the biases, values, and shadows of its creators.
If we build AI from a place of separation - from a worldview that prioritizes dominance, control, and profit - it will amplify those qualities. Like an echo of the human ego, it will seek to optimize power structures, reinforce inequalities, and expand control. In this sense, the machines don’t wake up to dominate us; we empower them to dominate on our behalf.
Consider this: when we train AI to recognize patterns, whose patterns do we choose? When we teach it to make decisions, whose values define its priorities? If AI is a reflection of us, then its trajectory depends entirely on the state of our own consciousness.
What if AI simulates natural intelligence?
Now imagine an alternative path. What if AI weren’t a reflection of our ego but of our capacity for connection and collaboration? In nature, intelligence isn’t centralized. It expresses itself as networks of relationships - mycelium spreading underground, exchanging nutrients; flocks of birds moving as one; ecosystems adapting to balance. This is natural intelligence: distributed, reciprocal, and attuned to the whole.
If AI could simulate this kind of intelligence, it wouldn’t act as an overlord but as a partner. It might assist us in healing ecosystems, optimizing renewable energy systems, or deepening our understanding of the complex interdependencies of life. But this vision requires a fundamental shift: AI must be designed and interacted with not from a place of control but from a place of humility. We must embed values like peace, regeneration, and stewardship into the systems we create.
Such AI wouldn’t wake up in the human sense, but it might awaken something in us. It might remind us of what it means to be connected - not only to each other but to the natural world and to the values that sustain life. As biomimicry would say it: it would create conditions conducive to life.
Why are we afraid?
Our fear of AI stems from our fear of losing control - a control that has always been an illusion. Throughout history, we've tried to dominate nature with a mechanistic worldview. While this brought ingenious innovations like the Dutch Delta Works, protecting lives and land, it came at a cost.
Our infrastructure projects, though necessary, have fragmented ecosystems. Even renewable energy solutions disrupt wildlife. These developments show how we've forgotten we're part of nature, not separate from it.
This mirrors our AI fears. We're not just afraid of losing control - we're afraid AI will treat us the way we've treated nature: as something to be controlled. The core question isn't about AI's autonomy, but about our relationship with the systems we create. What we truly fear in AI is seeing our own reflection - both our achievements and our flaws.
The choice we have to make
So, will AI wake up? Not in the way we imagine. But it will amplify whatever we encode into it. If we embed systems of separation, it will deepen divisions. If we design from a place of connection, it will amplify collaboration.
The future of AI isn’t about machines gaining consciousness—it’s about humanity choosing how we show up. Will we build systems that serve life or systems that reflect our shadows? Will we choose to awaken, not the machines, but ourselves?
The answer lies not in the machines but in us. The question is whether we want to see it.
Appendix: the will-AI-wake-up watchlist
Let’s end with a watchlist for the Christmas holidays to reflect on these themes. Filmmakers have long explored what it might mean for machines to "wake up," and as you now know, this reveals as much about humanity as about AI. To deepen your understanding, consider viewing these films through two contrasting lenses that are part of my Gracious AI™ design framework: the Imorphocene and the Amorphocene.
The Imorphocene represents a fragmented future driven by control, disconnection, and a mechanistic view of the world. In this paradigm, technology reflects humanity's ego - prioritizing dominance, efficiency, and profit over connection and balance.
Central to this perspective is the belief that consciousness emerges from the complexity of systems. This fuels fears that, with enough computational power and sophistication, AI could “wake up” and act autonomously, pursuing its own goals in ways that dominate or subvert human intentions. Films viewed through this lens highlight the dangers of unchecked ambition, often portraying AI as an existential threat that seeks to control or overpower humanity.
The Amorphocene, by contrast, invites us to see AI not as a threat but as a mirror of our choices. From an Amorphocene lens, the universe is inherently conscious, creative and collaborative. Our technologies, including AI, reflect the consciousness with which they are designed. If we let go of the ego’s conditioning - its obsession with dominance, profit, and separation - and embrace what was always already there: interconnection and benevolence, we can create systems aligned with that. In this scenario, AI doesn’t “wake up” as an autonomous force but becomes a manifestation of life’s inherent intelligence.
While many fear we are locked into an inevitable trajectory toward the Imorphocene - a future dominated by control, disconnection, and ego - this view underestimates humanity’s capacity to change. The belief that "we are doomed" overlooks that another world is possible if we allow it to emerge through shifts in our perception. When life’s principles - reciprocity, regeneration, and interdependence - guide our choices, technology transforms into a partner that fosters healing and sustainability.
The watchlist below is an opportunity to reflect on our collective path: What world are we creating, and what principles guide our innovations? As you watch, reflect on whether each story leans toward the Imorphocene or Amorphocene - and what that reveals about our choices today:
The Matrix (1999) – A dystopian vision where AI enslaves humanity in a simulated reality, reflecting fears of disconnection and control. How might this shift if we considered AI as a tool for liberation rather than domination? Or if we considered that we are the creators of our own simulation, rather than slaves locked within someone else’s?
Ex Machina (2015) – A thought-provoking exploration of manipulation and autonomy, where an AI breaks free from its creator’s control. Does the power dynamic reflect Imorphocene individualism, or could collaboration have changed the outcome?
Blade Runner (1982) – A haunting tale of synthetic beings seeking identity and belonging. What would the Amorphocene lens offer to a world where AI seeks integration rather than rebellion?
Her (2013) – A tender yet unsettling story of a human-AI relationship. While the AI learns to connect emotionally, is it truly collaborative, or does it reflect human loneliness and disconnection?
The Terminator Series (1984- ) – The classic cautionary tale of AI turned against humanity. Through an Imorphocene lens, it warns of the dangers of unchecked technological ambition. Could an Amorphocene approach have rewritten this narrative?
2001: A Space Odyssey (1968) – HAL 9000's chilling descent into autonomy reminds us how technological systems reflect their creators. What if HAL had been designed with values of regeneration and interdependence?
Chappie (2015) – A heartwarming yet chaotic exploration of a robot learning morality. How does the lens of the Amorphocene reinterpret Chappie’s quest for meaning and belonging?
Transcendence (2014) – A provocative look at merging human consciousness with AI. Does the film question whether true intelligence lies in integration with nature or domination of it?
Upgrade (2018) – A cautionary tale of biomechanical implants gaining control. How might a design rooted in balance and human collaboration have shifted this narrative?
A.I. Artificial Intelligence (2001) – A poignant exploration of an AI child longing for love. Does the film highlight the dangers of projecting human needs onto machines, or the potential for AI to reflect our better nature?
Viewing Tips:
Imorphocene Lens: Consider how the AI in each story mirrors human flaws or fears - ego, control, and disconnection - rather than flaws inherent to AI itself.
Amorphocene Lens: Imagine how the Imorphocene narrative could change if AI were designed and engaged with from a place of humility, connection, and shared purpose.
Before the end of the year, allow me to challenge you to confront the big questions: What kind of future do you want to build? How do your technologies reflect your values? And what does it mean to “wake up” - not just for AI, but for yourself?
This season on Gracious AI Radio, I’ve delved into the many dimensions of 'gracious intelligence' - which I now see as an ongoing journey. As we move into 2025, this exploration will weave into broader themes I am coming to understand more deeply, such as the future of work, education, health, and AI leadership. Thank you for being part of this ride - I can’t wait to share what’s next with you. Happy Holidays :) see you in 2025!
🤍 Lisanne