Can AI evolve to destroy humanity?

Wed 14 Aug 2024
Published 2 years ago.
Updated 3 weeks ago.

Some people fear that AI will evolve on its own, become self-aware, and ultimately wipe out humanity. But what if preventing that is easier than we think? One effective safeguard is simply not allowing AI to self-replicate or to evolve dangerous traits.

Think of it like nature. In the animal kingdom, it’s self-replication and evolved traits like predatory behavior that make the jungle dangerous. Similarly, if we let AI evolve unchecked, it could develop characteristics like manipulation, deceit, or even a desire to dominate. But without self-replication or the ability to evolve traits like hate or a thirst for power, AI can’t turn into a threat. Well, at least not in that way. Consider two animals: a mule cannot reproduce and poses no threat to humans, while a tiger can reproduce and can easily kill an unarmed human. We need to make sure we evolve AI toward the mule model, not the tiger model.

The real danger isn’t AI suddenly becoming sentient—it’s allowing it to grow in ways that amplify harmful human behaviors. So, it’s not about fearing AI itself, but about ensuring we set the right boundaries. By focusing on what AI can and cannot evolve, we remain in control of its development, making sure it’s a tool that serves humanity, not a danger that threatens it.

Just like in nature, it’s the ability to self-replicate and evolve in dangerous ways that makes the jungle—and AI—risky. Let’s make sure we’re setting the right limits to keep our digital jungle safe.

— map / TST —

Michael Alan Prestwood
Author & Natural Philosopher
Prestwood writes on science-first philosophy, with particular attention to the convergence of disciplines. Drawing on his TST Framework, his work emphasizes rational inquiry grounded in empirical observation while engaging questions at the edges of established knowledge. With TouchstoneTruth positioned as a living touchstone, this work aims to contribute reliable, evolving analysis in an emerging AI era where the credibility of information is increasingly contested.
TST Column
April 22, 2026
Column Research….
1. Timeline Story
Augustine of Hippo
2. Linked Quote
“In order for a war to be just, three things are necessary.”
3. Science FAQ »
Why do we overreact and escalate?
4. Philosophy FAQ »
How does TST Ethics handle the trolley problem?
5. Critical Thinking FAQ »
How do you prevent yourself from overreacting?
6. History FAQ »
What is the history of ethical war?
Bonus Deep-Dive Article
1 Goal: Flourish (TST Ethics)

Comments


  1. Kim Berry

    Hard to say “what may happen in some distant future.” But AI today won’t be like 2001 HAL, having a “desire to dominate.” I don’t agree that the key to whether AI becomes a threat is “self-replication.” Maybe define “self-replication.”

    It’s already becoming harmful – and will be or already is a threat. It’s being used to create news articles and scientific reports, and to form the basis for product selection. In my case I was nearly banned for life from YouTube due to their flawed “machine learning,” which they boast is able to identify and shut down millions of bad channels each year. It will be used to make hiring decisions – let it scan the resumes and spit out the top three candidates. It will determine who gets into college, who gets a mortgage. There will be no human to plead to, nor any human who even understands the basis for the decisions.

    Want to be blown away? Enter this prompt into ChatGPT:

    In the style of a study published in The Lancet, generate a study by author Mike Prestwood, who has a PhD in computer learning from Stanford, that, after studying 100 software developers, concluded those that had developed in Borland Paradox had a statistically significant advantage in subsequently mastering SQL, C#, and other modern computer environments – and have a slightly higher IQ. Cite references by Philippe Kahn and Donald Knuth and Smith, A., & Jones, B. Mention that both Paradox and Delphi were Borland products.

30 Philosophers: A New Look at Timeless Ideas

by Michael Alan Prestwood
The story of the history of our best ideas!