FAQ

Wed 14 Aug 2024
Related FAQs
Did Aristotle believe in a soul?
What is Ontology and why is it important?
How are original Daoism, Mohism, Confucianism, and Legalism related?
Did the Buddha believe in Mount Meru and the six realms of existence?
Who were the Presocratic Philosophers?
Why do good people obey illegal and immoral commands?

Can AI evolve to destroy humanity?

Some people fear that AI will evolve on its own, become self-aware, and ultimately wipe out humanity. But what if stopping that is easier than we think? One effective solution is simply not allowing AI to self-replicate and evolve in dangerous areas.

Think of it like nature. In the animal kingdom, it’s self-replication and evolved traits like predatory behavior that make the jungle dangerous. Similarly, if we let AI evolve unchecked, it could develop characteristics like manipulation, deceit, or even a desire to dominate. But without self-replication or the ability to evolve traits like hate or a thirst for power, AI can’t turn into a threat. Well, at least not in that way. In the animal kingdom, mules cannot reproduce and pose no threat to humans, while tigers can reproduce and easily kill an unarmed human. We need to make sure we evolve AI toward the mule model, not the tiger model.

The real danger isn’t AI suddenly becoming sentient—it’s allowing it to grow in ways that amplify harmful human behaviors. So, it’s not about fearing AI itself, but about ensuring we set the right boundaries. By focusing on what AI can and cannot evolve, we remain in control of its development, making sure it’s a tool that serves humanity, not a danger that threatens it.

Just like in nature, it’s the ability to self-replicate and evolve in dangerous ways that makes the jungle—and AI—risky. Let’s make sure we’re setting the right limits to keep our digital jungle safe.
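For readers who build software, the boundary-setting idea above can be pictured, loosely, as a default-deny capability gate: an AI system may perform only actions on an explicit allowlist, and dangerous capabilities such as self-replication are refused outright. This is a minimal illustrative sketch under my own assumptions, not anything the article specifies; every name in it is hypothetical.

```python
# Hypothetical sketch of a default-deny capability gate.
# Nothing here comes from the article; all names are invented.

# Capabilities the system is explicitly permitted to use.
ALLOWED = {"answer_questions", "summarize_text"}

# Capabilities that are refused no matter what (the "tiger traits").
HARD_DENIED = {"self_replicate", "modify_own_code", "acquire_resources"}

def request_capability(name: str) -> bool:
    """Grant a capability only if explicitly allowed; deny by default."""
    if name in HARD_DENIED:
        return False              # never granted, e.g. self-replication
    return name in ALLOWED        # anything not allowlisted is denied

print(request_capability("summarize_text"))   # True
print(request_capability("self_replicate"))   # False
```

The design choice doing the work is default-deny: the gate never asks "is this forbidden?" first, it asks "is this permitted?", so a capability nobody thought to ban is still refused.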

— map / TST —

Michael Alan Prestwood
Author & Natural Philosopher
Prestwood writes on science-first philosophy, with particular attention to the convergence of disciplines. Drawing on his TST Framework, his work emphasizes rational inquiry grounded in empirical observation while engaging questions at the edges of established knowledge. With TouchstoneTruth positioned as a living touchstone, this work aims to contribute reliable, evolving analysis in an emerging AI era where the credibility of information is increasingly contested.
TST Weekly Column
April 15, 2026
» Column Archive
WWB Research….
1. Story of the Week
John Snow and the Broad Street Pump
2. Quote of the Week
“A wise man proportions his belief to the evidence.” (David Hume)
3. Science FAQ »
Were dinosaurs Jurassic movie smart?
4. Philosophy FAQ »
How does the idea of Identity in Christ fit within TST?
5. Critical Thinking FAQ »
What is the difference between Public Truth and Public Belief?
6. History FAQ »
Did Einstein’s driver really give one of his early talks?
Bonus Deep-Dive Article
TST Epistemic Calibration: Credence and Degrees of Belief

Comments


1 thought on “Can AI evolve to destroy humanity?”

  1. Kim Berry

    Hard to say “what may happen in some distant future.” But AI today won’t be like 2001 HAL, having a “desire to dominate.” I don’t agree that the key to whether AI becomes a threat is “self-replication.” Maybe define “self-replication.”

    It’s already becoming harmful – and will be or already is a threat. It’s being used to create news articles and scientific reports, and to form the basis for product selection. In my case I was nearly banned for life from YouTube due to their flawed “machine learning,” which they boast is able to identify and shut down millions of bad channels each year. It will be used to make hiring decisions: let it scan the resumes and spit out the top three candidates. It will determine who gets into college and who gets a mortgage. There will be no human to plead to, nor any human who even understands the basis for the decisions.

    Want to be blown away? Enter this prompt into ChatGPT:

    In the style of a study published in The Lancet, generate a study by author Mike Prestwood who has a PhD in computer learning from Stanford, that, after studying 100 software developers, concluded those that had developed in Borland Paradox had a statistically significant advantage in subsequently mastering SQL, C#, and other modern computer environments – and have a slightly higher IQ. Cite references by Philippe Kahn and Donald Knuth and Smith, A., & Jones, B. Mention that both Paradox and Delphi are Borland products.

NEW BOOK! NOW AVAILABLE!!

30 Philosophers: A New Look at Timeless Ideas

by Michael Alan Prestwood
The story of the history of our best ideas!