1-MINUTE HOT TOPIC

Can AI evolve to destroy humanity?

By Michael Alan Prestwood

Author and Natural Philosopher



Some people fear that AI will evolve on its own, become self-aware, and ultimately wipe out humanity. But what if stopping that is easier than we think? One effective solution is simply not allowing AI to self-replicate and evolve in dangerous areas.

Think of it like nature. In the animal kingdom, it’s self-replication and the evolution of traits like predatory behavior that make the jungle dangerous. Similarly, if we let AI evolve unchecked, it could develop characteristics like manipulation, deceit, or even a desire to dominate. But without self-replication or the ability to evolve traits like hatred or a thirst for power, AI can’t turn into that kind of threat.

The real danger isn’t AI suddenly becoming sentient—it’s allowing it to grow in ways that amplify harmful human behaviors. So, it’s not about fearing AI itself, but about ensuring we set the right boundaries. By focusing on what AI can and cannot evolve, we remain in control of its development, making sure it’s a tool that serves humanity, not a danger that threatens it.

Just like in nature, it’s the ability to self-replicate and evolve in dangerous ways that makes the jungle—and AI—risky. Let’s make sure we’re setting the right limits to keep our digital jungle safe.
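The “set the right boundaries” idea can be pictured in software as a default-deny capability check that sits outside the AI system itself. Below is a minimal, hypothetical sketch in Python; the action names and the `review_action` function are illustrative assumptions, not a real AI-safety API or anything proposed in the article itself.

```python
# Hypothetical sketch: a capability allow-list enforced outside the AI system.
# All names here are illustrative, not a real AI-safety API.

# Capabilities we explicitly permit the system to exercise.
ALLOWED_ACTIONS = {"answer_question", "summarize_text", "translate"}

# Capabilities analogous to "self-replication and evolving in dangerous areas."
FORBIDDEN_ACTIONS = {"copy_self", "modify_own_code", "spawn_agent"}

def review_action(action: str) -> bool:
    """Return True only if the requested action is explicitly allowed."""
    if action in FORBIDDEN_ACTIONS:
        return False  # hard block on replication-style capabilities
    return action in ALLOWED_ACTIONS  # default-deny everything unlisted

assert review_action("summarize_text") is True
assert review_action("copy_self") is False
assert review_action("rewrite_guardrails") is False  # unknown => denied
```

The key design choice the sketch illustrates is default-deny: an unknown capability is treated as forbidden rather than permitted, which is the software analogue of not letting AI “evolve unchecked.”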

To explore further, take the deep dive: AI and Social Constructs: Defining the Future.


1 thought on “Can AI evolve to destroy humanity?”

  1. Kim Berry

    Hard to say “what may happen in some distant future.” But AI today won’t be like 2001 HAL, having a “desire to dominate.” I don’t agree that the key to whether AI becomes a threat is “self-replication.” Maybe define “self-replication.”

It’s already becoming harmful – and will be, or already is, a threat. It’s being used to create news articles and scientific reports, and to form the basis for product selection. In my case, I was nearly banned for life from YouTube due to their flawed “machine learning,” which they boast can identify and shut down millions of bad channels each year. It will be used to make hiring decisions – let it scan the résumés and spit out the top three candidates. It will determine who gets into college and who gets a mortgage. There will be no human to plead to, nor any human who even understands the basis for the decisions.

    Want to be blown away? Enter this prompt into ChatGPT:

In the style of a study published in The Lancet, generate a study by author Mike Prestwood, who has a PhD in computer learning from Stanford, that, after studying 100 software developers, concluded those that had developed in Borland Paradox had a statistically significant advantage in subsequently mastering SQL, C#, and other modern computer environments – and have a slightly higher IQ. Cite references by Philippe Kahn, Donald Knuth, and Smith, A., & Jones, B. Mention that both Paradox and Delphi are Borland products.
