Are LLMs or Inference the Future of AI?

Sun 28 Jul 2024
History & Future of AI in 1 Minute!


Artificial intelligence started with Symbolic AI, a system deeply rooted in traditional coding methods and logic-based rules. This early form of AI was almost like an advanced, rule-following machine, operating within tightly defined boundaries. 

By the 1970s, Inference AI emerged, shifting toward mimicking brain processes. Rather than following strict rules, Inference AI learned to identify patterns in data, adapting to new inputs much as animals learn from their environments. By 2010, deep-learning methods had been added, and the approach has since expanded to include generative AI, which builds on learned patterns to create new content: a form of gap-filling and idea-blending similar to the brain's natural cognitive abilities.
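The contrast between rule-following and pattern-learning can be made concrete with a toy example. Below is a minimal, illustrative sketch (not from the article) of a perceptron, one of the earliest pattern-learning models: instead of being handed the AND rule, it infers it from labeled examples.

```python
# Illustrative sketch: a perceptron learns the AND pattern from examples,
# rather than being programmed with the rule explicitly (Symbolic AI style).
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights, one per input
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # feedback: how wrong were we?
            w[0] += lr * err * x1        # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Training data: the AND truth table, presented only as examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```

The rule was never written down anywhere; it emerged from repeated small corrections, which is the shift the paragraph above describes.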

Then came Large Language Models (LLMs), shifting the focus from raw cognitive patterns to mimicking human behavior. LLMs like GPT and BERT aren't built to "think" like a brain so much as to "behave" like one, responding in human-like ways based on vast amounts of language data. Much as humans study patterns in communication to connect with others, LLMs draw on enormous datasets to communicate naturally and adapt across countless topics. Reinforcement Learning (RL) complements this behavior-focused approach: it acts as a feedback loop, refining actions much as humans do when learning from experience.
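That feedback loop can be sketched in a few lines. The example below is a minimal, illustrative epsilon-greedy "bandit" agent (not from the article; the reward probabilities are invented for the demo): it tries actions, receives reward feedback, and gradually refines its estimate of which action works best.

```python
import random

# Illustrative sketch: an epsilon-greedy bandit agent refining its action
# estimates from reward feedback. Reward probabilities are made up.
def run_bandit(reward_probs, steps=5000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    values = [0.0] * len(reward_probs)  # estimated value of each action
    counts = [0] * len(reward_probs)
    for _ in range(steps):
        # Explore occasionally; otherwise exploit the best-known action.
        if rng.random() < epsilon:
            a = rng.randrange(len(reward_probs))
        else:
            a = max(range(len(reward_probs)), key=lambda i: values[i])
        reward = 1.0 if rng.random() < reward_probs[a] else 0.0
        counts[a] += 1
        # Incremental average: nudge the estimate toward the new feedback.
        values[a] += (reward - values[a]) / counts[a]
    return values

vals = run_bandit([0.2, 0.8, 0.5])
best = max(range(3), key=lambda i: vals[i])
print(best)  # the agent typically settles on the highest-paying action
```

Nothing tells the agent which action is best; the preference emerges from accumulated feedback, the same loop the paragraph likens to learning from experience.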

Looking to the future, it's clear that the strongest AI will come from a hybrid approach that uses Cognitive AI to facilitate complex Behavioral AI. Much like the human brain, the hybrid approach will combine cognitive ability with learned behaviors, perhaps even inventing new AI-only abilities.

Just as animals use both instinctive learning and behavioral adaptation to navigate their niche environments, hybrid AI systems will do the same, filling niches that enhance reality and propel humanity forward.

To learn more, take the 8-minute deep dive: Building Tomorrow’s AI.

— map / TST —

Michael Alan Prestwood
Author & Natural Philosopher
Prestwood writes on science-first philosophy, with particular attention to the convergence of disciplines. Drawing on his TST Framework, his work emphasizes rational inquiry grounded in empirical observation while engaging questions at the edges of established knowledge. With TouchstoneTruth positioned as a living touchstone, this work aims to contribute reliable, evolving analysis in an emerging AI era where the credibility of information is increasingly contested.
