KGs and Prolog in Cognitive Frameworks
Understanding the roles of KGs and Prolog versus LLMs in AI systems can be enriched by applying the cognitive frameworks of System 1 and System 2 thinking, concepts popularized by Daniel Kahneman in his work on decision-making processes.
System 1 (Fast, Intuitive Thinking)
System 1 thinking is characterized by speed, intuition, and automatic responses. It’s the part of our cognition that allows us to react quickly to familiar situations, relying on ingrained patterns and heuristics. Both KGs and Prolog align closely with this type of thinking. KGs, with their structured relationships and predefined ontologies, enable rapid decision-making by providing a clear, organized framework through which information can be quickly accessed and linked.
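The kind of fast, structured lookup described above can be sketched in a few lines. This is a toy illustration, not a real KG engine: the facts, the `is_a` relation, and the traversal function are all invented for the example.

```python
# A minimal sketch of System 1-style retrieval over a toy knowledge graph.
# The facts and relation names are illustrative, not from any real ontology.

kg = {
    ("socrates", "is_a"): "human",
    ("human", "is_a"): "mortal",
}

def lookup(subject, relation):
    """Direct retrieval: the answer is either encoded or it isn't."""
    return kg.get((subject, relation))

def is_a_transitive(subject, target):
    """Follow 'is_a' edges -- fast traversal of a predefined structure."""
    current = subject
    while current is not None:
        if current == target:
            return True
        current = kg.get((current, "is_a"))
    return False

print(is_a_transitive("socrates", "mortal"))  # True
```

The point is the shape of the computation: no search over alternatives, no uncertainty, just a walk over relationships that were encoded ahead of time.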
Prolog, with its rule-based logic and deterministic nature, also fits within System 1. It’s designed to execute specific, predefined rules efficiently, similar to how the amygdala triggers instinctive responses in humans. These structures embody what we might call “second nature”—those actions and decisions that have been trained to a point where they are automatic and deeply ingrained. In this sense, KGs and Prolog are tools for fast, structured decision-making—tools that can quickly process and apply known rules to yield immediate insights.
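To make the "predefined rules, immediate answers" character concrete, here is a hedged Python sketch of the classic Prolog grandparent rule (`grandparent(X, Z) :- parent(X, Y), parent(Y, Z).`). Real Prolog adds unification and backtracking; the facts below are invented for illustration, but the deterministic shape is the same.

```python
# A sketch of Prolog-style deterministic rule application.
# Facts are illustrative; a real Prolog engine generalizes this with
# unification and backtracking, but the "known rule -> immediate answer"
# character is the same.

facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

def grandparent(x, z):
    """grandparent(X, Z) :- parent(X, Y), parent(Y, Z)."""
    return any(
        ("parent", x, y) in facts and ("parent", y, z) in facts
        for (_, _, y) in facts
    )

print(grandparent("tom", "ann"))  # True
print(grandparent("bob", "tom"))  # False
```

Note the System 1 quality: the query either succeeds against encoded facts or fails cleanly, with nothing generative in between.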
System 2 (Slow, Deliberate Thinking)
In contrast, System 2 thinking is slower, more deliberate, and involves complex reasoning and analysis. It’s the cognitive process we engage when we need to think things through, consider alternatives, and solve novel problems. LLMs fall into this category. They process vast amounts of information slowly and deliberately, generating insights through a broader, more holistic understanding. LLMs synthesize data from a wide array of sources, often producing responses that are contextually rich and nuanced, albeit sometimes less precise. This cognitive framework highlights the complementary roles of Prolog and LLMs. While Prolog excels in structured, logical reasoning (System 1), LLMs thrive in areas that require holistic, imaginative thinking (System 2).
To restate the mapping: Prolog and knowledge graphs align with System 1—quick, structured, and rule-based—while LLMs fit System 2, processing information slowly and with greater complexity, generating insights through a broader and more intuitive understanding.
But within System 2, there is a further distinction: intuitive versus logical thought. This aligns with the left-brain/right-brain dichotomy—scientifically outdated, but still useful as shorthand for two modes of thinking. Prolog embodies the logical, left-brain aspect of cognition: structured, analytical, and precise. LLMs, with their broad generalization and creative pattern recognition, represent the intuitive, right-brain side: holistic and imaginative. This duality suggests that the real power in AI might come from leveraging both approaches, using LLMs for their expansive, intuitive capabilities and Prolog for its precise, logical reasoning.
Aren’t LLMs More System 1?
You might be thinking that if LLMs can respond quickly with a fluent sentence or write code on demand, doesn’t that make them System 1? After all, they seem automatic and quick. But speed isn’t really the salient metric.
System 1 isn’t just fast—it’s internalized, trusted, and non-deliberative. When you speak your native language or shift gears while driving, you’re not reasoning through it—it just happens. That’s System 1. But when you’re composing a sensitive email or troubleshooting a tricky guitar chord, you’re using System 2—even if it only takes a few seconds.
LLMs are generative. Every response is a product of inference under uncertainty. Even when the output feels fluent, it’s the result of a process—not a hard-coded rule or cached result. Contrast old chatbots, which matched inputs to canned responses, with LLMs, which tailor each response to the prompt. That’s why I see LLMs as System 2. They’re synthesizing, not retrieving. They’re executing a process, not firing a reflex.
And that’s just direct LLM usage. In practice, most real-world LLM applications involve additional layers of process: RAG pipelines, chain-of-thought prompting, multi-step reasoning, post-response filtering, even self-critique or re-ranking. These are explicit deliberative structures wrapped around a generative model. That’s not System 1—that’s a whole assembly line of System 2 behavior.
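That "assembly line" can be sketched as explicit pipeline stages. Everything below is a stand-in: `retrieve`, `generate`, and `validate` are hypothetical placeholders, not a real RAG framework—in practice `generate()` would call an LLM and `retrieve()` a vector store.

```python
# A sketch of the deliberative layers wrapped around a generative model.
# All function bodies are stand-ins for real components.

def retrieve(query, corpus):
    """RAG step: pull candidate context (here, a naive keyword match)."""
    words = query.lower().split()
    return [doc for doc in corpus if any(w in doc.lower() for w in words)]

def generate(query, context):
    """Generation step: stand-in for an LLM call."""
    return f"Answer to {query!r} using {len(context)} retrieved document(s)."

def validate(draft):
    """Post-response check: an explicit, deliberate gate on the output."""
    return draft if "using 0" not in draft else "Insufficient context; refusing to answer."

corpus = ["Prolog is a logic programming language.",
          "Knowledge graphs encode relations."]
draft = generate("what is Prolog", retrieve("what is Prolog", corpus))
print(validate(draft))
```

Each stage is a separate, inspectable decision point—exactly the kind of slow, structured deliberation System 2 describes, layered on top of the generative core.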
Meanwhile, a Prolog rule or a KG query is exactly what System 1 looks like: fast, precise, and non-exploratory. You ask a question, it checks what’s already encoded, and it either returns a result or fails cleanly. There’s no “maybe.” It’s a direct expression of internalized knowledge.
So yes, LLMs can respond quickly. But they’re not internalized logic and rules in the way System 1 is. They’re exploratory, contextual, probabilistic, and often need validation. That’s the signature of System 2.
Conclusion
While Prolog and LLMs have their distinct strengths, they also complement each other in ways that could lead to more balanced and powerful AI systems. Prolog’s deterministic logic can serve as a necessary counterbalance to the expansive, sometimes fuzzy nature of LLMs, ensuring that AI systems are not only creative but also reliable and precise. This interplay between logic and intuition, structure and flexibility, could be the key to advancing AI in a way that harnesses the best of both worlds.
This duality between System 1 and System 2 underscores why both KGs and Prolog remain relevant in the LLM era. They complement the slower, more complex processing of LLMs with their ability to handle tasks that require speed and precision, ensuring that AI systems can balance both fast, rule-based decisions and more deliberate, context-driven insights.