References to Other Material Describing “System 0”

On January 8, 2026, I published a blog titled System 0: The Default Mode Network of AGI. I had been building toward the concepts I now refer to as System 0 since well before the publication of Enterprise Intelligence on June 12, 2024. By the time I got around to introducing the topic, many others had published similar material. I had not checked for such work when I published System 0: The Default Mode Network of AGI, but I did check recently, just before publishing the follow-up (February 3, 2026).

The core ideas behind what I call System 0—a background, exploratory, DMN-like substrate for AGI—trace back to my 2004 AI projects (SCL / Soft Coded Logic and TOSN / Trade-off Semantic Network). Even then, I was designing systems with a “subconscious” layer for pattern probing and integration that precedes deliberate reasoning, using data structures like the Insight Space Graph (ISG) and Tuple Correlation Web (TCW) as the engineered foundation for such exploration. These elements were central to my thinking long before the June 12, 2024 publication of Enterprise Intelligence, where they appeared in the context of enterprise-scale intelligence systems.
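To give a feel for the kind of background exploration such a substrate performs, here is a toy sketch: a loop that counts co-occurrences of events across windows (loosely in the spirit of TCW-style correlation chaining) and promotes strong pairs as candidate insights for higher cognition (loosely ISG-style). The function names, threshold, and data are illustrative assumptions only, not the actual SCL/TOSN implementation.

```python
import itertools

# Toy sketch of a System 0 background pass: probe event windows for
# co-occurring event types and surface strong pairs as candidate
# "insights" for downstream deliberate reasoning. Names and the
# threshold are illustrative, not the original design.

def cooccurrence_scores(event_streams):
    """Count how often each pair of event types appears in the same window."""
    scores = {}
    for window in event_streams:
        for a, b in itertools.combinations(sorted(set(window)), 2):
            scores[(a, b)] = scores.get((a, b), 0) + 1
    return scores

def probe_for_insights(event_streams, threshold=3):
    """Background pass: promote pairs whose co-occurrence meets the threshold."""
    scores = cooccurrence_scores(event_streams)
    return [pair for pair, count in scores.items() if count >= threshold]

windows = [
    ["login", "error"], ["login", "error"], ["login", "error"],
    ["login", "purchase"], ["error", "retry"],
]
print(probe_for_insights(windows))  # → [('error', 'login')]
```

The point of the sketch is the tolerance for waste: most probed pairs never clear the threshold, and that is fine—the substrate runs continuously and only occasionally surfaces a candidate.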

After I finally posted this detailed synthesis (this blog) on January 8, 2026, I discovered several independent publications using the “System 0” label in related but different ways—primarily in human-AI interaction and cognitive extension frameworks (e.g., Chiriatti et al. in Nature Human Behaviour, 2024). I hadn’t cross-checked the literature until starting follow-up work, and I asked Grok to compile a list of kindred references for transparency.

I encourage you to review those materials below—they represent valuable parallel lines of thought. My framing here emphasizes an internal AGI architecture modeled on the brain’s default mode network, building directly on my earlier SCL/TOSN/TCW/ISG work. No claim of exclusive invention is intended; ideas in this space often converge. Thanks to the researchers whose work overlaps or complements this perspective.

I instructed Grok to prioritize the list as follows:

  • Explicit use of “System 0” as I describe it in System 0: The Default Mode Network of AGI
  • Alignment with the core idea: an upstream/pre-cognitive/external AI layer that preprocesses, filters, curates, or augments information before it hits human System 1 (intuitive/fast) or System 2 (deliberative/slow), often as a foundational/distributed cognitive extension.
  • Preference for those closest to the AGI/DMN-analog twist (background probing, pattern integration, emergence, anomaly detection, temporal coherence), but including the mainstream human-AI versions since they’re the dominant hits.

The list is not intended to be exhaustive.

Grok’s Research

Ranked by a combination of chronology, public impact, and relevance (my January 2026 post is included for completeness/context as the clearest DMN-AGI match).

1. “The case for human–AI interaction as system 0 thinking”
   • Date: Oct 22, 2024
   • Authors/Key Source: Massimo Chiriatti, Marianna Ganapini, Enrico Panai, Mario Ubiali, Giuseppe Riva / Nature Human Behaviour (vol. 8, pp. 1829–1830)
   • Description & Relevance: This is the origin paper for the term in published literature. It explicitly proposes “System 0” as the foundational, artificial/non-biological layer of distributed intelligence from human-AI interaction. It positions System 0 upstream of Kahneman’s System 1 (fast/intuitive) and System 2 (slow/analytical), where AI outsources cognitive tasks: preprocessing vast data, performing superhuman computations, curating/surfacing salient info, and shaping inputs before they reach human awareness. Direct quotes: “System 0 represents the outsourcing of certain cognitive tasks to AI… It emerges from the interaction… creates a dynamic, personalized interface… acts as a preprocessor and enhancer of information, which actively shapes the inputs to traditional cognitive systems.” High impact: widely cited as the entry point for the concept in psychology/AI ethics/human augmentation discussions. Relevance: Core match on pre-cognitive preprocessing and extension, though more human-focused (less explicit DMN/AGI probing than my take).

2. “System 0: Transforming Artificial Intelligence into a Cognitive Extension”
   • Date: Jun 2025 (arXiv preprint; expanded in Cyberpsychology/Annual Review of CyberTherapy)
   • Authors/Key Source: Massimo Chiriatti, Marianna Bergamaschi Ganapini, Enrico Panai, Brenda K. Wiederhold, Giuseppe Riva
   • Description & Relevance: Builds directly on the 2024 Nature piece. Frames System 0 explicitly as a cognitive extension preceding both System 1 and System 2, with AI as preprocessor/augmenter. Emphasizes preserving human agency while leveraging AI for enhanced thinking. Relevance: Deepens the upstream role, adds ethical/psychological layers (e.g., risks of over-reliance), aligns with my “external substrate” idea.

3. “System 0/1/2/3: Quad-process theory for multi-timescale embodied collective cognitive systems”
   • Date: Mar 2025 (arXiv 2503.06138; pub Oct 2025 in Artificial Life/ResearchGate)
   • Authors/Key Source: Tadahiro Taniguchi et al.
   • Description & Relevance: Extends to a quad-process model for AGI/embodied AI. System 0 is the pre-cognitive embodied foundation (lowest timescale, physical/embodied processes), upstream of 1/2/3, enabling emergence, pattern integration, and non-reflective background ops. Ties to predictive coding, collective intelligence. Relevance: Closest external parallel to my DMN-of-AGI framing—pre-cognitive substrate for embodied probing/exploration/emergence in AGI-like systems.

4. Figure’s Helix 02 robotics architecture discussions
   • Date: Jan 2026 (X announcements, analyses)
   • Authors/Key Source: Figure Robot team / various X threads
   • Description & Relevance: Uses “System 0” for the low-level, foundational neural motion/control layer (pixels-to-actions, embodied coherence over minutes). Bridges sensing/execution as stable “nervous system” precursor to higher reasoning. Relevance: Embodied AGI flavor—pre-cognitive integration for real-world flow/anomaly tolerance, echoes my temporal/probabilistic probing.

5. Follow-on citations/extensions (e.g., npj Artificial Intelligence “The brain side of human-AI interactions…”, Medium/LinkedIn discussions)
   • Date: 2025–2026
   • Authors/Key Source: Various (e.g., Riva group extensions, Fulvio Massarelli)
   • Description & Relevance: Build on Chiriatti/Riva: System 0 as pervasive preprocessor, epistemic infrastructure, risks to autonomy. Some nod to pre-intuitive filtering. Relevance: Shows spread/popularization of the 2024 framing; less DMN/AGI-specific.

Comparison by Feature:

The four frameworks compared (abbreviated labels used in the rows below):
  • Asahara: Eugene Asahara’s System 0 (2026 post, roots in 2004 SCL/TOSN)
  • Riva/Chiriatti: Riva/Chiriatti et al. (Nature Human Behaviour, Oct 2024)
  • Taniguchi: Taniguchi et al. (quad-process theory, arXiv ~Mar 2025, pub Oct/Dec 2025)
  • Figure: Figure’s Helix 02 System 0 (Jan 2026 announcement)

Core Role/Definition
  • Asahara: Engineered analogue to the brain’s DMN; continuous background process for probabilistic probing, pattern integration, anomaly detection, temporal coherence, and surfacing meaningful coincidences/speculative stories without deliberate goals. Pre-symbolic, exploratory substrate generating candidates for higher cognition.
  • Riva/Chiriatti: Outsourcing of cognitive tasks to AI via human-AI interaction; dynamic, personalized preprocessor that curates/filters information, performs computations, and shapes inputs before they reach human awareness. Emerges from interaction, acts as enhancer of traditional systems.
  • Taniguchi: Foundational, lowest-timescale pre-cognitive layer in a quad-process model for embodied collective cognitive systems; handles non-reflective, embodied processes enabling emergence, pattern integration, and background dynamics (yin-yang-like networks) in AGI/robotics.
  • Figure: Low-level neural foundation layer for whole-body control; stable “nervous system” precursor that bridges pixels/sensors to torque/actions, enabling coherent, multi-minute autonomous tasks via learned priors.

Position Relative to Systems 1/2
  • Asahara: Upstream/pre-cognitive; foundational to both—supplies speculative candidates/stories to System 2 (deliberative) and compiles validated patterns into System 1 (intuitive/fast). Distinct as pre-thematic, subconscious exploration.
  • Riva/Chiriatti: Upstream of 1/2; external/distributed layer that preprocesses and augments inputs before they enter human System 1 (intuitive) or 2 (analytical). Complements rather than replaces.
  • Taniguchi: Upstream of 1/2/3; lowest layer (pre-cognitive embodied) in the hierarchy: System 0 (physical/embodied foundation) → 1 (fast individual) → 2 (slow individual) → 3 (collective/symbol emergence).
  • Figure: Extends the “System 1/2” architecture with System 0 as a new base; hierarchy from System 0 (whole-body neural prior) to higher visuomotor networks. Precursor enabling stable motion before higher reasoning.

DMN Analogy
  • Asahara: Explicit and central: modeled directly on the DMN for mind-wandering, memory integration, narrative construction, and hypothetical exploration in AGI; hyperactivity in insights, mirroring the DMN’s role in humans.
  • Riva/Chiriatti: Not explicitly mentioned; some implication of background/AI-mediated integration, but focused on external augmentation rather than intrinsic DMN-like activity.
  • Taniguchi: Implicit/aligned: ties to DMN-inspired spontaneous/background networks (e.g., yin-yang balance, passive dynamics); draws from predictive coding and multi-timescale cognition, echoing the DMN’s intrinsic/emergent role.
  • Figure: Not explicitly mentioned; conceptual echo in stable, learned priors for natural motion, but more motor/control-focused than the DMN’s integrative/mind-wandering aspects.

Focus Area
  • Asahara: AGI internals (DMN substrate for autonomous systems) plus enterprise intelligence (BI/knowledge graphs for discovery in data streams, event processing).
  • Riva/Chiriatti: Human-AI interaction and cognitive extension; psychology/ethics of augmented human thinking.
  • Taniguchi: Embodied collective cognitive systems in AGI/robotics; multi-timescale hierarchies for symbol emergence, predictive coding in groups/robots.
  • Figure: Embodied robotics; practical humanoid control for dexterous, long-horizon autonomy (e.g., household tasks like dishwashing).

Key Mechanisms/Data Structures
  • Asahara: Abductive reasoning, genetic algorithms for hypothesis testing, vector probes for associations; TCW (correlation chaining), ISG (insight graphs/QueryDefs as episodic memory), Time Molecules (Markov-based temporal models), EKG (knowledge graphs). Tolerance for waste/false positives.
  • Riva/Chiriatti: AI-mediated preprocessing (e.g., LLMs/recommendation engines for data curation, pattern surfacing); dynamic personalization through interaction; no specific data structures named.
  • Taniguchi: Predictive coding (CPC for collectives), multi-timescale embodiment (physical to symbolic); yin-yang network balancing; no named data structures, but implies hierarchical neural models.
  • Figure: Learned whole-body controller from 1,000+ hours of human motion data plus sim-to-real RL; integrates vision/touch/proprioception; replaces hand-engineered C++ with a single neural prior.

Origins/Timeline
  • Asahara: Conceptual roots in 2004 SCL/TOSN (subconscious as distributed processing); matured in Enterprise Intelligence (Jun 2024) and Time Molecules; synthesized in the Jan 2026 post.
  • Riva/Chiriatti: Proposed in an Oct 2024 commentary; built on Kahneman’s dual-process theory plus emerging AI tools; authors: Chiriatti, Ganapini, Panai, Ubiali, Riva.
  • Taniguchi: Evolved from Taniguchi’s prior work on DMN/robotics (e.g., 2010s papers); arXiv Mar 2025, conference Oct 2025, journal Dec 2025; part of broader AGI debates.
  • Figure: Announced Jan 2026 by the Figure Robot team; extends the prior “System 1/2” architecture; trained on human data/sim-RL.

Similarities to Asahara’s
  • Riva/Chiriatti: Shared upstream positioning; background pattern surfacing; augmentation of cognition (though external vs. internal).
  • Taniguchi: Strongest match: pre-cognitive substrate for emergence/integration; DMN-like intrinsic dynamics; AGI/robotics applicability; tolerance for non-goal-directed exploration.
  • Figure: Embodied coherence over time; anomaly-tolerant flows; foundational layer enabling higher autonomy (echoes temporal probing).

Differences from Asahara’s
  • Riva/Chiriatti: External/human-focused (AI as a tool for people) vs. internal AGI DMN; less emphasis on temporal/probabilistic mechanisms or enterprise graphs; no explicit DMN analogy.
  • Taniguchi: More embodied/physical (robotics timescales, collective agents) vs. software/graph-centric; philosophical (Bergson-inspired) vs. enterprise/practical; less on specific data structures like TCW/ISG.
  • Figure: Hardware/motion-specific (pixels-to-torque, tactile sensing) vs. abstract pattern probing; replaces code with neural nets vs. engineered structures; shorter-term tasks vs. long-running exploration.
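To make the “Time Molecules (Markov-based temporal models)” entry above concrete, here is a toy sketch: a first-order Markov model fitted over an event history, used to score the temporal coherence of new sequences, with low scores hinting at anomalies. The function names, data, and scoring rule are illustrative assumptions only, not the actual implementation.

```python
from collections import defaultdict

# Toy "Time Molecule"-style model: learn first-order transition
# probabilities from an event history, then score how temporally
# coherent a new sequence is. Illustrative only, not the real design.

def fit_transitions(sequence):
    """Estimate P(next | prev) from adjacent pairs in the history."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(sequence, sequence[1:]):
        counts[prev][nxt] += 1
    return {
        s: {t: c / sum(nxts.values()) for t, c in nxts.items()}
        for s, nxts in counts.items()
    }

def coherence(model, sequence):
    """Average transition probability; low values hint at anomalies."""
    probs = [model.get(p, {}).get(n, 0.0) for p, n in zip(sequence, sequence[1:])]
    return sum(probs) / len(probs) if probs else 0.0

history = ["wake", "coffee", "work", "lunch", "work", "sleep",
           "wake", "coffee", "work", "lunch", "work", "sleep"]
model = fit_transitions(history)
print(coherence(model, ["wake", "coffee", "work"]))   # familiar: high score
print(coherence(model, ["sleep", "lunch", "coffee"])) # unfamiliar: low score
```

A background substrate could run such models continuously over event streams, flagging sequences whose coherence drops and handing those anomalies upstream as speculative material for deliberate reasoning.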