From Data through Wisdom: The Case for Process-Aware Intelligence

My new book, Time Molecules, is available!! This is its launch-day blog (June 4, 2025)!

Time Molecules is a book about bringing process awareness to business intelligence (BI). Traditional BI flattens reality into snapshots—scalar values, points of information—leaving us staring at dimensionally flattened shadows on a wall. But life unfolds over time, and Time Molecules introduces a way to model that flow through webs of hidden Markov models (HMMs) and Bayesian-inspired networks, revealing not just what happened, but how and why it happened.

The blog frames this in terms of DIKUW (Data → Information → Knowledge → Understanding → Wisdom), showing how modern AI—ML, LLMs, knowledge graphs—has largely conquered the first three levels but still stumbles at understanding and wisdom. Those remain human domains.

I argue that systems thinking, creative insight, and original thought are not just nice-to-haves—they’re how we stay a step ahead in an increasingly AI-driven world. We need analytics that help us explore the possibilities a few steps ahead, not just react to what we readily see at a particular instant. That means moving from charts and KPIs to structures like the Insight Space Graph, Tuple Correlation Web, and scaled-out HMMs of Time Molecules.

AI can offload cognitive load, but it shouldn’t replace human agency. This post—and the book—are about defending that agency, expanding it with tools, and keeping the human at the center of the analytic loop. The goal isn’t smarter AI—it’s smarter humans.

The fundamental theme of this blog is that today’s level of AI capability is good enough to be of profound use. It may not be artificial super intelligence (ASI), and only a few consider it to be even artificial general intelligence (AGI), but like any other intelligence, it’s useful in the right context. Along the well-known continuum of data → information → knowledge → understanding → wisdom, today’s level of AI is good enough to promote the service of enterprise analytics systems from the domains of data and information into the domain of knowledge.

Note: DIKUW is based on Russell L. Ackoff’s 1989 paper “From Data to Wisdom” (his original hierarchy does not include the U for understanding). Understanding deserves its own level: there is a difference between a doctor who has gone through over a decade of medical school and residency and one who has “seen it all” over the course of decades of intense practice.

This isn’t simply one more step in a continuum, but a profound change in how analytics systems support human analysts (as well as machine analysts). It’s the kind of change analogous to moving from a universe of two dimensions to one of three—like in Flatland, by Edwin Abbott.

Agenda

Here is a high-level overview of this blog:

  • Human Agency and Original Thinking: Why maintaining our creative spark—and a healthy churn of ideas and teams—is essential as machines encroach on analytic work.
  • Machine Intelligence Along the DIKUW Spectrum: How BI, ML, LLMs, and semantic reasoners map onto data→information→knowledge—and why their “understanding” is really fast pattern search, not true insight.
  • Supplementing Human Understanding and Wisdom with Machine-Powered Data, Information, and Knowledge: How graph structures (KG, ISG, TCW, the EKG of Enterprise Intelligence) and the linked hidden Markov models of Time Molecules offload cognitive tasks—books and databases for data/info, AI for knowledge—freeing us to focus on deeper understanding and wisdom … or at least that’s what I hope we each end up doing with that freed time.
  • Staying a Mile Ahead: Leveraging Time Molecules, systems thinking, and diverse human–machine teams to shift from reactive dashboards to proactive anticipation.

This is Part 1 of a 3-part series that dives deep into the intuition for my book, Time Molecules:

  1. From Data Through Wisdom: The Case for Process-Aware Intelligence
  2. Thousands of Senses
  3. Analogy and Curiosity-Driven Original Thinking

Human Agency and Original Thinking

At face value, my new book appears to be a dive into the integration of BI, AI, knowledge graphs (KG), complex event processing (CEP), systems thinking, and process mining. In fact, the subtitle is: “The Business Intelligence Side of Process Mining and Systems Thinking.” But it’s actually about my big picture intent—a call towards maintaining our human capacity for critical, creative, and original thinking.

To be clear, Time Molecules is a technical, not philosophical, book. It’s meant as a logical step for the millions of people who have worked with BI systems for the last couple of decades to advance in this LLM-driven era of AI. For a more “direct” overview of Time Molecules, please see my earlier blog: Sneak Peek at My New Book-Time Molecules.

In my other book, Enterprise Intelligence, the most important idea is to think of each human knowledge worker as a really versatile agent—far more versatile than any AI agent today, albeit unable to assimilate data as varied, or as quickly, as LLMs can. In fact, the protagonist of Enterprise Intelligence is an extended knowledge graph I simply call the Enterprise Knowledge Graph (EKG). It is an encoding of knowledge across thousands of human “knowledge workers” operating in at least a couple dozen domains (departments, specialized skills, etc.) who are consumers of BI systems, gleaned through the properties of their queries.

LLMs and KGs (more so, the expanded EKG) operate in a symbiotic relationship, whereby the KG helps ground LLMs in real, mostly human-supervised knowledge, and LLMs can significantly improve the process of assisting humans in generating KGs. They are two sides of a coin of knowledge.

“Agency” means that an entity has a mind of its own. Agents are beyond push-button or meticulously programmed devices that respond simply and deterministically. They can sense data about their surroundings, reason about it, react in a robust way that promotes their current goals, and even learn from their mistakes by evolving the process. That level of agency is required for us because:

  1. Each human could be thought of as a species of one, each unique. We’re not like millions of crab larvae, where all that matters is that a few survive to adulthood. We’re self-aware.
  2. To paraphrase Heraclitus, we never step in the same river twice. We need to maintain our ability for critical thinking because every situation is unique to varying degrees. A river is a process that’s common enough to have a name, but a really smart guy had to point out that it’s in constant change. Therein lies the concept underlying Time Molecules.

As regular people with families and real interests in the world (not just a paycheck), if we wish to preserve our agency (our ability to make our own choices), we should shoot for a healthy rate of churn of our organizations.

Healthy rate of churn isn’t a euphemism for chaos. I usually think of college basketball as a fairly good example of a healthy rate of churn. There are well over a hundred Division I college basketball teams. In any given season a couple of dozen have a chance to win the NCAA championship. Out of that group are perennial names, but that list changes. Every now and then a Gonzaga breaks into that group. And every now and then a perennial favorite falters, such as Duke with their uncharacteristically rough 2021-22 season—finishing 13–19.

Recall how disruptive a down Internet connection (for whatever reason—ex. DNS problems, down electricity, etc.) or lost cell phone coverage has been. Today, as I’ve transitioned from searching on Google to directly asking Grok, I realized how much more disruptive losing AI assistance can be. In fact, X was down for a couple of hours the day before I wrote this sentence (May 24, 2025). Although it wasn’t down very long and I was able to use ChatGPT (o4-mini), I felt uncomfortably hampered in my ability to work through problems at hand.

Machine Intelligence Along the DIKUW Spectrum

The notion of the spectrum of DIKUW is very familiar in the analytics world. Here are examples of each from both a business and everyday perspective:

  • Data – Raw fact values
    • Business: A row in your sales table showing a sales amount of $684.99 for OrderID 1024.
    • Everyday: The temperature reading on your phone at 3:29am of 68°F.
  • Information – Values computed from data.
    • Business: A dashboard chart showing today’s total revenue of rubber ducks in Boise: $23,450.
    • Everyday: Your fitness app telling you that you walked 5,000 steps.
  • Knowledge – Relationship between information.
    • Business: A rule from a knowledge base (e.g., a knowledge graph or an LLM) that “when web visits drop by 20%, conversions fall by 15%.”
    • Everyday: You know that if you leave the house without an umbrella, you’ll probably get wet when it’s cloudy.
  • Understanding – Applying knowledge towards reasoning about a situation.
    • Business: After noticing a drop in customer satisfaction scores, a manager connects the dots between a recent policy change that lengthened return times, an increase in customer service calls, and higher employee turnover—recognizing that operational stress, not just the policy, is driving the customer sentiment decline.
    • Everyday: You realize that you feel more focused on weekday mornings because your brain is rested, so you schedule hard tasks then.
  • Wisdom – Applying reasoning to change the current situation or promote an outcome.
    • Business: Choosing to hold back a new product launch until logistics can support it, even if projections look strong.
    • Everyday: Deciding to call in sick and rest when you feel a cold coming on, knowing you’ll recover faster and avoid spreading germs.

Our human intelligence spans the entire spectrum. Each step is progressively more sophisticated, by a magnitude as profound as the addition of a new geometric dimension. There may be steps preceding data and steps following wisdom. There may be entirely different systems of intelligence. But DIKUW works well enough for the time being.

As for how the spectrum applies to machine intelligence: today it sits very comfortably at the data and information levels—computing. The advent of machine learning and data science advances machine intelligence tenuously into the knowledge level—the level where machine-based critical thinking is plausible.

Before describing critical, creative, and original thinking, it’s important to note that they form a continuum where the boundaries are incredibly fuzzy. I phrase each in the context of finding novel solutions to primarily novel problems, although those novel solutions are often just a “better mousetrap” for known problems. Unlike values of data and information (computed from data), solutions are generally graphs of relationships, a richer structure than a value.

It will also help to think of an example from popular culture—that is, the difference between Howard, Leonard, and Sheldon in the TV show The Big Bang Theory. Sheldon looks down on Howard as a mere engineer, applying what’s already known, discovered by others, albeit in highly complex, technical domains. Sheldon even looks down on the brilliant Leonard, an experimental physicist, validating and exploring applications of what theoretical physicists like Sheldon come up with. I think of …

  • critical thinking as the formulation of solutions within established complex systems, applying a wealth of discovered rules. These are experts with intense training and experience—ex. doctors, lawyers. This requires a masterful understanding of the domain, but the end product is in the range of knowledge. The solutions aren’t necessarily novel but still require a substantial level of skill to devise and execute.
  • creative thinking as novel solutions to novel problems within established complex systems. This is “outside of the box” thinking, but not too far out. Creative solutions readily apply to clear and present problems. The solutioning requires masterful understanding and wisdom, but the end product is also mostly in the range of knowledge. However, the creativity probably contributes to understanding and wisdom.
  • original thinking as novel solutions to unknown problems. As much as I hate to say it, sometimes that invokes the dreaded phrase, “solution looking for a problem”—a phrase I’m painfully aware of. The solutioning requires masterful understanding and masterful wisdom, and the product contributes significantly to understanding and wisdom. Original thinking is where our efforts leave the zero-sum game of everyday life.

While machine intelligence might demonstrate some level of understanding and wisdom (which powers creative and original thinking) in this LLM-fueled era of AI, it is just a simulation, no more real than a video in “4K”, no matter how high-fidelity (thus appearing real) it seems. Genuine understanding and wisdom are still very much the realm of human intelligence.

However, machine intelligence today (what is readily available to the mainstream public—those who are not AI researchers and do not work for three-letter “intelligence” organizations or highly scientific R&D departments) is capable of a modest level of critical thinking. Critical thinking is about evaluating patterns, checking rules, and testing hypotheses across massive haystacks of data and information. This maps neatly to the Knowledge level of DIKUW, where thought processes (ex. Chain of Thought, Tree of Thought) and “reasoners” (ex. SWRL from the Semantic Web world) spot correlations and apply learned relationships and rules. But creative or original thinking—the spark that births new models, metaphors, or entirely novel solutions—lives at the Understanding and Wisdom levels. That kind of insight requires weaving context, experience, and values into uncharted problems—something human intelligence still outpaces AI at, because creativity demands more than recombination of existing data; it demands genuine leaps into the unknown.

I should mention strategic thinking, which I think of as a kind of critical thinking. I just mentioned that I think of critical thinking in terms of very highly skilled knowledge workers such as doctors and lawyers. Using the wealth of knowledge and experience accumulated in their brains over many intense years of training and practice, they search a wide space for solutions. But there are also the generals and engineers who often face novel adversarial problems (the sort presented by the cleverest opponents at the “Dr. Evil” level), which first require creative and original thinking to build a map of novel knowledge. Then they apply it with critical thinking to achieve their goals.

Prior to web search engines and technical social media (ex. Stackoverflow, Reddit, Quora), I used to apply creative and original thinking during the course of my “normal” day much more than today. Today, AI is chipping away at the amount of critical thinking I do. Very useful and valuable … but equally scary.

Supplementing Human Understanding and Wisdom with Machine-Powered Data, Information, and Knowledge

Over the past few decades, we’ve used BI to go from data → information (the output computed from a tuple of parameters). Today, analytics systems grossly exceed our human capacity to turn data into information. They can store and compute trillions of facts within seconds. In limited contexts, machine learning models blow away our human ability to discover new knowledge—relationships, patterns, rules, and configurations hidden in massive volumes of data. For example, consider AlphaFold’s breakthrough in predicting protein structures, generative chemistry models uncovering novel drug molecules, and recommendation systems revealing hidden customer segments.

Over the past decade and a half we’ve used ML to take us from data/information → knowledge (rules, categorizations, relationships). The LLM-powered AI systems of today (turbo-charged by deployment within RAG, CoT, agents, etc.) only simulate understanding and wisdom. They’re just recombining what we taught them through humanity’s extensive corpus of recorded knowledge parsed through clever algorithms. It’s an illusion born of searching vast spaces at machine speed (the brute-force method used by Edison testing filament after filament).

Although I earlier offered examples of understanding and wisdom, those examples of genius are merely the output of the algorithmic process of understanding and wisdom. Some algorithms are more powerful than others—for example, categorization through k-means versus the transformer architecture of LLMs. Some we’re pretty sure exist but we don’t know how they work—the most obvious being how our human level of understanding and wisdom builds.

But, you know, maybe that’s essentially the source of our human-generated ideas as well. Each of the billions of us occupy and experience different parts of the world—across space, time, and contexts. Unique pieces of the space of knowledge are distributed among us, each of us a unique experiment capable of reporting unique information to the rest of us.

Sometime, somewhere, unique aha moments arise in us. Some are at the wrong time and place; some are just what we were looking for. The difference between us and a system such as AlphaFold is that the latter is stripped of immersion in the physical world, reduced to sets of parameters and algorithms, which limits its possible innovations.

Data, information, and knowledge differ from understanding and wisdom in that the former are data and relationships between data, and the latter are algorithms used to process that data and those relationships. We could even phrase it as: information and knowledge are the caching of the results of algorithms processing data. Knowledge is a cache.

I even consider books to be in the realm of knowledge. Books are collections of words and require an algorithm (the ability to read) to derive understanding and wisdom from them. The amount of understanding and wisdom we derive varies with the depth and quality of what we already know and the sophistication of our ability to read.

The algorithm of human understanding and wisdom hasn’t been realized. With all the massive volumes of data and computing hardware at hundreds of datacenters across the world, we still can’t match the magical understanding and wisdom of people. Part of it is that the symbols (data and information) applied through our current ML and LLM algorithms are stripped of nuance. I try to address this in my blog, Embedding Machine Learning Models into Knowledge Graphs.

Data to Information

For most enterprises, BI systems have traditionally been about collecting data (ETL/ELT, DW/DM) and presenting information. For most BI consumers (those using Power BI and Tableau), the expectation from the BI systems is to convert data into information or even knowledge to help a decision-maker (usually a manager or executive) make informed decisions. Those decisions made by managers and executives are based on what they predict will happen, which requires an understanding of the system.

For example, a BI analyst might be tasked with identifying poor-performing products. The typical BI query is: “What are the five most under-performing products in terms of sales?” The decision-maker plugs that value and other nuggets of information into the models of the business ecosystem in her brain to decide whether to discontinue the product or market it more effectively.
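As a minimal illustration (the table, column names, and numbers below are invented, not taken from any particular BI tool), that typical query might look like this in pandas:

```python
import pandas as pd

# Hypothetical sales table; product names and amounts are made up.
sales = pd.DataFrame({
    "product": ["Duck", "Boat", "Kite", "Drum", "Lamp", "Mug", "Pen"],
    "sales_amount": [684.99, 120.50, 87.25, 45.00, 310.10, 12.99, 8.40],
})

# "What are the five most under-performing products in terms of sales?"
worst_five = (
    sales.groupby("product")["sales_amount"]
         .sum()
         .nsmallest(5)
)
print(worst_five)
```

The output is information: a handful of scalar values that the decision-maker still has to interpret with her own understanding of the business.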

Traditional BI platforms—ETL/ELT pipelines, data warehouses, and dashboards—don’t “think” on their own. Any logic they apply is simply the expertise of domain specialists and developers hard-coded into ETL processes and report definitions.

Lastly, information is algorithmically computed. But it doesn’t mean it’s correct. The computation rules and thresholds can come from different perspectives that lack understanding of a particular context. This isn’t necessarily a flaw. It’s good to see every situation from different points of view, as long as we remember that each computed piece of information is just one perspective.

The Invasion of Machine Intelligence into the Knowledge Realm

In today’s AI era, machine intelligence—driven by LLM-centric AI, ML, and the Semantic Web—acts like an ever-expanding river of knowledge, flowing through and enriching the realm of patterns, rules, and associations.

The Wall Between Knowledge and Understanding

There is a formidable wall between knowledge and understanding (and wisdom). That is, data, information, and knowledge are all symbolic. Even LLMs are symbolic in that they are statistics across tokens (symbols) in their training data.

Figure 1 illustrates how machine intelligence has progressively advanced through the classical hierarchy of data, information, and knowledge—and now finds itself at the threshold of understanding and wisdom, traditionally the domain of human intelligence.

Figure 1 – The invasion of machine learning into the realm of knowledge.

Beginning at the left side of Figure 1 is the realm of data, where raw values are stored in systems such as CRM, ERP, SCM, and HR platforms. These values—transactions, dates, IDs, and amounts—are just that: values. They have meaning only in the context of where they come from, like sales systems, finance, inventory, or marketing domains. This is where line-of-business data resides, often isolated and in silos. This realm wasn’t very difficult for machine intelligence to conquer.

Moving right, machine intelligence conquered the realm of information, where data is processed through algorithms—summed, counted, sliced, and charted—to generate meaningful patterns and summaries. This is the domain of business intelligence tools such as Tableau, Power BI, and Kyvos, and the kinds of artifacts they produce: bar charts, line graphs, KPIs, and dashboards. At this stage, machine intelligence already has a strong foothold. Models and notebooks (e.g., in Jupyter), language models like ChatGPT, and embedded analytics are capable of taking structured data and rendering useful visualizations and statistics. Machine intelligence effectively dominates this part of the landscape.

Machine intelligence’s current realm to conquer is knowledge, where things become more interesting—and harder. Knowledge involves structured relationships between nuggets of information. It’s where we move beyond “the numbers” and start capturing how things relate: hierarchies, dependencies, causality, generalization. Here we see knowledge graphs, Prolog code, machine learning models, and LLMs building representational systems that encode facts, rules, and learned associations. This is not just summarized data—it is data with structure, context, and intent.

At this point, machine intelligence has made substantial inroads—but it encounters a substantial obstacle. This is the “big wall” shown in Figure 1—a conceptual hurdle between knowledge and understanding. While machines can now represent knowledge, they often lack the generalization and abstraction abilities that human intelligence effortlessly exercises. Understanding means not just representing relationships, but interpreting them in context, applying judgment, and adjusting strategies when conditions change.

Be grateful for that big wall. It’s what will give you enough time to figure out this new AI-driven world and to find your unique place in it. Surely, machine intelligence might seem to have breached the wall to a limited extent (ex. AlphaFold that I mentioned earlier). But it’s just the illusion of understanding and wisdom masked inside the ability to execute a very nifty algorithm at scale.

To the far right is wisdom, the realm of humans: professionals, analysts, scientists, and engineers who can take in knowledge and algorithmically transform it into actionable strategies, causal insights, or even moral decisions. Here, algorithms live not just in code, but in brains, experience, and organizational behavior. While machines may mimic some of these processes, true wisdom remains largely human.

Ultimately, this figure portrays a battlefield of sorts. Machine intelligence has conquered the lands of data and information, and is now encroaching into knowledge, building models, graphs, and probabilistic logic. But it still faces a major leap: making sense of it all with understanding, and applying it meaningfully with wisdom. Whether that leap is a matter of time, scale, or something fundamentally unreachable is left open.

Another way to look at this invasion of machine intelligence is from a Marines, Army, and occupation-force angle. I despise war as much as or more than most, but it does provide a convenient analogy for the invasion of machine intelligence into the realm of knowledge, traditionally dominated by human intelligence:

  1. Data Lakes build the ability to collect and process massive volumes of data in terms of depth and breadth. This is like the need to accumulate resources before the invasion: money, equipment, manpower, etc. Note that most of the items to follow have existed in some form for many years—statistics-based ML algorithms, natural language processing (NLP, the predecessor of LLMs), the ontologies and taxonomies of knowledge graphs, the Semantic Web since the late 1990s, and logic-based Prolog. But it wasn’t until the volume of data and the technology to manage it came about that we could do some really cool things.
  2. Machine Learning discovers patterns and turns data into actionable knowledge, then applies those insights through predictive and prescriptive algorithms. Like the Marines, this is targeted and tactical.
  3. Large Language Models are like the AI army: vast and versatile. I mention in Enterprise Intelligence that I couldn’t have written the book before LLMs came along to mitigate friction.
  4. Knowledge Graphs and Semantic Networks represent entities and their relationships as interconnected nodes and edges, enabling rich context, traversal queries, and inferencing across domains. KGs are like the occupation force that enforces the rules. Structures explained in Enterprise Intelligence and Time Molecules:
    • Insight Space Graph (ISG) lifts BI into the knowledge realm by capturing and ranking the most salient insights from analyst visualizations—turning the aha moments in charts into structured, navigable graphs of what matters.
    • Tuple Correlation Web (TCW) turbocharges pattern discovery by calculating and storing correlation scores between business tuples—revealing which combinations of parameters move together over time.
    • Time Molecules give us the “systems equivalent of a sum or count”—the atomic unit of process, modeling the flow, delay, and probability between events, not just static snapshots.
    • Enterprise Knowledge Graph provides the semantic backbone that unifies data, information, and insights across domains—enabling consistent context, shared definitions, and cross-domain inference.
    • Prolog-based Reasoners encapsulate domain knowledge as logical rules and facts, providing a declarative inference engine for precise, explainable reasoning.
| Level | Classic BI (ETL, DW, OLAP cubes, dashboards) | ML Models (clustering, trees, association) | Knowledge Graphs (KG) | Tuple Correlation Web (TCW) | Insight Space Graph (ISG) | Time Molecules |
| --- | --- | --- | --- | --- | --- | --- |
| Data | Raw events, logs, rows | Features derived from data | Triples ingesting those facts | Tuples linked by simple correlations | Nodes capturing extracted insights | Event-to-event records |
| Information | Aggregates: sums, counts, min/max, dashboards | Statistical summaries, engineered features | Attributes on entities | Correlation scores attached to tuple pairs | Insights automatically derived from BI queries | Time-bucketed transition counts |
| Knowledge | | Decision rules, clusters, association rules | Semantic relationships, inference over domains | Contextual correlations of business tuples | Patterns surfaced from analyst visualizations | Compressed representation of a process |
Table 1 – Data, information, and knowledge across the knowledge components.
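To make the TCW column above a bit more concrete, here is a minimal sketch of the kind of correlation scoring it describes. The measure names and values are invented, and the book’s actual Tuple Correlation Web is richer than a plain correlation matrix; this only shows the basic idea of scoring which tuples move together:

```python
import pandas as pd

# Hypothetical daily measures for a few business tuples (names are made up).
daily = pd.DataFrame({
    "web_visits_boise":  [1000, 1100, 900, 950, 1200, 800],
    "conversions_boise": [50, 56, 44, 47, 61, 39],
    "returns_boise":     [5, 4, 6, 6, 3, 7],
})

# Pairwise correlation scores between the tuple time series -- the kind of
# edge weight a correlation web could attach to a tuple pair.
print(daily.corr().round(2))
```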

In that way, BI/ML/AI hold more data, compute information faster, and surface more knowledge; ISG/TCW/Time Molecules narrow the gap from knowledge to real understanding and wisdom—so you’re not just reacting to shadows but replaying the choreography of your business.

Time Molecules is built for that shift. It captures the steps, pauses, and patterns between events—the kind of stuff you pick up over years of experience but never had a way to model. It’s not just about what happened, but how it happened. That’s the kind of thing you’ve always carried in your head. Now there’s a way to show it.

A tuple is the atom of dimensional analysis in OLAP, and a Time Molecule is the atom of process-aware intelligence. Just as a tuple describes a where-and-when in a cube (a point in multi-dimensional space), a Time Molecule describes a how—the directional flow between two events, including the probability, timing, and sequence. OLAP helps you slice the cube; Time Molecules help you replay the story behind the numbers.
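To give that a tangible shape, here is a rough, illustrative sketch of a Time Molecule as just described: a directional event pair carrying probability and timing. The field names are my own shorthand for this post, not the book’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class TimeMolecule:
    # The directional flow between two events ...
    from_event: str          # e.g., "order_placed"
    to_event: str            # e.g., "order_shipped"
    # ... along with its probability and timing.
    probability: float       # chance the second event follows the first
    mean_delay_hours: float  # typical time between the two events
    observations: int        # how many observed event pairs support this molecule

m = TimeMolecule("order_placed", "order_shipped", 0.92, 26.5, 1840)
print(f"{m.from_event} -> {m.to_event}: p={m.probability}, ~{m.mean_delay_hours}h")
```

A web of such molecules, rather than a single snapshot metric, is what lets you replay how the process actually unfolds.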

LLMs’ Roles in Breaching the Wall

LLMs present an opportunity for resurrecting two stars of earlier AI winters—the semantic web and Prolog.

The Symbiosis of Knowledge Graphs and LLMs

The Semantic Web has always promised structure and meaning layered over raw information. But authoring that structure—building and maintaining a usable knowledge graph—is painstaking. It requires human expertise, agreement on ontology, and constant upkeep. This is especially hard in fast-moving domains where definitions evolve faster than taxonomies can keep up.

On the other hand, LLMs are fast, adaptive, and shockingly good at making sense of loosely structured or unstructured content. But they hallucinate. They approximate. They can’t be trusted to ground knowledge on their own. Their strength lies in speed and breadth of reach, not precision.

The sweet spot is their symbiosis. LLMs can draft candidate facts, suggest relationships, or even generate SPARQL queries to explore or extend a knowledge graph. And in return, knowledge graphs can anchor LLMs to a curated, enterprise-grounded truth—reducing hallucinations and improving reliability.
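As a hedged sketch of that symbiosis, here is one crude grounding loop using rdflib. The `ask_llm_for_candidate_triples` function is a hypothetical stand-in for an actual LLM call, and the acceptance rule (only admit facts whose subject and object the curated graph already knows) is just one simple policy among many:

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/enterprise/")

# A tiny curated graph standing in for the enterprise knowledge graph.
kg = Graph()
kg.add((EX.Product1024, EX.soldIn, EX.Boise))
kg.add((EX.Boise, EX.locatedIn, EX.Idaho))

def ask_llm_for_candidate_triples(notes: str):
    """Hypothetical stand-in for an LLM call that drafts candidate facts."""
    return [
        (EX.Product1024, EX.discontinuedIn, EX.Boise),  # plausible new fact
        (EX.Product1024, EX.soldIn, EX.Atlantis),       # likely hallucination
    ]

# Grounding policy: admit candidates whose subject and object the curated
# graph already knows; flag everything else for human review.
known_nodes = kg.all_nodes()
for s, p, o in ask_llm_for_candidate_triples("Q2 sales notes"):
    if s in known_nodes and o in known_nodes:
        kg.add((s, p, o))
        print(f"accepted: {s} {p} {o}")
    else:
        print(f"flag for human review: {s} {p} {o}")
```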

Note: Although LLM hallucinations are usually discussed in a negative light, they’re really kind of comforting. At least they’re a hint that AI isn’t yet intellectually superior to the whole of humanity. Mistakes, miscalculations, and misunderstandings are the cost of doing business in a wonderfully complex world capable of all sorts of possibilities. That constant change is what makes creativity a superpower.

In that way, the Semantic Web becomes less a static artifact and more a living system—accelerated by LLMs, corrected and reinforced by humans, and structured to bridge the gap between machine-generated knowledge and human-understandable meaning.

Prolog in the LLM Era

In my series, Prolog in the LLM Era, I discuss the role of Prolog (at least the concepts encapsulated in Prolog) in this latest AI hype-cycle. Prolog (along with Lisp) was the star of an AI hype-cycle in the 1980s—the “expert system” era. The efforts mostly failed back then because the timing wasn’t right.

From Scalar Value Answers to Systems Thinking

As mentioned in the Sneak Peek of Time Molecules, I’ve long been fascinated by systems thinking, ever since reading Peter Senge’s The Fifth Discipline when it was first released in the early 1990s. This is the shift from the scalar values of who, what, and where questions to richer why and how questions, which require answers beyond mere scalar values—in the context of systems thinking and process mining, that means values that are graphs of relationships between values.

The depiction of systems—a graph encoding and portraying the relationships between events—is a kind of knowledge graph. The discovery of systems is really the purpose of process mining. The elucidation of process flows, the product of process mining, is based on event data but still requires ample understanding and wisdom to derive from that event data.

Similarly, other types of knowledge graphs, as I discussed in Beyond Ontologies: OODA Knowledge Graph Structures, require ample understanding and wisdom. However, hidden Markov models, a combination of many Markov models with a web of correlations (as I describe in the Tuple Correlation Web in Enterprise Intelligence), represent a step from value-based BI to systems-based BI. That is, similar to how traditional BI structures (i.e., OLAP cubes) can compute information values very quickly, Markov models can be easily computed without human-level understanding and wisdom.

| Concept | OLAP World | Time Molecules World |
| --- | --- | --- |
| Unit | Tuple | Time Molecule (event pair with metrics) |
| Describes | A scalar value (a point) in multidimensional space | An abstracted description of a process |
| Primary focus | Aggregated values | Transitions and other properties between events |
| Used for | Reporting, dashboards | Process mining, system behavior modeling |
| Typical question | “What were sales by region?” | “How has the process changed?” |

A Markov model is a probabilistic framework that treats each event as a state and assigns transition probabilities for moving from one state to the next, capturing the rhythm and likelihood of sequences without needing human-crafted rules. By stacking these as higher-order chains—or weaving them together under hidden layers—you get a lightweight, computation-friendly way to map process flows at scale, much like how OLAP cubes deliver sums and counts in an instant. See the Sneak Peek for more on hidden Markov models as the “BI side of systems thinking”.
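Here is a minimal sketch of that idea, with invented event traces: count event-to-event transitions, then normalize each state’s counts into probabilities to get the first-order Markov view of the process:

```python
from collections import Counter, defaultdict

# Invented event traces (e.g., order-handling steps) for illustration.
traces = [
    ["order", "pack", "ship", "deliver"],
    ["order", "pack", "ship", "return"],
    ["order", "cancel"],
]

# Count event-to-event transitions.
counts = defaultdict(Counter)
for trace in traces:
    for current, nxt in zip(trace, trace[1:]):
        counts[current][nxt] += 1

# Normalize each state's counts into transition probabilities.
transition_probabilities = {}
for state, nexts in counts.items():
    total = sum(nexts.values())
    transition_probabilities[state] = {nxt: n / total for nxt, n in nexts.items()}

print(transition_probabilities["order"])  # {'pack': 0.67, 'cancel': 0.33}, roughly
```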

My Assistants, ChatGPT and Grok

Since ChatGPT (3.5) became readily available in November 2022, I’ve used it (as well as other LLMs such as Grok) very extensively. It’s become very much a part of the way I work—as much as Web searches had been incorporated two decades ago.

But these AIs haven’t been good at pushing me into the frontiers of the cutting edge; that is, into the realm of critical, creative, or even original thinking. If an LLM were an intern helping me write Time Molecules (as well as Enterprise Intelligence), this is how I would grade its value to me (as opposed to its “intelligence”):

  1. B+ Research problems with tools. This is mostly the sort of thing that requires perusing lots of Reddit, Quora, and X posts.
  2. B- Wordsmith sentences to short paragraphs. It’s good at being a glorified thesaurus.
  3. C- Fact check my finished work. It was OK at this, but it could only fact check a few paragraphs at a time, which means it didn’t really consider what I had addressed elsewhere.
  4. C Writing short code snippets (Python, SQL, Cypher).
  5. D Writing code beyond functions. Granted, it’s much better than it was only a few months ago. However, I still don’t trust it.
  6. F Wordsmith paragraphs from braindumps of my ideas. ChatGPT and Grok were absolutely terrible in this context. I spent the first hour of each day braindumping rough drafts of text, hoping ChatGPT or Grok could refine them into a draft. It always lost the “flair” of my text, the novel meaning in my ideas. Even if I offered it fairly polished text to lay on final touches, it would pull the text back “inside the box” of its knowledge.

Although some of those grades aren’t bad, #6 is by far the most important grade (I’d say it should account for 50% of the overall grade) in the context of writing a non-fiction book on a highly technical, cutting/bleeding-edge, fast-moving subject. All in all, AI added significant value to the process of writing both books. But having utilized it in such an extensive manner convinced me it doesn’t genuinely understand what it’s outputting in response to my prompts and it certainly isn’t wise.

To be fair, ChatGPT o4-mini appears to be much better in my context than the 4.x models that were available to me during the early stage of writing Time Molecules. Maybe if I began today, each grade would be at least a step higher (ex. B- to B for #1 and F to D for #6). That goes for Grok at the time of writing.

For an intelligence to demonstrate or at least simulate understanding and wisdom, it must be able to consider my words beyond the conventional “wisdom of the crowds”.

Understanding and Wisdom in an Adversarial World

As I’ve mentioned in a previous blog, Levels of Intelligence-Prolog’s Role In the LLM Era, one of the biggest influences on how my thinking about analytics and BI evolved is the incredibly intriguing book, Thought in a Hostile World, by Kim Sterelny. For better or worse, life on Earth evolves through the adversarial and eternal struggle between predator and prey.

A big factor of harnessing the power of understanding and wisdom is seeing things from our opponent’s perspective—their “appetites”, strategies, and what they know and don’t know. It’s our uniquely human capacity (at least for life as we know it on Earth) for understanding and wisdom that catapulted us well beyond the capabilities of other species of organisms. Although we’ve had formidable challenges as a species, we’ve defeated every one of them—or better yet, figured out how to capitalize on them.

Variety, whether it’s variety between species or the subtle variations among individuals within a species, is the key to healthy ecosystems—that is, to the ability to evolve with changing conditions out of our control. It’s the avoidance of single points of failure, the key to resilient systems. Even AIs should have healthy competition amongst themselves. We should never settle for just the best AI; rather, we should have multiple AIs trained from different pools and capable of challenging each other.

Staying a Mile Ahead

The goal of the infrastructure I describe in Time Molecules is to always be one or two steps ahead—to see beyond the surface, beyond the magician’s sleight of hand. It’s not just about recognizing a spike in sales or a drop in traffic and reacting. That’s reflex. Systems thinking means understanding why it’s happening, what’s likely to happen next, and how changes ripple across the process.

It’s about going from metrics to motion, from snapshots to sequences. That’s how you stop reacting and start creating. To me, the single most important aspect of intelligence is the ability to explore parallel paths to a solution—“path” implying we base decisions on what happens several steps ahead (as opposed to a “hop” of the next step).

My goal for Time Molecules is to see things as more than a set of facts and rules composed by someone else. It’s about seeing things from a default higher point of view—systems, interacting processes.

After a few decades of computer socialization, most of us still use computer applications as the electronic version of something that exists in the world—electronic versions of calendars, paper to write on, shopping catalogs, mail, toys, etc. The utility of the software applications has exceeded the capabilities of their real-world counterparts. It has massively compacted the amount of physical material and time required to perform our tasks. But it seems like just a much faster version of what we did fifty, even a couple hundred years ago.

That’s really what’s different about AI. It’s not just an electronic version of a know-it-all friend. It drastically accelerates something very profound—research and development. For most people, R&D is a side activity performed as a last resort. With the Internet, particularly search engines, what little R&D most of us perform is degraded to finding something that somebody else has already done that addresses that problem.

For this blog, by AI, I’m referring to the machine learning models (ex. recommendation engines, decision trees) that have been in widespread production for the past decade (since “data scientist” was declared the “sexiest job of the 21st century” by HBR in 2012), and to large language models (ChatGPT, Grok, etc.) as they are at the time of writing. Both are essentially about predicting what comes next given a set of parameters.

Similarly, in a very rough nutshell, it’s said by many that our human brains are “prediction machines”. The highly intellectual activities such as planning and risk mitigation are fundamentally about predicting what happens next. But the prediction mechanisms of our human brains are relatively excellent at evolving with new information.

That is the crux of Thomas Bayes’ great insight—in a complex world, everything is probabilistic, and we can iteratively update our predictions with each piece of new information. The world is a dance between complexity and the structure of the systems within it. In the context of Time Molecules, probabilities make sense only if we have faith in underlying patterns and processes that result in producing sequences of events.
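As a tiny worked example of that iterative updating (all of the numbers here are invented), Bayes’ rule revises a belief about a process the moment a new event arrives:

```python
# Belief that an order-fulfillment process is "healthy" before the new event.
prior_healthy = 0.90
p_delay_if_healthy = 0.10    # chance of a shipping delay if the process is healthy
p_delay_if_unhealthy = 0.60  # chance of a shipping delay if it is not

# A delay is observed. Bayes' rule:
# P(healthy | delay) = P(delay | healthy) * P(healthy) / P(delay)
p_delay = (p_delay_if_healthy * prior_healthy
           + p_delay_if_unhealthy * (1 - prior_healthy))
posterior_healthy = p_delay_if_healthy * prior_healthy / p_delay
print(round(posterior_healthy, 3))  # 0.6 -- one delay noticeably shifts the belief
```

The posterior then becomes the prior for the next event, which is the iterative updating described above.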

Every thought we have, our perception of the world, and every decision we make is based on models trained in our brains. That training occurs over the course of our lives through our experiences. Our models of the world are imperfect: our time is finite, as is the scope of our experiences across the space of possible experiences. So, just as we’ve developed tools to extend our capabilities in the physical world, we’ve developed systems (books, teaching systems, and analytics systems) to accentuate the models in our heads.

Think of highly-trained scientists, doctors, lawyers, engineers, and programmers. They’ve spent at least a decade honing models in their brains about their domain. They have teams of assistants who gather raw data for them.

Although AI today, specifically LLMs, has been frustratingly poor at helping me walk the road barely or never travelled (at least as far as I knew), it is good enough to allow us to offload to it what was once 99% the domain of human intelligence. As a ballpark figure, I’d say it’s effective at significantly helping us with the easy 80% of intellectual tasks. That’s good enough that some of the models that were traditionally the sole domain of human intelligence can be off-loaded to AI systems.

Teams

Our human intelligence didn’t just evolve. It evolved in a system that included a virtuous cycle of improving intelligence. It’s the most mysterious and intricate phenomenon we know of. It’s true that technology has mostly freed our time to enable us to reach even higher—from knapping stone tools, fire, agriculture, machinery, and now AI.

It could be argued that our human intelligence is the evolutionary result of a virtuous cycle between the ability to work as a distributed team of specialists and the intelligence required to communicate with our team members, read their minds, and master our assigned skills.

It’s about promoting a systems-centric approach to BI and analytics in general. But it’s for the distribution of knowledge, not its consolidation so that a few powerful organizations (ex. national governments and the “Fortune 500” global corporations) and/or people hold a substantially advantageous big picture. It’s about every enterprise having the big picture. I emphasize that it’s at the enterprise level. There are relatively few dominant enterprises in each possible domain, but there are millions of enterprises throughout the world. Of those millions, there are dozens to hundreds of 2nd-tier enterprises in each domain mature and capable enough to disrupt their way into the top tier. Further, there are pools of millions of small businesses, from mom-and-pop shops to one-to-three-person garage startups—each a seed of an enterprise with potential.

That mix of “old-growth forest” top-tier enterprises (where economies of scale and consistency prevail), dozens to hundreds of sapling second-tier contenders capable of disrupting into the top tier with infusions of change, and millions of small businesses—seedling mom-and-pop shops to one-to-three-person startups—creates a healthy ecosystem. Each plays its part: leaders provide stability, challengers drive innovation, and seedlings incubate fresh potential. I’m not envisioning a system of AI-enabled anarchy. Properly harnessed, AI should keep this flow dynamic rather than contribute to system logjams (i.e. primarily teaching the top tier how to dig deeper moats).

All enterprises begin with thoughtful, critical, creative, and at times, even original ideas. But creativity and originality eventually fade—a voluntary loss of agency.

In this time of accelerating AI capabilities, each of us should compete at a group level, i.e., as a distributed group of agents. Life has found that to be a great way to compete, from multicellular organisms to other arrangements such as the colonial siphonophores. Even as multicellular creatures, individuals form groups of individuals, from slime molds to our own hierarchies of groups, families, neighborhoods, cities, countries, etc.

Conclusion

My two books are about how to use AI in a supporting role to our human-centric activities. It’s not about how to build ever-smarter AI. This hedges our bet on AI, whether or not it gets much smarter in the near future. When AGI, ASI, or the singularity arrives, whether today or decades from now, we will still be sentient and sapient people with loves, desires, fears, and unique perspectives. No matter what, we cannot allow ourselves to be placed in a situation where AI dominates and we become its tools or something akin to pets. Traditional web searches and social media giants are bad enough at whittling away our agency.

We also need to remember that there is a difference between an AI that can pass the Turing Test (even though that definition is a moving goal post) and what makes the thinking of an Einstein or a Newton clearly different from all but a handful of others. It’s certainly not simply magnitudes more of something—ex. more “parameters” for an LLM or, analogously, more neurons in our brains. There’s something about their peculiar characteristics of formal education, sequences of accumulated experiences, randomly encountered profound impressions, environment, etc. that is disregarded in this LLM-driven era of statistics-driven AI. Even chain of thought and retrieval-augmented generation (RAG) seem kind of pedestrian when I wonder how Einstein, Euler, Aristotle, and Newton thought.

Time Molecules (and my previous book, Enterprise Intelligence) is available on Amazon and Technics Publications. If you purchase either book (Time Molecules, Enterprise Intelligence) from the Technics Publications site, use the coupon code TP25 for a 25% discount off most items.
