The Model
The broker had twenty-two years of closed transactions. He could drive past a house and price it within three percent. His read on a neighborhood carried more weight with local banks than their own appraisals. Two decades of contact with the same market, the same negotiation patterns, the same failure modes had compressed into judgment that operated below conscious thought.
I was twenty minutes into the demo when his face changed.
He had spent those decades building something that lived inside his nervous system and nowhere else. Listings priced by instinct, negotiations read by feel, and not a single system to carry any of it beyond him. When he took a vacation, the signal went dark. Training a junior agent meant watching his methodology degrade at every handoff. When a client called at midnight, the answer had one address.
"That's just like Go High Level," he said.
It was not. He had no model for what he was looking at, and the gap between what he knew and what his business could carry without him was foundational—not a marketing problem, not a hiring problem. I watched it repeat for the next two years.
Eight hundred demos across industries and price points. Always the same constraint.
A consulting partner billing four hundred dollars an hour whose practice went silent when she took a week off. She was the signal her firm transmitted, so when the source disappeared, the revenue followed.
A fitness coach in Austin with three hundred thousand followers who spent every morning converting his expertise into sixty-second videos. Nothing stored. Nothing compounded. Tomorrow he started from zero or the channel went dark.
An agency founder in London with forty employees and not a single system that could carry her judgment from brief to deliverable. Four handoffs between her and the client, each degrading the original intent until what shipped barely resembled what she meant.
The signal could not transmit without the source.
M = T × S × K × A × E × I × L
M is what happens when one person's encoded expertise captures the production layer of a market. Pieter Levels generates $3.1 million a year in revenue, alone. Nick Dobos cleared $8.8 million. Justin Welsh surpasses $300,000 a month. Each encoded their expertise into systems that transmit without them, and these are early returns on a model that compounds.
Why Multiplicative
The model is a dependency chain, and the easiest way to see it is as a transmission system. Knowledge generates the signal. Structure determines whether it survives the transfer. Infrastructure carries it. Leverage amplifies the reach. Truth determines whether what arrives matches what was sent, timing determines whether anyone is listening, and architecture determines whether it is aimed at the right target.
Remove any one component and the transmission collapses. A signal with no channel has nowhere to travel. An amplifier attached to a corrupted signal broadcasts corruption at scale. If any variable equals zero, M equals zero, because you cannot compensate for a missing component by strengthening the others. A louder signal does not fix a broken channel.
| Variable | Name | If Zero |
|---|---|---|
| T | Truth | Everything collapses under its own growth |
| S | Situational Awareness | You build for a world that no longer exists |
| K | Knowledge | You have nothing worth encoding |
| A | Architecture | You can't see what to build |
| E | Encoding | Your expertise stays trapped in your head |
| I | Infrastructure | AI has nowhere to live, nothing compounds |
| L | Leverage | One person stays one person |
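The multiplicative claim can be sketched in a few lines of Python. The 0-to-1 scores below are hypothetical, purely for illustration; the model itself assigns no units.

```python
def m_score(t, s, k, a, e, i, l):
    """M = T * S * K * A * E * I * L: a zero in any variable zeroes the output."""
    return t * s * k * a * e * i * l

# Strong across the board: a meaningful (nonzero) output.
healthy = m_score(0.9, 0.8, 0.9, 0.7, 0.8, 0.9, 0.8)

# Identical profile, but encoding never happened (E = 0).
# No amount of strength elsewhere compensates.
stalled = m_score(0.9, 0.8, 0.9, 0.7, 0.0, 0.9, 0.8)  # → 0.0
```

Raising any single variable multiplies the whole, and dropping any one to zero collapses it, which is the point of treating the model as a dependency chain rather than a checklist.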
The Foundation: Truth and Knowledge
Every signal degrades. Noise is inevitable; fidelity is the battle. But there is a category of degradation that originates at the source, one Shannon never modeled. A well-built channel carrying a corrupted signal produces a convincing lie at scale. I call the accumulated distance between signal and source brand debt, and it compounds silently until the structure fails under its own growth.
At five hundred followers, the divergence is invisible. At five hundred thousand, every new follower is a potential auditor. The internet is a fossil record, not a whiteboard, and AI has made cross-referencing contradictions a thirty-second operation. Levine's Truth-Default Theory shows that humans are biologically wired to presume honesty, accepting signals as truthful until something triggers doubt. AI breaks every trigger. The base rate of honest signal transmission across the internet is dropping while the tools to detect the drop accelerate.
Truth is the first variable because it sets direction: whether each cycle strengthens the system's integrity or degrades it. High truth alignment means the source and the transmitted signal converge over time, gaining credibility under scrutiny. Low truth alignment means they diverge, and the divergence accelerates precisely when you can least afford it, at maximum amplification. Truth is the interest rate on the entire model: every other variable compounds in the direction it sets.
Knowledge is what generates the signal in the first place, and its four sub-components multiply internally. Self-knowledge determines which signal you are built to transmit; pick the wrong domain and you encode reluctance into everything you produce. Domain expertise gives the signal depth; without it, you transmit noise that sounds like signal. Specific knowledge is the frequency only you can broadcast on, what Naval calls the knowledge that feels like play to you and work to everyone else, found where genuine curiosity, natural talent, and market demand intersect.
Then there is psychology. Every platform you publish on is a neurochemical feedback loop engineered around three beliefs: you are not enough, you cannot do it alone, it is already too late. Understanding that architecture separates someone who reads the signal environment from someone who gets read by it.
The Capability Layer: Architecture and Encoding
Most businesses fail at AI implementation for the same reason most buildings collapse: nobody mapped the structure holding the weight.
The pattern is the same everywhere. Businesses install AI on top of chaos: scattered tools, disconnected workflows, no unified data layer, operations living in Slack threads and people's heads. The AI has nothing to reference, nothing to automate, nothing to improve. It writes emails faster, summarizes meetings, generates ideas, and never becomes an actual operating advantage. The honest answer is that the AI has nowhere to live.
Architecture solves this by answering a question most people never ask: what are the final outputs this system must produce? Not the processes or workflows. The deliverables, the packages, the client-facing artifacts the business exists to transmit.
Everyone tries to automate from A to Z, starting at the beginning of the workflow and layering AI onto each step moving forward. BCG found that only five percent of organizations achieve AI value at scale, and eighty percent layer AI onto unchanged processes. The ones who succeed start at Z, the final output, and reverse-engineer from there. What signal does this department produce? What does a high-fidelity version look like? What knowledge does the encoder need? Work backward from the output to the source, and you discover the architecture: the load-bearing walls your industry requires.
Every industry has different walls. Healthcare, education, consulting: the domains are specific, but the process of discovering them is universal. Study the businesses that survived five or more years, strip to base components, find what repeats. What repeats is load-bearing. What varies is style. What is absent in failures is the gap that kills.
Encoding is where your expertise crosses from your nervous system into the system's architecture. It runs through two channels. Explicit rules (clear if-then logic, decision trees, process maps) translate into code and automations. Tacit principles (contextual judgment, the instinct that a deal is about to shift before anyone says a word, the sense for when to push a client and when to hold) translate into system instructions, decision frameworks, and contextual documents. The first category is the guardrails; the second is the intelligence.
Researchers at Siemens and Chalmers University compared AI performance across four modes: AI alone, AI with basic context, AI with structured knowledge, and AI with fully codified expert domain knowledge. The gap between lowest and highest was 206 percent. Encoded expertise shaped output quality more than the capability of the model itself.
What I suspect most people miss is this: the person who spent a thousand hours doing the work before AI existed gets a thousand times the leverage from encoding. They carry the map, the topology of their domain built through years of failure, correction, and pattern recognition, and AI gives them speed across terrain they already know. The person who skipped the hours gets the answer at mile one but never built the map. Without it, every new problem is mile one again. AI produces polished output and the illusion of comprehension, but the muscle never fired.
The System Layer: Infrastructure and Leverage
Think of it as a skyscraper. Foundation at the bottom: node architecture, security, authentication. Data layer above it, where context lives. Core systems, CRM and ERP and HR, where everything flows through. Workflow layer, where work gets coordinated. Applications above that, what makes your business specific. And at the top, the rooftop: AI, agents, future expansion.
You do not hang a satellite dish on a building that has no foundation. But most people try exactly that. They want agents, copilots, automation, while the business still lives in email, Slack, Google Drive, and people's heads. So AI becomes a toy.
Once the business has a single source of truth, defined workflows, consistent execution, clear ownership, and visibility into what is actually happening, AI stops being a novelty and becomes leverage. Now it has context. It can see dashboards, active projects, where things are stuck, what is trending up, what is quietly slipping. It sees the same reality you see. And that is when compounding begins.
The feedback loop is the moat. More data produces better outputs, which attract more clients, which generate more data, and each cycle widens the gap. If someone is six months ahead, every subsequent cycle makes closing that gap more expensive because it compounds on everything before it. The operating system improves the longer it runs, not because the model changes but because the system stacks context: more patterns, more history, more "this always happens when..." moments feeding better decisions and smoother execution every quarter.
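The widening-gap claim is a compounding argument, and a toy calculation shows why the absolute distance grows even when both sides improve at the same rate. The 10 percent per cycle rate and the cycle counts below are hypothetical, not figures from the text.

```python
def capability(cycles_run, rate=0.10):
    """Compounded capability after a number of feedback cycles."""
    return (1 + rate) ** cycles_run

def gap(head_start, cycles, rate=0.10):
    """Absolute distance between an incumbent with a head start and a
    later entrant, measured after the entrant has run `cycles` loops."""
    return capability(cycles + head_start, rate) - capability(cycles, rate)

# With a 2-cycle head start, the gap after 10 cycles exceeds the gap after 2:
early_gap = gap(head_start=2, cycles=2)
late_gap = gap(head_start=2, cycles=10)
```

The ratio between the two stays constant, but the absolute distance the entrant must close grows every cycle, which is what makes catching up progressively more expensive.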
Leverage makes the moat visible. Without it, the system runs perfectly and nobody knows. Labor, the oldest form, is the least scalable. Capital requires permission. Code and media are permissionless and infinitely replicable: software that runs while you sleep, content that distributes while you work, agents that serve clients you will never speak to. AI flips a constraint that held for the entire history of professional work, because one person's expertise can now serve an entire industry simultaneously. Dario Amodei, CEO of Anthropic, has said he is seventy to eighty percent confident that a billion-dollar single-employee company will exist by the end of this year.
The Environment: Situational Awareness
Situational awareness is the discipline of reading how fast the gap between AI capability and industry adoption is narrowing. When capability is high and adoption is low, the window stands wide open. When adoption catches up, positions lock.
Every major researcher who has published on this in the last eighteen months points at the same shift, even where they disagree on timelines and risk: the window is one to three years. Anyone encoding right now is inside it. Start two years from now and you will face an incumbent with two years of compounding data, encoding, and infrastructure running ahead of you.
S constrains every other variable, because the deepest knowledge and the most precise architecture are wasted if you build for a market that moved while you were planning.
How the Model Operates
Seven variables, but they are not equal; they occupy four distinct roles.
T is a filter. Binary. You either have signal integrity or you do not. If T fails, the model does not produce a weak result. It produces a corrupted one. Truth is a gate. You pass through it before the model applies, or you do not pass at all. The only question that matters: is the signal clean at the source?
K, S, and E are the operating variables. Knowledge generates the signal. Situational awareness determines when you act on it. Encoding transfers it into a system that carries it without you. These three are where the work happens and where the multiplication matters. A zero in any one collapses the output.
A is part of E. Architecture is the mapping phase of encoding. You cannot encode what you have not mapped. Most people try to encode before they understand the structure of what they are encoding into. Architecture is the first step of encoding.
I and L are what the model produces. Infrastructure is where the encoding lives once it has a home. Leverage is what the compound generates when the feedback loops have been running long enough. You do not build infrastructure and leverage the way you build knowledge or encoding. They emerge from the first five variables operating together. What that emergence looks like, and what remains genuinely unpredictable about it, is where the model leads.
The model loops, and each cycle sharpens every variable: K deepens, A clarifies, E refines, and the compound that I and L represent widens with every cycle a competitor has not yet started running.
Time is the variable no amount of capital, talent, or technology can buy back. An investor named Michael Bloch drew the line that governs what compounds and what expires: hard to do, or hard to get? AI compresses everything that is hard to do: building software, producing content, running analysis, maintaining integrations. These were moats against the scarcity of intelligence, and intelligence is the one form of scarcity that is ending. What AI does not compress is what takes years to accumulate: proprietary data generated through operations, network density built through adoption, regulatory permission earned through compliance, and the encoded expertise that only exists because someone spent a thousand hours closing feedback loops before AI existed. If your advantage is bottlenecked by intelligence, you are on borrowed time. If it is bottlenecked by years, you are building something that lasts.
Capital, regulation, network effects, competitive response, and black swan events kill real businesses. The model does not pretend otherwise. It maps the variables you can affect while the ones you cannot are in motion, and it maximizes your position regardless of which randomness occurs. When the environment shifts in your favor, you are the only one with the infrastructure to capture it.
Ninety-five percent of businesses report zero value from AI. That failure rate is the model's thesis. If almost everyone fails at AI implementation, the five percent who figure out encoding have a multi-year compounding advantage that widens with every cycle the ninety-five percent are still failing. The failure rate is not the problem. It is the moat.
The model does not claim completeness. You can score high on all seven variables and still fail because regulation shifted, capital dried up, or a competitor with deeper pockets entered six months later. The model maps what you control. If you score above 30 on every audit in this paper and fail to gain measurable traction within twenty-four months, the model is missing a variable. That is a falsifiable claim, and the standard I hold this work to.