Encoded Founder — Chapter II
Situational Awareness
Why timing is the variable that cannot be bought back
S = Capability / Adoption × Time
When capability is high and adoption is low, the window is open.
When adoption catches up, positions lock. Time multiplies whatever gap exists.
Chapter Thesis
Situational awareness is the timing variable. S = Capability / Adoption × Time measures the window between what AI can do and who has restructured around it. Truth determines direction. Situational awareness determines whether any of it happens at the right moment or arrives after the positions have locked.
On June 1, 2009, at two in the morning over the middle of the Atlantic Ocean, an Airbus A330 carrying 228 people lost its ability to see.
Ice crystals had formed inside the pitot tubes at 38,000 feet. The airspeed indicators became unreliable. The autopilot disconnected. Automatic stall protection dropped offline. The aircraft was now being flown by hand, at night, in turbulence, over open ocean, with no reliable speed readings.
What happened in the next three minutes and thirty seconds killed every person on board. Not because the aircraft was broken. Because the pilots could not see what was happening to it.
First Officer Pierre-Cedric Bonin, thirty-two years old, was at the controls. Captain Marc Dubois was resting in the crew cabin. When the autopilot disconnected, Bonin did something that violated the most fundamental instinct taught in flight school: he pulled the side-stick back. Nose up. The aircraft climbed. The angle of attack increased past ten degrees, past sixteen, past thirty, past forty. Past that critical threshold, air separates from the upper surface of the wing. Lift collapses. The aircraft does not glide. It falls.
The stall warning activated. A synthetic voice, loud and unambiguous, repeating a single word: stall. It sounded seventy-five times during the descent.
Seventy-five times.
Bonin kept pulling back.
The second co-pilot, David Robert, took over the left seat. But the Airbus A330 uses dual side-sticks, one on each side of the cockpit, and they are not mechanically linked. Robert pushed forward to lower the nose. Bonin, simultaneously, pulled back. Neither pilot could feel what the other was doing. Each undid the other. The aircraft's own design had built a wall between two people trying to save it.
Captain Dubois entered the cockpit. The transcript from the cockpit voice recorder, recovered two years later from the ocean floor, captures what he found:
Dubois: "What the hell are you doing?"
Robert: "We've lost control of the plane, we've totally lost control of the plane."
Dubois: "No no no, don't climb. No no no."
The aircraft was descending at nearly eleven thousand feet per minute. The nose was pointed sixteen degrees above the horizon. The wings were producing almost no lift. The stall warning was still sounding.
Bonin's last recorded words: "But what's happening?"
"But what's happening?"
Bonin's last recorded words. AF447 cockpit voice recorder.
The Airbus hit the Atlantic belly-first at 2:14 AM. The impact was not survivable.
They could not see what they needed to see.
Not because they lacked information. The angle of attack was displayed. The stall warning was screaming. The altimeter was unwinding. But none of it cohered into an understanding of what was happening. They heard the alarm and did not comprehend that they were in a stall. They watched the altimeter drop and did not project that they were three minutes from the ocean. They had data. They lacked awareness.
Mica Endsley published a framework for the United States Air Force in 1995. She was studying a specific problem: why trained pilots flew functioning aircraft into the ground. She called her answer situational awareness, and it operates on three levels.
Level 1 is perception. You detect the relevant elements. You see the instruments. You hear the warning. You register the numbers. Bonin had all of it. Every alarm was sounding. Every gauge was readable.
Level 2 is comprehension. You integrate what you perceive into an understanding of what it means. Stall warning plus rising angle of attack plus falling airspeed means the wings have stopped flying. Not a sensor problem. Not turbulence. A stall. Bonin broke at Level 2. He heard the alarm. He could not synthesize what the alarm meant.
Level 3 is projection. You extrapolate what is happening now into what will happen next. At that descent rate, with that angle of attack, the aircraft hits the ocean in approximately three minutes. If the nose stays up sixty more seconds, recovery becomes aerodynamically impossible. Without comprehension, projection was unreachable. Bonin could not project a future he did not understand.
Seventy-five stall warnings. Three minutes and thirty seconds. Two hundred and twenty-eight lives. The data was screaming the answer at a volume that could not be ignored. The pilots, with thousands of hours between them, could not hear it.
In 1975, a twenty-four-year-old Kodak engineer named Steve Sasson built the world's first digital camera. He demonstrated it to management. Their response, in Sasson's own words: "That's cute, but don't tell anyone about it."
Kodak's internal research division predicted in 1979 that digital photography would overtake film by 2010. Management sat on that data for thirty-three years. Film revenue peaked at sixteen billion dollars. It felt permanent. The stall warning was sounding. In January 2012, Kodak filed for bankruptcy.
In 2005, Michael Burry began reading the actual loan-level data inside mortgage-backed securities. Documents publicly available to every bank on Wall Street. The borrowers underneath the AAA-rated tranches were defaulting at rates that made the entire structure worthless. He shorted the housing market. When the system collapsed three years later, he had made over seven hundred million dollars. The data was public, sitting in SEC filings anyone could pull. Burry did not have access the banks lacked. He read what they refused to.
The stall warning is sounding right now, not in a cockpit but in every knowledge industry on earth. The pattern is identical: the data exists, the filters are active, and the cost of inaction is invisible. For now.
I spent two years on the other side of the demo table from people with more experience, more credentials, and more at stake than I had. The pattern repeated every time, not because they were stupid but because their orientation was built for a world that no longer exists.
The Discipline of Seeing
USAF fighter pilots returning from Korea and Vietnam were asked a simple question: what separates the pilots who survive from the ones who don't?
The answers came back the same across different squadrons, different theaters, different decades. Not reflexes. Not aggression. Not the aircraft. The ability to hold the complete picture in their head and act on what was about to happen before it happened.
They called it the ace factor: the pilot who could track every aircraft, every vector, every intention simultaneously, update that picture in real time, and move on the projection a fraction of a second before the opponent moved on theirs. That pilot survived. The pilot who could not, even if faster, better trained, flying a superior aircraft, died. What killed them was inferior sight.
John Boyd never lost a dogfight. He later became the most influential military strategist of the twentieth century, and he built an entire theory of conflict around this observation: the OODA loop. Observe, Orient, Decide, Act. The loop itself was not the insight. Boyd's breakthrough was the second step: Orient.
Orientation is the mental model through which you process what you observe. When the model fits the terrain you actually operate in, orientation yields comprehension. You see the instruments screaming stall and grasp that the wings have stopped flying. You see the digital camera and grasp that it will replace film. You see the mortgage data and grasp that the entire structure is worthless.
When the model is outdated, orientation yields distortion. The observation stays accurate. The comprehension warps. Actions follow that would have been right two years ago. Catastrophically wrong today.
Boyd proved it in the air. He would start training dogfights at a deliberate disadvantage, opponent on his tail, in firing position. Within forty seconds he was on theirs. Every time. He oriented faster. He updated his model of the fight at a rate his opponent could not match. By the time they reacted to what he had done, he was already past their reaction, acting on the next projection. The gap between their loop and his widened with every cycle. He called it getting inside the opponent's loop. Once inside, the opponent is always reacting to where you were, never to where you are. The fight ends before they understand why.
Every knowledge industry is living inside that dynamic right now.
Most people in your position have not started running the loop. Not because they lack intelligence. Their orientation was built between 2015 and 2023, and the ground underneath that model shifted in 2024 and 2025 faster than anyone predicted. They observe the data. They orient through an outdated model. They decide based on distorted comprehension. They act in ways their 2022 self would applaud and their 2026 market punishes. The distortion stays invisible to the person inside it.
The most dangerous form of the pattern looks like competence: outdated expertise.
Not ignorance. Outdated expertise.
On March 27, 1977, dense fog rolled across the runway at Tenerife. Two 747s on the same strip. Captain Jacob Veldhuyzen van Zanten, KLM's chief flight instructor, the man whose face appeared in the company's advertisements, the man who had trained the people who trained other pilots, advanced his throttles and began his takeoff roll.
He did not have clearance. His flight engineer, the third man in the cockpit, the one with the clearest view of the situation, asked one question: "Is he not clear, that Pan American?"
Van Zanten: "Oh, yes."
Two words. He was wrong. The Pan Am 747 was directly in his path, invisible in the fog. Five hundred and eighty-three people died. The flight engineer had Level 2 comprehension. He understood the situation. Van Zanten, the most experienced pilot in the cockpit, had Level 1 perception filtered through a model that could not be wrong. He had clearance. The runway was his. The schedule demanded departure. The model held his identity together. Updating it would mean admitting a mistake.
So five hundred and eighty-three people died instead.
The Israeli military built its entire strategic posture on the same kind of belief. They called it Ha-Konseptzia, The Concept: Egypt would not attack until it acquired long-range aircraft capable of striking Israeli airfields. Eleven separate warnings arrived in September 1973. Lieutenant Colonel Ya'ari suggested that Egyptian exercises near the Suez might be real preparations for war. Ya'ari was reprimanded. The officer who saw it correctly was punished by the system that could not afford for him to be right. On October 6, Egyptian forces crossed the Suez Canal. 2,700 Israeli soldiers died in nineteen days. Eleven warnings. Not an intelligence failure. A Level 2 collapse at the institutional level, where the cost of being right exceeded the cost of being surprised.
So the institution chose surprise.
The consultant who built a seven-figure practice billing four hundred dollars an hour carries a mental model in which her expertise is the product. Data showing that AI with her encoded methodology can deliver at sixty percent of her quality for two hundred dollars a month never reaches her. If it did, the model her entire career rests on would need to be rebuilt. So the model stays.
The agency founder with twelve employees carries a mental model in which headcount equals capability. Data showing that a single person with encoded expertise and AI infrastructure can produce equivalent output at one-tenth the cost does not update anything. Updating the model means looking at twelve people and seeing six layoffs.
Both of them have Level 1. They have perceived the data, read the articles, attended the conferences, heard the predictions. Perception is not the bottleneck. The bottleneck is the space between Level 1 and Level 2, the orientation phase where observation gets processed through a model calibrated to conditions that shifted eighteen months ago. The data enters. The model distorts it. What comes out protects today at the expense of tomorrow.
Boyd's framework predicts what follows: when you are inside someone else's loop, the advantage compounds with every cycle they have not started running.
The formula measures exactly that gap:
The Formula
In December 2025, an agency founder in Dallas sat down on a Saturday morning with an AI coding tool. Twelve employees. $100,000-a-month operation. He was not planning to restructure his business. He wanted to see what the fuss was about.
By Sunday evening he had built, alone, in a weekend, a working version of the client deliverable his team charged $15,000 a month to produce.
The quality was not perfect, but close enough, at two hundred dollars a month.
He cancelled a hiring round on Monday. Could not sleep for three days. Six of his twelve employees did work that a system costing two hundred dollars a month could now handle. Same quality tier. Ninety-five percent lower cost. He was not alone. Across the industry that month, agency owners were having the same weekend. Davidson tracked it from inside the market. The founders who did the math came out looking like they had seen a ghost.
S = Capability / Adoption × Time
Capability, in this formula, means what the technology did on a Saturday in a founder's living room, to real work, for a real business, at a price point that made the existing model indefensible.
Davidson's field data across agencies, consultancies, and service businesses puts domain-specific capability at fifty to sixty percent. Not for every task. For the pattern execution that constitutes the bulk of billable work: the methodology you have run two hundred times, the deliverable format you could produce in your sleep. That layer is now replicable at a fraction of the cost. And AI with fully codified expert knowledge produces output 206 percent better than AI alone. Raw AI is not the capability that matters. AI combined with the expertise of the person who did the work before AI existed is.
I booked 300 demos with real estate agents and local businesses in early 2024, pitching AI and automation. Every single prospect tried to compare it to something they already knew. "That's like Go High Level." "That's like that software." No. It is completely different. People do not have a comparison for AI because they have never seen anything like it. Their minds go blank. That blankness is the adoption denominator sitting at near-zero. Not resistance. Not skepticism. A total absence of the mental model required to process what they were seeing.
Sutskever's observation holds: "These models generalize dramatically worse than people." Capability is real but uneven. In structured, repeatable work (legal document review, content production, data analysis, customer support), it runs at sixty to eighty percent and climbs weekly. In novel problem-solving, strategic judgment, the kind of contextual read that requires having been in the room when the deal shifted, it hovers near zero. The forty percent AI cannot touch becomes the most valuable layer in the market. You encode that layer. The model begins there.
DIAGRAM — S FORMULA COMPONENT BREAKDOWN
S = Capability / Adoption × Time
| COMPONENT | WHAT IT MEASURES | CURRENT STATE | DIRECTION |
|---|---|---|---|
| Capability (Numerator) | What AI can actually do to real work in your domain | 50–60% of pattern execution tasks. 206% better with encoded expertise. | ▲ Rising |
| Adoption (Denominator) | Who has integrated AI into operations (rebuilt, not merely tried) | 88% tried. 6% saw material impact. Gap: 82 points. | ▲ Starting to move |
| Time (Multiplier) | How fast the window is closing | Compressing faster than any prior cycle. 3–5 year estimate → 5 months actual. | ▼ Compressing |
When capability is high and adoption is low, S is large. The window is open. Time compresses the window.
McKinsey surveyed nearly two thousand organizations across 105 countries in 2025. Eighty-eight percent had adopted AI in at least one business function. The headline number means almost nothing, because only six percent qualified as high performers generating material impact on their earnings. Eighty-eight percent tried. Six percent restructured. The gap between installing a tool and rebuilding a business around it stretches eighty-two percentage points wide.
That gap is the denominator. Adoption means restructured by AI, and almost nobody qualifies. When almost nobody in your industry has achieved real integration (not a ChatGPT subscription, not a pilot program that produced a nice report and got shelved), the denominator is small. S is large. The window stays wide open.
It closes when real adoption reaches critical mass in a given industry. No knowledge industry has hit that threshold yet. But the denominator has started to move.
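The arithmetic is simple enough to run yourself. Below is a minimal sketch in Python, using the capability and adoption figures cited in this chapter; the window-remaining term for Time and the six-month trajectory are illustrative assumptions, not measurements (the convergence section later puts the window at one to three years):

```python
# Toy S-score: S = (Capability / Adoption) x Time.
# Capability and Adoption are fractions of an industry's pattern-execution
# work; Time is years of window remaining. Illustrative numbers only.

def s_score(capability: float, adoption: float, years_remaining: float) -> float:
    """Size of the open window between what AI can do and who has restructured."""
    return (capability / adoption) * years_remaining

# Figures cited in this chapter: ~50-60% capability, ~6% real adoption.
# The 2.0-year window is a hypothetical midpoint of the 1-3 year range.
today = s_score(capability=0.55, adoption=0.06, years_remaining=2.0)

# Six months on: capability climbs, adoption doubles, the clock runs down.
# Hypothetical trajectory, consistent with the direction column above.
later = s_score(capability=0.65, adoption=0.12, years_remaining=1.5)

print(f"S today:        {today:.1f}")   # ~18.3
print(f"S in 6 months:  {later:.1f}")   # ~8.1 -- the window more than halves
```

The point of the sketch is the shape, not the digits: every term moves against you at once, so S falls faster than any single trend suggests.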
Every major technology cycle in the last century has followed the same adoption curve, each faster than the one before, compressing roughly tenfold from end to end. Electricity took sixty years to restructure industry. The automobile took fifty. Television, forty. Personal computers, thirty. The internet, twenty. Smartphones, ten. Each adopted faster because the previous technology laid the distribution infrastructure for the next.
AI is adopting faster than smartphones, which put a computer in every pocket. The cloud put compute at every endpoint. The rails were already built.
DIAGRAM — TECHNOLOGY ADOPTION COMPRESSION TIMELINE
Each Cycle Faster Than the Last: Roughly 10× Compression Across the Century
| TECHNOLOGY | YEARS TO RESTRUCTURE INDUSTRY | COMPRESSION FACTOR (VS. ELECTRICITY) |
|---|---|---|
| Electricity | ~60 years | — |
| Automobile | ~50 years | 1.2× |
| Television | ~40 years | 1.5× |
| Personal Computer | ~30 years | 2× |
| Internet | ~20 years | 3× |
| Smartphones | ~10 years | 6× |
| AI | Faster than smartphones | >10× |
Each adopted faster because the previous technology created the distribution infrastructure for the next.
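The compression column is just each cycle's restructuring time divided into electricity's sixty years. A few lines reproduce it; the five-year figure for AI is a placeholder, since the table above commits only to "faster than smartphones":

```python
# Compression factors relative to electricity's ~60 years, as in the table.
years = {
    "Electricity": 60, "Automobile": 50, "Television": 40,
    "Personal Computer": 30, "Internet": 20, "Smartphones": 10,
    "AI": 5,  # placeholder: the table says only "faster than smartphones"
}
baseline = years["Electricity"]
for tech, y in years.items():
    print(f"{tech:>17}: {y:>2} years, {baseline / y:.1f}x compression")
```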
The compression is already visible. Davidson expected the agency restructuring to take three to five years. It took five months. The per-seat SaaS model, the financial architecture underneath every software company in the world, came under direct attack in a single quarter. Time is the multiplier that makes the gap matter. When the estimate compresses, and every piece of evidence from the last twelve months says it is compressing, S shrinks faster than anyone inside the system expects.
The pattern the formula captures is simple: high capability, low adoption, the window is open. Adoption catches up: the window closes. Time multiplies whatever gap exists. And time is compressing.
The Ground Right Now
On February 3, 2026, Anthropic published a GitHub repository. Approximately 2,500 lines of plain text: prompt instructions for Claude Cowork's legal plugin. No new model. No new API. No breakthrough architecture. Just a document explaining, in natural language, how to make an existing AI system perform legal work.
Two hundred and eighty-five billion dollars was repriced out of software stocks in a single trading session.
Not over weeks. Not in a slow bleed that gave executives time to draft memos and hold board meetings and hire consultants. In hours. Thomson Reuters dropped 15.83 percent, its largest single-day decline ever. Every company in the legal, real estate, and project management stack dropped double digits. The market did not wait for quarterly earnings. It read the repository, understood what it meant, and repriced the entire sector before the trading day ended.
A plain-text file on GitHub did what years of AI hype could not: it made Wall Street do the math.
When I moved from pitching local businesses to an enterprise AI firm, I expected the conversations to change. The vocabulary improved. The confusion was identical. Business owners doing $100K a month used the same mental models as real estate agents making forty thousand a year. The noise had given them better words for the same blindness.
The math was straightforward. If an AI agent can perform legal document review, financial reporting, pipeline management, and customer support at a commercially viable quality level, and if the instructions for making it do so are 2,500 lines of English that anyone can read, copy, and deploy, then the per-seat software model that generates revenue for every enterprise SaaS company on earth is broken. AI agents do not use seats. An agent managing your CRM, drafting your contracts, and analyzing your financials does not need three subscriptions at a hundred and thirty dollars per seat. It needs API access. A different product at a different price running on different economics entirely.
By mid-February the damage was approaching two trillion dollars. Salesforce pivoted to what it called "Agentic Enterprise License Agreements," charging ten cents per autonomous action instead of fifty dollars per human seat. For the first time in the history of the index, software as a sector traded at a discount to the S&P 500.
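The break-even underneath that pricing shift is one division. A sketch using the figures quoted above; the action volumes are hypothetical:

```python
# Rough break-even between per-seat and per-action pricing, using the
# $50-per-seat and $0.10-per-action figures quoted above.
seat_cost = 50.00    # dollars per human seat per month
action_cost = 0.10   # dollars per autonomous agent action

breakeven = seat_cost / action_cost
print(f"Actions per month to match one seat: {breakeven:.0f}")  # 500

# An agent replacing five seats breaks even at 2,500 actions a month,
# roughly 115 per working day; below that volume, the customer pays less
# than the seats cost while the same work still gets done.
actions = 2500
print(f"5 seats: ${5 * seat_cost:.0f}/mo vs {actions} actions: ${actions * action_cost:.0f}/mo")
```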
Capability had just met adoption in real time. The numerator proved it could replace functions generating trillions of dollars in SaaS revenue. The denominator barely twitched. Most of the companies losing market value had not restructured anything. Their customers were beginning to reduce seats, not because they left the platform, but because they deployed agents that did the work five humans used to do. The function still happened. The CRM still got updated. The reports still ran. The human just stopped sitting in the middle.
On the other side of the same equation, a different kind of proof was running.
Nat Eliason gave an AI agent called Felix one thousand dollars and told it to run a business. Within weeks it had generated approximately $195,000 in revenue. Zero human employees. Running on a two-hundred-dollar-a-month subscription. Maor Shlomo solo-built Base44 to $3.5 million annual revenue in six months, then sold it to Wix for eighty million dollars.
Solo founders across industries generated millions in annual revenue with no team. The pattern repeated with such regularity that the outlier became the proof.
Twelve employees, $100,000 a month in overhead, revenue that stops when the team stops. Compare: one person with encoded expertise and an AI stack. Same quality tier. A fraction of the cost.
Amodei, asked when we will see the first billion-dollar company with a single employee, said 2026. Seventy to eighty percent confident. GitHub Copilot now writes forty-six percent of all code for active users, and eighty-eight percent of AI-generated code stays in the final version. ChatGPT has passed the US Medical Licensing Exam, the Bar Exam at the ninetieth percentile, and the CPA exam.
Every market has a performed version and a real version. The distance between the two is where careers go to die.
The creator economy generates $250 billion in annual revenue. The performed version. The real version: four percent of creators earn six figures. The rest stand at the bottom of a cliff, looking up at a number that was never meant for them.
Most high-ticket sales run through buy-now-pay-later financing. Students going into debt to purchase courses about making money. Completion rates sit between five and fifteen percent. Nothing is stored. Nothing compounds. Revenue stops when the founder stops.
Fifty-five percent of companies that made AI-driven workforce reductions reported regretting the decision within twelve months. Seventy-four percent reported measurable quality degradation. The automation worked. The layer it automated was the wrong one. Volume metrics masked the deeper failure underneath.
Run the formula against what you just read.
Capability: fifty to sixty percent across most knowledge industries and climbing. Opus outperforms human engineers. Copilot writes nearly half of production code. AI agents run businesses and collect revenue with no human in the loop. The numerator is large and growing.
Adoption: genuine restructuring at six percent. The denominator is small.
Time: compressing. Davidson expected three to five years. It took five months. $285 billion was repriced in a single trading session after a text file posted to GitHub exposed the underlying vulnerability, and the repricing kept spreading. Every technology adoption cycle in the last century has run faster than the one before it, compressing roughly tenfold from electricity to smartphones. Every piece of evidence from the last twelve months says AI extends that compression.
The window is open. The window has a clock. And most of the people reading this are still running a mental model built for a market that stopped existing sometime in the last ninety days.
The Noise Flood
Warren Buffett reads five hundred pages a day. Annual reports. Financial statements. Industry analyses. Almost nothing else. He lives in Omaha, not New York, specifically to maintain physical distance from the noise of the financial center. His information diet is radically narrow and radically deep. He does not track the feed. He tracks the fundamentals. He has compounded at a rate that everyone who tracks the feed has failed to match.
Watch what he filters out.
Open X right now. Scroll for thirty seconds. What you see feels like a room full of people talking. Most of those voices are not human. The majority of traffic on X comes from bot networks, at volumes no one in cybersecurity had measured before. The majority of all web traffic is now non-human. Alexis Ohanian said it publicly: "The dead internet theory is real."
AI-generated YouTube channels with no human creator, no human editor, no human voice collectively reached billions of views and millions in revenue. Platforms deleted the content. It grew back. Voice cloning crossed what researchers call the indistinguishable threshold. Human detection of high-quality deepfake video sits at 24.5 percent, worse than a coin toss. You cannot tell what is real. Neither can anyone else in the room. Trust in mass media hit twenty-eight percent in October 2025, the lowest number Gallup has ever recorded.
Shannon proved mathematically why this matters: as noise overwhelms signal, the channel does not degrade gracefully. It collapses. You either reduce the noise or increase the signal quality beyond what the noise can drown. No third option exists.
The human brain processes roughly eleven million bits per second through its sensory channels. Conscious attention handles approximately fifty. Fifty bits. In a channel where most content is synthetic and most traffic is non-human, the channel has broken. No amount of human attention can repair it. The physics forbid it.
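Shannon's result can be stated in one line. For a channel of bandwidth $B$ and signal-to-noise power ratio $S/N$ (standard Shannon notation, not this chapter's S-score), the Shannon-Hartley theorem gives the maximum rate of reliable communication; applying it to an attention channel is this chapter's analogy, not Shannon's claim:

$$C = B \log_2\!\left(1 + \frac{S}{N}\right)$$

The coding theorem is the cliff: at any rate below $C$, arbitrarily reliable communication is possible; above $C$, errors are unavoidable no matter the effort. As synthetic content drives $S/N$ toward zero, $C$ falls with it, and a message rate that once fit under the ceiling no longer does.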
Buffett's approach hints at the answer. You do not find signal in an ocean of noise by consuming more or being smarter. You change what you filter for.
Nassim Taleb published a data table in Fooled by Randomness that most people skim past without grasping what it means. A portfolio with a 15 percent annual return and 10 percent volatility shows a positive return 93 percent of the time when observed yearly. Observed monthly: 67 percent. Daily: 54 percent. Hourly: barely 51 percent, which means the movement you see is almost pure noise. The more frequently you check, the more noise you consume. The data does not grow more accurate with more observation. It grows less.
Every feed you check works the same way. The person who reads the field once a month and acts on what they find gets more signal than the person who checks hourly. The hourly checker feels more informed. They are more exposed to noise.
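The scaling behind Taleb's table is a single line of probability. If annual returns are Gaussian with mean mu and volatility sigma, the chance of observing a gain over a horizon of t years is Phi(mu * sqrt(t) / sigma), which slides toward a coin toss as t shrinks. A sketch using the figures above:

```python
import math

def p_positive(mu: float, sigma: float, t_years: float) -> float:
    """P(return > 0) over horizon t, for Gaussian annual returns.
    The mean scales with t and the standard deviation with sqrt(t),
    so the z-score mu*sqrt(t)/sigma shrinks as the horizon shortens."""
    z = mu * math.sqrt(t_years) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, sigma = 0.15, 0.10  # Taleb's example portfolio
for label, t in [("yearly", 1.0), ("monthly", 1 / 12),
                 ("daily", 1 / 252), ("hourly", 1 / (252 * 8))]:
    print(f"{label:>8}: {p_positive(mu, sigma, t):.1%} positive")
# yearly ~93%, monthly ~67%, daily ~54%, hourly ~51% -- mostly noise
```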
Frequency is only one lever. The deeper problem is identifying which signals deserve action when the entire channel is flooded with plausible-sounding content generated at zero cost. Four filters survive the flood. Each operates independently. Together they produce a system that no volume of synthetic content can overwhelm.
The skin-in-the-game filter. One question: what does this person lose if they are wrong? Someone who encoded their expertise into infrastructure that serves clients has maximum skin in the game. If the encoding is bad, the business fails. The anonymous account posting AI predictions on X has zero. Follow the risk. Ignore the commentary.
The proof-of-work filter. Three signals survive AI generation because they are expensive to fake. Lived experience with falsifiable details that can be checked against a timeline. Staked money or reputation on outcomes, with a public record of predictions made before the events they predicted. And original frameworks built from personal failure, correction, and direct experience that do not appear in any training dataset.
The time filter. If the information is still relevant in a week, it might be signal. Still relevant in a month, it probably is. Still relevant in a year: durable. Anything that loses relevance within twenty-four hours was noise dressed as urgency. The SaaSpocalypse did not happen in a tweet. It built over twelve months of accumulating capability, deploying agents, reducing seat counts, and repricing infrastructure, then became visible in five trading days. The signal was twelve months long. The noise was five days loud.
The action filter. The simplest and most ruthless. Does this information change what you would do tomorrow morning? If a piece of content, however well-written, however many likes, however many replies, does not change your next action, it is entertainment. Entertainment is noise that feels productive. The test is not "did I learn something." The test is "did I change something."
DIAGRAM — THE FOUR NOISE FILTERS
What Survives the Noise Flood
| FILTER | THE TEST | IF IT PASSES | IF IT FAILS |
|---|---|---|---|
| Skin in the Game | What does this person lose if they are wrong? | Business, reputation, money at stake | Commentary. Ignore. |
| Proof of Work | Lived experience? Falsifiable timeline? Original frameworks from failure? | Signal that survives AI generation | Synthetic. Discard. |
| Time | Still relevant in a week? A month? A year? | Durable signal | Urgency noise. Expires in 24h. |
| Action | Does this change what you do tomorrow morning? | Operational signal. Act. | Entertainment disguised as insight. |
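Operationally, the four filters reduce to a conjunction: a source is signal only if it clears every test. A toy sketch; the field names and the all-must-pass rule are illustrative assumptions, not a prescription from the chapter:

```python
from dataclasses import dataclass

@dataclass
class Source:
    loses_something_if_wrong: bool       # skin in the game
    has_falsifiable_track_record: bool   # proof of work
    relevant_in_a_month: bool            # time
    changes_tomorrows_action: bool       # action

def is_signal(s: Source) -> bool:
    """Keep only what passes all four filters; each operates independently."""
    return all([
        s.loses_something_if_wrong,
        s.has_falsifiable_track_record,
        s.relevant_in_a_month,
        s.changes_tomorrows_action,
    ])

anon_ai_take = Source(False, False, False, False)
operator_postmortem = Source(True, True, True, True)
print(is_signal(anon_ai_take), is_signal(operator_postmortem))  # False True
```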
A five-year archive is the proof of work that survives the noise flood, the signal that remains above the rising noise floor when everything synthetic sinks below it. Truth and awareness are sequentially dependent: you cannot read the terrain clearly through a signal you have corrupted. You cannot build on ground you cannot see.
The noise will not decrease. The flood is permanent, driven by economic incentives, algorithmic optimization, and the near-zero marginal cost of generating content. What decreases is the number of people who can hear through it.
The Convergence
Most of what you hear about AI is noise.
Apply the four tests. The majority of people posting about AI on X and LinkedIn are selling courses about AI, not building businesses with it (skin in the game: zero). Most "AI experts" who appeared in 2023 have no track record of building anything before the hype cycle started (proof of work: absent). The average AI take has a half-life shorter than the tweet it was posted in (time: fails). Most AI content does not change what anyone does the next morning (action: fails). Engagement without outcome.
If you stopped at the counter-evidence, you would conclude that AI is a slightly more expensive version of blockchain. Wrong conclusion, but at least evidence-based. Seventy-two percent of CIOs reported breaking even or losing money on AI investments. Companies abandoned AI initiatives at more than double the rate of the previous year. MIT Technology Review published their assessment under the headline "The Great AI Hype Correction" and declared it a year of reckoning.
The constraints are real. Hallucination cost businesses an estimated $67.4 billion globally. Sutskever's observation that models "generalize dramatically worse than people" carries weight because it is measured, not marginal.
Institutional consensus reinforces the skepticism. A Nobel laureate, the ILO, the World Economic Forum, Forrester, and multiple federal agencies converge on the same message: AI augments rather than replaces. Net job creation is positive. No mass displacement is visible in any labor projection through 2030. If your question is whether AI eliminates jobs, the data says no.
But this paper asks a different question. Not employment. Competitive position and pricing power. A knowledge worker can keep their job while losing all differentiation and margin. Employment data does not measure margin compression. The consultant who still has clients but bills half what she billed three years ago is employed. The agency that still operates but competes against a single founder producing equivalent output at one-tenth the cost is employed. Whether you have a job is not the issue. Whether what you know is compounding or commoditizing is.
Stop here and you would conclude that AI is an overhyped technology cycle following the same arc as every bubble before it. You would be wrong. Not because the bubble is fiction. Because the bubble is not the whole picture.
The first quarter of 2026 told a different story.
On February 5, Anthropic released Claude Opus 4.6 with agent teams that split larger tasks into coordinated, autonomous jobs. On the same day, OpenAI shipped GPT-5.3-Codex. Four major frontier models launched in twenty-three days. MCP, the Model Context Protocol that defines how AI agents connect to external tools, was donated to the Linux Foundation with OpenAI, Anthropic, Microsoft, Google, AWS, and Bloomberg as founding members. It is now the de facto standard.
Venture capital and corporate spending poured hundreds of billions into AI infrastructure in Q1 alone. OpenAI raised $110 billion, the largest private venture round in history. Anthropic's revenue grew from eighty-seven million to a nineteen-billion-dollar run-rate in twenty-two months, seventy to eighty percent from enterprise customers. Not consumers. Not developers. Enterprises paying real money for real outcomes.
None of these are predictions. They happened in the last ninety days. The hype cycle is real. The grift is real. The failures are real. And underneath all of it, measurable in revenue, in market repricing, in product launches, in the actual behavior of the companies building the technology and the enterprises paying for it, the shift is also real.
The convergence is not six people agreeing that AI is exciting. Six people working independently, in different fields, with different methodologies and different incentives, arrived at the same conclusion despite the noise, despite the hype, despite the failures.
Aschenbrenner sees it through compute scaling. His analysis is mathematical: count the orders of magnitude, extrapolate the trendlines, project the timeline. "AGI by 2027 is strikingly plausible." Not because he believes in the hype. Because he trusts the graph. Amodei, who is building what may be the most powerful AI system on earth, calls it the adolescence of technology and warns that the wealth concentration "will break society." The man building it is scared of what he is building.
Sutskever tracks the constraints. The scaling era is over. Current models generalize dramatically worse than humans. Yet his conclusion arrives at the same endpoint: what compounds now is not compute but domain knowledge, feedback loops, context architecture. Harris, who warned about social media in 2017 and watched every prediction arrive on schedule, identifies the only viable path as a narrow one: the danger is not that AI fails but that it succeeds without constraints.
Davidson measures it in the market. Solo founders doing eight-figure revenue with zero employees. "The business that can produce equivalent output at one-tenth the cost will eventually win. Not because it's morally better. Because the math is the math." Naval sees the economics underneath: no demand for average, the bell curve hollows out, individuals with more leverage than anyone in economic history.
Different priors. Different data. Different fears. Same window.
One to three years.
Aschenbrenner: "Right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness." He is not wrong about the number. He may be wrong about the location.
The Window
Everything above is terrain. Now it gets personal.
There is a version of this moment that lets you feel informed without feeling implicated. You read the data. You understood the framework. You saw the convergence. You learned the filters. You can explain situational awareness at a dinner party now.
And nothing about your business changes Monday morning.
Level 1. Perception without comprehension. The data entered your field of vision but did not enter your operating model. You consumed the chapter the way you consume a thread on X: felt something, engaged, forgot. Half-life: twenty-four hours. By next week, background noise again.
Letting that happen is the most expensive decision you will make this year.
S is not static; it decays. Run the formula for your industry right now. Then run it again for six months from now. Capability climbs weekly. Real adoption accelerates as the companies that saw the Cowork launch restructure, as the solo founders who built in Q1 start compounding cycles, as enterprise customers deploy agents into the workflows that human seats used to occupy. Time compresses. The formula produces a different number every month, and for anyone not actually encoding expertise into systems that compound (not reading about building, not attending conferences about building, but encoding), the number goes down.
Aschenbrenner, Davidson, and Amodei converge independently on the same point: every month of lead matters. The inflection between cheap cycles and expensive ones is invisible until you have passed it.
Nobody in the AI discourse is saying this next part, because the discourse is optimized for engagement, not for the person sitting in the chair with fifteen years of expertise and a business model that used to work.
The same conversations I had in January 2024 with real estate agents, I had in January 2026 with enterprise executives. Better suits. Better vocabulary. Same fog. Two years. Eight hundred demos. The cycle repeated with different words and the same gap. People have started catching up in the last two to three months (Claude going mainstream, bigger companies restructuring), but the pattern holds. The window is still open. It will not stay open forever.
You are not behind. You are early.
This paper is being written inside the window it describes. The model was built in real time, not in retrospect.
My model was built in 2024. I do not have fifteen years of orientation to protect. The advantage has nothing to do with intelligence. It comes from recency.
The domain expert with five to fifteen years of genuine expertise is the person this shift was built for. AI without encoded expertise produces generic output that anyone with a subscription can replicate. AI with encoded expertise produces output that carries your judgment, your methodology, your contextual understanding, which is the difference between a $200-a-month commodity and infrastructure that an industry runs on.
A thousand hours of real work done before AI existed is a thousand hours of raw material waiting to be encoded. The consultant who spent a decade learning which questions to ask when a client's voice changes has something no training dataset contains. The agency founder who knows from scars, not frameworks, which deliverables carry the weight and which are decoration has architecture no prompt can replicate. Specific knowledge. The frequency only you can broadcast on. Worth more now than it has ever been worth in the history of professional work, because for the first time it can be encoded into systems that compound it.
The trap is not that your expertise is worthless. Your expertise feels like a liability because the ground shifted and the old delivery model is breaking. Courses collapse. Communities churn. Agencies get undercut. The vehicle is dying. The cargo, what you actually know, is the most valuable asset in the new market. But only if you transfer it out of the dying vehicle and into the new one before the vehicle stops entirely.
Van Zanten's expertise was not the problem. His refusal to update was. His flight hours, his pattern recognition, his instinct for the aircraft: all of it real, all of it valuable, all of it became the instrument of catastrophe because he would not let the incoming data change his model. The expertise was the asset. The mental model was the filter. The filter killed 583 people.
Your expertise is the asset. Your business model, the one built between 2015 and 2023, the one that produced real results in the old environment, is the filter. The data in this chapter is the flight engineer asking: "Is he not clear, that Pan American?"
You can dismiss it. You can wait. You can tell yourself you will figure it out when you need to. Van Zanten told himself the same thing. The pattern does not vary.
Or you can update the model. Not abandon your expertise but encode it. Not discard what you built but transfer it into infrastructure that compounds. Your S is high precisely because you have the knowledge that makes encoding valuable and the territory has not yet been captured. Time is compressing. It has not run out. But if someone started encoding in January and ran six cycles by July, you are not facing a six-month gap. You are facing the accumulated output of six cycles that fed each other.
Bloch's filter applies here directly: the window described in this chapter is not just about when to start but about what to build while it remains open. Build what is hard to get (the proprietary data, the network density, the regulatory permission, the encoded expertise) because everything that is merely hard to do is already being compressed to near-zero.
The widest gaps right now, measured by the ratio of capability to adoption: legal services, where AI proved it could perform contract review and regulatory analysis while real adoption across a four-hundred-fifty-billion-dollar market remains near zero. Accounting and financial advisory, where AI has passed the CPA exam and most firms still bill by the hour. Real estate, where three million agents operate with almost no encoded systems. Healthcare administration, where the scheduling, billing, and compliance layer runs on spreadsheets and phone calls. Construction project management, where a two-trillion-dollar industry stores its field knowledge in the heads of foremen who are retiring faster than they are replaced. Not predictions. Current gaps, visible in publicly available capability and adoption data.
Whether you have what it takes is not in question. Whether you move before the formula answers for you is.
S reads the terrain. Without it, every other variable in the system arrives too late.
When S collapses, the multiplication chain collapses with it. You built a perfect system for a world that no longer exists.
Seventy-five stall warnings. The data was screaming. The formula was running. The instruments were available. The only thing missing was the willingness to read them and act.
Update the model. Or keep pulling back on the stick.
The terrain is mapped. The window is measured. What remains is what you carry through it.
Chapter III: Knowledge — the raw material that makes encoding possible.
The S-Score diagnostic and assessment prompts for this chapter are in Appendix C.