A field report on AI proficiency in 2026. The dominant "how to use AI" framing has changed six times in forty months. No course can ship in less time than its techniques take to obsolete themselves. The only path to fluency is to use the tool for everything. Every claim sourced.
The pace is the entire problem. METR's task-horizon paper from March 2025 measured how long a task a frontier AI agent could complete autonomously, and found that the horizon was doubling roughly every seven months. SWE-bench Verified, the most-watched coding benchmark, gained roughly sixty percentage points between its August 2024 launch and April 2026. In the single calendar year of 2025, the umbrella term that practitioners used for "how to work with AI" changed several times. Prompt engineering. Building effective agents. Vibe coding. Vibe engineering. Each had a four-to-seven-month run as the canonical framing, then was replaced or folded into the next. None of those terms now describes the workflow that the head of Claude Code uses to ship twenty-two production pull requests in a day.
Two years ago, prompt libraries were everywhere. PromptBase, FlowGPT, the awesome-prompts repos on GitHub, the courses that promised to teach you the "right structure" of a prompt. There was a job title on LinkedIn called Prompt Engineer. Most of that has gone quiet. The libraries are unmaintained, the marketplaces have stopped growing, and the job listings have dried up. Modern frontier models made the techniques those libraries were built around obsolete by being good enough out of the box. That is the cadence. That is the problem.
A structured course takes nine to eighteen months to develop, certify, and roll out. Frontier models change, fundamentally, faster than that. The development cycle of any "AI training" curriculum is longer than the half-life of the techniques it would teach. By the time the course ships, the regime it describes has aged out. This is not a problem better training programs would solve. It is a problem nobody can solve, because the thing being taught is moving an order of magnitude faster than the act of teaching. There is no stable body of knowledge to transfer. There is only the tool.
The argument of this report is in two halves. First, the capability has moved dramatically in the past twelve months. Earlier drafts of this issue used 2024 figures and were out of date by exactly the amount the field has moved since. Second, and harder: the only path to fluency, given the pace, is to use the tool constantly. Make AI write every line of code, including the one-character changes. Make it draft every email, produce every analysis, summarize every document. Build the workflow. Ship the work. Let your sense of the model's capability update in real time, faster than any curriculum could ship. The roughly four-percentage-point productivity advantage that Anthropic measured in March 2026 for high-tenure users (after controlling for model, language, geography, and use case) is the only direct empirical signal we have of what time-on-tool buys you. The rest is vendor claims, LinkedIn opinions, and guesses dressed in slides.
The METR follow-up, published February 2026, is itself a perfect demonstration. The same outfit whose July 2025 study found experienced developers were nineteen percent slower with AI re-ran the study with a second cohort and the year's newer models. They could not produce a clean answer. The confidence intervals straddled both signs. Their stated belief is that developers are now sped up, and they are redesigning the experiment because the world moved faster than their methodology could measure. The leading research group on AI productivity is, in early 2026, telling you that the field they study is moving faster than they can study it. Believe them, and act accordingly.
The story of AI in 2026 is a curve that is moving faster than any methodology built to study it. Two complementary measurements (SWE-bench Verified scores and METR's task-horizon) both show roughly the same thing: exponential capability gain.
SWE-bench Verified, the most-watched coding benchmark, has gone from a top score around 25–30% at its August 2024 launch to 87.6% in April 2026. METR's parallel measurement of how long a task an AI agent can complete autonomously shows a roughly seven-month doubling time. Two different yardsticks, the same shape of curve. The chart pairs them.
Three independent signals tell the same story. Stack Overflow's 2025 survey: 84% of working developers use or plan to use AI tools; 51% of professional developers use them daily. Anthropic's Economic Index: employee AI use at work doubled from 20% in 2023 to 40% in mid-2025. GitHub Octoverse 2025: 80% of new GitHub developers use Copilot in their first week; the number of public repos using an LLM SDK grew 178% year over year.
There's a new kind of coding I call 'vibe coding', where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.
Two complementary pictures of the daily use of AI in 2026. The first shows what people actually do with it, and how the people who use it most use it differently. The second shows why the umbrella terms describing "how to use it" turn over every five months, faster than any course can ship.
A treemap of Claude.ai conversation breakdowns from Anthropic's January 2026 Economic Index, with a panel below comparing how high-tenure users (six months or more on platform) behave versus new users. The differences are small in any single category but consistent in pattern: experienced users iterate more, command less, and hand over more of the rules-heavy work to the model.
A timeline of the dominant umbrella term that practitioners used to describe "working with AI" between January 2023 and April 2026. Each had a four-to-seven-month run as the canonical framing, then was replaced or merged. The cadence is faster than the development cycle of any structured curriculum, which is the central, mechanical reason that AI training programs ship obsolete. The course doesn't keep up because nothing keeps up.
No published study compares structured AI training against unstructured heavy use. There is no head-to-head RCT. The honest empirical answer is: nobody knows. The signals we do have all point in the same direction. Fluency tracks tenure, frequency, and pattern of use, not pedagogy.
A grid of dot-pair plots. Each panel compares two groups on a single measured axis, every dataset taken from a public study or a vendor-released analytics report. The pattern is consistent: people who use AI more, in more contexts, for longer, use it differently and (on the metrics we can observe) more successfully. The causal arrow cannot be settled with this data (selection effects are real), but the correlation is everywhere.
Investment in enterprise generative AI ran to $30–40 billion through mid-2025 with a 95-percent null-result rate (MIT NANDA). Anthropic's January 2026 Economic Index estimates that AI could contribute 0.7–2.6 percentage points of annual productivity growth over the next decade. That is a range, not a forecast. The earlier numbers are unchallenged. The newer numbers are the first hint that capture is starting to show up at the macro level.
AI tools amplify existing expertise. The more skills and experience you have as a software engineer the faster and better the results you can get from working with LLMs.
Plotting predictions against measurements across the public studies of 2025–2026 reveals a wide and honest range. Capability gains are real. Productivity gains are real-but-conditional. Anyone offering a single number is selling something.
Every public, methodologically defensible measurement of AI's productivity impact this report could verify, plotted on a single axis. Predictions cluster between +24% and +39%. Measurements span from a 19% slowdown to a 55% lab-task speedup, with vendor-reported and self-reported numbers landing high and randomized-controlled numbers landing all over. The range is itself the finding.
METR's task-horizon doubling implies that the autonomous-task duration of frontier AI agents grows about ten-fold every two years. The chart plots the measured points (March 2025: ≈1 hour at 50% success; February 2026: ≈12 hours, second-hand) and extends the trend three more years. Whether or not the trend holds exactly, the implications of even a slower-doubling version are dramatic. Marked explicitly as projection, not measurement.
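The "about ten-fold every two years" figure is just the seven-month doubling compounded. A minimal sketch of that arithmetic, assuming the doubling period holds exactly; the function name and constants below are illustrative, not METR's:

```python
# Back-of-envelope check of the "about ten-fold every two years" claim.
# Assumption (from the text above): the autonomous-task horizon doubles
# roughly every seven months. Everything here is illustrative arithmetic.

def horizon_multiplier(months: float, doubling_months: float = 7.0) -> float:
    """Growth factor on the task horizon after `months`, if the doubling trend holds."""
    return 2 ** (months / doubling_months)

print(round(horizon_multiplier(24), 1))  # ~10.8x over two years -> "about ten-fold"
print(round(horizon_multiplier(36), 1))  # ~35.3x over the chart's three-year projection window
```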
A field whose canonical practices have a five-month half-life cannot have a curriculum, by construction. The development cycle of any "AI training" course is longer than the lifespan of any technique it would teach. Anyone selling a curriculum is selling something that ships obsolete. This is not a critique of training as a category. It is a statement about what is teachable, and "how to use a tool that doubles in capability every seven months" is not.
If you have a budget for AI capability-building, spend it on tools, GPU credits, and protected hours for your team to use AI on real work. The single empirical signal we have for fluency development, namely Anthropic's roughly four-percentage-point high-tenure advantage from March 2026 (after controlling for model, language, geography, and use case), does not come from a course. It comes from people who have been using the tool every day for six months. The "curriculum" is your own shipped work. Skip the certificate.
Make AI write every line of code, including the one-character changes. Make it draft every email. Make it produce every analysis, summarize every document, write every test. Especially when typing it yourself would be faster. Typing it yourself builds nothing. Volume is the only documented input to fluency. The friction of switching is the literacy gap. Refuse to do anything the old way, and the gap closes for you while it grows for everyone else.
Field Report No. 2 in an irregular dispatch from Tristan Chiappisi, an engineer who works in data, builds in the AI space, gives talks about both, and writes things down when the data points somewhere interesting. The writing is set in Source Serif 4 and Inter Tight. There are no advertisements, sponsored sections, or affiliate links.
Every numerical claim in this issue is sourced to a 2025 or 2026 primary or near-primary source. Where data was thin, the claim is omitted; where projections appear, they are explicitly labeled. The earlier draft of this issue used 2024 figures and was retired for that reason.
If you want to know more, or have me speak, reach out on LinkedIn.