Where is the Impact of AI on the Economy?

Three explanations for the paradox — ordered by how soon you'll feel them.

Every day, my newsfeed presents the same paradox. On one side: breakthroughs in large language model capabilities that are, at times, difficult to believe. On the other: no measurable impact on GDP and mixed outcomes on productivity. Both are true, and the tension between them is confusing.

From what I've observed, there are three possible explanations — and they're ordered by how soon you'll feel them.

Ugly Fruit

There are tasks that could be automated in nearly every aspect of your life. The problem has never been capability. The problem is that the return on automating most of these tasks was lower than the cost of the time it would take to do so. Worse, as you become more proficient at your job — and thus better able to identify what could be automated — the value of your time increases. The economics rarely work out.

These tasks are ugly fruit. The kind that's perfectly fine to eat but not attractive enough to be worth picking, because it won't sell at the grocery store. Before AI coding agents, the orchard was full of fruit that was too ugly. Nobody could afford to harvest it.

The cost-benefit analysis has changed.
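The arithmetic behind that change is simple. A minimal sketch in Python — every number here is a hypothetical illustration, not a figure from the text:

```python
def automation_payoff(task_minutes: float,
                      runs_per_week: float,
                      horizon_weeks: float,
                      hourly_value: float,
                      build_hours: float) -> float:
    """Net value of automating a recurring task.

    Positive: the automation pays for itself over the horizon.
    Negative: your time is better spent elsewhere.
    """
    time_saved_hours = task_minutes / 60 * runs_per_week * horizon_weeks
    return (time_saved_hours - build_hours) * hourly_value

# A 5-minute chore, twice a week, over a year, for someone whose
# time is worth $80/hour, with a 10-hour build cost:
by_hand = automation_payoff(5, 2, 52, 80.0, 10.0)   # negative: not worth it

# Same chore, but an AI agent shrinks the build cost to 15 minutes
# of supervision:
with_agent = automation_payoff(5, 2, 52, 80.0, 0.25)  # positive: worth it
```

The only term AI touches is `build_hours` — but driving that term toward zero flips the sign on an entire orchard of tasks at once.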

Most of my personal LLM usage has been directed at exactly these tasks: small personal tools that make my life marginally better. They cost me almost no time and almost no money. They were built as background processes, with me providing feedback whenever I had a few minutes — usually just reiterating the plan and the desired outcome.

One particularly fun example: a relative upgraded their 3D printer and gave us their old one. It sat on the floor of my office for months. Setting it up properly — learning 3D modeling, researching best practices, figuring out the software toolchain — would have been leisure time spent tinkering alone in my office rather than time spent with my family. It wouldn't have made sense to work on it.

So I pointed Claude Code at it and went about my day.

The initial results were unimpressive, but after handing Claude a few photos it course-corrected and eventually produced what we wanted: an adapter that sits on top of a Duplo brick with Lego-compatible pegs on top. Looking up the API documentation, splicing together models, learning a bit of 3D modeling — that would have cost me more time than I'd ever get back. Using Claude Code? It cost me no time and a few dollars in inference to build a small Python module for generating all sorts of bricks. My four-year-old couldn't be happier, and my ten-month-old will appreciate it when he's a bit older.
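The module Claude built isn't reproduced here, but a minimal sketch of the kind of geometry such a module has to compute might look like the following. The stud pitch and diameter are the real, published LEGO dimensions; the function and constant names are my own invention:

```python
# Real LEGO dimensions: studs sit on an 8 mm grid with a 4.8 mm diameter.
# Duplo uses exactly twice the pitch, which is why the two systems mate.
LEGO_PITCH = 8.0      # mm between stud centers
STUD_DIAMETER = 4.8   # mm
DUPLO_PITCH = 16.0    # mm; one Duplo stud cell spans a 2x2 LEGO stud area

def stud_centers(cols: int, rows: int, pitch: float = LEGO_PITCH):
    """(x, y) centers for a cols x rows grid of studs, centered on the origin."""
    x0 = -(cols - 1) * pitch / 2
    y0 = -(rows - 1) * pitch / 2
    return [(x0 + c * pitch, y0 + r * pitch)
            for r in range(rows) for c in range(cols)]

# An adapter covering a single Duplo stud cell gets a 2x2 LEGO grid on top:
top_studs = stud_centers(2, 2)
```

The centers would then feed whatever mesh or CAD back end generates the printable model (OpenSCAD, cadquery, or similar) — which is exactly the toolchain research I was happy to delegate.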

A 3D-printed Lego-compatible adapter sitting on top of a Duplo brick, with a Lego minifigure standing on it.

You might look at that and think it's economically trivial. Individually, it is. But the aggregate value of a thousand ugly fruits picked for nearly nothing is not trivial at all — it's just invisible. Nobody tracks the value of a dad's 3D-printed Lego adapter or the hundred small automations that made someone's week five percent smoother. Most productive AI activity happening right now is not in the arena of high-stakes, high-return business and science. It's smoothing out edges in our lives and handling the things we always wanted to do but simply couldn't justify the time.

Tools That Fit the Hand

No group has adopted AI as quickly or intensely as software engineers. That's not a coincidence. AI was developed by engineers and has turned out to work exceptionally well for engineers. The people building these models and the tools around them have a deep, intuitive understanding of the needs and expectations of people like themselves. They do not have that understanding for nurses, attorneys, teachers, or most other careers.

You see this pattern across the history of software. An engineer builds a small internal tool, shares it with colleagues, and before long there's a git repository with over a thousand stars and everyone on Twitter is insisting you're "NGMI" unless you switch to Pangoro. Meanwhile, products built for healthcare, law, and education are rarely enjoyable to use. The deeper and more specialized the task, the more alien and broken the software feels.

Consider electronic health records. The healthcare industry spent decades and billions of dollars attempting to digitize clinical paperwork. The result is that physicians now spend roughly two hours on documentation for every one hour of patient care — not because the technology failed, but because it was built by engineers who understood databases, not exam rooms. Epic and Cerner optimized for billing and regulatory compliance, not for the experience of a doctor mid-examination clicking through seventeen dropdown menus.

Have you ever held a game controller that was too small, or a handle too large to fully grasp? It's properly made, but it wasn't made with your hands in mind. The same thing happens with software. The only way to close that gap is constant communication with users, rapid iteration on their feedback, or embedding domain experts directly in development teams. All of which takes time.

If it took decades to digitize forms — a relatively well-understood problem — how long will it take to build AI tools that genuinely fit the hands of every profession? That gap between builder and user is the gap AI products must now cross in every industry.

Soon™

The speed at which you can write code does not closely correlate with how quickly you can build a product people want. Somewhere along the way, people got the idea that the act of writing code was the great bottleneck in productivity and growth. It was not. The bottleneck was, and remains, that we don't actually understand the problems we're trying to solve — and even when we can properly identify a problem, we rarely know the best way to solve it.

User feedback helps with the former. People can tell you what doesn't work for them. But they cannot tell you what will. I first encountered this idea over a decade ago. In the early-to-mid 2010s, game developers were formalizing their craft, and Mark Rosewater — the head designer for Magic: The Gathering — put it cleanly: your audience is good at recognizing problems and bad at solving them.

Between an AI breakthrough and its appearance in GDP, there is a pipeline, and every stage takes time:

1. A problem must be identified by a large enough group of people that a customer base exists for a solution. This alone can take years. Many problems are invisible to everyone except the people suffering from them, and those people often lack the vocabulary or the platform to articulate what's wrong.

2. The problem must be understood and documented well enough to communicate to software engineers who don't work in that domain. This is harder than it sounds. Translating "our intake process is a nightmare" into a technical specification requires both sides to learn a bit of the other's language.

3. The engineers must intuit, from that imperfect understanding, the few correct solutions from among dozens or even hundreds of possible approaches. Most of those approaches will be wrong. The good ones are rarely obvious.

4. The software must be iterated on — reshaped, molded to the group's actual workflows, preferences, and edge cases. This is not a single pass. It's months or years of feedback loops.

5. Finally, the product needs to be marketed, deployed, and adopted. Employees need training. Organizations need time to change habits. Even excellent software can take years to reach the people it was built for.

Each of these steps resists acceleration. AI can compress the coding, but it cannot compress the learning, the trust-building, or the organizational change. I expect it will take a couple of years — starting from roughly fall 2025 — for this pipeline to produce visible, measurable results.

In the meantime, the paradox will only get louder. We will hear, more and more insistently, that LLMs and emerging techniques are breaking new ground, saturating old benchmarks, and enabling rapid progress on certain tasks. And in the very same newsfeed, we'll hear that nothing has happened yet.

Both will be true — for now.