My rough expectation for AI code assistance is that, at a high level, we’ll see a 10-20% productivity boost in coding. Importantly, this number is a conjecture I hope to test, not a firm prediction.

But the distribution is likely to have fat tails: incredible speedups for some activities, especially greenfield projects, and disappointing results, even productivity losses, in others, especially poorly documented legacy projects, niche languages, and the like. I suspect the METR study essentially captured the “sad” part of the distribution.

Crucially, a coding productivity boost will not automatically improve overall velocity. Coding is just one part of the software development lifecycle; requirements gathering, testing, deployment, and other process gates also need to speed up to fully realize velocity gains.

Of course, accurately measuring productivity improvements is devilishly hard in the best of circumstances. AI hype, confirmation bias, and Goodhart’s Law make it much harder still.

The path forward starts with experimentation and measurement, with a healthy dose of skepticism along the way. The next step is to identify and clear the other bottlenecks in the lifecycle.

I imagine the next year or so, as we start to put some rigor around AI coding practices, will be pivotal. It should be interesting to see how the developer experience changes over time.
