
Enterprise AI

I build enterprise AI infrastructure. Here's what nobody wants to talk about.

Right now, every AI coding tool you use is being sold to you at a loss.

GitHub Copilot, Claude, ChatGPT, Cursor. The companies behind these tools are burning through billions trying to win market share. Your $20/month subscription doesn’t come close to covering the compute costs of running these models. Everyone in the industry knows this. Nobody’s talking about what happens next.

I build AI infrastructure and developer tooling at a Fortune 500 company. I’m not anti-AI. I spend my days helping engineering teams integrate AI into their products. But I’m seeing a pattern that should worry anyone responsible for technology strategy, and I think we need to have an honest conversation about it.

The pricing correction is coming

We’ve seen this movie before. AWS was absurdly cheap in the early days. So was Salesforce. Every platform company follows the same playbook: subsidise aggressively, build switching costs, then adjust pricing once you’re locked in.

The difference this time? The subsidy gap is massive. These companies are operating at billions in losses. When the correction comes, it won’t be a polite 10% annual increase. It’ll be structural.

Think about what that means for your P&L. If your engineering organisation has 500 developers each paying $100-200/month for AI tooling, that’s manageable. What happens when that number moves to $500/month? $1,500? $3,000?

And here’s the pricing model risk nobody’s modelling: right now it’s per seat. But coding agents are effectively fractional developers. The moment vendors can prove ROI at that level, why wouldn’t they shift to value-based pricing? If an AI agent saves you a $150k developer, charging $30-50k for it is still a “bargain” on paper. Your three-year cost projections probably don’t account for any of this.
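To make the arithmetic above concrete, here is a minimal sketch of the two pricing scenarios. All figures are illustrative assumptions taken from the numbers in this post, not actual vendor pricing:

```python
# Hypothetical cost model for AI tooling spend. Every figure here is an
# illustrative assumption from the text, not real vendor pricing.

def annual_seat_cost(developers: int, monthly_per_seat: float) -> float:
    """Annual spend under today's per-seat pricing."""
    return developers * monthly_per_seat * 12


def annual_value_cost(agents: int, per_agent_annual: float) -> float:
    """Annual spend if vendors shift to value-based, per-agent pricing."""
    return agents * per_agent_annual


# 500 developers at a ~$150/month midpoint of today's pricing:
today = annual_seat_cost(500, 150)            # $900,000/year

# The same organisation after a structural correction to $1,500/month:
corrected = annual_seat_cost(500, 1_500)      # $9,000,000/year

# Value-based scenario: 100 agents at $40k each, still "a bargain"
# on paper against a $150k developer:
value_based = annual_value_cost(100, 40_000)  # $4,000,000/year

print(f"per-seat today:      ${today:,.0f}")
print(f"per-seat corrected:  ${corrected:,.0f}")
print(f"value-based:         ${value_based:,.0f}")
```

The point of the sketch isn’t the specific numbers; it’s that the gap between the first line and either of the other two is large enough to be a board-level conversation, not a line-item adjustment.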

The skills problem is the real risk

Pricing you can budget for. Skills attrition you can’t easily reverse.

Here’s what I’m watching happen in real time. Junior developers are shipping code they don’t fully understand. They’re getting working solutions from AI tools without building the mental models of why those solutions work. They’re not learning to debug properly because the agent does it for them. They’re not developing the architectural thinking that comes from struggling through problems.

This isn’t their fault. The tools are genuinely brilliant and it’s rational to use them. But we’re creating a generation of developers who may not be able to function without AI assistance.

Now combine that with natural attrition. Your senior engineers who actually understand the systems are retiring or moving on. The institutional knowledge of how your applications actually work is walking out the door. And the people replacing them have never had to build that understanding from scratch because they’ve always had an agent to lean on.

Fast forward three years. Your vendor announces a 10x price increase. You look at your options and realise you can’t credibly threaten to leave because your team literally cannot develop without these tools anymore. That’s not a negotiating position. That’s a dependency.

What responsible adoption actually looks like

None of this means stop using AI tools. That would be stupid. These tools are transformative and the productivity gains are real.

But going all-in without guardrails is just as reckless.

A few things I think every technology leader should be doing right now:

Model for price increases in your forecasts. Don’t budget for current pricing. Budget for 5-10x increases over three years. If it doesn’t happen, great. If it does, you’re not scrambling.
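One simple way to turn that advice into a number: spread an assumed end-state multiplier geometrically over the three-year horizon. The multipliers below are the 5–10x assumptions from the text, not a forecast:

```python
# Illustrative three-year budget sketch under price-increase scenarios.
# The 5x and 10x multipliers are assumptions from the text, not forecasts.

def three_year_budget(current_annual: float, multiplier: float) -> list[float]:
    """Apply a total price multiplier as even (geometric) yearly growth."""
    yearly_growth = multiplier ** (1 / 3)
    return [round(current_annual * yearly_growth ** year) for year in (1, 2, 3)]


baseline = 900_000  # e.g. 500 seats at $150/month
for multiplier in (1, 5, 10):
    print(f"{multiplier}x scenario:", three_year_budget(baseline, multiplier))
```

Budget against the pessimistic row; treat anything under it as found money.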

Maintain core competency. Rotate junior developers through projects where AI assistance is deliberately limited. Make sure they can actually write, debug and architect software. Treat it like a fire drill. You hope you never need it but you practise anyway.

Track your dependency. Know exactly how much of your codebase was AI-generated. Know which teams can function without these tools and which can’t. That data matters when renewal negotiations come around.
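There is no standard tooling for this yet, so the measurement has to come from a convention you introduce yourself. As one hypothetical approach, if teams agreed to mark AI-assisted commits with a trailer (the `Assisted-by:` name below is an invented convention, not an existing standard), the dependency share is a one-liner over your commit history:

```python
# Hypothetical dependency metric, assuming teams mark AI-assisted commits
# with an "Assisted-by:" trailer -- an invented convention, not a standard.

def ai_assisted_share(commit_messages: list[str]) -> float:
    """Fraction of commits whose messages carry the assumed trailer."""
    if not commit_messages:
        return 0.0
    assisted = sum(
        1 for msg in commit_messages
        if any(line.startswith("Assisted-by:") for line in msg.splitlines())
    )
    return assisted / len(commit_messages)


# Usage: feed in full commit messages, e.g. from
#   git log --format=%B%x00
# split on the NUL separator.
```

Whatever convention you pick matters less than picking one before renewal negotiations start, so the trend line exists when you need it.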

Build optionality. Don’t standardise on a single AI vendor. Keep your tooling flexible enough that you can switch providers or reduce usage without everything falling over.

Invest in your people. The organisations that will weather a pricing correction best are the ones where engineers understand their systems deeply, with or without AI help. That knowledge is your insurance policy.

The uncomfortable question

I’m genuinely optimistic about AI-assisted development. I wouldn’t be building infrastructure for it if I wasn’t. But optimism without risk management is just hope. And hope isn’t a strategy.

The question I’d love to see more technology leaders asking is simple: if our AI tooling costs went up by 10x tomorrow and our most experienced engineers had already left, what would we actually do?

If you don’t have a good answer to that, it might be time to start building one.