A few days ago, Uber's CTO shared a post that made waves: "We're going back to the drawing board." In December, the company gave 5,000 engineers access to advanced AI tools. Within four months — by April — the annual budget was gone. Costs had grown 6x compared to 2024.

The headline everyone ran with was "AI is too expensive." I think that's the wrong conclusion.

AI isn't expensive. Unmanaged AI is.

What follows are three different challenges, at three different levels, that every organization will need to solve, and fast.

The Individual

It all starts at the personal level. Every employee, every day, makes dozens of small decisions: which model to use, when to start a new conversation, how many times to try a different phrasing. Each decision looks trivial. Together, they add up.

The most powerful model isn't always the right one

Developers naturally gravitate toward the most advanced tool available — it feels safe, it feels "the best." But many day-to-day tasks (code comments, simple refactors, format conversions) don't actually need the most advanced model. Using a model matched to the task can cut costs by 10x, with no loss in quality.
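
To make that concrete, here's a minimal sketch of what task-to-model matching can look like in practice. The model names and per-token prices are placeholder assumptions, not a real price list; the point is the mapping, not the numbers.

```python
# Illustrative task-to-model routing. Model names and per-1K-token prices
# are placeholder assumptions, not a real price list.

ROUTING_TABLE = {
    "code_comment":        {"model": "small-fast-model", "usd_per_1k_tokens": 0.0002},
    "simple_refactor":     {"model": "small-fast-model", "usd_per_1k_tokens": 0.0002},
    "format_conversion":   {"model": "small-fast-model", "usd_per_1k_tokens": 0.0002},
    "architecture_review": {"model": "frontier-model",   "usd_per_1k_tokens": 0.0100},
    "complex_debugging":   {"model": "frontier-model",   "usd_per_1k_tokens": 0.0100},
}

def pick_model(task_type: str) -> str:
    """Return the model matched to a task; unknown tasks fall back to the frontier model."""
    return ROUTING_TABLE.get(task_type, {"model": "frontier-model"})["model"]

print(pick_model("code_comment"))  # small-fast-model, at a fraction of the cost
```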

Stop, summarize, and start a new session

When people carry on long conversations with AI tools, it's easy to forget that every new message pulls the entire history along with it. Stopping frequently, summarizing where things stand, and starting a fresh session — whether mid-task or when switching tasks — is a small habit. But it's one of the most important ones to build.
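
Here's a rough sketch of that habit in code, assuming a chat-style API where the conversation is a list of messages. The call_model function is a stand-in for whatever provider SDK you actually use, not a real API.

```python
# Sketch of "summarize, then start a fresh session". call_model() is a
# stand-in for whatever chat API you use; it is not a real SDK call.

def call_model(messages: list[dict]) -> str:
    ...  # send the messages to your provider and return the reply text

def summarize_and_restart(history: list[dict]) -> list[dict]:
    """Collapse a long conversation into a short summary and open a new session."""
    summary = call_model(history + [{
        "role": "user",
        "content": "Summarize the task state, decisions made, and open questions as short bullets.",
    }])
    # The new session carries only the summary, so every later message pays
    # for a few hundred tokens of context instead of the whole history.
    return [{"role": "system", "content": f"Context from previous session:\n{summary}"}]
```

The exact prompt matters less than the habit: you pay for the summary once instead of paying for the full history on every subsequent message.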

You can't improve what you can't see

In most environments today, employees have no real-time visibility into the cost of their actions. Cost awareness is also a muscle that simply doesn't exist for most people: they don't see what the last call cost, how much they burned through this week, or where they stand relative to the team. This isn't a lack of willingness; it's a lack of education and a lack of measurement tools.

When you build visibility — even at a basic level, like a simple weekly dashboard — behavior changes on its own. Awareness produces judgment.
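
A weekly view really can be that basic. The sketch below assumes each AI call is already logged with a user and a dollar cost; the log format and field names are illustrative assumptions.

```python
# Bare-bones weekly cost view, assuming each AI call is already logged with
# a user and a dollar cost. The log format and names are illustrative.

from collections import defaultdict

usage_log = [
    {"user": "dana",  "usd": 0.42},
    {"user": "yossi", "usd": 3.10},
    {"user": "dana",  "usd": 1.75},
]

weekly = defaultdict(float)
for call in usage_log:
    weekly[call["user"]] += call["usd"]

for user, total in sorted(weekly.items(), key=lambda item: -item[1]):
    print(f"{user:<10} ${total:7.2f} this week")
```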

The Team Manager

The direct manager, already overloaded and carrying outsized expectations that they and their team will deliver dramatic improvements, needs to operate on several fronts.

Educate the team

Keep them current on the latest developments, build cost awareness, and encourage knowledge sharing between employees, which saves a lot of time and money.

See consumption, and make it visible

What I've seen is that most team managers have no real picture of their team's AI consumption. They know "we're using it," they might know the total budget, but they don't see the breakdown at a level that allows real decisions. When you build them a simple monthly report — broken down by developer, by task type, by trends — they start becoming genuine partners in managing it.
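
As an illustration, a report like that can be a few lines of pandas over an exported usage log. The rows, columns, and numbers below are made-up assumptions; only the shape of the breakdown matters.

```python
# Monthly report: spend per developer per task type, with a month-over-month
# trend. Rows, columns, and numbers are invented for illustration.

import pandas as pd

usage = pd.DataFrame([
    {"month": "2025-03", "developer": "dana",  "task_type": "refactor",  "usd": 12.0},
    {"month": "2025-03", "developer": "yossi", "task_type": "debugging", "usd": 48.0},
    {"month": "2025-04", "developer": "dana",  "task_type": "refactor",  "usd": 9.5},
    {"month": "2025-04", "developer": "yossi", "task_type": "debugging", "usd": 71.0},
])

report = usage.pivot_table(
    index=["developer", "task_type"], columns="month", values="usd", aggfunc="sum"
)
report["change"] = report["2025-04"] - report["2025-03"]
print(report)
```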

A simple framework beats a perfect document

A team working with a basic "which model for which task" framework — even if it only covers 70% of cases — is in a much better place than a team waiting for a detailed policy. The framework can be a single page. It can be updated quarterly. What matters is that it exists and is agreed upon.

Consistent rhythm creates culture

In teams I've seen work well with AI, there's a short recurring conversation — every two weeks, fifteen minutes — on what worked, what didn't, where waste happened, where value was created. It's not a heavy retrospective, it doesn't need a deck. It's a connection point that prevents bad habits from setting in, and lets best practices spread naturally.

The tools that matter most for a team manager are much simpler than a fancy dashboard. They're regular visibility, an updated framework, and a rhythm of reflection.

Senior Leadership

The third level is where the picture connects to the language of the organization as a whole — the language of budgets, strategy, and investment decisions.

"AI" alone isn't a budget line item

When AI spending shows up in the budget as a single line item, it appears mysterious. When it's broken down by organizational unit — team A, product X, feature Z — it becomes transparent and manageable. The breakdown isn't always technically simple, but it's the difference between "an expense that keeps growing" and "an investment whose impact I understand."
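
One common way to get there is tagging every call with the unit it serves at the point of use, then aggregating. The sketch below assumes such tags exist; the team, product, and feature names are invented for illustration.

```python
# Breaking one "AI" line item down by organizational unit, assuming every
# call is tagged with team / product / feature at the point of use.
# The tags and amounts are invented for illustration.

from collections import defaultdict

calls = [
    {"team": "payments", "product": "checkout", "feature": "fraud-summary", "usd": 210.0},
    {"team": "payments", "product": "checkout", "feature": "support-bot",   "usd": 95.0},
    {"team": "search",   "product": "catalog",  "feature": "query-rewrite", "usd": 430.0},
]

by_unit = defaultdict(float)
for c in calls:
    by_unit[(c["team"], c["product"], c["feature"])] += c["usd"]

for (team, product, feature), usd in sorted(by_unit.items(), key=lambda item: -item[1]):
    print(f"{team:<10} {product:<10} {feature:<15} ${usd:8.2f}")
```

Tagging at the point of use is the design choice that matters; retrofitting attribution after the fact is much harder.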

ROI doesn't have to be exact. It has to exist

One of the questions that troubles many leadership teams is how to quantify AI's value. The answer, in my experience, is that you don't need a perfect measurement. You need a good enough one: a breakdown by team, a breakdown by product, an understanding of where the money goes, even if the numbers are estimates. "We know there's value, but it's hard to quantify" is a very problematic answer.
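
A "good enough" estimate can be back-of-envelope arithmetic. Every number in the sketch below is a placeholder assumption; the point is that the calculation exists and is explicit about what it assumes.

```python
# Back-of-envelope ROI estimate. Every number here is a placeholder
# assumption; the point is that the calculation exists and is explicit.

monthly_ai_spend_usd = 40_000
engineers_using_ai = 200
hours_saved_per_engineer_per_month = 6     # rough estimate, e.g. from team surveys
loaded_cost_per_engineer_hour_usd = 90

estimated_monthly_value = (
    engineers_using_ai * hours_saved_per_engineer_per_month * loaded_cost_per_engineer_hour_usd
)
roi = (estimated_monthly_value - monthly_ai_spend_usd) / monthly_ai_spend_usd

print(f"Estimated monthly value: ${estimated_monthly_value:,.0f}")
print(f"Estimated ROI: {roi:.0%}")
```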

Daily oversight (!) with alerts and reports across all dimensions

An extreme spike on a single day can change a monthly or quarterly plan. You need to build tools across multiple dimensions: daily visibility into the various tools, control over anomalies without slowing down the business, forward planning aligned with growth, strategy meetings on future use, and more.
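
As one example, a daily spike alert can be as simple as comparing today's spend to a trailing average. The threshold and the numbers below are illustrative assumptions; how the alert gets delivered is up to you.

```python
# Daily spend-spike alert: flag any day far above the trailing average.
# The threshold and the numbers are illustrative assumptions.

from statistics import mean

daily_spend_usd = [1_150, 1_230, 1_080, 1_310, 4_900]  # last value is today

baseline = mean(daily_spend_usd[:-1])
today = daily_spend_usd[-1]

if today > 2 * baseline:
    print(
        f"ALERT: today's AI spend ${today:,} is {today / baseline:.1f}x "
        f"the trailing average (${baseline:,.0f})"
    )
```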

What helps leadership isn't a complex BI system. It's unit-by-unit breakdown, practical value measurement, and ongoing forecasting. Together, those three turn AI from a line item where "the answer is complicated" into one where "I have an answer."

Connecting it all

The most important thing is that all these layers are connected.

When you only work at the individual level — aware developers, real-time visibility — but the manager doesn't see the bigger picture and leadership doesn't know where value is created, improvements stay local. They'll hold for a few weeks and then fade.

When you only work at the leadership level — reports, ROI, budgets — but developers are still picking the wrong model for the task, the reports will show you exactly where the waste is without giving you any way to change it.

When you give a team manager a beautiful framework but no data and no backing from leadership, they'll try for a while and eventually give up.

This structure is like a muscle with three heads: all three need to work. Investing in only one of them isn't enough.

Where to start

If you're leading an organization that already uses AI, or thinking about starting, here are three questions that can help place your organization on the map:

Three questions for self-assessment

  1. If you asked a team manager at your company, off the cuff, to estimate how much their team spent on AI this month — how accurate would the answer be?
  2. Does your organization have a simple document answering "which model for which task"? If so — when was it last updated?
  3. If your CFO asked about the ROI of your AI spending, would there be an answer with numbers, even if estimates?

No organization answers a full "yes" to all three. That's the natural state of the market today. The question isn't where you are now — the question is whether you know where you want to get to, and what your next step is.

The organizations that will do best with AI over the next five years won't be the ones that spend the most. They'll be the ones that build organizational discipline around their AI — at the individual level, the team level, and the leadership level. That kind of discipline is built gradually, not all at once. It starts with one question, one weekly window, one measurement.

This isn't a technology problem. It's organizational work.