In the Trenches with 50 Midwest CIOs: Agents, Context, Costs, and Real Enterprise AI.
May 09, 2026
I spent a couple of days in Chicago this week with 50 CIOs, CTOs, and Heads of AI from household-name enterprises across regulated, legacy-heavy industries. These weren’t AI tourists. They were the people responsible for making this stuff work.
And the biggest takeaway was simple: despite everything we read online, enterprise agents are still very, very early and there is still so much to build!
When the room was asked who felt they had agents operating at scale, no one raised a hand. In a smaller agentic workflow breakout, only 5 of 25 said they had agents in production at all. The concerns were consistent: security everywhere, not just in the CISO office; ROI that is still hard to measure; legacy systems that must be modernized before AI can truly scale; unclear governance and ownership models; and rising questions around vendor claims, data readiness, build-vs-buy decisions, and agent lifecycle management. The demand is real, the executive urgency is real, but the production operating model for enterprise agents is still being invented in real time.
Another huge issue: many enterprises did not even know where to start because the underlying work is not well documented. Before you can deploy agents, you need to understand the actual workflows, handoffs, approvals, exceptions, and systems of record. And once those processes are mapped, the answer should not be to blindly automate them. Many of these workflows need to be rethought, simplified, or reengineered first. That creates a big opportunity for tools that help enterprises discover how work really gets done, decide what should change, and only then bring AI and agents into the flow.
Even once you've mapped the work, the next problem hits fast: cost. One leader described a pattern where a tiny sliver of users was burning the majority of tokens. That's how AI finops becomes a real discipline overnight - budgeting, routing, usage controls, the works.
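For concreteness, here is a minimal sketch of the kind of usage control that finops discipline implies: a per-user token budget plus a concentration check that surfaces the small group burning most of the tokens. Everything in it (class name, caps, the 80% share threshold) is illustrative, not any vendor's API.

```python
from collections import defaultdict

class TokenBudget:
    """Illustrative per-user token tracker, not a real product API."""

    def __init__(self, monthly_cap: int):
        self.monthly_cap = monthly_cap
        self.usage = defaultdict(int)  # user -> tokens consumed this month

    def record(self, user: str, tokens: int) -> bool:
        """Record usage; return False once the user blows past the cap."""
        self.usage[user] += tokens
        return self.usage[user] <= self.monthly_cap

    def top_spenders(self, share: float = 0.8) -> list[str]:
        """Smallest set of users accounting for `share` of all tokens."""
        total = sum(self.usage.values())
        ranked = sorted(self.usage.items(), key=lambda kv: kv[1], reverse=True)
        picked, running = [], 0
        for user, tokens in ranked:
            picked.append(user)
            running += tokens
            if running >= share * total:
                break
        return picked

budget = TokenBudget(monthly_cap=1_000_000)
budget.record("power_user", 900_000)
budget.record("casual_user", 50_000)
print(budget.top_spenders())  # the tiny sliver driving most of the bill
```

Even a toy tracker like this makes the pattern from the summit visible: rank users by spend and the curve is brutally top-heavy, which is exactly why budgeting and usage controls show up overnight.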
Uber is well ahead of most enterprises here, and even Uber is feeling it. As the company has gotten more agent-pilled, the token bill has gotten ugly.
If I combine Uber’s comments with what I heard from the more mature organizations at the summit, it feels pretty clear that AI cost management is going to become a Tier 1 enterprise pain point over the next 12 months.
For many enterprises, it will start with the “easy” button to get their workers agent-pilled using Claude or Codex. Eventually, though, the likely answer is not “use the best model for everything”: reserve SOTA models for the highest-value work, then route other tasks to cheaper models, smaller models, older generations, or more deterministic systems depending on the use case and required quality.
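That tiered-routing idea fits in a few lines. The tier names, quality scores, and per-token prices below are made up for illustration; real routing would key off evals and actual pricing, not hardcoded numbers.

```python
# Illustrative model router: send each task to the cheapest tier
# that clears its required quality bar. All numbers are hypothetical.
TIERS = [  # ordered cheapest-first
    {"name": "small-open-model", "quality": 0.6, "usd_per_1k_tokens": 0.0002},
    {"name": "mid-tier-model",   "quality": 0.8, "usd_per_1k_tokens": 0.002},
    {"name": "frontier-model",   "quality": 0.95, "usd_per_1k_tokens": 0.02},
]

def route(required_quality: float) -> str:
    """Pick the cheapest tier whose quality meets the task's bar."""
    for tier in TIERS:
        if tier["quality"] >= required_quality:
            return tier["name"]
    return TIERS[-1]["name"]  # nothing clears the bar: fall back to SOTA

assert route(0.5) == "small-open-model"  # bulk, low-stakes work
assert route(0.9) == "frontier-model"    # highest-value tasks
```

The design point is the ordering: because the list is cheapest-first, the router only escalates spend when the task's quality bar forces it to.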
This is also why companies like Atlassian are talking so much about Rovo, context, and owning that layer. Michael Cannon-Brookes’ point from the earnings call was exactly this: if you understand the work, the people, the permissions, the tickets, the docs, and the workflows, you can make AI more useful and more cost-efficient. The more organized and business-aware the context, the less waste you feed into the model, the better the outcome, and the more manageable the token bill.
Here’s Boris Cherny, Head of Claude Code, on why context matters in the messy enterprise and why ServiceNow’s pitch is landing: organize the dependencies, data, and workflows in one platform, reduce complexity, and feed models cleaner context. With 7T transactions and 100B workflows, that can mean better outcomes and lower token costs.
Finally, there was real frustration with Microsoft and other large vendors. As pricing shifts toward consumption, enterprises feel less predictability and more walled-garden pressure. No one wants to rip out SAP overnight, but the cost of accessing and moving their own data is starting to grate. Increasingly, many just want to treat these ERPs as headless systems of record. Once again, the opportunity for startups is going to be huge in the coming years.
We are still early on enterprise agents. But the next race is already clear.
For most non-tech enterprises, the winners won’t be the ones with the best model. They’ll be the ones that know where the work lives: the workflows, the context, the governance, the cost controls.
The frontier labs have the intelligence. They don’t have the enterprise.
That’s the gap. And it’s where the next decade of value gets built.
As always, 🙏🏼 for reading and please share with your friends and colleagues!
#not your usual AI layoff email from Brian at Coinbase with a focus on speed and startup mode - more importantly, orgs have to rethink how work is done in an AI-native world: “We’re rebuilding Coinbase as an intelligence, with humans around the edge” - max 5 layers, no pure managers (player-coaches only), AI-native pods + one-person teams
#breakdown of YC’s latest class (h/t Robert Scoble) - lots of infra building for agents, not humans
#no better time to start a company - the best technical talent has an abundance of capital to access…or they can join a frontier lab and get paid a ton and have zero direct reports 🤣
#great read on agents and harnesses and so agree with this point
Frontier closed models are far too expensive for the large majority of tasks the world needs to do. As teams start mapping costs to ROI, Open Model Harness Engineering will take off even more. It is almost always worth the investment to at least try to get a potential 20x+ cost reduction.
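To make the 20x+ figure concrete, here is a back-of-envelope comparison. Every number below (prices, monthly volume) is hypothetical, not actual list pricing for any model.

```python
# Back-of-envelope token-cost comparison (all numbers hypothetical).
frontier_price = 15.00  # $ per 1M output tokens, hypothetical frontier model
open_price = 0.60       # $ per 1M output tokens, hypothetical hosted open model

monthly_tokens = 2_000_000_000  # 2B tokens/month across an enterprise

frontier_bill = monthly_tokens / 1_000_000 * frontier_price
open_bill = monthly_tokens / 1_000_000 * open_price

print(f"frontier: ${frontier_bill:,.0f}/mo, open: ${open_bill:,.0f}/mo, "
      f"ratio: {frontier_bill / open_bill:.0f}x")
```

At these assumed prices the gap is 25x, which is why even a harness that only matches frontier quality on a subset of tasks can pay for its own engineering quickly.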
#ServiceNow wants to own this whole stack - won’t happen, but either way, I highly recommend reading the full ServiceNow analyst day deck - a master class in enterprise scale and the opportunity for AI
#prescient - read the Satya report above as it explicitly discusses the need for “owned intelligence” and “Creating those systems requires a disciplined approach to holding humans accountable for the work that agents execute.”