This past week we saw that taken to the extreme as both Anthropic and OpenAI announced the creation and funding of separate FDE/AI services companies to speed up the deployment of agentic workflows. What’s most fascinating about OpenAI’s version, called The DeployCo, is that the very companies it could kill are actually investors - consultancies like Bain, Capgemini and McKinsey & Co (Axios).
The big question is not whether these are here to stay and are needed, but what happens when customers get ramped up with the enterprise “easy button.”
That is the exact question I posed on X and it sparked quite a lively conversation (click below).
Here’s what Anthropic wrote in the announcement for its new AI services co:
Yes, Claude-powered systems tailored to each organization’s operations. This is the path that most enterprises will take, save the largest, as their CEOs and boards put pressure on how many workflows are automated and how much AI the companies are using. What is a concern over time is how locked-in these companies may become, especially in a world of compute scarcity and as these frontier labs can change pricing on a whim.
By the way, none of this is new. It’s the same old playbook rebranded. Gergely nails it here, and captures my skepticism, but regardless of what you call it, companies need help making agentic workflows actually deliver value.
Here’s a post I wrote back in 2017 looking at the Mulesoft and AppDynamics S-1 filings - Services are not a dirty word!
This is also why the “services are bad” narrative has always been too simplistic. In enterprise software, the messy last mile is often where the category gets made. MuleSoft understood this years ago. The company wasn’t just selling software; it was helping customers connect sprawling systems, prove value, and turn integration from a project into a platform. AI agents are creating a similar moment now: the product only becomes real when it is embedded into the workflows, data, permissions, and operating rhythms of the enterprise.
So what’s the takeaway for founders?
If every frontier lab is bringing FDEs and services muscle into the enterprise, startups can’t afford friction. The product either has to be dead simple to adopt, or it has to go much deeper than the labs can: into the customer’s workflows, data, permissions, edge cases, and business logic. And founders should be honest: they may need their own version of FDEs too. Call it services, solutions engineering, customer engineering, or something else, but in this market the job is to do whatever it takes to make customers successful and turn that learning into product.
“Better answers” will matter, but they won’t be enough. We are headed into a multi-model world where customers will care about quality, cost, latency, governance, and the freedom to switch. The winners will help enterprises move fast today without trapping them in yesterday’s implementation decisions tomorrow.
In AI, speed gets you in the door. Customer success earns trust. Workflow ownership and optionality are what make you durable.
As always, 🙏🏼 for reading and please share with your friends and colleagues!
#😮 PSA - I was quite skeptical of the claims, but after watching this, wow, this is extremely bad behavior and the lack of morals/creativity is shocking - I highly encourage you to watch as a founder and/or investor
#I’ve been writing about the sandwich model of agentic adoption and why showing, not telling, is super important - here Tobi from Shopify further shows how he’s created a learning org by forcing every interaction with River to be done in public so everyone can learn
#another important caveat - being remote-first has led to a documentation culture, which makes this even easier - read below 👇🏻
shopify is still a fully remote company. everything is documented, structured, categorized, and made for easy read/write. our vault, internal tooling, hell, even perf tools feel second to none. not because we nerd out over it (we do) but because it’s a necessity given how spread out the team is.
#yes, been seeing more and more founders pointing in this direction - space, the final frontier, esp. as we are more and more GPU/compute constrained and as political pressure and protests ramp up over AI data centers
#we need more than GPUs, and there is a difference between chips and architectures for training versus inference - the answer: inference, and agentic in particular - great read from Stratechery - also why I’m fired up about one of our stealth portfolio cos building a next-gen MPU (memory processing unit) to solve this problem
#how about the midwestern CIOs and AI leaders? James Kaplan, CTO of McKinsey, convened 50 CIOs/AI leaders in Chicago last week and here are some of the learnings on where they are in terms of adoption:
I spent a couple of days in Chicago this week with 50 CIOs, CTOs, and Heads of AI, from household-name enterprises across regulated, legacy-heavy industries. These weren’t AI tourists. They were the people responsible for making this stuff work.
#as I like to say, easy on, easy off - the lower the friction to get started, the harder companies need to work to lock you in with data, workflows, whatever - but in the frontier lab space, folks are jumping around a ton - even Sam Altman chimes in in the comments!