By Nick Fera, CEO of enosix | April 2, 2026
Last year, I sat in a room where a large manufacturing company’s operations team was proudly showing off its new enterprise AI quoting tool.
It was fast. It was fluent. It could give a customer a number in seconds instead of hours. Everyone was impressed.
A few weeks later, finance had a very different opinion.
The AI had been quoting prices that did not reflect current contract terms. Margins were off. Rebills had gone out. Customers were frustrated. Three people were now manually auditing every AI-generated quote before it could be trusted again.
That is not an AI success story.
That is what happens when decisioning gets decoupled from execution.
A wrong transaction generated by enterprise AI is still a wrong transaction.
That is the part of the AI conversation too many companies still want to skip.
The market is not short on enthusiasm. Deloitte reports that 85% of organizations increased AI investment over the last year and 91% plan to increase it again. But only 6% reported payback within one year, and Deloitte says returns on typical AI use cases often take two to four years to materialize.
McKinsey points to the same basic tension from a different angle: only 39% of respondents report any enterprise-level EBIT impact from AI, and most of those say that impact is still less than 5% of EBIT.
That should tell executives something important.
The problem is not that AI lacks promise. The problem is that too much enterprise AI still breaks where the business actually lives.
A model can generate an answer. When that answer is wrong, the business lives with the consequences and the resulting erosion of value.
When AI participates in pricing, quoting, service, approvals or customer commitments, it is not enough for the output to look right. It has to reflect the rules that actually govern the business: real pricing rules, real product constraints, real inventory, real approval thresholds, real contract terms.
If it is working off stale data, copied logic, or a simplified approximation of how the company runs, the output usually breaks where it matters most: in the transaction.
And once the transaction is wrong, finance sees it later in credits, rebills, disputes, delayed invoicing, manual review, and the quiet erosion of customer trust.
That downstream cleanup rarely makes it back into the AI ROI story. But it should.
Because if three people still have to verify AI before the business can act, you have not automated anything. You have just added a step.
We have seen this movie before in enterprise software: the demo wins the room, then operations and finance inherit the bill.
AI is just the latest version of this.
The better executive question is not whether enterprise AI can generate something useful.
It is whether it can improve an important process and execute correctly.
That distinction matters even more now because AI is moving out of the assistant phase and into automated business action. SAP is explicitly positioning Joule Agents as AI agents embedded across business functions, combining process expertise, enterprise data, and business-process grounding to automate complex workflows. Anthropic is publicly using the term “enterprise agents,” which tells you this is not fringe language anymore. The direction is clear: AI is moving from recommendation to action.
And that is where this gets real.
In a consumer setting, the wrong answer is annoying. In an enterprise setting, a wrong action has expensive consequences.
If an AI drafts a mediocre email, nobody loses sleep. If an AI quotes the wrong price, commits inventory that is not actually available, skips a required approval, or triggers the wrong next step in delivery, the financial consequences are significant.
Which is exactly why this cannot stay boxed inside an AI innovation team.
This is a CFO conversation now. And they should be asking harder questions.
Does this reduce manual validation, or just move it downstream?
Are exceptions going down, or are we just not measuring them yet?
Can we tie AI-driven activity to better margin, faster cash, lower operating cost, or better customer response?
Is this grounded in live operational logic, or in a stale copy of it?
Are we gaining measurable market share or competitive advantage?
If this AI acts incorrectly at scale, what does the cleanup cost?
That is a much better test than asking whether the model is exciting.
A model can be exciting and still be economically useless.
McKinsey’s latest State of AI research reinforces the point. The firms seeing stronger bottom-line impact are not simply deploying models. They are redesigning workflows and putting leadership and governance structures around AI adoption. McKinsey specifically identifies workflow redesign as a key factor in realizing value, while also showing that most organizations still are not seeing tangible enterprise-level EBIT impact from gen AI.
In other words, value is not coming from intelligence in isolation. It comes from operational rewiring.
For many enterprises, the operational truth of the business still lives in SAP: pricing logic, product logic, availability, approval rules, and transactional reality. Not a cached version of it. Not a middleware approximation. The live system of execution.
That matters because AI only becomes economically viable when its decisions and actions are grounded in the system that governs what the business can do.
When AI is grounded in that truth, the economics improve: fewer exceptions, less downstream cleanup, stronger process integrity, faster cycle times, and outcomes finance can trust.
When it is not, we’re back to impressive demos and messy books.
We are moving into an era where enterprise AI does not just recommend. It acts. The next wave of enterprise value will not come from AI that merely sounds intelligent. It will come from AI that can operate correctly inside the rules, processes, and constraints that run the business.
The companies that figure this out early will have better economics. They will reduce friction instead of just moving it somewhere else. They will create fewer downstream exceptions, less cleanup, and fewer surprises for finance.
The next wave of winners will be the ones whose AI can be trusted to execute correctly.
Because if it can’t execute correctly, it won’t pay off.
About Nick Fera
Nick Fera is Chief Executive Officer of enosix and Managing Director of enosix GmbH, where he leads the company’s strategic, operational, and financial activities globally. Before enosix, he served as CEO of Firm58 and on the board of ThirdPartyTrust, which was acquired by Bitsight in 2022. Earlier, as CEO of Parlano, he led the company’s sale to Microsoft, where the technology became part of Microsoft Teams. He holds a BS in Finance from the University of Illinois and an MBA from Northwestern University.
Learn more at www.enosix.com.
Sources for Enterprise AI That Can’t Execute Isn’t an Asset. It’s a Liability.
All sources are primary or Tier-1 independent. URLs are provided for direct verification. Secondary blogs and aggregators are excluded.
