There is a pattern we have seen repeat itself across multiple clients in the last two years. An organisation invests in an AI tool, runs a pilot that produces impressive results in a controlled environment, and then watches the initiative stall when it tries to move from pilot to production. The technology worked. The deployment did not.

The failure point is almost never the AI model. It is the architecture underneath it. Specifically, it is the absence of a clear answer to three questions that enterprise architecture exists to answer: Where does the data come from? How does the AI output connect to the systems that act on it? Who governs the decisions the system makes?

When those questions have not been answered before a pilot starts, the pilot succeeds in a sandbox. When it tries to connect to live data, live systems, and real decision-making processes, the gaps become blockers.

The Integration Problem

AI models require data. In a production environment, that data lives in multiple systems, often built at different times, by different vendors, using different data standards. A credit risk model needs customer data from a CRM, transaction data from a core banking system, and bureau data from an external provider. Connecting those three sources in a controlled, auditable, governed way is not a data science problem. It is an integration architecture problem.
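To make the integration problem concrete, here is a minimal sketch of what "connecting three sources in an auditable way" can mean in practice: every feature fed to the model carries the name of the system it came from. All names here (`FeatureRecord`, the field names, the source payloads) are illustrative assumptions, not any specific vendor's schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FeatureRecord:
    """One model input, with per-field provenance for audit purposes."""
    customer_id: str
    features: dict = field(default_factory=dict)
    sources: dict = field(default_factory=dict)   # feature name -> source system
    assembled_at: str = ""

def assemble_features(crm: dict, core_banking: dict, bureau: dict) -> FeatureRecord:
    """Merge three source payloads into one auditable feature record.

    Each feature records the system it came from, so any decision can
    later be traced back to its inputs.
    """
    record = FeatureRecord(customer_id=crm["customer_id"])
    for source_name, payload in (("crm", crm),
                                 ("core_banking", core_banking),
                                 ("bureau", bureau)):
        for key, value in payload.items():
            if key == "customer_id":
                continue
            record.features[key] = value
            record.sources[key] = source_name
    record.assembled_at = datetime.now(timezone.utc).isoformat()
    return record
```

The point of the sketch is not the merge logic, which is trivial, but the provenance map: that is the part a sandbox pilot typically omits and a regulated production system cannot.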

Organisations that have invested in integration over time, whether through an enterprise service bus, an API gateway, or a modern integration platform, have a material advantage when it comes to AI deployment. The data pipelines they need already exist, or can be extended. Organisations that have not made that investment find themselves building integration infrastructure under time pressure, usually badly, while the AI team waits.

We have seen this play out directly. A financial services client came to us with a well-designed ML model for credit decisioning. The data science was solid. What was missing was any mechanism to reliably feed the model live data from their core banking system and return the output to the loan origination workflow in real time. Building that integration layer took longer than building the model. It would have been much faster if the integration architecture had been designed as part of the AI programme from day one.

The AI model is the visible part. The integration, governance, and data architecture underneath it determine whether the visible part ever reaches production.

The Governance Problem

AI systems make decisions, or they support people making decisions. Either way, those decisions need to be governed. Who approves the model? Who monitors its performance over time? Who is accountable when the model produces an output that causes harm? How is the model updated, and what approval process does an update require?

These are not compliance questions, although compliance is part of the picture. They are operational governance questions. In a financial services context, the Prudential Authority has clear expectations about model risk management. In a healthcare context, there are clinical governance standards that apply. In any context, an AI system that operates without clear ownership and accountability is a liability that will eventually produce a problem nobody is equipped to manage.

Enterprise architecture provides the framework for answering these questions before deployment, not after an incident. That framework includes defining the data lineage so you know where every input comes from, the audit trail so you can reconstruct any decision the system made, and the monitoring infrastructure so you know when the model's performance has degraded.

The Scalability Problem

Pilots are designed to work. They use curated data, controlled conditions, and a small user group who are motivated to make the tool succeed. Production environments are none of these things. The data is messier. The load is higher. The users are less motivated and less technically comfortable. The edge cases that did not appear in the pilot appear constantly in production.

Architecture that is designed for a pilot rarely survives contact with production at scale. Building it properly from the start costs more time upfront and saves multiples of that time in rework. This is not a new observation. It applies to every enterprise software deployment. AI deployments are not exempt.

The organisations that have scaled AI successfully share a common trait: they treated the architectural design as a first-class deliverable, not an afterthought. The model and the architecture were designed together, by people who understood both.

What This Means Practically

If you are planning an AI programme, enterprise architects should be part of the design process from the beginning, not brought in after the data scientists have designed the solution. The questions they ask are different, and they are no less important.

Those questions include: What systems need to integrate with this? What is the data model, and how does it map to our existing master data? What are the latency requirements for real-time inference? Where will the model run, and what does that mean for data sovereignty? How will the model output be consumed by downstream systems? Who owns the governance of this system once it is live?
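One lightweight way to enforce that list is to treat it as a pre-build review record that must be complete before the programme proceeds. The field names below are assumptions for illustration, not a standard; they simply mirror the questions above.

```python
# Hypothetical pre-build architecture review: every field must have an
# answer before build approval. Names mirror the questions in the text.
REQUIRED_ANSWERS = [
    "integrating_systems",      # which systems must connect to this
    "data_model_mapping",       # how features map to existing master data
    "latency_requirement_ms",   # real-time inference budget
    "hosting_and_sovereignty",  # where the model runs, data residency
    "downstream_consumers",     # which systems consume the model output
    "governance_owner",         # who owns the system once it is live
]

def unanswered_questions(answers: dict) -> list:
    """Return the architecture questions still missing an answer."""
    return [q for q in REQUIRED_ANSWERS
            if not str(answers.get(q, "")).strip()]
```

A gate this simple will not design the architecture for you, but it makes the absence of a design visible before money is spent, rather than after.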

Answering these questions before building saves significant rework. It also changes which tools you select, which cloud services you use, and which vendors you engage. Those are upstream decisions. Making them after the model is already built creates constraints that are expensive to undo.

The best AI programmes we have worked on were ones where the enterprise architect and the data scientist were in the same room from week one. The worst were ones where the architecture was considered a deployment concern, not a design concern. The outcomes of those two approaches are not comparable.