Walk into any boardroom in Johannesburg, Lagos, or Nairobi right now and you will hear the same question: "What is our AI strategy?" It is a reasonable question. But it is usually the wrong starting point, and the answers it produces are often equally wrong.

Organisations that begin with "what AI tools should we use?" tend to end up with a collection of expensive pilots, a few impressed executives, and no measurable improvement to operations or revenue. Organisations that begin with "where in our business would an AI-driven capability be genuinely difficult for a competitor to replicate?" end up somewhere completely different.

The distinction matters more in African markets than anywhere else. Implementation costs are real. Connectivity constraints are real. Data quality problems are real. The cost of a failed AI programme is not just financial. It sets back internal appetite for AI for the next three years.

The Question Worth Asking

Before any tool selection, any vendor demo, any proof of concept, the right question is this: where in your value chain does a 30% improvement in speed, accuracy, or cost create an outcome that locks in customers, improves margins, or raises the barrier to entry for competitors?

That is the zone where AI investment produces returns that compound. Everything else is efficiency improvement. Efficiency improvement is worth pursuing, but it does not create structural advantage.

In financial services, this tends to cluster around credit decisioning, fraud detection, and customer onboarding. In telecoms, it clusters around network fault prediction and churn modelling. In healthcare, it clusters around diagnostic support and claims processing. In government, it clusters around document processing and service request routing.

The specific use case matters less than the method for finding it. You need people who understand both the business and the technology well enough to have that conversation at the intersection.

The organisations winning with AI are not the ones that moved fastest. They are the ones that chose the right problem first.

Data Readiness Is Not a Prerequisite; It Is Part of the Work

The most common thing we hear when we ask about AI readiness is: "Our data is not ready yet." This is almost always true, and almost never a reason to wait.

Data quality problems do not resolve themselves. Waiting for clean data before starting an AI programme is like waiting for a perfect road before learning to drive. The readiness work and the AI programme design need to happen together, with the target use case defining which data problems to prioritise.

We have seen this with clients across multiple sectors. A financial services organisation had been deferring an AI-led credit decisioning project for two years because "the data was not ready." When we started with the target use case and worked backwards, we found that 80% of the data needed for an initial model was actually in reasonable shape. The remaining 20% was addressable in under six months. The two-year deferral had no rational basis.

The practical approach is a data readiness sprint scoped to a specific use case, not a blanket data quality programme that has no end and no business outcome attached to it.
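
What that sprint can look like is simple enough to sketch in code. The fragment below, in Python with pandas, checks a data extract against the fields one target use case actually needs. The field names, thresholds, and file name are illustrative assumptions, not a standard checklist.

```python
import pandas as pd

# Fields one target use case (credit decisioning) actually needs, each
# with a minimum acceptable completeness. Names and thresholds here are
# illustrative assumptions, not a standard checklist.
REQUIRED_FIELDS = {
    "monthly_income": 0.95,
    "repayment_history": 0.90,
    "account_age_months": 0.95,
    "employment_status": 0.80,
}

def readiness_report(df: pd.DataFrame) -> pd.DataFrame:
    """Score each required field's completeness against its threshold."""
    rows = []
    for field, threshold in REQUIRED_FIELDS.items():
        # A missing column counts as zero completeness.
        completeness = df[field].notna().mean() if field in df.columns else 0.0
        rows.append({
            "field": field,
            "completeness": round(float(completeness), 3),
            "threshold": threshold,
            "ready": completeness >= threshold,
        })
    return pd.DataFrame(rows)

# Point the report at an extract of the source system, then scope the
# sprint to whichever fields come back not ready:
# print(readiness_report(pd.read_csv("credit_extract.csv")))
```

The code is trivial by design. The point is that readiness becomes a finite, testable list tied to one use case, rather than an open-ended quality programme.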

Generative AI Specifically

Large language models and generative AI tools are a different category from predictive analytics and classical machine learning. They are more accessible, require less training data, and can be deployed against a much wider range of business problems. They also carry a different risk profile.

The risks that matter in an enterprise African context are:

- Data leakage: confidential or customer information leaving the organisation through prompts sent to externally hosted models.
- Accuracy: fluent but wrong output, which is most dangerous in regulated decisions such as credit or claims.
- Regulatory exposure: data residency and privacy obligations that differ by jurisdiction and are still evolving.
- Capability gaps: teams adopting tools faster than they can govern, evaluate, or secure them.

None of these risks are prohibitive. They are manageable with the right architecture, governance model, and internal capability programme. But they need to be designed for from the start, not retrofitted after the pilot is already running.

What a Practical First Step Looks Like

For most organisations, the right first engagement is a structured discovery process that produces three outputs: a prioritised list of AI use cases scored against value and feasibility, a data readiness assessment scoped to the top two or three use cases, and a 12-month roadmap with realistic resource and cost assumptions attached.
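
One way to make the first of those outputs concrete is a simple scoring model. A minimal sketch follows, in Python; the two axes, the 1-to-5 scales, and the example scores are illustrative assumptions about how a discovery workshop might record its findings, not a fixed framework.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    value: int        # 1-5: impact on margin, retention, or barriers to entry
    feasibility: int  # 1-5: data readiness, integration effort, available skills

def prioritise(cases: list[UseCase]) -> list[UseCase]:
    """Rank use cases by value multiplied by feasibility.

    The product, rather than the sum, penalises use cases that score
    well on one axis but poorly on the other.
    """
    return sorted(cases, key=lambda c: c.value * c.feasibility, reverse=True)

# Illustrative scores; a real discovery process replaces these with
# workshop outputs backed by evidence.
backlog = [
    UseCase("Credit decisioning", value=5, feasibility=3),
    UseCase("Fraud detection", value=4, feasibility=4),
    UseCase("Document routing", value=2, feasibility=5),
]
for case in prioritise(backlog):
    print(f"{case.name}: {case.value * case.feasibility}")
```

Whatever the scoring mechanics, the top two or three entries in that ranking are what scope the data readiness assessment in the second output.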

That process typically takes four to six weeks. It replaces the vendor-led demo circuit, which tends to produce excitement but no decisions, and the internal working group that debates strategy for six months without committing to anything.

The organisations we have seen move fastest are not the ones with the largest budgets or the most ambitious strategies. They are the ones that picked a specific problem, committed to solving it well, built the data and governance foundation to support it, and measured the outcome honestly.

That is the pattern worth replicating.