## Not all AI strategy engagements are equal

The market for AI strategy consulting has expanded rapidly, producing a wide range of quality: from rigorous capability assessments and actionable roadmaps to expensive restatements of industry reports. Organizations evaluating AI strategy consultants need to distinguish between engagements that surface real organizational constraints and opportunities, and those that produce polished slide decks disconnected from implementation reality.

## What a useful AI strategy engagement delivers

A useful AI strategy engagement should produce specific, actionable outputs:

### 1. Current state assessment

Not “your industry is being disrupted by AI.” That is not a finding. A current state assessment should identify:

- Which processes in the organization are candidates for AI improvement, and why
- What data exists, where it lives, and what it can support
- What capability gaps exist (data engineering, ML engineering, MLOps, domain expertise)
- What systems require integration for any AI deployment to be viable

### 2. Prioritized opportunity list with sizing

Opportunities should be prioritized by:

- Expected business impact (specific and quantified)
- Implementation complexity (data requirements, system integration, change management)
- Time-to-value

The output should help an executive decide where to invest, not just tell them that AI has value.

### 3. Realistic implementation roadmap

A roadmap that can actually be executed, accounting for existing constraints. If the data will not be ready for 6 months, the model work cannot start for 6 months. A useful roadmap shows this honestly.

### 4. Build vs. buy vs. partner recommendations

Specific guidance on which capabilities to develop internally, which to acquire via vendor products, and which require specialized external partners, with the reasoning behind each choice.

## Red flags in AI strategy consulting

| Red flag | What it usually means |
| --- | --- |
| Heavy vendor partnerships disclosed late | Recommendations shaped by referral fees |
| Generic AI opportunity list (cost reduction, efficiency) | No real organizational assessment was done |
| No mention of data readiness | The consultant does not understand the actual constraint |
| Roadmap with no dependencies or sequencing | Not a real roadmap |
| “Quick wins” that all require 6+ months | Quick wins are a selling mechanism, not a deliverable |
| References only from similar-size/industry clients | May not translate to your context |

## What to ask when evaluating AI strategy consultants

- What does your typical engagement produce as deliverables, and can you share an anonymized example?
- How do you handle projects where your assessment is that there is no high-value AI opportunity right now?
- What is your process for data readiness assessment?
- Who will actually be working on the engagement (partners or junior consultants)?
- What percentage of your recommendations have been implemented by clients, and what were the outcomes?

For guidance on evaluating AI consulting firms more broadly, *what to look for when evaluating AI consulting firms* covers the selection criteria in detail.

## How do you distinguish actionable recommendations from generic advice?

What distinguishes a useful engagement from a superficial one is specificity: the recommendations address the client’s data, processes, and capabilities, not generic AI trends. We validate recommendations against the client’s actual data during the engagement, running feasibility experiments on representative samples to confirm that the recommended opportunities are technically viable. A strategy built on validated feasibility, rather than on industry analogies, gives the client confidence to commit investment.

The capability assessment should evaluate readiness across four dimensions:

- **Data infrastructure:** is the required data available, accessible, and of sufficient quality?
- **Technical capability:** does the team have the skills to build and maintain AI systems?
- **Organizational readiness:** do decision-makers understand AI’s capabilities and limitations?
- **Governance:** are there policies for AI ethics, data privacy, and model risk management?

We structure roadmaps in 90-day increments: short enough to maintain accountability and adjust course, long enough to complete meaningful work. Opportunities are prioritized using an impact-feasibility matrix, with quick wins (high impact, low difficulty) recommended for immediate action and strategic investments (high impact, high difficulty) recommended for longer-term planning. A minimal sketch of such a matrix follows below.
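To make the prioritization mechanics concrete, here is a minimal Python sketch of an impact-feasibility matrix. The opportunity names, the 1-to-5 scoring scale, and the threshold are invented for illustration; only the quadrant labels (quick win, strategic investment) and the prioritization criteria (impact, complexity, time-to-value) come from the text above.

```python
from dataclasses import dataclass

# Hypothetical scoring sketch for an impact-feasibility matrix.
# The 1-5 ratings are illustrative values an assessment team might assign;
# quadrant names follow the article's terminology.

@dataclass
class Opportunity:
    name: str
    impact: int           # expected business impact, 1 (low) to 5 (high)
    difficulty: int       # complexity: data, integration, change management
    months_to_value: int  # estimated time-to-value

def quadrant(opp: Opportunity, threshold: int = 3) -> str:
    """Place an opportunity in one of the four matrix quadrants."""
    high_impact = opp.impact >= threshold
    high_difficulty = opp.difficulty >= threshold
    if high_impact and not high_difficulty:
        return "quick win: recommend for immediate action"
    if high_impact and high_difficulty:
        return "strategic investment: longer-term planning"
    if not high_impact and not high_difficulty:
        return "fill-in: pursue if capacity allows"
    return "deprioritize: high effort, low return"

opportunities = [
    Opportunity("Invoice-matching automation", impact=4, difficulty=2, months_to_value=3),
    Opportunity("Demand forecasting overhaul", impact=5, difficulty=5, months_to_value=12),
    Opportunity("Generic chatbot", impact=2, difficulty=4, months_to_value=9),
]

# Sort so the highest-impact, fastest-to-value items surface first,
# mirroring the point that the output should tell an executive where
# to invest, not just that AI has value.
for opp in sorted(opportunities, key=lambda o: (-o.impact, o.months_to_value)):
    print(f"{opp.name}: {quadrant(opp)} ({opp.months_to_value} months to value)")
```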
## How do you measure the ROI of an AI strategy engagement?

The ROI of an AI strategy engagement is measured by what it prevents, not just by what it enables. A well-conducted strategy engagement that identifies three high-value opportunities and two unviable ones saves the organization from investing in the unviable two, a cost avoidance that typically exceeds the engagement fee.

We measure engagement ROI across three dimensions:

- **Implementation rate:** what percentage of recommendations were actually implemented within 12 months?
- **Value realized:** what measurable business impact did the implemented recommendations deliver?
- **Waste avoided:** what investments were avoided based on the engagement’s findings?

The implementation rate is the most diagnostic metric. An engagement with a 20% implementation rate either produced impractical recommendations or failed to secure organizational commitment. An engagement with an 80%+ implementation rate produced actionable recommendations that the organization was prepared to execute. We target a 60%+ implementation rate as the threshold for a successful engagement, and we follow up at 6 and 12 months to track actual implementation progress and adjust recommendations based on learnings from early implementations. The sketch below shows how the three dimensions combine into a single ROI figure.
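As a worked illustration of the arithmetic, here is a short Python sketch that combines the three dimensions into an engagement ROI figure. Every recommendation name, dollar amount, and the fee are hypothetical; only the three metrics and the 60%+ target come from the answer above.

```python
# Hypothetical 12-month follow-up data for one engagement; all figures
# below are invented for illustration.
recommendations = [
    {"name": "Deploy document-extraction pilot", "implemented": True,  "value_realized": 250_000},
    {"name": "Consolidate customer data",        "implemented": True,  "value_realized": 400_000},
    {"name": "Build in-house LLM platform",      "implemented": False, "value_realized": 0},
]
avoided_investments = [750_000, 300_000]  # unviable projects the engagement ruled out
engagement_fee = 150_000

implemented = [r for r in recommendations if r["implemented"]]
implementation_rate = len(implemented) / len(recommendations)   # dimension 1
value_realized = sum(r["value_realized"] for r in implemented)  # dimension 2
waste_avoided = sum(avoided_investments)                        # dimension 3

# Net return relative to the fee: value delivered plus spend prevented.
roi = (value_realized + waste_avoided - engagement_fee) / engagement_fee

print(f"Implementation rate: {implementation_rate:.0%}")  # success threshold: 60%+
print(f"Value realized:      ${value_realized:,}")
print(f"Waste avoided:       ${waste_avoided:,}")
print(f"Engagement ROI:      {roi:.1f}x")
```

Note that `waste_avoided` enters the numerator alongside `value_realized`; this is the sense in which the engagement’s ROI counts what it prevents, not just what it enables.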