From AI Policy Framework to AI Delivery System: Why “Strategic Pillars” Fail Without an Operating Model

National AI strategies often read like a perfectly curated playlist: the right themes, the right ambitions, the right buzzwords. Then delivery starts, and suddenly nobody knows who owns what, how decisions get made, how risk is controlled, or how outcomes are measured. That is not a strategy problem. It is an operating model problem.

South Africa’s National Artificial Intelligence Policy Framework is unusually direct about this execution gap. It frames policy-making as a cycle where implementation is not the “and then magic happens” part, but the phase where policy is translated into actionable steps, resourced properly, supported by institutional mechanisms, and driven through an implementation plan with responsibilities, timelines, and performance metrics. It also insists that monitoring and evaluation must be systematic, data-driven, and continuous, with stakeholder involvement acting as the golden thread across formulation, implementation, and evaluation. That combination of delivery discipline and democratic legitimacy is exactly what many AI strategies lack. The question is how to convert pillars into a functioning national AI delivery system.

Pillars are not a delivery system

The framework defines strategic pillars as “fundamental components” that support and drive implementation and provide guidance and direction. This is what pillars are good at: setting direction and coverage. South Africa’s pillars span talent development, digital infrastructure, R&D and innovation, public sector implementation, ethical AI, privacy and data protection, safety and security, and transparency and explainability.

What pillars do not do, by default, is specify:
• How priorities become a funded portfolio
• How multi-department delivery is coordinated
• How procurement, data access, and legal approval are unblocked
• How safety, privacy, transparency, and fairness are enforced consistently
• How performance is measured and decisions are corrected

In other words, pillars describe the “what”. Execution depends on the “how”. This aligns with international practice. The OECD’s AI principles emphasise accountability, transparency, robustness, and human-centred safeguards, but these principles only become real when institutions operationalise them through governance and controls across the AI lifecycle. Likewise, the World Bank’s work on AI in the public sector positions AI as part of public sector modernisation, where governance and institutional arrangements are decisive for success, not optional extras.

The operating model that turns policy into execution

A national AI operating model is the practical machinery that links policy intent to delivery outcomes. It clarifies who decides, who delivers, who assures, and who measures. South Africa’s framework already hints at this requirement through its emphasis on institutional arrangements and implementation planning (DCDT, 2024). A credible operating model for national AI typically has four interlocking capabilities.

Portfolio governance that makes choices, not lists

AI is not deployed through a single programme. It is deployed through a portfolio of use cases across sectors and departments. Without a mechanism to prioritise, sequence, and stop initiatives, national strategies devolve into scattered pilots that cannot scale. The framework’s “futures triangle” framing, which balances the push of current pressures, the pull of national aspirations, and the weight of historical constraints, is useful here (DCDT, 2024). It provides a legitimate way to prioritise national AI efforts without falling into pure techno-optimism or pure risk-aversion.

In South Africa’s context, the “weight of the past” is not abstract. The digital divide, institutional inertia, and outdated regulatory frameworks are named as real barriers (DCDT, 2024). So the portfolio cannot only reward technical feasibility. It has to deliberately fund initiatives that close capability gaps and unlock adoption where inequality would otherwise widen.
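To make that concrete, here is a minimal sketch of how a portfolio office might turn the futures triangle into a transparent scoring rubric. The weights, scored dimensions, initiative names, and funding threshold are all illustrative assumptions for this sketch, not anything the framework prescribes.

```python
from dataclasses import dataclass

# Illustrative weights only -- a real portfolio board would set and publish
# its own. "past" scores how far an initiative closes historical capability
# gaps, so under the futures-triangle framing it adds to priority here.
WEIGHTS = {"push": 0.3, "pull": 0.3, "past": 0.4}

@dataclass
class Initiative:
    name: str
    push: float  # push of the present: severity of the pressure it addresses (0-1)
    pull: float  # pull of the future: contribution to national aspirations (0-1)
    past: float  # weight of the past: how far it closes historical gaps (0-1)

    def priority(self) -> float:
        return sum(WEIGHTS[k] * getattr(self, k) for k in WEIGHTS)

portfolio = [
    Initiative("Clinic triage assistant", push=0.9, pull=0.6, past=0.8),
    Initiative("Chatbot for an already well-served portal", push=0.4, pull=0.5, past=0.1),
]

# Rank, then make explicit fund / defer / stop decisions against a published bar.
for item in sorted(portfolio, key=Initiative.priority, reverse=True):
    decision = "fund" if item.priority() >= 0.5 else "defer or stop"
    print(f"{item.name}: {item.priority():.2f} -> {decision}")
```

The arithmetic is deliberately trivial. The value is that fund, defer, and stop decisions become explainable and contestable, which is exactly what a list of pillars cannot provide on its own.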
Assurance that builds trust by design

Trust is not produced by policy statements. Trust is produced by repeatable assurance practices that make AI systems safer, more transparent, and more accountable in real operations. The framework explicitly prioritises ethical AI, robust data governance, privacy and security protections, and transparency and explainability to foster trust among users and stakeholders (DCDT, 2024). It also links explainability to trust, accountability, bias detection and mitigation, and public awareness (DCDT, 2024).

To execute this at a national scale, assurance needs to function like a minimum baseline across departments. This is where international frameworks help. NIST’s AI Risk Management Framework is designed to be operationalised across organisations and throughout the AI lifecycle, with structured functions that support governance and ongoing risk management (NIST, 2023). ISO/IEC 23894 similarly provides guidance on integrating AI risk management into organisational processes (ISO/IEC, 2023). The practical point is simple: national AI must have a consistent way to evidence that systems are safe enough, fair enough, and accountable enough to scale.

Delivery capability that is built for services

Public sector AI programmes fail when delivery is treated as “build a model” rather than “change how a service works”. The framework’s public sector implementation pillar calls for AI to optimise state management and service delivery, supported by guidelines and standards (DCDT, 2024). Delivering service change requires multidisciplinary teams, stable product ownership, and measurable benefits. GOV.UK’s service guidance reflects this service-centred delivery logic, linking discovery and alpha learning to a benefits case, then ongoing monitoring and evaluation in beta and live. This matters because model performance alone is not impact. Impact is adoption, reduced processing time, fewer errors, better access, improved outcomes, and sustained operational performance.

Performance management that treats evaluation as steering

The framework is firm that implementation should be assessed regularly using quantitative and qualitative data, and should include diagnostic, implementation, and impact evaluations to drive improvement and data-driven decisions (DCDT, 2024). That is a mature stance. It reframes evaluation from compliance paperwork to a steering mechanism. It also reduces the risk of “AI theatre”, where activity is mistaken for progress.

Performance management also connects directly to legitimacy. The framework’s emphasis on stakeholder involvement, including citizens, policy makers, and practitioners, is framed as a source of transparency and accountability that improves policy effectiveness (DCDT, 2024). The framework also calls for stakeholder engagement to ensure alignment with societal values (DCDT, 2024). If national AI delivery is going to earn trust, it needs to show its work, measure its results, and incorporate feedback while systems are being used, not only at the end.
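As a minimal sketch of what “evaluation as steering” can mean in practice, the snippet below maps an evaluation report to a decision agreed in advance. The evaluation stages come from the framework; the metric names, thresholds, and decision rules are illustrative assumptions, not the framework’s own.

```python
from dataclasses import dataclass, field

# Illustrative targets -- the framework names the evaluation stages
# (diagnostic, implementation, impact) but not these specific metrics,
# which are assumptions for this sketch.
TARGETS = {"adoption_rate_min": 0.60, "error_rate_max": 0.05}

@dataclass
class EvaluationReport:
    stage: str  # "diagnostic", "implementation", or "impact"
    metrics: dict[str, float] = field(default_factory=dict)

def steer(report: EvaluationReport) -> str:
    """Map an evaluation report to a steering decision, not a filing exercise."""
    if report.metrics.get("error_rate", 1.0) > TARGETS["error_rate_max"]:
        return "pause and remediate"
    if report.metrics.get("adoption_rate", 0.0) < TARGETS["adoption_rate_min"]:
        return "continue, but remove adoption barriers before scaling"
    return "scale"

report = EvaluationReport(
    stage="implementation",
    metrics={"adoption_rate": 0.45, "error_rate": 0.02},
)
print(f"{report.stage}: {steer(report)}")
# -> implementation: continue, but remove adoption barriers before scaling
```

The decision rule matters more than the code: because thresholds and responses are agreed before deployment, evaluation results trigger action while systems are in use rather than producing retrospective reports.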
The “regulation bridge” problem: policy must guide what comes next

The framework positions the