From AI Policy Framework to AI Delivery System: Why “Strategic Pillars” Fail Without an Operating Model

National AI strategies often read like a perfectly curated playlist: the right themes, the right ambitions, the right buzzwords. Then delivery starts, and suddenly nobody knows who owns what, how decisions get made, how risk is controlled, or how outcomes are measured. That is not a strategy problem. It is an operating model problem.

South Africa’s National Artificial Intelligence Policy Framework is unusually direct about this execution gap. It frames policy-making as a cycle where implementation is not the “and then magic happens” part, but the phase where policy is translated into actionable steps, resourced properly, supported by institutional mechanisms, and driven through an implementation plan with responsibilities, timelines, and performance metrics. It also insists that monitoring and evaluation must be systematic, data-driven, and continuous, with stakeholder involvement acting as the golden thread across formulation, implementation, and evaluation. That combination of delivery discipline and democratic legitimacy is exactly what many AI strategies lack. The question is how to convert pillars into a functioning national AI delivery system.

Pillars are not a delivery system

The framework defines strategic pillars as “fundamental components” that support and drive implementation and provide guidance and direction. This is what pillars are good at: setting direction and coverage. South Africa’s pillars span talent development, digital infrastructure, R&D and innovation, public sector implementation, ethical AI, privacy and data protection, safety and security, and transparency and explainability.

What pillars do not do, by default, is specify:

• How priorities become a funded portfolio
• How multi-department delivery is coordinated
• How procurement, data access, and legal approval are unblocked
• How safety, privacy, transparency, and fairness are enforced consistently
• How performance is measured, and decisions are corrected

In other words, pillars describe the “what”. Execution depends on the “how”.

This aligns with international practice. The OECD’s AI principles emphasise accountability, transparency, robustness, and human-centred safeguards, but these principles only become real when institutions operationalise them through governance and controls across the AI lifecycle. Likewise, the World Bank’s work on AI in the public sector positions AI as part of public sector modernisation, where governance and institutional arrangements are decisive for success, not optional extras.

The operating model that turns policy into execution

A national AI operating model is the practical machinery that links policy intent to delivery outcomes. It clarifies who decides, who delivers, who assures, and who measures. South Africa’s framework already hints at this requirement through its emphasis on institutional arrangements and implementation planning (DCDT, 2024). A credible operating model for national AI typically has four interlocking capabilities.

Portfolio governance that makes choices, not lists

AI is not deployed through a single programme. It is deployed through a portfolio of use cases across sectors and departments. Without a mechanism to prioritise, sequence, and stop initiatives, national strategies devolve into scattered pilots that cannot scale. The framework’s “futures triangle” framing, which balances the push of current pressures, the pull of national aspirations, and the weight of historical constraints, is useful here (DCDT, 2024). It provides a legitimate way to prioritise national AI efforts without falling into pure techno-optimism or pure risk-aversion. In South Africa’s context, the “weight of the past” is not abstract. The digital divide, institutional inertia, and outdated regulatory frameworks are named as real barriers (DCDT, 2024). So the portfolio cannot only reward technical feasibility. It has to deliberately fund initiatives that close capability gaps and unlock adoption where inequality would otherwise widen.
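To make that prioritisation logic tangible, here is a minimal sketch of how the futures-triangle lenses could be turned into a transparent portfolio score. The framework does not prescribe any scoring formula; the use cases, weights, and the extra equity dimension below are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    push: float    # pressure of current problems (0-10), e.g. service backlogs
    pull: float    # contribution to national aspirations (0-10)
    weight: float  # drag from historical constraints (0-10), e.g. data and skills gaps
    equity: float  # degree to which it closes capability and adoption gaps (0-10)

def priority_score(u: UseCase, w_push=0.3, w_pull=0.3, w_equity=0.25, w_weight=0.15) -> float:
    """Higher is better; the 'weight of the past' counts against a use case."""
    return w_push * u.push + w_pull * u.pull + w_equity * u.equity - w_weight * u.weight

# Hypothetical candidates for a national AI portfolio.
candidates = [
    UseCase("Clinic triage assistant", push=8, pull=7, weight=6, equity=9),
    UseCase("Chatbot for tax queries", push=6, pull=5, weight=3, equity=4),
    UseCase("Fraud detection in grant payments", push=9, pull=6, weight=7, equity=7),
]

for u in sorted(candidates, key=priority_score, reverse=True):
    print(f"{u.name}: {priority_score(u):.2f}")
```

Even a toy model like this forces the conversation the framework asks for: what pushes, what pulls, what drags, and who benefits.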
Assurance that builds trust by design

Trust is not produced by policy statements. Trust is produced by repeatable assurance practices that make AI systems safer, more transparent, and more accountable in real operations. The framework explicitly prioritises ethical AI, robust data governance, privacy and security protections, and transparency and explainability to foster trust among users and stakeholders (DCDT, 2024). It also links explainability to trust, accountability, bias detection and mitigation, and public awareness (DCDT, 2024). To execute this at a national scale, assurance needs to function like a minimum baseline across departments. This is where international frameworks help. NIST’s AI Risk Management Framework is designed to be operationalised across organisations and throughout the AI lifecycle, with structured functions that support governance and ongoing risk management (NIST, 2023). ISO/IEC 23894 similarly provides guidance on integrating AI risk management into organisational processes (ISO/IEC, 2023). The practical point is simple: national AI must have a consistent way to evidence that systems are safe enough, fair enough, and accountable enough to scale.

Delivery capability that is built for services

Public sector AI programmes fail when delivery is treated as “build a model” rather than “change how a service works”. The framework’s public sector implementation pillar calls for AI to optimise state management and service delivery, supported by guidelines and standards (DCDT, 2024). Delivering service change requires multidisciplinary teams, stable product ownership, and measurable benefits. GOV.UK’s service guidance reflects this service-centred delivery logic, linking discovery and alpha learning to a benefits case, then ongoing monitoring and evaluation in beta and live. This matters because model performance alone is not impact. Impact is adoption, reduced processing time, fewer errors, better access, improved outcomes, and sustained operational performance.

Performance management that treats evaluation as steering

The framework is firm that implementation should be assessed regularly using quantitative and qualitative data, and should include diagnostic, implementation, and impact evaluations to drive improvement and data-driven decisions (DCDT, 2024). That is a mature stance. It reframes evaluation from compliance paperwork to a steering mechanism. It also reduces the risk of “AI theatre”, where activity is mistaken for progress.

Performance management also connects directly to legitimacy. The framework’s emphasis on stakeholder involvement, including citizens, policy makers, and practitioners, is framed as a source of transparency and accountability that improves policy effectiveness (DCDT, 2024). The framework also calls for stakeholder engagement to ensure alignment with societal values (DCDT, 2024). If national AI delivery is going to earn trust, it needs to show its work, measure its results, and incorporate feedback while systems are being used, not only at the end.
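To make the “minimum baseline” idea concrete, the sketch below shows one way a cross-department assurance gate could be expressed. It is loosely organised around the four NIST AI RMF functions (Govern, Map, Measure, Manage); the specific control names, evidence format, and readiness rule are assumptions for illustration, not requirements from the framework, NIST, or ISO.

```python
# Illustrative baseline only: control names and structure are assumptions.
BASELINE = {
    "govern":  ["accountable_owner_named", "ai_use_disclosed"],
    "map":     ["intended_use_documented", "affected_groups_identified"],
    "measure": ["bias_evaluation_completed", "performance_monitored_in_production"],
    "manage":  ["incident_response_defined", "human_override_available"],
}

def ready_to_scale(evidence: dict[str, set[str]]) -> tuple[bool, list[str]]:
    """Return whether every baseline control has evidence, plus any gaps."""
    gaps = [
        f"{function}:{control}"
        for function, controls in BASELINE.items()
        for control in controls
        if control not in evidence.get(function, set())
    ]
    return (not gaps, gaps)

# Example: a hypothetical department submits partial evidence for one system.
evidence = {
    "govern": {"accountable_owner_named", "ai_use_disclosed"},
    "map": {"intended_use_documented"},
    "measure": {"bias_evaluation_completed"},
    "manage": set(),
}
ok, gaps = ready_to_scale(evidence)
print("Ready to scale:", ok)
print("Outstanding controls:", gaps)
```

A gate like this also doubles as a steering input: the same evidence that clears a system to scale can feed the diagnostic, implementation, and impact evaluations the framework calls for.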
The “regulation bridge” problem: policy must guide what comes next

The framework positions the

The AI Advantage: South Africa’s Job Market Is Rewarding Digital Skills

Hiring is picking up, remote work is edging back, and people who work well with AI are getting more options. At LeanTechnovations, we help teams embed practical AI skills into day-to-day workflows, turning curiosity about AI into measurable improvements in speed, quality and cost. If you have invested time in learning how to use AI and data at work, the market is starting to meet you halfway. South Africa’s hiring activity is up compared to last year, and recruiters are actively seeking candidates with the right skills. That is not guesswork. It is what the latest Pnet Job Market Trends data shows for September 2025.

The shift, in simple terms

Pnet reports that overall hiring activity rose 7% year on year and 6% month on month. That lift came from two places. Employers posted more vacancies, and recruiters increased their database searches for specific skills. In other words, companies are both advertising and headhunting, which is a valuable sign of confidence.

Flexibility is also returning. After two years of decline, the share of remote roles grew for six months in a row from March to September 2025. Remote and hybrid listings now make up about 3.6% of all vacancies. It is a small base, but the direction is positive for professionals who need location flexibility or who live outside major metros.

There is good news for AI skills, too. Reporting on Pnet’s data, MyBroadband notes that the number of AI professionals placed in new roles nearly doubled, up 96% between late 2019 or early 2020 and late 2024 or early 2025. AI-related job postings increased by 183% between early 2018 and early 2024. Most hires are still in IT at about 62%, with education around 10%, and the rest spread across finance, consulting, and telecoms.

What that means for real jobs

The roles seeing the most remote traction are not only developers. Pnet’s list of top remote roles includes Data Analyst (12% of roles advertised as remote), Business Analyst (9%), Business Developer (8%), Personal Assistant (12%), and Client or Customer Support Agent (11%). These are practical, everyday jobs where AI and digital tools help with tasks such as data cleanup, reporting, summarising, scheduling, and service triage.

There is some balancing detail in tech. As overall demand shifts, a few IT roles show fewer remote options than before. Systems or Network Administrator, Software Developer, and IT Project Administrator or Manager are the clearest examples. IT still dominates remote work overall, but flexibility is spreading into business, admin, and finance, while some tech niches are tightening on location needs.

Jobseekers

Start with your craft, then add practical AI. You do not need to be a machine learning engineer to benefit. Focus on the tasks you already do and look for simple, safe ways to use AI to help. Examples include pulling a clean first draft, checking for errors, summarising research, creating a clear checklist, or building a small data workflow that saves time. The roles gaining remote options are precisely the ones where these skills show up in day-to-day work.

Show outcomes, not tool names. Recruiters are searching databases more often, which means precise results matter. In your CV and interviews, use short stories about time saved, error rates reduced, or response times improved. That is the kind of signal that gets noticed when teams are hiring with purpose.
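For readers wondering what a “small data workflow that saves time” might look like in practice, here is a minimal sketch in Python using pandas. The file name, column names, and the weekly-summary shape are invented for illustration; the point is the pattern of clean, summarise, and share, not the specifics.

```python
import pandas as pd

# Hypothetical export from a ticketing or sales system.
df = pd.read_csv("weekly_export.csv", parse_dates=["created_at"])

# Basic cleanup: drop exact duplicates and normalise free-text status values.
df = df.drop_duplicates()
df["status"] = df["status"].str.strip().str.lower()

# A short weekly summary: ticket volume and average handling time per status.
summary = (
    df.groupby("status")
      .agg(tickets=("ticket_id", "count"),
           avg_hours_to_close=("hours_to_close", "mean"))
      .round(1)
      .sort_values("tickets", ascending=False)
)

print(summary)
summary.to_csv("weekly_summary.csv")
```

A workflow like this is exactly the kind of concrete, measurable story (hours saved per week, fewer manual errors) that stands out in a CV or interview.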
Team Leaders

The winning move is to build capability inside your current team. Hire for learning ability and curiosity, then teach people how to use AI safely and usefully in their real work. Follow with a short strategy sprint that maps one process end-to-end, tests where AI actually adds value, and measures the result. Scale what works and write it into standard operating procedures so the gains stick. This is the simple model we use with clients: learn the basics, find the right fit, then implement for outcomes.

Explore a Lean approach, as Lean focuses on delivering value with less waste. AI helps by making the work visible, faster to standardise and easier to improve. It can surface bottlenecks in a value stream, draft clear standard work, and cut rework by catching errors early. The result is time back for people to solve better problems and serve customers faster. AI does not replace Lean. It powers it with better visibility, faster learning and fewer repeat mistakes.

Where to focus first? Pnet shows stable demand in Business and Management and growing flexibility in Finance and Admin or Support. These areas are ideal for early wins because the processes are repeatable and close to customers or cash. Think reconciliations, weekly reporting, service inboxes, onboarding checklists, and knowledge retrieval. These are perfect places to try small, well-governed AI improvements with clear metrics.

A short starter plan

• Teach the basics. Give people a safe, hands-on introduction to AI that uses their real tasks. Cover good prompts, how to check outputs, and how to protect data. Keep it practical and short.
• Run one value sprint. Pick a process that hurts. Map it, remove obvious friction, and test one or two small automations. Track time saved and quality.
• Scale and standardise. Bake the better workflow into your tools and ways of working. Support the change with coaching and simple dashboards.

Work in South Africa is not becoming less human. It is becoming more digital and more outcome-focused. When people combine their craft with practical AI skills, they get more choice and more stability. When teams build those skills and design work around them, they move faster and serve customers better. The data points in the same direction: hiring is up, and remote work is recovering slowly. The opportunity is to turn that momentum into everyday practice.

If you want a quick, practical view of where your organisation stands, take our AI Readiness Assessment: https://aireadiness2.scoreapp.com/

Sources:
Pnet. 2025. Job Market Trends Report. October 2025. Pages 3 to 10.
Vermeulen, J. 2025. Good news for people with AI skills in South Africa. MyBroadband, 21 April 2025.

The Cautionary Tale: AI Use with the Lack of Human Oversight

When an organisation gets caught out for submitting flawed AI-generated work, the same narrative follows: “AI made a mistake.” It’s a convenient excuse, but a misleading one. Artificial intelligence does not make mistakes. It generates outputs based on prompts and data. It has no context, no judgment, and no understanding of consequences. The real problem lies with people who use it without providing the correct context and, worse, without proper review.

The recent Deloitte incident is a perfect example. A global consultancy, trusted by governments and major corporations, found itself refunding part of a 440,000 Australian dollar contract after errors were found in a report prepared for the Australian government. The issue was not that AI was used, but that human accountability was missing. Until organisations accept that technology is only as reliable as the people managing it, similar incidents will keep happening.

The Deloitte Example

Deloitte has agreed to repay part of a 440,000 Australian dollar fee after errors were discovered in an Australian government report that it helped produce with generative AI. The Department of Employment and Workplace Relations commissioned the firm to review the Targeted Compliance Framework and related IT systems. After publication, academics and journalists identified fabricated or incorrect references, along with a misdescribed court decision. Deloitte amended the document and disclosed the use of Azure OpenAI GPT-4o tools, while maintaining that the report’s core findings were unchanged. The department confirmed that the recommendations remain intact, yet the credibility damage was already done. A senator criticised the firm for allowing AI to do the heavy lifting with inadequate human checks. The result was a partial refund and a public lesson in accountability.

Other outlets reported similar details. Coverage notes that the report included invented or misattributed citations and a fabricated legal quote later removed in revisions. The department accepted a partial refund and said the substance of the review stands, but questions about quality assurance and disclosure persist.

What “Hallucination” Really Means

The Deloitte report is a textbook example of AI hallucination in a high-stakes setting. Hallucination is the term for when AI models produce information that is fabricated or incorrect, often with total confidence. Generative models predict likely words based on patterns in data. They do not possess context, institutional memory, or legal expertise. When prompted for citations, they can produce plausible but false references. In the Deloitte case, watchdogs found references that did not exist and a legal citation that did not match the real judgment. After corrections, observers noted that replacing one weak reference with a cluster of alternatives did not fix the underlying problem. The initial claims were not grounded in verified sources.

This is not unique to consulting. Courts have sanctioned lawyers for filings that relied on AI text containing invented cases. In a recent Utah matter, the appeals court punished counsel after a brief cited a precedent that could not be found in any database. Local reporting and follow-up coverage make the same point. AI can assist research, but professionals must verify.
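One practical form of that verification is a first-pass check that every cited work actually exists before a report ships. The sketch below queries the public Crossref REST API for each citation string and prints the closest match it finds; the citation strings are invented examples, and a match still needs a human to confirm the source says what the report claims. It catches only the crudest fabrications, which is the category at issue in cases like this.

```python
import requests

def crossref_match(citation: str) -> str | None:
    """Return the best-matching title from Crossref, or None if nothing is found."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items or not items[0].get("title"):
        return None
    return items[0]["title"][0]

# Hypothetical citation strings pulled from a draft report.
citations = [
    "Smith, J. (2021). Automated compliance frameworks in welfare systems.",
    "Department of Employment and Workplace Relations annual report 2023",
]

for c in citations:
    match = crossref_match(c)
    status = f"possible match: {match}" if match else "NO MATCH - verify by hand"
    print(f"- {c}\n  {status}")
```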
A Failure of Human Accountability, Not of AI

The fault does not lie with AI. It lies with the people who choose to ship AI-assisted work without the checks that any professional deliverable requires. Tools generate drafts. People are responsible for the truth. If a report contains a fabricated reference, that is a process failure. If a legal brief cites a non-existent case, that is a breach of professional duty. When organisations treat AI outputs as finished work instead of raw material, they convert a productivity aid into a reputational risk.

AI can accelerate research and drafting, but it cannot (and should not) replace human expertise in validating content. When Deloitte’s team delivered a report riddled with fake references and errors, that was a breakdown in their quality control and professional duty. No AI policy or guideline can absolve professionals of accountability for what they present to clients. Just as the court in Utah made clear that lawyers must verify their filings despite using AI, consultants and analysts must ensure their AI-augmented reports are accurate and credible. Blaming the AI alone is irresponsible; the onus is on the humans using the tool to use it correctly.

Why This Will Keep Happening

The market rewards speed. AI accelerates drafting, so teams move faster. Without AI training and clear review steps in the process, errors slip through, and clients pay for them. Public scrutiny then focuses on the tool rather than the system that enables misuse. Unless leaders establish clear expectations for verification, disclosure, and ownership of outcomes, similar incidents will continue to surface across domains, including consulting, legal, financial, policy, and technical work.

The pattern is already visible. When errors like these come to light, they undermine trust, not just in AI tools, but in the organisations deploying them. As Senator Deborah O’Neill (Senator for New South Wales, representing the Australian Labor Party) quipped, why shouldn’t a client “sign up for a ChatGPT subscription” instead of paying a hefty fee to a firm that handed in AI-written material? This sharp critique highlights a serious reputational risk: if consulting firms (or any experts) misuse AI, they may be seen as overcharging for work product that is no more reliable than something a layperson could prompt from a chatbot.

Moreover, incidents like this can cause backlash against the broader use of AI in business. Government clients, for example, might become wary of allowing consultants to use AI at all. That would be a shame, because when used responsibly, AI can be a powerful asset: saving time on data analysis, generating useful first drafts, and uncovering insights. The key differentiator is responsible use. The fallout from the Deloitte report is a reminder that we will keep seeing AI-related blunders until organisations institute proper oversight and take accountability. In an era of

Shadow AI in South African Businesses: Why Banning AI Doesn’t Work

South African companies are experiencing a surge in the use of generative AI tools like ChatGPT and Microsoft’s Copilot. A recent study, which surveyed over 100 large enterprises nationally, found that about 67% are already using GenAI in some form, up from 45% in 2024. Yet most of these organisations lack formal guidance on AI use: in 2025, only 15% had an official policy for using such tools. This gap has led to a growing trend of “shadow AI”, where employees use AI tools unofficially, even when the company ignores or outright bans them.

Simply banning AI in the workplace is proving ineffective and potentially harmful. Instead, businesses need to acknowledge the reality of AI usage and respond with a clear strategy, staff training, and strong guardrails. Below, we explore how rampant unofficial AI use has become, why banning it doesn’t solve the problem, and how South African firms can proactively and safely integrate AI into their operations.

Shadow AI: Unofficial AI Use on the Rise

Many employees are quietly using generative AI tools at work without official approval, a phenomenon dubbed “shadow AI”. This unofficial experimentation is on the rise in South Africa, as staff turn to AI assistants like ChatGPT to boost productivity even when their companies lack formal AI policies or strategies. While this shows enthusiasm for AI’s potential benefits, it also raises concerns about data security, compliance risks, and the absence of oversight. Employers are increasingly discovering that AI adoption is happening regardless of official plans, underscoring the need to catch up with governance and clear guidelines.

According to the South African Generative AI Roadmap 2025, roughly one-third of surveyed businesses already have employees using GenAI tools informally without company sanction. This share has climbed sharply from about a quarter of firms the year before (rising from 23% in 2024 to 32% in 2025). The same World Wide Worx/Dell study found that only 15% of organisations have an official policy for AI tool usage, highlighting a significant governance gap. In practice, many staff are experimenting with AI on their own, often using personal accounts or unvetted apps, because formal strategies and oversight have yet to catch up. It’s a pattern seen elsewhere, too: one global security survey found companies deploy dozens of AI tools on average, yet 90% of those run without any formal IT approval. These statistics underscore how rapidly AI use is outpacing policy, and they reinforce the urgent need for businesses to establish clearer AI guidelines and oversight.

Why are employees doing this? Quite simply, many find these AI tools incredibly helpful for everyday tasks and productivity. For example, Capitec Bank reported significant time savings by using Microsoft’s AI Copilot assistant, cutting a financial reconciliation process from six hours to just one minute. With results like that, it’s easy to see why workers are tempted to use AI, even if company policy is silent or prohibitive. They might use personal devices or home computers to access AI tools, creating an “AI underground” of employees solving problems with ChatGPT, Bard, or other platforms outside official channels. The risk is that this happens without any oversight or guidance, which can lead to mistakes or security breaches.
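Guardrails do not have to be heavyweight. As one illustration of the kind of oversight that beats a ban, the sketch below redacts obvious personal identifiers from a prompt before it leaves the organisation. The patterns (email addresses, South African phone numbers, 13-digit ID numbers) and the example text are illustrative assumptions; a regular-expression filter is a starting point, not a POPIA compliance programme.

```python
import re

# Illustrative patterns only: real deployments need broader PII detection.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SA_ID": re.compile(r"\b\d{13}\b"),
    "PHONE": re.compile(r"\b(?:\+27|0)\d{9}\b"),
}

def redact(text: str) -> str:
    """Replace likely personal identifiers with placeholders before the text
    is sent to an external AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

# Hypothetical prompt an employee might otherwise paste into a public tool.
prompt = (
    "Summarise this complaint from Thabo (thabo@example.co.za, 0821234567, "
    "ID 9001015009087) about his loan application."
)
print(redact(prompt))
```

Paired with an approved tool and a short usage policy, even a basic filter like this turns hidden experimentation into something the organisation can see and improve.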
Why Banning AI Backfires

Some companies have reacted to AI’s risks by attempting outright bans, forbidding employees from using tools like ChatGPT on work devices or with work data. However, simply banning AI is not an effective solution. As the data shows, employees often circumvent bans when they see clear benefits. Generative AI’s value is simply too high for a blanket ban to be realistic, especially as competitors embrace the technology. When organisations say “no AI”, they may drive the behaviour underground rather than stop it.

Banning AI can actually increase a company’s exposure to risk. If employees feel they have to hide their AI usage, they won’t ask for guidance or permission, and they might use tools in insecure ways. For instance, an employee might paste sensitive client information into a public AI service from home, a dangerous practice that could violate South Africa’s data protection laws. Under POPIA, even a single instance of personal data being fed into a public AI model without authorisation could constitute a breach, with serious financial and legal repercussions. Without any approved tools or clear policies, workers might inadvertently share confidential data with AI platforms or rely on AI outputs without verification, leading to errors. In other words, a ban doesn’t prevent the behaviour; it prevents the organisation from managing it.

There’s also a competitive aspect: companies that ban generative AI outright risk falling behind more innovative rivals. If your staff are forbidden from using tools that boost productivity, while other firms are leveraging them (with proper safeguards), you could be at a disadvantage. It’s telling that 84% of South African businesses surveyed say that oversight of AI use is an important or very important success factor for GenAI deployment. Businesses increasingly recognise that the answer is not to ignore or suppress AI, but to supervise and manage its use. As tech researcher Arthur Goldstuck warns, many companies are enthusiastically adopting AI “in a regulatory and ethical vacuum”, and “the longer this continues, the more harm can be caused… before these guardrails are in place.” Simply put, the genie is out of the bottle with AI adoption, and banning it won’t put it back in. The smarter move is to guide how it’s used.

The Need for a Formal AI Strategy

The first step to regaining control is for businesses to develop a formal AI strategy. Shockingly, only 14% of South African companies have a defined company-wide GenAI strategy in place. A further 22% have some strategy but only for specific divisions, while the majority have nothing concrete yet. In many firms, the adoption of AI has outpaced any kind of plan or policy. Goldstuck noted that what’s most startling is that many companies think using a GenAI tool is the same as