The World Cannot Agree on Who Should Control AI, and Africa Is Writing Its Own Answer

Quick Reads
- The world is splitting into two camps on AI regulation: Europe is tightening the rules, while the US under Trump is loosening them, and Big Tech is siding with Washington.
- Meta publicly refused to sign the EU’s voluntary AI code of practice, calling it an overreach that would choke innovation across Europe.
- A growing body of research warns that AI companies now wield the kind of economic and political power once reserved for governments, and most states are too cautious to push back.
- African countries are taking a different road, quietly using data protection laws as a backdoor to regulate AI rather than waiting for purpose-built AI legislation.
- South Africa just published a draft national AI policy that spreads oversight across existing regulators rather than creating a powerful new AI watchdog.

Meta’s chief global affairs officer, Joel Kaplan, publicly declared that “Europe is heading down the wrong path on AI” and confirmed the company would not sign the EU’s code of practice for general-purpose AI models, a voluntary framework designed to help companies comply with the bloc’s landmark AI Act. Meta called the rules “overreach,” and the European Commission held firm, saying it would not change its timeline. The fault line between tech giants and regulators is now fully open.
AI is tied to economic growth, national competitiveness, and military advantage, so politicians fear that aggressive regulation will stifle innovation or drive it elsewhere. Simon Chesterman of the National University of Singapore frames it more bluntly: AI is shifting economic and political power away from governments, with today’s tech giants setting rules, policing speech, and shaping labour markets and elections, functions once associated with states.
Africa is not waiting for the West to figure it out. Rather than drafting comprehensive AI frameworks, which are complex and slow to develop, governments across the continent are embedding AI-related rules within existing or revised data protection laws. Analysts describe this as a “backdoor” method of regulation, and it is fast becoming Africa’s defining approach to the technology. The urgency is real: a 2025 audit of credit-scoring algorithms in Nigeria, Kenya, and South Africa found consistent bias against women-led businesses, with one Nigerian lender approving 23% fewer loans for women despite women showing better repayment records.
South Africa is going further. Rather than creating a new regulator, its draft policy spreads oversight across institutions already embedded in the sectors where AI is deployed: the Financial Sector Conduct Authority will oversee financial AI systems, health regulators will manage AI in diagnostics, and the Information Regulator retains its data privacy role.
Nigeria is still drafting its own strategy, and the lessons from Ghana and South Africa are instructive. Ghana published a ten-year national AI strategy, but the governance institutions it describes, including review mechanisms for algorithmic systems, are not yet operational, even as AI is already being deployed in financial services, hiring, and healthcare. Africa currently accounts for less than 1% of the world’s data centre capacity despite holding roughly 18% of the global population. The window to shape how AI works on the continent before foreign models and foreign rules define it is still open, but it is narrowing.
