Before the latest shock pronouncement from the White House, DeepSeek’s challenge to the conventional wisdom around generative AI shook the tech world.
The stakes are high, as generative AI is already transforming every knowledge-based industry, from financial services to education to drug discovery. Those with access to the largest proprietary models were expected to hold an insurmountable advantage over those without. Following this logic, the leading tech giants were expected to spend around $1 trillion in capital expenditures to ensure they were on the right side of a market that would yield a few large winners. A corollary of this logic is that developing countries, where the transformative power of AI is needed most, could not compete in such a costly race and would fall further behind.
The DeepSeek announcement turned conventional thinking on its head. The news isn’t about DeepSeek as Africa’s killer app per se. The real story is that the announcement has redefined the AI race. By showing that open-source LLMs can be trained, fine-tuned, and optimized at a fraction of the cost of existing models, DeepSeek shifted the competition from building the largest models to building the most efficient ones. This should delight innovators in developing regions, particularly Africa.
Constraints to AI Access in Africa
Access to cutting-edge AI in Africa to date has been impeded by five key constraints: (1) Compute, (2) Energy, (3) Cloud, (4) Data, and (5) Talent. Leveraging open-source models and the further innovations that will emerge from these new market dynamics will help African AI developers solve the first three supply-side constraints. Because this will unlock greater utilization, markets will solve the last two.
Large language models (LLMs) can be proprietary, open-weight, or open-source. Proprietary models such as OpenAI’s GPT series and Google Gemini have proven their capability but restrict how much they can be fine-tuned or modified, and their use requires cloud-based APIs, which adds operational expense. Open-weight alternatives such as Llama and Mistral publish their model weights, which helps with fine-tuning for specific data contexts, but withhold the underlying training data, algorithms, and architecture. Open-source models are the most transparent and allow unrestricted modification and fine-tuning, including to Africa’s context. Further innovations, such as DeepSeek’s use of lower-precision but less resource-intensive 8-bit floating-point numbers (except where a task demands more precise 32-bit numbers), will continue to emerge from competition and create even more efficiency. DeepSeek has published its code on GitHub.
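To build intuition for why lower precision matters, here is a minimal sketch of the memory savings. It is not DeepSeek’s actual FP8 scheme; it uses symmetric int8 quantization as a stand-in (NumPy has no native 8-bit float type), with the `quantize_int8` and `dequantize` helpers being illustrative names:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric quantization: map float32 weights onto int8 in [-127, 127]."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(1024, 1024)).astype(np.float32)  # toy "weight matrix"

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(f"float32 size: {w.nbytes / 1e6:.1f} MB")  # 4 bytes per weight -> 4.2 MB
print(f"int8 size:    {q.nbytes / 1e6:.1f} MB")  # 1 byte per weight  -> 1.0 MB
print(f"max abs error: {np.abs(w - w_hat).max():.5f}")
```

The same count of weights occupies a quarter of the memory, at the cost of a small, bounded rounding error; real training systems apply higher precision only where that error would compound.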
Africa’s Constraints: Energy, Compute and Cloud
Energy. Training an LLM requires massive parallel processing of data, a task akin to compressing and cataloging not just libraries but the entire internet. The energy consumed to train a state-of-the-art proprietary LLM can range from several thousand to over 10,000 megawatt-hours (MWh), depending on model size, training duration, and hardware efficiency. To put this into perspective, 10,000 MWh is enough to power around 1,000 average U.S. homes, or 40,000 African homes, for a year. Africa already faces electricity constraints, with many regions experiencing intermittent power and only around half of homes connected to national power grids.
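The household comparison above can be checked with back-of-envelope arithmetic. The per-home consumption figures below are rough assumptions (roughly 10.5 MWh per year for an average U.S. home and 0.25 MWh per year for an average African home), not exact statistics:

```python
# Rough annual household electricity consumption (assumed figures).
TRAINING_ENERGY_MWH = 10_000
US_HOME_MWH_PER_YEAR = 10.5
AFRICAN_HOME_MWH_PER_YEAR = 0.25

us_homes = TRAINING_ENERGY_MWH / US_HOME_MWH_PER_YEAR
african_homes = TRAINING_ENERGY_MWH / AFRICAN_HOME_MWH_PER_YEAR

print(f"~{us_homes:,.0f} U.S. homes powered for a year")       # ~952
print(f"~{african_homes:,.0f} African homes powered for a year")  # ~40,000
```

The 40x gap between the two figures is itself the point: the same training run represents a far larger share of available electricity in most African grids.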
Compute and Cloud. The old AI race created bottlenecks in access to high-end graphics processing units (GPUs). Hyperscale cloud providers such as Amazon, Google, and Microsoft control much of the AI infrastructure. Most African countries lack the critical mass of data consumption and infrastructure required to attract cloud providers to establish local data centers. As a result, AI users must rent cloud-based computing power from regional or US-based data centers and pay in hard currency. This creates a major financial barrier for local businesses and startups seeking simply to use existing models, let alone those seeking to train or fine-tune new LLMs.
Data. For LLMs to be more relevant to African problems, they should be trained on as much African data as possible. Most proprietary models prioritize English and major European languages, leaving gaps in understanding local dialects, regional expressions, and context-specific information. To address data scarcity, techniques such as transfer learning, few-shot or zero-shot learning, data augmentation, and synthetic data generation enable the development of effective AI models with less data. Ultimately, developers must address the underrepresentation of African languages, cultural context, and information.
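Of the low-data techniques listed above, text data augmentation is the simplest to illustrate. Below is a minimal sketch in the spirit of "easy data augmentation" for NLP, generating noisy variants of a sentence by random word deletion and swapping; the Swahili example sentence and the helper names are illustrative, not from any particular library:

```python
import random

def random_deletion(words, p=0.1, rng=random):
    """Drop each word with probability p, keeping at least one word."""
    kept = [w for w in words if rng.random() > p]
    return kept if kept else [rng.choice(words)]

def random_swap(words, n_swaps=1, rng=random):
    """Swap n_swaps random pairs of word positions."""
    words = words[:]
    for _ in range(n_swaps):
        i, j = rng.randrange(len(words)), rng.randrange(len(words))
        words[i], words[j] = words[j], words[i]
    return words

def augment(sentence, n_copies=4, rng=random):
    """Return n_copies noisy variants of a sentence for training."""
    words = sentence.split()
    return [
        " ".join(random_swap(random_deletion(words, rng=rng), rng=rng))
        for _ in range(n_copies)
    ]

random.seed(0)
sentence = "habari ya asubuhi rafiki yangu"  # Swahili: "good morning my friend"
for variant in augment(sentence):
    print(variant)
```

Such perturbations multiply a small labeled corpus several-fold; for morphologically rich African languages, practitioners typically combine this with transfer learning from a multilingual base model rather than relying on augmentation alone.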
Furthermore, several African countries, such as Nigeria and Kenya, are concerned about data sovereignty and require sensitive data to be stored and processed locally. This becomes a further constraint when AI models are trained on sensitive or public-sector data.[1]
The impact of the new market dynamics on Africa
DeepSeek is far from the first open-source LLM.[2] What is new is the demonstration that an open-source model can be competitive with well-resourced, cutting-edge LLMs at a fraction of the cost. African researchers will build on this breakthrough and develop open-source LLMs (a) trained on African data, (b) fine-tuned to Africa’s context, and (c) optimized and deployed for edge devices such as laptops and phones, and for low-power data centers. Why?
Open-source AI models allow local AI researchers to fine-tune language models on African datasets, improving their ability to process languages such as Swahili, Hausa, Amharic, and other local languages. This is well underway on the continent.
By keeping AI processing on-premise rather than relying on external cloud services, such models can access data while ensuring compliance with data regulations, so long as the necessary investments in security are made.
AI research centers in Africa are already working toward these goals. Centers such as Carnegie Mellon University (CMU) Africa in Kigali, African Institute for Mathematical Sciences (AIMS) in South Africa, Makerere University’s AI lab in Uganda, or the University of Nairobi in Kenya, could create collaborative AI supercomputing hubs, where locally trained models can be shared among institutions, fostering regional AI innovation. Those same institutions are also creating the next generation of African AI talent.
By eliminating the need for constant API calls to foreign cloud platforms, AI can empower African businesses and research institutions to innovate at much lower cost, and in local currency. Utilization will increase immensely.
Unleashing African AI talent
Unlocking access will empower innovators to build tailored solutions relevant to Africa, such as AI-driven crop monitoring and yield predictions, AI-powered diagnostics and medical translation tools, chatbots for financial inclusion and fraud detection, and local language applications. Where literacy or smartphone access is low, voice-based AI assistants can help close the gap in access to the benefits of digital solutions.
Kenyan agtech startups like Amini use AI models to provide climate, supply chain and agriculture advice in local languages, improving food security through data-driven insights. Masakhane, a grassroots organization focused on strengthening natural language processing research in African languages, is using AI to develop machine translation and speech recognition tools. Rwanda’s Charis uses AI to analyze and integrate drone imagery, enabling better management of infrastructure projects.
As data becomes more valuable and demand for AI talent grows, businesses will respond by building the required African datasets. Firms like Kapsule in Rwanda are solving the problem by creating sustainable revenue streams for hospitals and clinics that provide privacy-compliant healthcare data.
Downstream, solving the AI access problem will alleviate the scarcity of top engineering talent, allowing businesses to focus on customers and on executing great business models. This will create great investment opportunities.
The new AI race, and open-source AI, will catalyze African innovation
DeepSeek isn’t the answer; it opens the opportunity. As Africa continues its digital transformation, efficient open-source AI models on edge devices will offer a cost-effective and sovereignty-preserving approach to AI deployment. By prioritizing energy-efficient AI solutions, local language adaptation, and in-house model fine-tuning, African innovators can leapfrog existing AI adoption barriers.
Open-source models allow African researchers, entrepreneurs, and policymakers to shape their own AI landscape—one that is inclusive, affordable, and aligned with the continent’s goals. We are confident they will do so, and we’ll play our part.
To learn more about African Renaissance Ventures, click here.
[1] On-premise data centers must still provide the same level of data security as cloud providers, who have invested heavily in it; sovereignty can also be protected through sovereign cloud solutions.
[2] BLOOM, Falcon (UAE), and XGen (Salesforce) are other examples of open-source LLMs.