More than 1,000 workers at a Nairobi outsourcing firm that handled work for Meta were abruptly laid off this week after the social media company ended a key contract, a sweeping cut that has thrown fresh attention on the fragile labor system underpinning much of the artificial intelligence economy.

Sama, a Kenya-based company that has done content moderation and data-labeling work for Meta, said it had issued a formal redundancy notice affecting 1,108 workers at its Nairobi office. The layoffs followed Meta’s decision to end the contract, according to reporting in Kenya and Britain, and set off alarm among labor advocates, who said the move exposed how dependent many lower-wage tech workers are on the decisions of a single Silicon Valley client.

The dismissals amount to more than a local labor dispute. They offer a stark illustration of a broader reality of the AI boom: even as tech companies promote artificial intelligence as a transformative new industry, much of the work required to train and police those systems still falls to vast networks of contractors in countries like Kenya, where workers often review disturbing material, label data or annotate images for comparatively low pay and with limited job security.

A shock to a vulnerable workforce

Sama’s Nairobi operation had become emblematic of this hidden layer of the tech industry. Workers there have, at different times, been involved in moderating harmful online content and in preparing data used to improve AI systems. Those jobs, long marketed as a foothold in the digital economy, have also come with repeated complaints about pay, psychological strain and abrupt restructuring.

This week’s layoffs underscored how quickly those positions can vanish. When a single client withdraws, an entire workforce can be left exposed.

Meta’s decision appears to have come amid scrutiny of allegations that Kenyan contractors had been exposed to highly sensitive footage captured by Meta smart glasses and used for AI training and annotation. Reports over the past two days linked the company’s contract review to those earlier claims, raising questions not only about labor practices but also about how personal user data is handled inside the AI supply chain.

That combination — insecure outsourced labor and sensitive consumer data — has become one of the most combustible fault lines in the generative AI era.

Long-running tensions in Kenya

The layoffs also revive a bitter history between Meta, Sama and Kenyan workers. The companies have already been drawn into litigation in Kenya over conditions facing content moderators, including allegations involving inadequate pay, insufficient mental-health support and unfair dismissals.

In 2024, Kenya’s Court of Appeal upheld rulings allowing Meta to be sued in the country over moderators’ claims, a significant decision that challenged the idea that global tech companies can remain insulated from labor disputes simply because the work is performed through contractors. The case was closely watched by labor groups, who argued that outsourcing had allowed large technology firms to benefit from a far-flung workforce while distancing themselves from accountability.

Sama previously said it had ceased content-moderation work for Meta, though it remained involved in AI data-labeling assignments for the company. That distinction made Meta’s latest withdrawal especially consequential: even after public scrutiny over moderation, Meta still appeared to rely on Sama’s Nairobi workforce for other forms of labor essential to developing AI systems.

Now, workers and advocates are left asking how much of Sama’s Kenya operation was tied to Meta, whether those dismissed will receive full severance and protections required under Kenyan law, and whether Meta plans to move the work elsewhere rather than eliminate it.

The hidden labor of AI

For years, Kenya has been promoted as a hub for digital outsourcing, with a growing pool of English-speaking workers handling customer service, moderation and data operations for global clients. But the model has increasingly come under criticism as AI companies race to secure human labor for tasks machines cannot yet reliably perform on their own.

Those workers are often asked to do some of the industry’s most taxing jobs: screening graphic material, identifying hate speech, transcribing speech, tagging objects in images and helping refine the outputs of machine-learning systems. The work can be repetitive, emotionally draining and highly contingent on client contracts.

The Sama layoffs show how exposed that workforce remains. In the public narrative around AI, the focus often rests on chipmakers, startup valuations and corporate competition. Less visible are the outsourced workers who make many of those systems possible — and who can be dropped with little warning when legal, reputational or commercial risks shift.

The latest episode also highlights growing privacy concerns around AI training data. If contractors were indeed reviewing sensitive smart-glasses footage, the matter could invite further scrutiny from regulators or courts, including outside Kenya, over how intimate user-generated material is collected, shared and used to build commercial AI products.

For Meta, the contract termination may help contain one risk while opening another. For workers in Nairobi, it has already had immediate consequences.

What happens next may determine whether this becomes another isolated outsourcing dispute or a broader reckoning over who bears the costs of the AI economy’s rapid expansion.
