BY MORGAN WILSMANN
Morgan Wilsmann is a second-year MAIR student specializing in technology and innovation. Drawing on her professional background in international development and her current work in tech policy, Morgan is interested in the responsible development and deployment of emerging technology, especially in developing country contexts.
Digital platforms are under intense pressure to keep their content safe. In 2023 alone, U.S. state policymakers introduced over 200 content moderation bills to tackle online material perceived as harmful, with a recent focus on artificial intelligence (AI) outputs. Tech companies competing in the AI arms race claim to uphold responsible and ethical AI principles, partly by ensuring their models are trained on data purged of toxic material. However, this ethical narrative largely ignores the human toll: the unseen moderators, often in developing countries, who shoulder the traumatizing burden of filtering disturbing content.
Removing harmful content is not as simple as flagging it and taking it down. Determining whether something violates policy requires careful judgment: users may revolt and go elsewhere if they feel overly censored, but will complain if they encounter material that disturbs them. While social media companies have grappled with moderating harmful content for years, only in recent months have AI tools faced similar scrutiny over their outputs. Recently, a Microsoft employee raised the alarm about the company’s AI image generator tool, claiming Microsoft is not doing enough to prevent abusive and violent content.
AI models that power the likes of OpenAI’s ChatGPT rely on humans to tease out and label horrific content, such as child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest, so the AI system can “learn” to filter out problematic data before the end user sees it. And, as with most “low-skilled,” undesirable work, these responsibilities disproportionately fall on the economically vulnerable in developing countries. Kenya, with its educated, English-speaking workforce, has become a hub for outsourcing this kind of content moderation.
Sama, a San Francisco-based firm that brands itself as a champion of “ethical AI,” is a major player in content moderation outsourcing. OpenAI contracted Sama in 2021 to analyze assigned datasets and label problematic content. With more than 98% of its workers employed in East Africa, Sama prides itself on hiring from impoverished locales, claiming to have lifted “59,000 individuals out of poverty.”
Sama made headlines last year after TIME reporters revealed that workers in Kenya earn wages as low as $1.32 per hour. Yet, with that rate more than double Kenya’s minimum wage, focusing on the hourly pay misses the mark. The real scandal is the toll of the work that content moderators endure without adequate protection and support.
The mental health impact of consuming endless streams of toxic content cannot be overstated. At least one moderator was assigned a livestreamed video from the Ethiopian civil war, in which he watched familiar faces from his homeland being murdered. Others consume endless material depicting incest, bestiality, and child sexual abuse. While Sama claims to offer mental health counseling, workers say they were either not offered psychosocial support or found the service a waste of time. In any case, no amount of therapy can fend off the severe mental distress of consuming such content day in and day out.
Last year's reporting on the exploitation of Kenyan content moderators led Sama to cut the ChatGPT contract short. Unfortunately, that meant letting go of already traumatized workers, leaving them both jobless and without psychosocial support. These former ChatGPT moderators have petitioned Kenyan lawmakers to regulate the work and to recognize exposure to harmful content as an occupational hazard.
Despite workers’ outcry against exploitation, Kenya’s President Ruto visited Sama’s Nairobi office earlier this month, exchanging platitudes and promises of building out the East African digital economy. Ruto has courted Big Tech since taking office in 2022, going so far as to travel to Silicon Valley to appeal to Apple, Intel, and others. It is clear that Ruto prioritizes big-money tech jobs over dignified work.
The competing demands of U.S. policymakers seeking stricter content moderation on digital platforms, tech companies striving to meet those expectations, and Kenyan leaders eager for the attention of the American tech industry have created a breeding ground for worker exploitation. Yet the safety of online spaces and the well-being of Kenyan content moderators must both be prioritized.
For Kenya to responsibly expand its tech industry, policymakers must establish worker protections for content moderators. In the meantime, AI companies have a duty to ensure that all inputs to their products are ethically “sourced”: even when content moderation is outsourced, digital platforms must ensure that those enduring the most horrific, but necessary, jobs have robust protections. Where regulation does not yet exist, AI developers and outsourcing firms must act on their own by mandating mental health support, exposure limits, and fair compensation.
This blog was written for Professor Nina Gardner’s Corporate Sustainability, Business and Human Rights course at the Johns Hopkins SAIS DC Campus.
Image Credit: Canva Image Generator with prompt “An office in Kenya of rows of computers where content moderators are labeling data, in an animated format.”