According to an investigation by Time, OpenAI paid Kenyan workers less than $2 per hour to filter through tens of thousands of lines of text to help make ChatGPT safer to use.
The workers were assigned to label and filter toxic content out of ChatGPT’s training dataset. To do so, they had to read graphic descriptions of bestiality, murder, suicide, torture, self-harm, child sexual abuse, and incest, Time reported.
OpenAI partnered with Sama, a San Francisco-based data labeling firm, to detect and label toxic content that could then be used to train a filtering tool for ChatGPT. Sama claims to provide developing countries with “ethical” and “dignified digital work.”
Sama recruited data labelers in Kenya to work on behalf of OpenAI, giving them a crucial role in making the chatbot safe for public use.
Despite playing an integral role in building ChatGPT, the workers faced grueling conditions and low pay, receiving between $1.32 and $2 an hour depending on seniority and performance.
Sama ended its work for OpenAI in February 2022, about eight months earlier than contracted, citing the traumatic nature of the work as well as an earlier investigative report by Time on Sama’s work with Meta.