A new Time report has revealed the murkier side of the AI chatbot industry, highlighting how at least one startup has used questionable practices to improve its technology.
The Time report, published on Wednesday, focuses on Microsoft-backed OpenAI and its ChatGPT chatbot, a technology that has garnered a lot of attention recently for its remarkable ability to produce highly natural conversational text.
Time’s investigation found that, to train the AI technology, OpenAI used the services of a team in Kenya to sift through text involving disturbing subject matter such as pedophilia, animal abuse, murder, suicide, torture, self-harm, and incest. For their efforts to categorize the hateful content, some team members earned less than $2 an hour.
The work, which began in November 2021, was deemed necessary because ChatGPT’s predecessor, GPT-3, impressive though it was, had a tendency to spew out offensive content, as its training dataset had been compiled by scraping hundreds of billions of words from all corners of the web.
The Kenya-based team, run by San Francisco-based firm Sama, was tasked with labeling the offensive content to help train OpenAI’s chatbot, thereby improving the dataset and reducing the chances of any objectionable output.
Time said all four of the Sama employees it interviewed described being mentally scarred by their work. Sama offered counseling sessions, but the workers said they were ineffective and rarely took place due to the demands of the job, though a Sama spokesperson told Time that therapists could be reached at any time.
Reading the traumatic material sometimes felt like “torture,” one worker told Time, adding that they felt “disturbed” by the end of the week.
In February 2022, things took a darker turn for Sama when OpenAI launched a separate project unrelated to ChatGPT that required its Kenya-based team to collect images of a sexual and violent nature. OpenAI told Time that the work was necessary to make its AI tools safer.
Within weeks of starting the image-based project, the troubling nature of the tasks prompted Sama to cancel all of its contracts with OpenAI, though Time notes that the decision could also have been prompted by the PR fallout from a report on a similar subject that the publication had run about Facebook at around the same time.
OpenAI told Time that there had been a “misunderstanding” about the nature of the images it asked Sama to collect, insisting that it had not requested the most extreme images, nor had it seen any of the images that were submitted.
But the termination of the contracts affected the workers’ livelihoods, with some of the Kenya-based team losing their jobs and others being moved onto lower-paying projects.
The Time report offers an uneasy but important look at the kind of work that goes into the AI-powered chatbots that have so excited the tech industry lately.
As transformative and beneficial as the technology may be, it clearly comes at a human cost, raising a slew of ethical questions about how companies develop their new technologies, and, more broadly, about how wealthier countries continue to outsource undesirable tasks to poorer nations for a lower financial outlay.
The startups behind the technology will come under more focused scrutiny in the coming months and years, so they would do well to review and improve their practices at the earliest opportunity.
Digital Trends has reached out to OpenAI for comment on the Time report and will update this article when we hear back.