Investigation finds ChatGPT was built on exploited foreign labor that cleaned up its language model

OpenAI's popular chatbot, with its eerily human-like conversation, was built on the backs of underpaid and psychologically exploited workers, according to a new investigation by TIME.

The data-labeling team, based in Kenya and managed by the San Francisco firm Sama, was reportedly not only paid shockingly low wages while working for a company that may be on its way to a $10 billion investment from Microsoft, but was also exposed to disturbing graphic sexual content in order to scrub ChatGPT of dangerous violence and hate speech.

Starting in November 2021, OpenAI sent tens of thousands of text samples to the workers, who were tasked with combing the passages for instances of pedophilia, animal abuse, murder, suicide, torture, self-harm, and incest, TIME reported. Team members described having to read hundreds of such entries per day. For wages of $1 to $2 an hour, or a monthly salary of $170, some workers said their jobs were "mentally scarring" and a form of "torture."

Sama's employees were reportedly offered wellness sessions with counselors, as well as individual and group therapy, but many of the workers interviewed said the reality of mental health care at the company was disappointing and inaccessible. The company responded that it takes the mental health of its employees seriously.

The TIME investigation also found that the same group of workers had been assigned extra work compiling and labeling an enormous set of graphic — and apparently increasingly illegal — images for an undisclosed OpenAI project. Sama terminated its contract with OpenAI in February 2022. By December, ChatGPT had swept the internet and dominated conversation as the next wave of innovative AI.

At launch, ChatGPT was noted for its surprisingly thorough moderation system, which goes so far as to prevent users from goading the AI into making racist, violent, or otherwise inappropriate statements. It also flagged text it deemed intolerant within the chat itself, coloring it red and providing a warning to the user.

The ethical complexity of artificial intelligence

While news of OpenAI's hidden workforce is troubling, it's not entirely surprising: the ethics of human-powered content moderation is not a new debate, especially on social media platforms that grapple with the line between free posting and protecting their user bases. In 2021, The New York Times reported on Facebook's outsourcing of post moderation to a review and labeling firm known as Accenture. Both companies outsourced moderation staff around the world and later dealt with massive fallout from a workforce psychologically unprepared for the work. Facebook paid a $52 million settlement to traumatized workers in 2020.

Content moderation has also become a subject of psychological horror and post-tech media, such as the 2022 thriller We Had to Remove This Post by Dutch writer Hanna Bervoets, which chronicles the mental breakdown and legal turmoil of a company quality assurance worker. For those characters, and for the real people behind the work, the disturbances of a future built on technology and the internet are a lasting shock.

The rapid adoption of ChatGPT, and the successive wave of AI art generators, poses several questions to a general public increasingly willing to hand over its data, social and romantic interactions, and even cultural creativity to technology. Can we rely on artificial intelligence for accurate information and services? What are the academic implications of text-based AI that can respond to feedback in real time? Is it unethical to use artists' work to build new computer-generated art?

The answers to these questions are unclear and ethically complex. Chatbots are not repositories of accurate knowledge or original ideas, but they make for an interesting Socratic exercise. They are rapidly expanding avenues for plagiarism, yet many academics are intrigued by their potential as tools for creative prompting. The exploitation of artists and their intellectual property is an escalating issue — but can it be set aside, for now, in the name of so-called innovation? And how can creators build safeguards into these technological advances without risking the wellbeing of the real people behind the scenes?

One thing is clear: the rapid rise of AI as the next technological frontier continues to pose new ethical quandaries about the creation and application of tools that replicate human interaction at real human cost.

If you have been sexually assaulted, call the National Confidential Sexual Assault Hotline at 1-800-656-HOPE (4673), or access 24-7 online help by visiting online.rainn.org.
