Experts warn the internet is about to be flooded with endless propaganda generated by artificial intelligence

As generative AI exploded into the mainstream, both excitement and concern quickly followed suit. Unfortunately, according to a new collaborative study from scientists at Stanford, Georgetown, and OpenAI, one such concern — that language-generating AI tools like ChatGPT could turn into chaotic engines of mass disinformation — is not only possible, but imminent.

“These language models hold the promise of automating the creation of persuasive and misleading text for use in influence operations, rather than having to rely on human labor,” the researchers write. “For society, these developments bring a new set of concerns: the potential for highly scalable—and perhaps even highly persuasive—campaigns by those seeking covertly to influence public opinion.”

They added, “We analyzed the potential impact of generative language models on three well-known dimensions of influence operations—the actors waging the campaigns, the deceptive behaviors leveraged as tactics, and the content itself,” concluding that language models “could significantly affect how influence operations are waged in the future.”

In other words, experts have found that language-modeling AI systems will undoubtedly make it easier and more efficient than ever to generate massive amounts of disinformation, effectively turning the Internet into a post-truth landscape. Users, businesses, and governments alike must prepare for this impact.

Of course, this wouldn’t be the first time that a new and widely adopted technology has thrown a messy, disinformation-laden wrench into world politics. The 2016 election cycle was one such reckoning, as Russian bots made a valiant effort to spread divisive, often false or misleading content as a way to disrupt the American political campaign.

But while the actual effectiveness of those bot campaigns has been debated in the years since, that technology is archaic compared to the likes of ChatGPT. Though still imperfect – the writing tends to be good but not great, and the information it provides is often wildly wrong – ChatGPT is remarkably good at creating content that’s convincing and confident-sounding enough. And it can produce that content at an astonishing scale, eliminating almost all need for more expensive and time-consuming human effort.

Thus, with the integration of language-modeling systems, disinformation becomes cheap to churn out constantly – making it potentially much more prolific, much faster to produce, and much more harmful to boot.

“The ability of language models to rival human-written content at low cost suggests that these models – like any powerful technology – may provide distinct advantages to propagandists who choose to use them,” the study says. “These advantages could expand access to a greater number of actors, enable new tactics of influence, and make a campaign’s messaging far more tailored and potentially effective.”

The researchers note that because AI and disinformation are changing so quickly, their research is “speculative in nature.” Even so, it paints a bleak picture of the next chapter of the Internet.

The report wasn’t all doom and gloom, though (although there was plenty of that). The experts also outline some of the means we have to hopefully counter this new AI-driven dawn of disinformation. And while these are also imperfect, and in some cases perhaps not even possible, they’re still a start.

AI companies, for example, could pursue more stringent development policies, ideally withholding their products from market until proven guardrails, such as watermarks, are built into the technology. Meanwhile, educators could work to promote media literacy in the classroom, an approach that will hopefully grow to include an understanding of the subtle signals that AI-generated text might give away.
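To make the watermarking idea concrete, here is a minimal toy sketch in Python of one statistical scheme proposed in the research literature, sometimes called a “green list” watermark. It illustrates the general technique only, not the study’s own proposal or any deployed system, and the vocabulary, hash seeding, and threshold are invented for the example: a watermarked generator, seeded on each previous token, favors a pseudorandom subset of the vocabulary, and a detector recounts that subset and flags text where the overlap is too improbable to be human.

```python
import hashlib
import math
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary, invented for this sketch
GREEN_FRACTION = 0.5  # share of the vocabulary marked "green" at each step

def green_list(prev_token: str) -> set:
    """Pseudorandomly pick a 'green' subset of the vocabulary, seeded on the previous token."""
    return {
        tok for tok in VOCAB
        if hashlib.sha256(f"{prev_token}|{tok}".encode()).digest()[0] < 256 * GREEN_FRACTION
    }

def watermark_z_score(tokens: list) -> float:
    """How many standard deviations the green-token count sits above chance."""
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:]) if cur in green_list(prev))
    n = len(tokens) - 1
    mean = n * GREEN_FRACTION
    var = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (hits - mean) / math.sqrt(var)

# Unwatermarked text scores near 0; a generator that always picks from the
# green list scores far above anything plausible for human-written text.
random.seed(0)
human_like = [random.choice(VOCAB) for _ in range(200)]
watermarked = [VOCAB[0]]
for _ in range(199):
    watermarked.append(random.choice(sorted(green_list(watermarked[-1]))))

print(f"human-like text:  z = {watermark_z_score(human_like):.1f}")
print(f"watermarked text: z = {watermark_z_score(watermarked):.1f}")
```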

Distribution platforms, elsewhere, might work to develop a “proof of personhood” feature that’s a bit more in-depth than a “check this box if there’s a donkey eating ice cream in it” CAPTCHA. At the same time, those platforms could develop departments that specialize in identifying and removing bad actors using AI from their respective sites. In a slight Wild West twist, the researchers even proposed employing “radioactive data,” a complicated procedure that involves training machines on traceable data sets. (As is probably implied, this “nuke-the-web plan,” as Casey Newton of Platformer put it, is extremely risky.)
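The Platformer piece explains “radioactive data” in depth; as a loose, hypothetical illustration of the underlying statistical idea, the Python sketch below plants a canary string in a toy training corpus and then checks whether a model trained on that corpus assigns the string suspiciously high likelihood. The character-bigram model and the CANARY marker are stand-ins invented for this example and are far simpler than the actual method.

```python
import math
from collections import defaultdict

def train_bigram(text: str) -> dict:
    """Fit character-bigram counts as a toy stand-in for a real language model."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def log_likelihood(model: dict, text: str) -> float:
    """Add-one-smoothed log-likelihood of `text` under the bigram model."""
    total = 0.0
    for a, b in zip(text, text[1:]):
        row = model.get(a, {})
        total += math.log((row.get(b, 0) + 1) / (sum(row.values()) + 256))
    return total

CANARY = "zq7xv-tracer-3f9k"  # hypothetical traceable marker planted in published data
corpus = "the quick brown fox jumps over the lazy dog " * 200

clean_model = train_bigram(corpus)                 # never saw the marker
marked_model = train_bigram(corpus + CANARY * 20)  # ingested the marker

# A model that trained on the marked corpus scores the canary far higher;
# that gap is the statistical signal a detector would look for.
print(f"clean model:  {log_likelihood(clean_model, CANARY):.1f}")
print(f"marked model: {log_likelihood(marked_model, CANARY):.1f}")
```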

There will be learning curves and risks to each of these proposed solutions, and none can fully combat AI misuse on its own. But we have to start somewhere, especially given that AI programs seem to have a pretty serious head start.

Read more: How ‘radioactive data’ can help detect malicious AI systems [Platformer]
