“Franchise” is a short story by Isaac Asimov that first appeared in a science fiction magazine in 1955. The story describes how the United States becomes an “electronic democracy” in which the world’s most advanced computer, Multivac, selects a single person to answer a series of questions, then uses the answers to determine the outcome of a vote, avoiding the need for an actual election.
Although we are not yet in this disturbing future, the role of artificial intelligence and data science in democratic elections is becoming increasingly important. The election campaigns of Barack Obama and Donald Trump, the Danish Synthetic Party, and the massive theft of data from Emmanuel Macron’s campaign are good examples.
Sentiment analysis
One of the first successful examples of using big data and social network analysis to fine-tune an election bid was Barack Obama’s 2012 presidential campaign in the United States. In that campaign, and in many that followed, traditional polling methods were combined with social media analysis.
These analytical methods provide inexpensive, near real-time ways of gauging voter opinion. Natural language processing (NLP) techniques such as sentiment analysis are often used to analyze tweets, blogs, and other online posts, and to gauge whether the opinions expressed are positive or negative toward a particular politician or campaign message.
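To make this concrete, here is a minimal sketch of such a pipeline using VADER, a lexicon-based sentiment model shipped with the NLTK library. The sample posts and the polarity thresholds are illustrative assumptions, not data from any real campaign.

```python
# Minimal sentiment-analysis sketch using NLTK's VADER model.
# Sample posts and thresholds are illustrative assumptions only.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # fetch the lexicon once

posts = [  # hypothetical social media posts about a candidate
    "Great speech tonight, the candidate really gets our problems!",
    "Another empty promise. I am tired of this campaign.",
    "The debate is on Thursday at 9pm.",
]

analyzer = SentimentIntensityAnalyzer()
for post in posts:
    scores = analyzer.polarity_scores(post)  # neg/neu/pos plus a compound score in [-1, 1]
    label = ("positive" if scores["compound"] > 0.05
             else "negative" if scores["compound"] < -0.05
             else "neutral")
    print(f"{label:>8}: {post}")
```

In practice, campaigns apply this kind of scoring to millions of posts and aggregate the results over time, rather than reading individual messages.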
The main problem with this approach is sampling bias, since the most active social media users tend to be young and tech-savvy, and not representative of the population as a whole. This bias limits its ability to accurately predict election outcomes, although the techniques are very useful for studying voting trends and opinions.
Trump campaign 2016
While sentiment analysis on social media can be unsettling, it is even more unsettling when it is used to influence opinion and voting behavior. One of the best-known examples is Donald Trump’s 2016 campaign for the presidency of the United States. Big data and psychological profiling played such a large role in his victory that traditional polls failed to predict it.
The Trump example was not a case of mass manipulation. Instead, individual voters received different messages based on predictions about their susceptibility to different arguments. They often received information that was biased, incomplete, and sometimes contradictory to other messages from the same candidate. The Trump campaign contracted Cambridge Analytica for that effort, the same company that was sued and forced to close after it was caught harvesting the personal data of millions of Facebook users. Cambridge Analytica’s approach was based on psychometric methods developed by Dr. Michal Kosinski, which can build a detailed profile of a user by analyzing a small number of their likes on social media.
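The sketch below gives a toy illustration of the general idea behind like-based profiling: treating each user’s likes as a binary feature vector and fitting a standard classifier to predict a trait. The tiny synthetic dataset and the single “trait” label are invented for demonstration and bear no relation to Kosinski’s actual models or data.

```python
# Toy illustration of like-based psychometric profiling:
# predict a binary trait from which pages a user has "liked".
# All data here is synthetic; real studies used millions of users.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n_users, n_pages = 200, 50
likes = rng.integers(0, 2, size=(n_users, n_pages))  # 1 = user liked the page

# Hypothetical ground truth: the trait correlates with liking pages 0-4.
trait = (likes[:, :5].sum(axis=1) + rng.normal(0, 1, n_users) > 2.5).astype(int)

model = LogisticRegression(max_iter=1000).fit(likes, trait)

new_user = rng.integers(0, 2, size=(1, n_pages))  # a previously unseen user
prob = model.predict_proba(new_user)[0, 1]
print(f"Estimated probability of trait: {prob:.2f}")
```

With enough users and pages, even simple linear models of this kind can recover surprisingly predictive patterns, which is part of what made the approach effective.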
The problem with this approach is not the technology used, but how campaigns covertly use it to psychologically manipulate vulnerable voters by appealing to their emotions and deliberately spreading fake news through bots. This happened during Emmanuel Macron’s bid for the French presidency in 2017, when his campaign suffered a massive email theft just two days before the election. A large number of bots were then deployed to spread alleged evidence of crimes described in the emails, claims that were later proven false.
Politics and government
Another troubling idea is the possibility of an artificial intelligence (AI)-led government.
In Denmark’s recent general election, a new political party called the Synthetic Party emerged, led by an artificial intelligence chatbot named Leader Lars that was seeking a seat in the country’s parliament. Of course, there are real people behind the chatbot, specifically the MindFuture Foundation. Leader Lars was trained, using machine learning, on all the political statements of Denmark’s fringe political parties since 1970, with the aim of developing a platform that appeals to the 20% of the country’s population that never votes.
While the Synthetic Party may hold outlandish ideas, such as a universal basic income of nearly $15,000 a month, it has spurred discussion about the potential for AI-led government. Could a well-trained and well-resourced AI application really govern people?
We are currently seeing one AI breakthrough after another at breakneck speed, particularly in the field of natural language processing, following the introduction of a new and relatively simple network architecture: the transformer. Transformer-based language models are giant artificial neural networks trained to generate text, but they can also be easily adapted to many other tasks. These networks learn the general structure of human language and develop a kind of understanding of the world through what they “read”.
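For readers curious about what the core of that architecture looks like, here is a bare-bones NumPy sketch of scaled dot-product self-attention, the central operation of the transformer. The dimensions and random “token” vectors are arbitrary placeholders chosen for illustration.

```python
# Bare-bones sketch of scaled dot-product self-attention,
# the core operation of the transformer architecture.
# Shapes and random inputs are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(42)

seq_len, d_model = 4, 8                      # 4 "tokens", 8-dimensional embeddings
x = rng.normal(size=(seq_len, d_model))

# Learned projection matrices (random stand-ins here).
W_q = rng.normal(size=(d_model, d_model))
W_k = rng.normal(size=(d_model, d_model))
W_v = rng.normal(size=(d_model, d_model))

Q, K, V = x @ W_q, x @ W_k, x @ W_v

scores = Q @ K.T / np.sqrt(d_model)          # how strongly each token attends to each other token
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
output = weights @ V                         # context-aware representation of each token

print(output.shape)  # (4, 8): one updated vector per token
```

A full transformer stacks many such attention layers, with multiple heads, residual connections, and feed-forward blocks, but this single operation is what lets each token weigh every other token in the sequence.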
One of the most advanced and impressive examples is ChatGPT, developed by OpenAI. It is a chatbot capable of coherently answering almost any question asked in natural language. It can generate text and perform complex tasks such as writing entire computer programs from just a few user instructions.
Immune to corruption, but opaque
There are many advantages to using AI applications in government. Their ability to process data and knowledge in order to make a decision far exceeds that of any human being. In theory, such a system would also be immune to the influence of corruption and would have no personal interests of its own.
Currently, chatbots can only interact with information someone feeds them. They can’t really think spontaneously or take initiative. Today’s AI systems are better seen as answering machines – oracles – that can respond to “what do you think would happen if…” questions, and should not be seen as agents that can take action or control.
There are many scientific studies on the potential problems and risks of this type of intelligence based on large neural networks. The main problem is their lack of transparency: they do not explain how they reached a decision. These systems are like black boxes; something goes in and something comes out, but we cannot see what happens inside.
We must not forget that there are people behind these machines who may, consciously or unconsciously, introduce biases through the texts they use to train the systems. Moreover, as many ChatGPT users know, intelligent chatbots can also produce incorrect information and bad advice.
Recent technological advances give us a glimpse of future AI capabilities that may one day be able to “rule”, though not, for the time being, without fundamental human oversight. The discussion should soon shift from technical questions to ethical and social issues.
