Generative AI is a machine that generates misinformation. And at least 16 countries are already using it

There is a Midjourney competitor in China. Developed by Baidu, this tool generates images from a prompt. The problem is that citizens cannot create certain images: the censorship mechanism filters politically sensitive words and ensures that, for example, Tiananmen Square simply does not exist on the platform. Even on Midjourney, Xi Jinping is censored. These are clear examples of what is happening to freedom on the Internet, especially in the new era of generative artificial intelligence.

Examples everywhere. In September 2022, Iran blocked mobile network access for its citizens, who suddenly found themselves barely able to use WhatsApp or Instagram. Internet use in Burma is currently so restricted that the country is almost at China's level. In the Philippines, former President Rodrigo Duterte used an anti-terrorism law to block websites his government deemed critical of his administration. All of this is bad news for internet freedom, and the dangerous use of generative artificial intelligence is making it worse.

Generative AI for evil. A new report by the human rights group Freedom House reveals that at least 16 countries have used generative artificial intelligence systems “to sow doubt, discredit opponents or influence public debate.” The aim of this annual study is to rank countries around the world by internet freedom, based on factors such as restrictions on freedom of expression, internet shutdowns, and retaliation against people for what they express online.

It keeps getting worse. The latest edition of the report highlights that global internet freedom has now declined for the 13th consecutive year, driven at least in part by the proliferation of generative artificial intelligence systems. According to Allie Funk, one of the researchers responsible for the project, “advances in AI are exacerbating this crisis.”

The dark side. Thanks to mass access to generative AI tools, the barrier to creating disinformation campaigns has almost disappeared. Automated systems enable precisely targeted campaigns and more subtle forms of online censorship, and the report reveals that these systems are already in use in at least 16 countries.

Message control. At least 47 governments have deployed commentators to manipulate online debates in their favor, double the number that did so a decade ago, according to the report. It also notes that “legal frameworks in at least 21 countries compel or incentivize digital platforms to deploy machine learning to remove disfavored political, social and religious speech.”

It also happens in the US and Europe. These tools are used around the world: “even in more democratic environments such as the United States and Europe, governments have considered or actually imposed restrictions on access to prominent websites and social media platforms.” A recent example comes from France, where social media restrictions were proposed as a way to quell the riots that broke out in the country in July 2023.

Liar’s dividend. According to Funk, easy access to these generative AI systems can undermine trust in verifiable facts. The report calls this phenomenon the “liar’s dividend”: people become more skeptical of true information, especially in times of crisis or political conflict, when false information proliferates.

Image | Freedom House

In Xataka | All the attempts to regulate the Internet by the governments of Spain: from the gag law to the digital decree
