Panel Participants:
Līga Raita Rozentāle, Senior Director of European Cybersecurity Policy, Microsoft
Ieva Ilvesa, Adviser to the President of Latvia for Information and Digital Policy
Prof. Filippo Menczer, Director of the Observatory on Social Media, Indiana University
Hannes Krause, Head of Strategic Communication at the Government Office of the Republic of Estonia
Moderator:
Dr. Gunda Reire, Advisor to the Minister at the Ministry of Foreign Affairs of the Republic of Latvia
The discussion begins with a prompt on the temptation to blame technology as one of the culprits of societal problems. However, technology is neither good nor bad – it is neutral; what matters is the ability of communities to adapt to these technologies. Large companies are important in limiting the massive spread of disinformation, but so are governments, and so are the people. The challenge is to find the right balance between them.
The first discussant highlighted how large tech companies like Microsoft have been working through a variety of initiatives to understand how democracy can be distorted through disinformation. These companies have agreed to a variety of codes and guidelines, including with the European Union. They have also taken it upon themselves to create technologies to combat the novel threats: Microsoft, for instance, has created a video authentication tool that assigns confidence scores to videos and helps track faked content. However, tech organizations also urge enhanced media literacy and digital literacy as an effective tool for blunting the impact of disinformation. Governments, civil society, NGOs, and others should be more open to sharing threat information to enable proper data collection and timely responses.
The discussion then moved to the question of whether deepfakes and disinformation are truly issues. A discussant replied that the real threat these new methods compound is echo chambers. The people most likely to be exposed to disinformation are the least likely to see debunking information, and the more partisan sides are the more vulnerable ones. Echo chambers emerge because the ease of unfollowing and unfriending greatly accelerates our tendency to sort ourselves into like-minded groups, creating homogeneity of opinion. Though platforms may nudge us toward such decisions, unfollowing is ultimately an autonomous choice by the person and is present on every platform with a social element, as the sketch below illustrates.
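To make the unfollow-and-rewire dynamic concrete, here is a minimal agent-based sketch. It is an illustration of the panel's point, not any panelist's published model: the network size, tolerance threshold, influence rate, and rewiring rule are all invented for demonstration.

```python
import random

# Minimal sketch (illustrative parameters) of the dynamic described above:
# agents hold opinions in [-1, 1]; concordant neighbors pull opinions
# together, while discordant ones are unfollowed and replaced, so each
# agent's neighborhood drifts toward homogeneity of opinion.

N, TOLERANCE, STEPS = 100, 0.4, 20000
random.seed(1)
opinion = [random.uniform(-1, 1) for _ in range(N)]
# Each agent starts by following 10 random others.
follows = [set(random.sample([j for j in range(N) if j != i], 10))
           for i in range(N)]

def neighborhood_spread():
    """Mean opinion spread (max - min) across each agent's followed set."""
    return sum(
        max(opinion[j] for j in follows[i]) - min(opinion[j] for j in follows[i])
        for i in range(N)
    ) / N

print(f"initial spread: {neighborhood_spread():.2f}")
for _ in range(STEPS):
    i = random.randrange(N)
    j = random.choice(tuple(follows[i]))
    if abs(opinion[i] - opinion[j]) < TOLERANCE:
        # Concordant neighbor: social influence pulls i's opinion toward j's.
        opinion[i] += 0.1 * (opinion[j] - opinion[i])
    else:
        # Discordant neighbor: i unfollows j ("very easy unfollowing") and
        # follows the most like-minded of a few random suggestions, a
        # stand-in for platform recommendations of similar accounts.
        follows[i].discard(j)
        candidates = random.sample(
            [k for k in range(N) if k != i and k not in follows[i]], 5)
        follows[i].add(min(candidates, key=lambda k: abs(opinion[i] - opinion[k])))
print(f"final spread:   {neighborhood_spread():.2f}")
```

Running the sketch shows neighborhood opinion spread shrinking sharply, even though every unfollow is an individually autonomous choice.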
A discussant continued by noting the role of cognition, which gives humans a limited attention span. Faced with information overload, we cope by limiting our intake of information. Algorithms adapt to these cognitive biases, but differently than we think. Platform algorithms assess signals to estimate the quality of content – the more people like an item, the higher its quality is assumed to be. With modest popularity this works, since the wisdom of crowds still operates; but past a critical mass, the real quality of an article no longer matters, because humans interpret its likes and shares as proof of quality. Hence, mere exposure to popularity signals increases our vulnerability. This interplay between cognitive and social factors can be manipulated by inauthentic actors like social bots, which automate the popularity process and thereby push audiences into echo chambers; the sketch below illustrates the feedback loop.
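A minimal sketch of that feedback loop follows, with a hypothetical model and invented parameters (item names, the 50/50 quality-versus-popularity weighting, arrival counts): each arriving user engages based partly on intrinsic quality and partly on displayed popularity, so a burst of inauthentic "bot" likes can keep a low-quality item ahead of a genuinely better one.

```python
import random

def run(bot_seed: int) -> dict:
    """Simulate organic engagement after `bot_seed` fake likes on the junk item."""
    random.seed(7)
    quality = {"good_article": 0.8, "junk_article": 0.2}  # intrinsic quality
    likes = {"good_article": 0, "junk_article": bot_seed}  # bots inflate junk
    for _ in range(5000):  # organic users arrive one at a time
        total = sum(likes.values()) + 1
        for item, q in quality.items():
            # Perceived value mixes real quality with the popularity signal;
            # the 0.5 weighting is an assumption for illustration.
            perceived = 0.5 * q + 0.5 * likes[item] / total
            if random.random() < 0.5 * perceived:
                likes[item] += 1
    return likes

print("no bots:  ", run(bot_seed=0))    # wisdom of crowds: quality wins
print("500 bots: ", run(bot_seed=500))  # seeded popularity keeps junk ahead
```

Without the bot seed, the crowd signal tracks quality; with it, the critical-mass effect the panel described keeps the low-quality item ahead through the whole horizon.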
Discussants noted that this perspective should shift more pressure onto governments to change their action plans from censorship to providing or authenticating the sources of information. As disinformation is ultimately not about differences of opinion but about differences of fact, governments should assess how to re-establish credible sources of information. This is key to establishing a base from which a public forum can operate healthily.
A discussant pointed out that this runs contrary to how media works – “bad news is good news,” as it is more interesting and thus more lucrative. Hence, the transatlantic alliance needs to find ways to support the media; it should not be the role of the government to provide emotional news. However, it remains a challenge to strike the right balance between ensuring the participation of proper “authentic” stakeholders, such as academics or practitioners in a given industry, and the media's tendency toward amplification to – in essence – make news material less boring.
Discussants proposed a variety of measures that could help governments strengthen societal resilience in the age of data overload. Information should be made available through more of the channels people actually use, including the social media platforms popular with youth. This involves stricter oversight of information efforts, investment in different communication channels, and constant vigilance about which channels are becoming most representative of the populace. Ultimately, governments will have to work together to fully understand how social media and other sources of information affect people's behavior. Then the members of the alliance must consider how societal resilience can be rebuilt from the ground up on the basis of these new technological dynamics.