At the beginning of the Covid-19 pandemic it became crucial to distinguish fact-based information from disinformation. Information that is not based on facts can cause panic and wrong decisions. The popular phrase “fake news”, on the one hand, does not cover all the forms relevant to our interest, and on the other hand includes forms that fall outside it. The boundaries between these terms might seem vague, but the difference in meaning eventually matters on a large scale. The focus of our subject shall be disinformation, or pretend-information: content that is false, inaccurate, or misleading by its very design and that is used to cause public harm or to create profit.
Having defined the central term, we shall touch upon the initial actions undertaken by the EU. The EU recognised the issue of disinformation and, in 2015, launched the East Strategic Communication Task Force under the European External Action Service (EEAS) in reaction to Russian disinformation campaigns. Its task was the proactive communication of EU policies and activities in the eastern neighbourhood, to counter the effects of disinformation. Following an invitation of the Foreign Affairs Council of the EU in May of the same year, the Joint Framework on Countering Hybrid Threats was subsequently presented.
Hybrid threats in this context mean “the mixture of conventional and unconventional, military and non-military, overt and covert actions that can be used in a coordinated manner by state or non-state actors to achieve specific objectives while remaining below the threshold of formally declared warfare”. Disinformation campaigns are often part of hybrid threats or hybrid warfare, which are increasingly being utilised.
The EU Communication
The Communication “Tackling online disinformation: a European approach”, presented in April 2018, called for a response to the phenomenon of disinformation. It stated that any policy response should be comprehensive and that policy makers should continuously monitor the phenomenon and adjust objectives according to its evolution.
As feedback on the Communication, two reports were issued.
- The first one is the Report on the Implementation of the Communication, which presents the progress made on the actions proposed by the European Commission.
- The other one is the final Report of the independent High Level Group on Fake News and Online Disinformation (HLEG), which analyses suggested best practices in the light of fundamental principles and the responses stemming from those principles. Instead of simplistic solutions, the HLEG suggests a multidimensional approach to the issue.
Another important part of the EU approach was the Action Plan against Disinformation, set forth by the European Commission. The Action Plan builds upon existing initiatives of the European Commission (especially the Communication). With the help of the EEAS, the Member States, and the European Parliament, it sets out four pillars that ground the response to disinformation. Each pillar is composed of particular actions to be taken in this endeavour. The pillars concern: improving the capabilities of Union institutions to detect, analyse, and expose disinformation; strengthening coordinated and joint responses to disinformation; mobilising the private sector to tackle disinformation; and raising awareness and societal resilience.
Code of Practice
In October 2018, the Code of Practice on Disinformation was published as “the first worldwide self-regulatory set of standards to fight disinformation, voluntarily signed by platforms, leading social networks, advertisers and the advertising industry”. The Code was signed by online platforms such as Facebook, Google, Twitter, and Microsoft, as well as by the advertising industry.
Signatories of the Code presented detailed roadmaps for taking action in five areas:
- Disrupting advertising revenues of certain accounts and websites that spread disinformation.
- Making political advertising and issue based advertising more transparent.
- Addressing the issue of fake accounts and online bots.
- Empowering consumers to report disinformation and access different news sources, while improving the visibility and findability of authoritative content.
- Empowering the research community to monitor online disinformation through privacy-compliant access to the platforms’ data.
Between January and May 2019, the European Commission carried out targeted monitoring of the implementation of these commitments by Facebook, Google, and Twitter, with particular regard to the integrity of the European Parliament elections. In particular, the Commission asked the three platforms signatory to the Code of Practice to report on a monthly basis on the actions undertaken to improve the scrutiny of ad placements, ensure the transparency of political and issue-based advertising, and tackle fake accounts and the malicious use of bots. The Commission published the reports received for the five months together with its own assessment (intermediate reports for January, February, March, April, and May 2019).
In March 2019, the European Parliamentary Research Service published the study “Automated tackling of disinformation”. The study classified national initiatives by the type of regulation of disinformation, ranging from no regulation to hard regulation, relative to the degree of civil society involvement. Spain has not yet implemented any specific regulation on disinformation. In Italy, the Postal Police has been charged with issuing warnings about false reports; citizens can also alert the police if they spot false information. Next come the co-regulation initiatives implemented by the EU, Belgium, and Denmark, which foster greater civil society involvement. Finally, France, the UK, and Germany have adopted strict regulations, giving much power to the judiciary, with less cooperation with civil society.
What is next?
A multidimensional approach to disinformation is worth considering. This phenomenon cannot be easily resolved by particular laws, since the issue is global. Many questions arise, such as: how to find the person responsible for spreading disinformation, how to prosecute that person, and what if it is not a person but a bot? Even though the issue is complex, the consequences are real and concrete. People see some news, take it at face value, and act on it.
Privacy and data protection regulation can act as a powerful tool in this battle. Algorithms build a profile of a person, and according to that profile the person is served certain news items or ads in their feed. News designed and served by an algorithm triggers specific emotions in that person, and the goal is achieved.
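To make the mechanism concrete, here is a minimal, purely illustrative sketch of profile-based content selection. It is not any platform's actual algorithm; all names, tags, and data are hypothetical. It shows only the general idea: items whose topics overlap most with a user's inferred interest profile are ranked first in the feed.

```python
# Toy feed-ranking sketch (hypothetical, for illustration only):
# rank items by how many of the user's inferred interests they match.

def rank_by_profile(profile, items):
    """Sort items so that those matching the most profile interests come first."""
    return sorted(items, key=lambda item: len(profile & item["tags"]), reverse=True)

# A profile an algorithm might infer from past clicks (hypothetical).
user_profile = {"politics", "health"}

items = [
    {"title": "Celebrity gossip", "tags": {"entertainment"}},
    {"title": "Miracle cure claim", "tags": {"health", "politics"}},
    {"title": "Local sports result", "tags": {"sports"}},
]

feed = rank_by_profile(user_profile, items)
print(feed[0]["title"])  # the item overlapping most interests surfaces first
```

Even this toy version shows why profiling matters for disinformation: content engineered to match a person's interests is exactly what such ranking pushes to the top, which is why data protection rules that constrain profiling also constrain targeted disinformation.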
Also, certain non-governmental bodies across the world have started fact-checking and joined this battle. These free portals provide fact-checking to citizens. When it is quite difficult for people to tell true from false, or a deep fake from the real thing, it is common for non-governmental and governmental bodies to help with their resources. An obvious issue with this approach is the creation of a so-called Ministry of Truth (as introduced in George Orwell’s 1984): giving too much power to organisations or institutions and trusting them with what is supposed to be the truth.
This topic, like many other topics that we discuss today, emphasises how the issues we tend to deal with are global, multidisciplinary, and online. So there are no easy fixes. Only a combination of laws, soft law, codes of practice, awareness, policies, and technology can provide a solution.
Marija Boskovic Batarelo, LL.M. Law and Technology