To address disinformation online, self-regulation by internet intermediaries and positive measures, including independent fact-checking, public education, and media literacy campaigns, could be prioritized over criminalisation.
- Freedom of opinion and expression was protected throughout the campaign process
- Freedom of opinion and expression by the media was respected throughout the electoral process. In addition, the media respected the freedom of opinion and expression of others
- The state took the steps necessary to give effect to rights during voter education
- Concerning more specifically the risks posed by disinformation and undue propaganda on the internet and social media for the smooth conduct of the electoral process, the Assembly calls on member States to: (…) 9.2. develop specific regulatory frameworks for internet content at election times and include in these frameworks provisions on transparency in relation to sponsored content published on social media, so that the public can be aware of the source that funds electoral advertising or any other information or opinion.
- The Commission calls upon platforms to decisively step up their efforts to tackle online disinformation. It considers that self-regulation can contribute to these efforts, provided it is effectively implemented and monitored.
- Ensure that online services include, by design, safeguards against disinformation; this should, for example, include detailed information on the behaviour of algorithms that prioritise the display of content as well as development of testing methodologies.
- In line with the Commission's Communication, the signatories of the Code of Practice recognise the importance of efforts to: (i) Include safeguards against disinformation; (ii) Improve the scrutiny of advertisement placements to reduce revenues of the purveyors of disinformation; (iii) Ensure transparency about political and issue-based advertising, also with a view to enabling users to understand why they have been targeted by a given advertisement; (iv) Implement and promote reasonable policies against misrepresentation; (v) Intensify and demonstrate the effectiveness of efforts to close fake accounts and establish clear marking systems and rules for bots to ensure their activities cannot be confused with human interactions; (...) (vii) Consistently with Article 10 of the European Convention on Human Rights and the principle of freedom of opinion, invest in technological means to prioritize relevant, authentic, and authoritative information where appropriate in search, feeds, or other automatically ranked distribution channels (...) (ix) Dilute the visibility of disinformation by improving the findability of trustworthy content.
- In line with those rights and freedoms, rather than criminalising or prohibiting disinformation as such, the EU strategy aims to make the online environment and its actors more transparent and accountable, making content moderation practices more transparent, empowering citizens and fostering an open democratic debate.
- Empowering users is key to limiting the impact of disinformation. A better understanding of the functioning of online services, as well as tools that foster more responsible behaviour online or that enable users to detect and report false and/or misleading content, can dramatically limit the spread of disinformation.
- Disinformation is an area where rigid legislation by the EU or member states is not desirable, mainly because doing so would constitute a disproportionate interference with freedom of speech. In the absence of such legislation, the soft law instrument in play is the EU Code of Practice on Disinformation – a voluntary arrangement between the European Commission and big tech companies (...).
- Rather than imposing undue restrictions on freedom of expression and onerous intermediary liability obligations, efforts to address online disinformation should promote an enabling environment for freedom of expression. These measures include: requiring or encouraging heightened transparency regarding advertisement placements and sponsored content; developing and promoting independent fact-checking mechanisms; providing support for independent and diverse public service media outlets; instituting measures to improve public education and media literacy; and collaborating with social media platforms to ensure that their approaches to content moderation, including the use of artificial intelligence-driven tools, reinforce and respect human rights.
- There should be no general or ambiguous laws on disinformation, such as prohibitions on spreading “falsehoods” or “non-objective information”.
- States should consider supporting positive measures to address online disinformation, such as the promotion of independent fact-checking mechanisms and public education campaigns, while avoiding adopting rules criminalising disinformation.
- Media outlets and online platforms should enhance their professionalism and social responsibility, including potentially by adopting codes of conduct and fact-checking systems, and by putting in place self-regulatory systems, or participating in existing ones, to enforce them.
- In order to protect against unaccountable private domination of the environment for freedom of expression, we urge the development of the following: (…) e. Human rights sensitive solutions to the challenges caused by disinformation, including the growing possibility of “deep fakes”, in publicly accountable and targeted ways, using approaches that meet the international law standards of legality, legitimacy of objective, and necessity and proportionality.
- General prohibitions on the dissemination of information based on vague and ambiguous ideas, including “false news” or “non-objective information”, are incompatible with international standards for restrictions on freedom of expression, as set out in paragraph 1(a), and should be abolished.
- States have a positive obligation to promote a free, independent and diverse communications environment, including media diversity, which is a key means of addressing disinformation and propaganda.
- States should repeal any law that criminalizes or unduly restricts expression, online or offline.
- Smart regulation, not heavy-handed viewpoint-based regulation, should be the norm, focused on ensuring company transparency and remediation to enable the public to make choices about how and whether to engage in online forums.
- Finding appropriate responses to disinformation is difficult, not least because the concept is undefined and open to abuse, and because the size and nature of the problem is contested in the absence of sufficient data and research. State responses have often been problematic and heavy-handed and have had a detrimental impact on human rights.
- There is clear evidence that robust public information regimes and independent journalism are strong antidotes to disinformation.
- States have resorted to disproportionate measures such as Internet shutdowns and vague and overly broad laws to criminalize, block, censor and chill online speech and shrink civic space. These measures are not only incompatible with international human rights law but also contribute to amplifying misperceptions, fostering fear and entrenching public mistrust of institutions.
- Measures such as the adoption of corporate digital ethics codes and of self-regulatory mechanisms to resolve conflicts between companies and users would also allow greater regulatory flexibility for the benefit of the interests of users and companies, while depressurising the relationship with the government and promoting co-responsibility for online behaviour.
- Companies should adopt clear, narrowly defined content and advertising policies on disinformation and misinformation that are in line with international human rights law and after consultation with all relevant stakeholders. (...) They should ensure that all policies are easily accessible and understandable by users and are enforced consistently, taking into account the particular contexts in which they are applied.