Are The “Big Tech” Companies Actually Allowing And Possibly Enabling Disinformation? Publisher Alistair Speirs Thinks The Answer Is “Yes”
Recently the EU Parliament overwhelmingly voted in favor of the most ambitious regulation ever drafted to end the era of potentially disruptive and misleading posts on social media. The legislation directly affects some of the most powerful corporations in the world, yet the Parliament was decisive in passing this possibly game-changing law, perhaps influenced by thousands of activists who flooded key decision-makers with messages and published revealing investigations into the damage they claim Big Tech has caused. There was a grassroots-driven momentum in Europe to down-size the influence social media platforms can wield, especially in the absence of intense content curation, acceptance of responsibility and, ultimately, accountability. Was there an element of anti-US sentiment in there too?
But the activists are not done yet. They are pushing EU lawmakers to continue strong negotiations, hoping these will trigger a cascade of new laws all over the world following the EU's lead, laws that, as they say, "will put people before platforms and their profits". Their intention is to ensure that whenever Facebook, YouTube, TikTok or other giant tech companies allow harm to societies at scale, whether by letting dangerous disinformation go viral, spreading harmful content to kids or misusing our personal data, local governments would be able to force them to take responsibility.
Social media has certainly brought powerful innovation to our societies, but to many it seems the platforms have evolved with literally just one goal: maximizing their profit, even if that meant adversely affecting our societies, often by questioning and delegitimizing science and encouraging a rise in teen depression, including through the horrible and still deeply ingrained FOMO (fear of missing out), online bullying and "shaming".
This new EU regulation, which some are hoping will become a "new constitution" for the Internet, might result in a future where technologies are shaped around common values and protect fundamental human rights, instead of exploiting weaknesses and driving traffic to the highest-return posts, no matter what their content is.
Also, with so much disillusionment around politics, this regulation has perhaps shown Europe at its best, with almost all the demands from citizens, experts and civil society organizations landing in some way in the legislation, including such phrases as:
• “Detoxify the algorithm” i.e. make the platforms take responsibility for the harm they might cause to our societies, like the viral spread of dangerous disinformation;
• “Big sanctions” meaning that companies can face fines of up to 6% of their global turnover, which could mean billions of dollars, if they don’t fix their systems;
• “Open the black box” i.e. allow independent auditors, researchers and civil society to scrutinize their actions and potentially uncover their wrongdoings;
• “Stop surveillance ads” i.e. ban any exploitation of children’s data, political beliefs or gender choices, to target people with ads.
All of this seems very positive to those who have been campaigning for these changes, led by a number of civil rights and activist groups at the forefront of the public push to force social media platforms to tackle the sometimes very serious problems their pursuit of profit creates. One of these organisations is Avaaz, which has been reporting on disinformation and its scale, commissioning polls on its harmful effects, evaluating the platforms’ efforts to tackle it, and identifying their failures in the US, EU and Brazil. Now Avaaz is calling for a “Paris Agreement for Disinformation”, after its research found significant failings in the efforts of Facebook, YouTube, Twitter and Instagram to tackle COVID-19 disinformation since the pandemic began.
Its preliminary research names Facebook as the greatest “emitter” of COVID disinformation and YouTube as the platform that fails to act in the greatest proportion of cases. No doubt the platforms’ own research will counter that, and we are left to decide whom we believe, although the EU’s action does indicate it felt there was a need to intervene.
The European Commission recently released its guidance on strengthening the 2018 Code of Practice on Disinformation and in May, EURACTIV reported that the Commission was pitching measures to tackle disinformation as part of the proposed Digital Services Act (DSA). Luca Nicotra, campaign director at Avaaz, told EURACTIV that, together, these policies present a “once in a decade opportunity to table an approach that can have the ambition to actually address the problem.”
The European Commission is defending its plans to tackle online disinformation from criticism from national governments, according to a working paper leaked to EURACTIV.
Relevant background research, presented at an event in June 2021, found that Facebook was responsible for 68% of the total interactions on fact-checked COVID disinformation documented across the platforms. EURACTIV contacted Facebook, which also owns Instagram, and YouTube’s parent company, Google, for comment, but no reply was received by the time of publication. However, in response to Avaaz’s report, a Twitter spokesperson told EURACTIV that the platform had expanded efforts to combat disinformation since the start of the pandemic. “Making certain that reliable, authoritative health information is easily accessible on Twitter has been a priority long before we were in the midst of a global pandemic”, they said.
The research also found that the platforms took no action on 37% of the COVID disinformation content sampled. In this respect, YouTube was found to be the biggest culprit, with 92% of the pandemic-related disinformation sampled on the platform left unactioned.
There are also major discrepancies between languages, with 84% of disinformation content in Italian receiving no action, compared to 29% of such content that appeared in English and 20% in Spanish.
The agreement on disinformation that Avaaz is calling for would see big tech companies agree to strengthen their commitments to combat disinformation. “We’re fundamentally facing a huge threat to our information environment,” Luca Nicotra told EURACTIV. “That’s why we’re saying we need the highest ambition on this.” He said that some of the points included in the guidance on the Code of Practice on Disinformation are encouraging, but noted that compliance monitoring and penalties for violations remain key gaps. At the very least, he suggested, there should be brand-image consequences for platforms that fail to adhere to the code. So far none of the platforms seems to have suffered image damage, even though they have been implicated in serious events such as the January 6th attack on the US Capitol.
However, the freshly published Guidance on Strengthening the Code of Practice on Disinformation illustrates the European Commission’s expectations of online platforms’ anti-disinformation measures. While the Code is non-binding, the measures are likely to become mandatory following the adoption of the Digital Services Act (DSA).
The DSA’s provisions on disinformation are part of a much wider focus on transparency, particularly in online advertising, which some in the industry see as directly responsible for the levels of online disinformation; the accusation is that the problem has more to do with the heavily ad-reliant business model of the large online platforms than with the technology itself.
The consensus is that “Polarised or even fake content attracts more users and keeps them longer on the platform. Therefore, there is no strong incentive for large platforms to substantially change their business conduct as it would run counter to their ad-reliant business model.”
And there you have it. When we as users of digital media are attracted to, and respond to, fake or misleading information, we are actively encouraging advertisers to pay the platforms to get us there and keep us there. The motive is purely profit (for both the advertisers and the platforms), but the consequences of ignoring people and planet are so serious and so far-reaching that it may take more than the EU’s actions to rein them in. We have unleashed a tsunami of addictive and compulsive consumption of digital communication that has already altered and destroyed lives and political systems, and we need to make some serious changes ourselves, as well as restraining the platforms that cause it, if we are to reach a new equilibrium with the big tech companies.