Briefing: Misinformation during a Public Health Crisis

Seven reflections from Taylor Owen, Beaverbrook Chair in Media, Ethics and Communications and Associate Professor at Max Bell School of Public Policy, on the relationship between tech and the current COVID-19 pandemic.

Like all of us, I have been absorbing a huge amount of information about the pandemic. Below are some thoughts based on my particular public policy lens: the role of information reliability in a democracy, mis/disinformation as a structural problem, and the broader and changing role tech plays in our society.

1) Confronting an Infodemic

As always, there is a lot of problematic content on the internet, and some of it is quite dangerous. The WHO has called this an “infodemic,” which they define as “an overabundance of information—some accurate and some not—that makes it hard for people to find trustworthy sources and reliable guidance when they need it.” Heidi Larson, head of the Vaccine Confidence Project, said in a 2018 Nature article: “The deluge of conflicting information, misinformation and manipulated information on social media should be recognized as a global public health threat.” The reason is simple: when you are trying to inform billions of people about why and how they should drastically change their behaviour, the quality of the information being circulated is critical.

And a lot of what is circulating is bad. This includes:

  • Medical misinformation such as bad science, false cures, and fake cases
  • Ideological content from communities who distrust science and proven measures like vaccines
  • Profiteers, including traffic seekers, those selling “cures” and other health and wellness products, and phishing scams
  • Conspiracy theorists such as those claiming the coronavirus is a bioweapon, was planned by Bill Gates, or created in a Chinese or American lab
  • Harmful speech ranging from racist attacks to Neo-Nazis discussing ways to spread the coronavirus to cause chaos

This is all occurring on social platforms, in private groups, and on messaging apps. The public health problem is not any one bad piece of content, but the effect of the sum of the parts. We are being flooded with content, both good and bad, which creates an epistemological problem: How do we come to know what we know?

2) The Source of Misinformation

Misinformation flows from the top, and this presents unique challenges. The Chinese government worsened the pandemic by censoring scientists, and leading figures (such as Elon Musk) have used their platforms to spread damaging misinformation. Perhaps most extraordinarily, the President of the United States cannot be relied on to provide accurate information to the public, and is using his daily press briefings to spread dangerous misinformation. This has led prominent journalism scholars and media columnists, such as Jay Rosen and Margaret Sullivan, to call on the media to stop live-broadcasting Trump’s briefings. It is worth pausing on just how extraordinary this moment is.

3) Platforms Taking Responsibility for Content

Platforms are taking more responsibility for content, which shows that we can and should demand more responsibility from them. Content moderation is always a values trade-off: the freedom to speak versus the right to be protected from the harms that speech can cause. This trade-off is changing in real time. Just a month ago, platforms were still (to varying degrees) prioritizing the value of frictionless speech over its potential harms. But now Facebook, YouTube, and Twitter have all taken far more aggressive stances on moderating content about the pandemic than they ever have on political content. The Overton window is broadening, and it is unlikely to narrow again.

4) Structural Problems Continue

The problem of content moderation reveals bigger design and governance problems. Misinformation has never been a problem of individual bad actors; it has always been a structural problem, rooted in the design of our digital infrastructure. And the structural problem remains. It includes the scale of platform activity (a billion posts a day on Facebook), the role of AI in determining who and what is seen and heard, and a financial model that creates a free market for our attention and prioritizes virality and engagement over reliable information.

If these are real drivers of the infodemic, should these companies be changing how they function? Adjusting their algorithms? Radically limiting microtargeting? Perhaps more importantly, do we want platforms making these broad decisions about speech themselves, and if not, how can democratic governments step in? Should we be treating these platforms more as essential services? Part of the reason we regulated communication infrastructure was that we deemed it essential, particularly during emergencies. These companies are clearly an essential part of our lives, and likely need to be treated as such. What would this wider responsibility look like?

5) The Stickiness of Policy

Policies imposed during crises can be sticky, particularly when they involve new technical infrastructure. There is clearly significant value to be gained from behaviour and data monitoring that would have (rightly) been viewed as a serious breach of privacy only two weeks ago. Tracking the locations of those who have tested positive, as well as those they have come into contact with, could prove critical. But how do we build out this capacity without entrenching these new powers?

Governments are discussing this with the platforms, as well as with pernicious surveillance tech companies such as Palantir and Clearview AI. The Israeli government has approved emergency measures allowing its security agencies to track the mobile phone data of people with suspected coronavirus. We have seen even more draconian measures in other countries, such as temperature-checking drones and forced app downloads. Will we see similar measures here too? If so, we must ensure that these provisions are sunsetted, and remain vigilant against the shifting of important norms around surveillance and privacy.

6) The Establishment of New Norms

We are establishing new society-wide technology norms in real time. The divide between our digital and physical lives was always a fiction: social, economic, and political interactions mediated by technology have long been a real part of our lives. But we are now experimenting en masse with the rapid adoption of a technology-mediated society. Our social interactions, our economy, our employment, and our politics are moving online. And we are doing so via commercial platforms designed with a very particular set of incentives. These design decisions and incentives are going to have a profound effect on us all. If ever there were a time to think about and build public digital infrastructure, it is now.

7) Impacts on Our Health

Finally, we need to be thinking about the mental health implications of the ways we are adopting digital technologies. Online community is a double-edged sword: alongside all the amazing things social media enables, it can also cause real mental health problems. We need to be aware of this as we move more of our information diet, our work, and our social interactions online.


This briefing was prepared by Taylor Owen, Beaverbrook Chair in Media, Ethics and Communications and Associate Professor at Max Bell School of Public Policy, for our Policy Challenges During a Pandemic series, in association with a webinar delivered on March 23, 2020.


Taylor Owen

Beaverbrook Chair in Media, Ethics, and Communications, Max Bell School of Public Policy

Public Policy Forum Fellow and the 2016 Public Policy Forum Emerging Leader

Author of Journalism After Snowden: The Future of the Free Press in the Surveillance State, and The Platform Press: How Silicon Valley Reengineered Journalism

Policy Challenges During a Pandemic

Policy Challenges During a Pandemic is a series of webinars and briefing notes addressing policy aspects of the current public health crisis.
