The case for government regulators in the war on disinformation

By Frank-Jürgen Richter

The Straits Times, January 31, 2020

Earlier this month, soon after the Singapore Democratic Party (SDP) announced its legal challenge to the government’s ‘fake news’ law, Facebook, in a separate development, made known that despite widespread pressure it would continue to allow political advertisers to microtarget and misinform.

While the former highlighted global anxieties about governments regulating digital speech, the latter underscored what is for some a difficult, though necessary, pill to swallow: in the absence of such regulation, it would be naive to expect Big Tech to safeguard democracies from the disinformation epidemic.

In shrugging off calls to fact check the content of its political ads, Facebook said it would operate according to “the principle that people should be able to hear from those who wish to lead them, warts and all”.

In truth, even modest restrictions on online political ads – such as those that Google announced late last year – wouldn’t amount to an earnest commitment to confronting the broader problem.

It’s not just about politicians lying in ads. Across the world, political actors – from state agencies to shadow proxies to individuals and armies of bots – are logging on to launch sophisticated disinformation campaigns to steal elections, to manipulate public opinion on policy, and simply to sow discord.

Using everything from false news stories to phony memes, they’re tactically inundating microtargeted demographics with specially tailored content, feeding unsuspecting groups information that is often objectively false.

HARD TRUTH ABOUT BIG TECH

Meanwhile, Big Tech, apparently aghast at their platforms’ unforeseen hijacking, still won’t hand over the troves of proprietary data they’ve ceaselessly collected on how disinformation is shared and how ads track and target specific audiences.

It’s time we accepted the obvious – that the war against disinformation shouldn’t be led by the tech companies raking in billions from the political ads at issue.

With the 2020 US presidential election upon us, the case of the United States offers a useful starting point for a discussion of what a prudent regulatory framework for combating disinformation might look like – one that strikes a balance between needed regulation and the protection of free speech.

To be sure, disinformation campaigns are at least as old as democracy itself.

But to think we’ve been here before would be a mistake. The speed, reach, and low cost of disseminating disinformation over social media have made it a tool of unprecedented efficiency for politically-motivated manipulation. Furthermore, its capacity to shape political opinion is uniquely insidious.

Consider first how the nature of these platforms facilitates anonymity for those purchasing ads. Anonymity, and thus impunity, in turn incentivises ever more radical and false messaging. Anonymity also cloaks the political affiliations of the purchaser, making it easier to pitch false information and manipulate those who click on the ads.

If voters around the world understood the ways in which algorithms use their data to sort them into categories characterised by certain predispositions, and how political ads leverage this data to target certain demographics with tailored disinformation, they’d think twice about the hours they spend scrolling and sharing. They might even quit the platform altogether.

Unfortunately, even when longtime users become aware of a platform’s tolerance of manipulative ads, the natural monopolies enjoyed by tech giants make an exodus unlikely. This lack of competition all but ensures that any attempt by Facebook, Twitter, or Google to unilaterally combat disinformation will be half-hearted. They have no real financial incentive to prove their integrity.

If democracy is under threat, then democratic institutions must lead the war against that threat.
It’s time, in other words, for more radical solutions, and for a regulatory agency that can compel Big Tech to fall in line.

GIVING GOVT REGULATORS A FIGHTING CHANCE

Float the idea of US government regulation of social media, and the minds behind these platforms often scoff at the prospect of inexpert bureaucrats policing their highly sophisticated systems. They’re less likely to point out that it’s the platforms’ policies on proprietary data that have in large part kept these bureaucrats unfit for the task.

By blocking impartial outsiders’ access to their data, the companies are also denying researchers a fighting chance to understand social media’s impact on political outcomes.

But surely the public has the right to be protected from threats posed by technologies that automate control over mass information markets? If conventional industries are subject to health and safety inspections, why are Big Tech firms exempt from government audits of their platforms’ data?

A new regulatory commission tasked with fighting politically-motivated disinformation on social platforms could insist on data disclosure for the sake of public safety, and as part of a much-needed broader reform of data rights. With its own research office, and in possession of the information needed to evaluate the threat, this new agency could conduct independent research.

The agency should target, at first, the most blatant and straightforward abuses, attempting to curtail objectively false stories and to prevent political parties and organisations from using proxies to publish content and ads.

This new agency should set about developing action plans and anti-disinformation software, drawing on input from both third-party non-governmental organisations and the social media platforms themselves.

Users have a right to know who’s trying to influence their political views and how.
Government oversight should therefore start by requiring disclaimers for all ads deemed political in nature. These disclaimers should identify the ad’s source, those behind its funding, and the targeting criteria that put it in the user’s feed.
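
What might such a disclaimer look like in practice? The following is a minimal sketch in Python of the kind of record a regulator could require platforms to attach to every political ad. Every name in it – AdDisclaimer, targeting_criteria and the rest – is a hypothetical illustration, not an existing standard or platform API.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class AdDisclaimer:
    """One mandated disclosure record per political ad (illustrative only)."""
    ad_id: str                          # platform-assigned identifier for the ad
    source: str                         # the person or organisation that placed the ad
    funders: List[str]                  # everyone behind the ad's funding
    targeting_criteria: Dict[str, str]  # why the ad reached this particular user

    def render(self) -> str:
        """Compose the user-facing disclaimer text."""
        reasons = "; ".join(f"{k}: {v}" for k, v in self.targeting_criteria.items())
        return (f"Paid for by {self.source}, funded by {', '.join(self.funders)}. "
                f"You are seeing this because of {reasons}.")

# Example: the disclaimer a user would see attached to an ad in their feed.
print(AdDisclaimer(
    ad_id="ad-0001",
    source="Example Advocacy Group",
    funders=["Example PAC"],
    targeting_criteria={"age range": "25-34", "inferred interest": "energy policy"},
).render())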

In the days before the 2016 US election, messages circulated that Hillary Clinton had died and that the date of the election had changed. To curb the tide of such false stories, the regulatory agency should create and refine fact-checking algorithms with the help of the social media companies. Users could, for instance, be offered channels for flagging suspicious stories, and a veracity score could be attached to stories pending investigation.
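
To make the flagging and veracity-score mechanism concrete, here is a minimal sketch in Python. The threshold, the labels and every name in it (Story, flag_story, REVIEW_THRESHOLD) are hypothetical illustrations under assumed requirements, not a description of any real platform or regulatory system.

from dataclasses import dataclass
from typing import Optional

REVIEW_THRESHOLD = 10  # assumed number of user flags before a story is queued for review

@dataclass
class Story:
    story_id: str
    flags: int = 0
    under_review: bool = False
    veracity_score: Optional[float] = None  # set only once fact-checking concludes

def flag_story(story: Story) -> None:
    """Record one user flag; queue the story for review once flags pass the threshold."""
    story.flags += 1
    if story.flags >= REVIEW_THRESHOLD and not story.under_review:
        story.under_review = True

def label_for_user(story: Story) -> str:
    """The provisional label displayed alongside a story."""
    if story.veracity_score is not None:
        return f"Fact-checked: veracity score {story.veracity_score:.1f}/10"
    if story.under_review:
        return "Flagged by users - pending independent review"
    return ""  # no label until the story attracts scrutiny

The point of the sketch is the mechanism rather than the numbers: user flags trigger review, and stories pending investigation carry a visible caveat rather than disappearing outright.
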
Lest the social platforms’ dedication to the task be doubted, it will clearly be in these companies’ interests to co-create the standards and policing software they’ll subsequently be obliged to implement.

MOVE BOLDLY, TREAD CAREFULLY

Still, thorny questions remain about how we hold these platforms accountable.
First, many will oppose platform liability as a potentially dangerous departure from the oft-cited Section 230 of the US Communications Decency Act (CDA 230), a legal provision shielding platforms from liability for user-generated content. But though tech companies have long enjoyed the same exemptions afforded telecoms companies and other ‘neutral’ venues, they no longer deserve the protections that come with that designation. Far from being neutral content hosts, these sites now curate what their users see.

Second, how do we give this new regulatory office teeth, while not making it a monster?

Many fear that revoking liability protection would pose a risk to free expression – and indeed, making platforms answerable to a government for ‘incorrectly’ determining what is ‘true and legitimate’ speech would force them into a precarious situation.

Platforms might be pressured to police over-aggressively, even prompting users to self-censor legitimate content. Worse still, some predict, an authoritarian administration might exploit the networks’ new responsibilities by transforming the tech companies into government-controlled proxy censors.

The answer, nevertheless, is to hold platforms liable, but to do so in a very limited sense. In order to protect free expression on these platforms, the new oversight agency must regulate in a way that does not cast social platforms – or any one entity – as the lone, potentially capricious arbiters of legitimate expression.

We could imagine, for example, a legal framework according to which social platforms are punished when and only when they fail to implement the specific requirements and software they co-developed alongside regulators and non-governmental organisations.

The social media platforms, in other words, are held accountable for not requiring ad purchasers to identify affiliations, or for neglecting to deploy the fact-checking and attribution algorithms they co-wrote. They are not held accountable for the actual content that appears online, but only for failing to observe the best practices they co-designed.
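
Expressed as a compliance test, the “when and only when” rule might look like the following Python sketch; the measure names are hypothetical illustrations.

from typing import List, Tuple

def platform_is_liable(required: List[str], implemented: List[str]) -> Tuple[bool, List[str]]:
    """Liability attaches only to missing co-developed measures, never to user content."""
    missing = sorted(set(required) - set(implemented))
    return bool(missing), missing

# Example: a platform that skipped the attribution algorithm is liable for that omission alone.
liable, gaps = platform_is_liable(
    required=["ad-purchaser attribution", "fact-checking pipeline", "ad disclaimers"],
    implemented=["fact-checking pipeline", "ad disclaimers"],
)
print(liable, gaps)  # True ['ad-purchaser attribution']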

Similarly, the burden (or privilege) of determining what constitutes ‘true, legitimate’ content is shifted to the action plans as well as the algorithms and all their underlying criteria – all of which are made open to both user and judicial review. The responsibility for defining legitimate expression is entrusted to the new regulatory body, itself answerable to the people.

In tackling this intractable issue, we should be bold enough to hold social platforms accountable, while treading very carefully to protect free speech. In the US and elsewhere, new government agencies must be entrusted to help spearhead this effort.

Ultimately, as disinformation seeks to undermine our democratic processes and to obscure our vision of the wolf at the door, our best defence will be found in fortifying democratic institutions themselves. Our best hope lies in democratically-accountable public oversight.

Frank-Jürgen Richter is founder and chairman of Horasis, a global visions community. It will hold its next global meeting in Cascais, Portugal, in March.
