Agency and Trust in a Digital Democracy

Last week I was on a panel about ‘Democracy and the Digital Commons’ at Suffolk University.  At the start of the panel, each of us gave a 5-10 minute talk to help frame the discussion.  While there’s no transcript of the panel itself, here are my notes for the intro.  (Quick context: each of us tied our talk to the Boston Marathon bombing and in particular Reddit’s response to it.)

The internet was supposed to bring about a new world, a better world in which everyone had a voice. With the rise of blogging and social media, the machinery of publishing was being democratized. And more democracy is always good, right?

Well, not necessarily. Democracy has its flaws and its dangers, just like any other system of governance. As Winston Churchill said, “Democracy is the worst form of government, except for all the others that have been tried.”

I want to focus on one particular, long-understood flaw of democracy, which is this: in a democratic system, every voice counts equally, even though some people are much better educated on a given topic than others.

We can view what happened after the marathon bombing through this lens. Reddit led a crowdsourced manhunt – another way of saying that is that they led a democratized manhunt. Anyone, regardless of their experience with intelligence investigations, could contribute to the discussion. It’s not surprising, then, that the investigation went awry – that the crowd failed to consider how misidentifying suspects could harm innocent people, or how a public manhunt might influence the behaviors of the perpetrators. The vast majority of the people participating had no training or experience in the matter, so why would they know?

Here’s a question for the room. A loved one of yours has been mysteriously murdered, and you have to choose who investigates – the local police, or a community of people on Reddit. Which do you choose?

Now I want you to imagine that the prime suspect is a local police officer. Who do you want to investigate – the local police, or a community of people on Reddit?

Here we have a great tension. It’s better and more effective to delegate power to those with the expertise to use it. But it’s only better and more effective when the experts act fairly and in the best interests of those who’ve delegated power to them. And there’s no way to be 100% sure that they’ll act in your best interests. This is called the Principal-Agent Problem.

We can see the Principal-Agent Problem at work with the issue of misinformation more broadly. Techno-Utopians have championed social media as a way to make every user a publisher. But there’s real value in the training that journalists get. There are cultural standards they internalize and then enact, including verification processes, disclosure of conflicts of interest, and a refusal to plagiarize or fabricate quotes.

I for one am relieved to be able to delegate this work to journalists at ProPublica or the New York Times or the Boston Globe. I may not agree with what they choose to report on or the opinions in their editorials, but I trust that they’ll follow journalistic norms. I trust them to act on my behalf, as my news-gathering agents. But not everyone does. There’s been a concerted effort for decades, which has risen to a fever pitch over the last few years, to portray them as biased, as liars, even, lately, as traitors deserving of violence.

If we stop viewing journalists as our “news-gathering agents”, who replaces them? We’ve got to trust someone to gather our news, because I sure am not capable of doing it all for myself. So who do we trust instead?

One response is, “We’ll trust those in power, we’ll trust the President”, which, no matter what party you belong to, should give you pause.

Another, more optimistic response is to say, “The crowd! We’ll trust the crowd.” In other words, screw agents – let’s all be principals.

But what does the crowd give us? It gives us clickbait passed around every corner of Facebook. It gives us waves of abuse and harassment on Twitter. It gives us lies that spread faster than truth.

It turns out this result is not very satisfying! So there are calls for someone to exercise power and try to fix these problems. People ask Facebook to stop the spread of misinformation. They ask Twitter to stop abuse on their platform. And in doing so, they’re asking the tech platforms to act as their agents.

The tech platforms are hesitant to do this, I think rightly. “Who are we to determine which journalists are legitimate and which are not?” Facebook asks. “Who are we to determine what’s rudeness and what’s abuse?” Twitter asks.

But if not them, then who? They’ve designed their platforms as a meeting ground for millions of principals rather than a place where people can delegate responsibility to agents. The platforms don’t empower people to address these problems, so the only solution is to appeal to the people behind the platforms. And behind the scenes, of course, these companies are profoundly non-democratic.

And so you end up with a site like Twitter, where many users feel coerced into letting Twitter act as an agent on their behalf despite having no mechanisms to hold it accountable. So Twitter has power its users don’t want to grant it, and that Twitter itself doesn’t want to use, but that must be used for the commons to remain remotely functional.

So how do we move forward? I have three main ideas.

First, I think we need to change how we design our digital platforms. Web applications are governance systems made of electricity and silicon rather than ink and parchment. When you ban a person from your website, it’s not that different from asking the sheriff to walk that no-good rascal to the edge of town. And if we view web platforms as systems of governance, then we can see just how naive and inadequate sites like Twitter or Facebook or Reddit are. The use of the terms “upvoting” and “downvoting” on Reddit seems almost insulting. Users aren’t upvoting a person to represent them in a specific situation or downvoting a proposed policy they don’t want to see adopted. Or take Twitter – people have been using external tools like shared blocklists for years to try to establish some semblance of collective control that the platform itself refuses to grant them.

Specifically, I think we need to design systems that encourage us to delegate power to those we trust – voluntarily, and revocably. Because we need agents that we can trust to act on our behalf, but we also need ways to withdraw power from those we no longer trust. If we can do this on our platforms, we won’t have to beg for intervention from the people behind the platforms.
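To make that concrete, here’s a minimal sketch in Python of what voluntary, revocable delegation could look like inside a platform’s data model. Everything in it (the names Delegation, ModerationPlatform, revoke, and so on) is hypothetical; this is an illustration of the design idea, not any real platform’s API.

```python
# A sketch of voluntary, revocable delegation of moderation power.
# All names here (Delegation, ModerationPlatform, etc.) are hypothetical --
# this illustrates the design idea, not any real platform's API.

from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional


@dataclass
class Delegation:
    """One user's grant of a specific power to a specific agent."""
    principal: str          # the user granting the power
    agent: str              # the trusted party acting on their behalf
    scope: str              # e.g. "blocklist" or "feed-filtering"
    granted_at: datetime = field(default_factory=datetime.utcnow)
    revoked_at: Optional[datetime] = None

    @property
    def active(self) -> bool:
        return self.revoked_at is None


class ModerationPlatform:
    """Tracks who has delegated which powers to whom, and lets them take it back."""

    def __init__(self) -> None:
        self._delegations: list[Delegation] = []

    def delegate(self, principal: str, agent: str, scope: str) -> Delegation:
        # Delegation is voluntary: only the principal can create it.
        d = Delegation(principal, agent, scope)
        self._delegations.append(d)
        return d

    def revoke(self, principal: str, agent: str, scope: str) -> None:
        # And it is revocable: the principal can withdraw it at any time,
        # without asking the platform or the agent for permission.
        for d in self._delegations:
            if (d.principal == principal and d.agent == agent
                    and d.scope == scope and d.active):
                d.revoked_at = datetime.utcnow()

    def agents_for(self, principal: str, scope: str) -> list[str]:
        # Only currently active delegations count when the platform decides
        # whose moderation decisions apply to this user's experience.
        return [d.agent for d in self._delegations
                if d.principal == principal and d.scope == scope and d.active]


if __name__ == "__main__":
    platform = ModerationPlatform()
    # Alice subscribes to a community-run blocklist, then changes her mind.
    platform.delegate("alice", "community-blocklist", "blocklist")
    print(platform.agents_for("alice", "blocklist"))   # ['community-blocklist']
    platform.revoke("alice", "community-blocklist", "blocklist")
    print(platform.agents_for("alice", "blocklist"))   # []
```

The point of the sketch is simply that granting a power and taking it back are both first-class, user-facing actions, so nobody has to beg the people behind the platform for intervention.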

Second, we need to nourish existing systems of trust and adapt them to online spaces. A lot of tech industry rhetoric has centered on replacing trust; blockchain, for instance, is supposed to be trust-free. But humans will always have to trust each other, and we’ve developed some pretty good cultural norms and social systems to facilitate that trust. We shouldn’t just throw them away.

Which brings me to my third point. Our legal system has a solution for the Principal-Agent Problem. It doesn’t always work, but it does help a lot of the time. This solution is the concept of fiduciary duty. It’s what requires doctors to act in the best interests of their patients, lawyers to act in the best interests of their clients, and bankers to act in the best interests of their customers. Why not require platforms to act in the best interests of their users?

Nothing we do is going to permanently solve the Principal-Agent Problem. There will always be some amount of misinformation and abuse in our digital commons. But that’s not an excuse to turn away from the issue. By thinking carefully and compassionately about these problems, we can improve our approach to them.

A matter of trust

This originated as a post to a mailing list on the subject of blockchains and how they might help the cause of open science.  The quote below is the claim I was directly responding to.

Shauna: “but the scientific community is arguably the most effective trust-based system in human history” – according to this view we wouldn’t need version control systems or preregistration either. I couldn’t disagree more; trust has no place in science. To me, that’s one of the major things with open scientific practices: removing the trust from science and instead practice transparency.

My response:

Trust is a fundamental issue in all human relationships and communities.  Every single interaction we have with other human beings involves some level of trust.  Just today I had to trust the people who harvested and packaged my food not to have accidentally or maliciously poisoned me, the drivers on the street to obey traffic conventions, and my housemate to have locked the house and not invited anyone dangerous inside — and those are only the most obvious examples.  I could come up with dozens more.

Different situations require different levels of trust.  If a situation requires more trust than you currently have, you can try to increase trust in a number of ways.  You can build new technologies, but you can also strengthen relationships, create neutral institutions, or add legal or regulatory force to your agreements.  None of these work perfectly, and often you’re best off pursuing a combination of them.  In all cases, though, you will have to trust someone at some point – it’s just a matter of deciding which system will allow you to trust in a way that’s acceptable to you.

The scientific community has trust issues, yes, like every other human community.  But its trust issues are of a specific type.  When you read a scientific paper, what makes you doubt the findings?  Personally, I’m not worried that the authors have faked the data, or that the publisher has changed the content of the paper without anybody knowing, or that the paper is stolen or plagiarized.  I know that the scientific community has very strong norms against these types of violations, and so they’re relatively rare.  Broadly speaking, I trust the scientific community to minimize these problems.  There aren’t many communities I would trust like that, which is why I claimed that science is special in this way.

The trust issues that the scientific community currently has are largely based around misaligned incentives.  I trust most scientists not to engage in outright fraud, but I don’t trust them to make choices in their research practices that may hurt their careers.  They know how the funding, publication, and tenure systems work, and they know that replications, preregistration, and following strict practices to minimize false positives will hurt their careers.  Simply put: most scientists don’t trust that taking actions to make science better will be rewarded rather than punished.  In a world of decreasing funding and a decaying social safety net, is anyone surprised that people do what’s best for themselves within the existing norms of the community?

My focus, then, is on supporting initiatives that help scientists trust that taking actions to make science better will be rewarded rather than punished.  I don’t see how blockchain helps with that even slightly.  I’d rather put time and energy and resources into things like lobbying funders to require certain research practices, supporting journals that facilitate preregistration and minimize publication bias, convincing departments to require a minimum number of replications per researcher per year, and educating students and early career researchers about the importance of these practices.  In other words, changing the norms so that engaging in these behaviors is easy rather than hard – because I trust humans to prefer the easy thing to the hard thing.