Agency and Trust in a Digital Democracy

Last week I was on a panel about ‘Democracy and the Digital Commons’ at Suffolk University. At the start of the panel, each of us gave a 5-10 minute talk to help frame the discussion. While there’s no transcript of the panel itself, here are my notes for the intro. (Quick context: each of us tied our talk to the Boston Marathon bombing and in particular Reddit’s response to it.)

The internet was supposed to bring about a new world, a better world in which everyone had a voice. With the rise of blogging and social media, the machinery of publishing was being democratized. And more democracy is always good, right?

Well, not necessarily. Democracy has its flaws and its dangers, just like any other system of governance. As Winston Churchill put it, “Democracy is the worst form of government, except for all the others that have been tried.”

I want to focus on one particular, long-understood flaw of democracy: in a democratic system, every voice counts equally, even though some voices are far better informed on a given topic than others.

We can view what happened after the marathon bombing through this lens. Reddit led a crowdsourced manhunt – which is another way of saying they led a democratized manhunt. Anyone, regardless of their experience with intelligence investigations, could contribute to the discussion. It’s not surprising, then, that the investigation went awry – that the crowd failed to consider how misidentifying suspects could harm innocent people, or how a public manhunt might influence the behavior of the perpetrators. The vast majority of the participants had no training or experience in the matter, so why would they know?

Here’s a question for the room. A loved one of yours has been mysteriously murdered, and you have to choose who investigates – the local police, or a community of people on Reddit. Which do you choose?

Now I want you to imagine that the prime suspect is a local police officer. Who do you want to investigate – the local police, or a community of people on Reddit?

Here we have a great tension. It’s better and more effective to delegate power to those with the expertise to use it. But it’s only better and more effective when the experts act fairly and in the best interests of those who’ve delegated power to them. And there’s no way to be 100% sure that they’ll act in your best interests. This is called the Principal-Agent Problem.

We can see the Principal-Agent Problem at work in the issue of misinformation more broadly. Techno-utopians have championed social media as a way to make every user a publisher. But there’s real value in the training that journalists get. There are cultural standards they internalize and then enact: verification processes, disclosure of conflicts of interest, and a refusal to plagiarize or fabricate quotes.

I for one am relieved to be able to delegate this work to journalists at ProPublica or the New York Times or the Boston Globe. I may not agree with what they choose to report on or with the opinions in their editorials, but I trust that they’ll follow journalistic norms. I trust them to act on my behalf, as my news-gathering agents. But not everyone does. There’s been a concerted effort for decades, one that has risen to a fever pitch over the last few years, to portray them as biased, as liars, even, lately, as traitors deserving of violence.

If we stop viewing journalists as our “news-gathering agents”, who replaces them? We’ve got to trust someone to gather our news, because I sure am not capable of doing it all for myself. So who do we trust instead?

One response is, “We’ll trust those in power, we’ll trust the President”, which, no matter what party you belong to, should give you pause.

Another, more optimistic response is to say, “The crowd! We’ll trust the crowd.” In other words, screw agents – let’s all be principals.

But what does the crowd give us? It gives us clickbait passed around every corner of Facebook. It gives us waves of abuse and harassment on Twitter. It gives us lies that spread faster than truth.

It turns out this result is not very satisfying! So there are calls for someone to exercise power and try to fix these problems. People ask Facebook to stop the spread of misinformation. They ask Twitter to stop abuse on their platform. And in doing so, they’re asking the tech platforms to act as their agents.

The tech platforms are hesitant to do this, I think rightly. “Who are we to determine which journalists are legitimate and which are not?” Facebook asks. “Who are we to determine what’s rudeness and what’s abuse?” Twitter asks.

But if not them, then who? They’ve designed their platforms as a meeting ground for millions of principals rather than a place where people can delegate responsibility to agents. The platforms don’t empower people to address these problems, so the only recourse is to appeal to the people behind the platforms. And behind the scenes, of course, these companies are profoundly non-democratic.

And so you end up with a site like Twitter, where many users feel coerced into letting Twitter act as an agent on their behalf despite having no mechanisms to hold it accountable. So Twitter has power its users don’t want to grant it, and that Twitter itself doesn’t want to use, but that must be used for the commons to remain remotely functional.

So how do we move forward? I have three main ideas.

First, I think we need to change how we design our digital platforms. Web applications are governance systems made of electricity and silicon rather than ink and parchment. When you ban a person from your website, it’s not that different from asking the sheriff to walk that no-good rascal to the edge of town. And if we view web platforms as systems of governance, then we can see just how naive and inadequate sites like Twitter or Facebook or Reddit are. The use of the terms “upvoting” and “downvoting” on Reddit seems almost insulting: users aren’t upvoting a person to represent them in a specific situation or downvoting a proposed policy they don’t want to see adopted. Or over on Twitter – people have been using external tools like shared blocklists for years to try to establish some semblance of the collective control that the platform itself refuses to grant them.

Specifically, I think we need to design systems that encourage us to delegate power to those we trust – voluntarily, and revocably. Because we need agents that we can trust to act on our behalf, but we also need ways to withdraw power from those we no longer trust. If we can do this on our platforms, we won’t have to beg for intervention from the people behind the platforms.
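To make that a bit more concrete, here’s a minimal sketch of what voluntary, revocable delegation might look like as a data structure. None of this corresponds to any real platform’s API – the `DelegationRegistry` class, its methods, and the power names are all hypothetical, just one way to model the idea.

```python
# A hypothetical sketch of voluntary, revocable delegation.
# Nothing here corresponds to any real platform; it's just one way
# to model "users grant power to agents, and can take it back."

class DelegationRegistry:
    """Tracks which agent, if any, each user has delegated a power to."""

    def __init__(self):
        # Maps (principal, power) -> agent,
        # e.g. ("alice", "filter_abuse") -> "trusted_moderators"
        self._grants = {}

    def delegate(self, principal, agent, power):
        """Voluntarily grant an agent a specific power on the principal's behalf."""
        self._grants[(principal, power)] = agent

    def revoke(self, principal, power):
        """Withdraw the grant at any time -- the key property we want."""
        self._grants.pop((principal, power), None)

    def agent_for(self, principal, power):
        """Return the trusted agent, or None if the principal acts alone."""
        return self._grants.get((principal, power))


registry = DelegationRegistry()

# Alice delegates abuse-filtering to a group she trusts...
registry.delegate("alice", "trusted_moderators", "filter_abuse")
assert registry.agent_for("alice", "filter_abuse") == "trusted_moderators"

# ...and later withdraws that power, no one's permission required.
registry.revoke("alice", "filter_abuse")
assert registry.agent_for("alice", "filter_abuse") is None
```

The property that matters is the revoke path: trust is granted per-power and per-principal, and it can be withdrawn at any time, without begging anyone behind the platform to intervene.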

Second, we need to nourish existing systems of trust and adapt them to online spaces. A lot of tech industry rhetoric has centered on replacing trust – blockchain, for instance, is supposed to be trust-free. But humans will always have to trust each other, and we’ve developed some pretty good cultural norms and social systems to facilitate that trust. We shouldn’t just throw them away.

Which brings me to my third point. Our legal system has a solution for the Principal-Agent Problem. It doesn’t always work, but it helps a lot of the time. That solution is the concept of fiduciary duty: it’s what requires doctors to act in the best interests of their patients, lawyers to act in the best interests of their clients, and bankers to act in the best interests of their customers. Why not require platforms to act in the best interests of their users?

Nothing we do is going to permanently solve the Principal-Agent Problem. There will always be some amount of misinformation and abuse in our digital commons. But that’s not an excuse to turn away from the issue. By thinking carefully and compassionately about these problems, we can improve our approach to them.