A matter of trust

This originated as a post to a mailing list on the subject of blockchains and how they might help the cause of open science.  The quote below is the claim I was directly responding to.

Shauna: “but the scientific community is arguably the most effective trust-based system in human history” – according to this view we wouldn’t need version control systems or preregistration either. I couldn’t disagree more; trust has no place in science. To me, that’s one of the major things with open scientific practices: removing the trust from science and instead practicing transparency.

My response:

Trust is a fundamental issue in all human relationships and communities.  Every single interaction we have with other human beings involves some level of trust.  Just today I had to trust the people who harvested and packaged my food not to have accidentally or maliciously poisoned me, the drivers on the street to obey traffic conventions, and my housemate to have locked the house and not invited anyone dangerous inside — and those are only the most obvious examples.  I could come up with dozens more.

Different situations require different levels of trust.  If a situation requires more trust than you currently have, you can try to increase trust in a number of ways.  You can build new technologies, but you can also strengthen relationships, create neutral institutions, or add legal or regulatory force to your agreements.  None of these work perfectly, and often you’re best off pursuing a combination of them.  In all cases, though, you will have to trust someone at some point – it’s just a matter of deciding which system will allow you to trust in a way that’s acceptable to you.

The scientific community has trust issues, yes, like every other human community.  But its trust issues are of a specific type.  When you read a scientific paper, what makes you doubt the findings?  Personally, I’m not worried that the authors have faked the data, or that the publisher has changed the content of the paper without anybody knowing, or that the paper is stolen or plagiarized.  I know that the scientific community has very strong norms against these types of violations, and so they’re relatively rare.  Broadly speaking, I trust the scientific community to minimize these problems.  There aren’t many communities I would trust like that, which is why I claimed that science is special in this way.

The trust issues that the scientific community currently has are largely based around misaligned incentives.  I trust most scientists not to engage in outright fraud, but I don’t trust them to make choices in their research practices that may hurt their careers.  They know how the funding, publication, and tenure systems work, and they know that replications, preregistration, and following strict practices to minimize false positives will hurt their careers.  Simply put: most scientists don’t trust that taking actions to make science better will be rewarded rather than punished.  In a world of decreasing funding and a decaying social safety net, is anyone surprised that people do what’s best for themselves within the existing norms of the community?

My focus, then, is on supporting initiatives that help scientists trust that taking actions to make science better will be rewarded rather than punished.  I don’t see how blockchain helps with that even slightly.  I’d rather put time and energy and resources into things like lobbying funders to require certain research practices, supporting journals that facilitate preregistration and minimize publication bias, convincing departments to require a minimum number of replications per researcher per year, and educating students and early career researchers about the importance of these practices.  In other words, changing the norms so that engaging in these behaviors is easy rather than hard – because I trust humans to prefer the easy thing to the hard thing.

Levy on intermediate group power

Associations and groups that are substantial enough to fulfill needs for belonging and meaning, powerful enough to check the power of the state or to organize democratic life, or institutionally complete enough to offer authoritative norm-generation for their members, are also substantial, powerful, and authoritative enough to potentially threaten the freedom of their members.  That is, it is not just an unfortunate accident that groups come with features that, from a liberal perspective, are both good and bad.

The point is partly an intergenerational one.  Recall from the previous chapter the idea that inequalities of outcomes in one generation become inequalities of opportunity in the next, and its analog: free associations in one generation become inherited ways of life in the next.  This is a necessary truth; children are born into particular times and places and social worlds that have been shaped by the choices their parents have made. This does not simply mean that the parents were free and the children were not; it was also true of the parents that they were born into particular times and places and social worlds.  If the parents had some freedom to reshape their worlds in partially original ways, to join or form groups into which they were not born, then the children also have some such freedom.  But there could be a narrowing over time; parents can join groups or adopt ways of life that leave their children with fewer choices than they themselves had.

But the point is only partly intergenerational.  It is, more simply, a point about power, even within one generation.  Robert Michels taught us that ‘who says organization, says oligarchy’.  That could suffice as a statement of the problem, but I propose to instead frame it as: whoever says authority, says power – and whoever says organization, says authority.

Jacob Levy, Rationalism, Pluralism, and Freedom, p. 71