Liberty is a gift from mankind

I’ve been drawn to libertarian ideology my whole life but have never really embraced it, aside from a one-week stint after reading Atlas Shrugged my freshman year of high school. I liked the book enough that I immediately started re-reading it so I could better understand it. And when I did I realized: there are no children in the story. There are also no loving parents with Alzheimer’s, no aunts with cancer, no friends who need a place to crash for a couple months while they try to figure out what the hell they’re doing with their lives. In all the thousand-plus pages of the book, there’s a great paucity of human relationships. It’s just heroes loving other heroes and fighting villains.

(If any of you have read Atlas Shrugged, you might be thinking, “But Eddie Willers!” Eddie Willers just proves my point. Sure, he’s a lifelong friend and colleague of Dagny’s whom she genuinely seems to like, but she leaves him to die without a second thought. Millie Bush is another non-exception exception. Yes, she’s an eight-year-old kid, and so technically the book does have a child in it, but literally all she does is get punched in the face by a heroic factory worker. Atlas Shrugged: Punch Children and Let Your Friends Die.)

Anyway, this post is not meant to be a review of Atlas Shrugged. But I think the flaws of Atlas Shrugged are of a piece with the flaws of libertarianism as a whole. It views liberty as a naturally occurring right which must be protected from all threats, most notably the government but also other organizations and individuals. A lot of libertarians do just focus on the government, either because it’s intellectually easier or because their libertarianism is a Trojan horse for small-scale authoritarianism, but my issue is not just the categorization of who is a threat to liberty. It’s the central conception of liberty as a gift from god or nature which must be protected from other men.

This conception is extremely popular.  You can find it in the Declaration of Independence: “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.”

I believe that liberty is not given to men, it is created by men.  If liberty is “the ability to do as one pleases”, then how do we gain those abilities?  How do we even learn what pleases us?  I am at liberty to write this blog post, and yes that is partly due to the freedoms of speech and press protected by the US Constitution, but it’s also due to years of public education and the mentorship of family members, not to mention the labor and community resources that went into building WordPress, the internet, and computers in general.  Without them, I would not have the liberty to write this blog post.  In fact, I cannot think of a single liberty I possess which was not in some way facilitated by people known or unknown to me.  Even the most basic liberties, such as bodily autonomy, are in some ways created.  If I’m attacked, how do I defend myself?  With weapons created by and purchased from other people, with skills taught by self-defense experts, with moves seen on TV.

If liberty can be created as well as destroyed, then libertarians have an obligation to support liberty creation along with liberty protection.  To care only for one half of the matter is to let liberty languish.  A person recently released from prison has regained some liberty from the state in a very meaningful way, but they also need liberty in the form of the ability to earn money legally.  Otherwise they are likely to end up back in prison.

So why are there no children in Atlas Shrugged?  Because a child is a perfect rebuke to the conception of liberty as a natural gift under threat.  A newborn has no liberty – it cannot even hold its head up!  It takes years for children to develop even basic bodily autonomy: my niece is three and I am still always mindful of the ways I might need to intervene to protect her from herself.  As she gets older the threats change, and she will be trusted more and more to act on her own behalf, but it is a long process.  And the ability to set her own boundaries and act on her own behalf is itself a set of skills, knowledge, and values created through gifts of wisdom and care, role-modeling and expectation-setting.

We’re all children at heart.  We may have learned enough skills to get by but we all struggle both to protect our liberties and to expand them, to enforce our boundaries and to broaden our horizons.

It’s a lifelong process.  And any political movement that fails to respect both halves of that process is not one I care to be a part of.

The burden of doubt

We often talk about giving people the benefit of the doubt, but we seldom talk about its opposite, to the point that no agreed-upon phrase for it exists.  The best I could come up with is the “burden of doubt”, a phrase that appears mostly in courtroom settings.  Even counting its use in judicial documents, it is not very common.

And yet we give people the burden of doubt just as often as we give people the benefit of it.  When I am in a bad mood and a stranger cuts me off, I give them the burden of my doubt as to whether they intended it.  When I am feeling happy and lucky it is easy to give them the benefit of the doubt instead.

It might be better for the world if I could always give people the benefit of the doubt, but at least fluctuations based on my mood aren’t particularly unfair.  Of course our decisions in these matters are also influenced by things like race, gender, and other kinds of in-group/out-group status.  Racism and sexism are sometimes viciously overt, but more often they take the form of giving the benefit of the doubt to men and/or to white people, and the burden of the doubt to women and/or people of color.  For instance, when a black woman complains about the way a match is being refereed, she acts much as many white men do, but they are given the benefit of the doubt while she is given the burden of it.  A study of political candidates found that “ambiguity boosts support for white male candidates but not for black male candidates. In fact, black male candidates who make ambiguous statements are actually punished for doing so by racially prejudiced voters.”

Our world is filled with uncertainty.  We are constantly deciding whether or not to give people or groups of people the benefit or the burden of the doubt.  But it’s not enough to make these decisions in isolation.  We must look for patterns in how we distribute the weight.

The constitution of knowledge is cross-disciplinary

This old tweet has recently been making the rounds and sparked some discussion among my Facebook friends:

As someone with a background in both science and the humanities, I am continually exhausted by the antagonism between them.  In any given argument I usually side with humanities advocates, because STEM workers are way more likely to be dismissive dicks.  But this critique really misses the mark.

All disciplines ought to teach how knowledge is constituted in their domain.  That’s as true for biology and math and computer science as it is for history and philosophy and art.

It’s true that some of the disciplines most likely to discuss knowledge constitution are in the humanities.  Philosophy has epistemology, for instance, and there are subfields of history and sociology focused on knowledge constitution.  But in STEM we have statistics, and subfields of psychology and computer science concerned with what is knowable and how we know it.

Regardless, I don’t agree that we can or should assign ‘knowledge constitution’ to specific fields.  All knowledge is actively constructed, and we ought to be teaching people about that process even as we’re teaching the current results of the process.  Which is, I think, what Neil deGrasse Tyson is getting at.


A summary of David Ciepley’s “Beyond Public and Private: Toward a Political Theory of the Corporation”

I recently read David Ciepley’s “Beyond Public and Private: Toward a Political Theory of the Corporation”.  I recommend reading it yourself, but here’s my best attempt to summarize it in my own words.

Historically, we used to think of corporations as institutions that existed by the authorization of the state and for the public benefit – although the public often did not benefit in practice (*coughs* British East India Company).  In the 19th century, especially in the United States, we began to view corporations instead as part of the private sphere.  This has led to incoherent legal and political results.  Ciepley argues that, rather than viewing corporations as public or private, they ought to be considered as part of their own distinct sphere.

The first corporations were created in the image of constitutional republics, such as the Dutch East India Company, whose governing board held a proportional balance of representatives from each of the Dutch provinces.  Several American states, such as Massachusetts and Virginia, were originally chartered as corporations.  Despite this history, there are some key distinctions.  While republics are “by, of, and for the governed”, those most subject to corporate action (workers, and secondarily customers and contractors) are not represented in its governance.

There are three main spheres of rights that make a corporation a corporation, and it’s important to note that these are all governance rights, not business rights.  After all, there are many corporations that are not businesses, and many businesses that are not corporations.  The three rights are: 1) the right to own property, make contracts, and sue and be sued, as a unitary entity, which Ciepley calls ‘contractual individuality’; 2) the right to centralized management of their property; and 3) the right to establish and enforce rules within their jurisdiction beyond the laws of the land.  Ciepley only talks about the first and third rights.

Right #1: Contractual Individuality

All corporations are granted contractual individuality by the government.  Business corporations are additionally granted by the government the right to use a joint-stock mechanism – that is, to sell shares.  The three key elements of contractual individuality are asset lock-in, entity shielding, and limited liability.

‘Asset lock-in’ refers to how investors in companies cannot directly withdraw their funds.  In a business partnership, a partner leaving the business withdraws their assets.  An investor in a corporation cannot withdraw their assets, only sell their shares to another investor.  This gives corporations an enviable stability, allowing them to specialize their assets and therefore increase their productivity.

‘Entity shielding’ means that the corporation’s assets are protected from creditors going after shareholders.  A creditor may take a shareholder’s shares, but cannot take assets from the company.  ‘Limited liability’, conversely, means that shareholders’ assets are protected from creditors going after the corporation.  Limited liability makes corporations attractive to small and/or passive investors.  Together, entity shielding and limited liability are what make the trading of shares possible.  Share trading in turn is what makes investment attractive in the first place, because it allows investors to get their money back when they want – they do not have to wait until the company is dissolved.

Some people like to conceive of corporations as fancy partnerships (a legal theory we’ll return to later), but this is incorrect.  Asset lock-in and limited liability can be approximated by partnerships, although with significant shortfalls (for instance, assets cannot be locked in indefinitely and limited liability cannot protect against tort claims).  However, partnerships cannot approximate entity shielding.  To do so would require every shareholder to contract with every one of their creditors, from their bank to their plumber, against laying claim to corporate assets.  This would be wildly impractical even if shareholders were willing to do it, but they would have an incentive not to, since they would want the corporation’s credit to back up their own.

Contractual individuality means that corporations rely on the government for certain key privileges in a way that other businesses do not – privileges which are foundational to their functioning and which allow them to dominate the market through preferential accumulation and specialization of capital, as compared to private business.

Asset lock-in, entity shielding, and limited liability also mean that shareholders do not own a corporation.  So who owns a corporation?  No one: the corporation owns itself.  It is therefore neither publicly owned nor privately owned but corporately owned.

One of the central rationales of private property is that the owners bear the consequences of use or misuse of their property, and are therefore incentivized to use it well.  Because shareholders are alienated from corporate property, and in particular because limited liability means they bear fewer consequences, they are not well incentivized.

Neoliberal reforms which attempt to tie corporate management to shareholders have therefore only made things worse.  Shareholders more than anyone else are incentivized to take risks with a corporation.  Tying management to shareholders thus increases financial and legal risk-taking.  Shareholders also tend to be more interested in short-term gains than other associates of a corporation.  Over the last forty or so years, as these shareholder-focused reforms have been made, the average amount of time a stock is held has dropped from eight years to four months.

Right #3: Establishment and enforcement of rules

Management authority within an organization is widely assumed to derive from shareholders, who as “owners” of the corporation elect the board of directors.  But as Ciepley has shown, neither shareholders nor anyone else owns a company.  Where does management authority derive from, then?

It comes from the state, as specified in the corporate charter.  The corporate charter establishes the board of directors and then authorizes it to issue stock to shareholders – hardly the order of events you’d expect if the board’s authority derives from the shareholders.  Indeed, some corporations never issue stock at all, and some corporations issue nonvoting stock.  The current popular system of boards elected by shareholders is merely one possible way a charter can delegate control.

Ciepley refers to corporations as “franchise governments”: run on private initiative, but receiving their form and purpose from the state.  This is easy to see with municipal corporations (New York City is the largest municipal corporation in the US) but harder to see with business corporations, since we tend to view commerce as a private matter.  The increasing identification of corporations with business corporations, the increasing commercialization of the public sphere, and the elimination of the requirement that corporations be for the public benefit all go hand in hand.

(Ciepley doesn’t delve too deeply into the judicial history behind these trends but does cite a number of key court cases, starting with Dartmouth vs Woodward (1819) where the Supreme Court ruled that corporations were independent of the state and did not have to prove they were acting in the public interest.  According to Wikipedia, the eloquence of Daniel Webster’s oratory was thought to have been a deciding factor in the case.  Thanks, Daniel Webster.)

Implications of Ciepley’s theory of corporations

Ciepley argues for a return to the older view of corporations: that they are an artificial entity created by government, whose enjoyment of special legal rights requires them to act in the public benefit.

He also notes that during the mid-twentieth century, the Supreme Court began requiring that states and towns observe the core provisions of the Bill of Rights.  If municipal corporations are required to respect these rights, shouldn’t business corporations also be required?

Two alternative legal theories of corporations have dominated judicial arguments: the theory of “corporation as partnership” and the theory of “corporation as real person”.  The “corporation as partnership” theory argues that corporations are voluntary associations of individuals and that the constitutional rights of the individuals therefore extend to the corporation.  But a foundational element of the corporation is the alienation of shareholders from their property.  So why would their property rights apply?  And more generally, why should corporate partners enjoy rights such as limited liability that no other individual or group enjoys?

The theory of corporations as real person answers this question by claiming that corporations are a special, emergent entity distinct from the state.  But while some corporations may predate their current government, as do some medieval towns and as did Dartmouth College (the Dartmouth in Dartmouth vs Woodward), all corporations derive their status from the government, whereas real people have rights inalienable by their government.

Despite the flaws in these theories, they have been used to grant corporations a vast number of legal rights, for instance in Citizens United.  Lawyers for Citizens United argued both the “corporation as partnership” and the “corporation as real person” perspectives, even though the theories are contradictory.  If corporations are partnerships reducible to their members, why should they get extra rights?  If corporations have emergent properties distinct from their members, why should they claim their members’ constitutional rights?

Conclusion

There’s actually nothing worth summarizing in the conclusion, but I felt bad just stopping abruptly.

tl;dr: corporations are not private concerns, shareholders do not own them, we should go back to requiring that corporations act in the public benefit, there’s no rights-based reason for shareholders to have any influence on the governance of a corporation, the rights granted to corporations based on these conceptually incoherent theories are bad for the economy, bad for society, and really rather unfair to actual private businesses, and someone should rewrite The Devil and Daniel Webster to be about Dartmouth vs Woodward

Tolerating Uncertainty

The business of thinking is like the veil of Penelope: it undoes every morning what it had finished the night before.

Hannah Arendt, Responsibility and Judgment, p. 166

I’m an anxious sort of person.

That’s a glib way of saying that I have an anxiety disorder.  I’m afraid a lot of the time.  I have practical fears, like heights and driving and cardiovascular disease.  I have existential fears, like global warming and the descent of American society into fascism.  And I have social fears, like public speaking and calling people on the phone and putting myself out there when I want something or someone.

Every single person feels anxiety sometimes.  What makes it a disorder is that it interferes with your life.  My disorder is not a bad one – anxiety is ever-present in my life but I largely work around it.  I hate flying and driving, for instance.  If I could take the train everywhere I would, but I can’t, so I fly and I drive and my nervous system freaks out but I’m okay.  Sometimes I wake up in the morning and I’m just anxious for no damn reason, and it lasts all day, or all week, and there’s nothing I can do.  And that’s hard but it’s also reassuring in its own way.  It’s a reminder that anxiety often can’t be reasoned with.  At a certain point, all you can do is acknowledge what’s happening.  “My nervous system is freaking out, but I’m okay.”

What I’m trying to say here is that my relationship to anxiety is very personal. I think that’s true for most people. You can reason about fear but there’s a part of it that’s inescapably embodied. And uncertainty exacerbates fear.  I’d rather get a single painful shock I knew was coming than sit around waiting for a shock that might come.  It’s less terrifying to ask out a person you know will say no than someone who might say no.  So my relationship to uncertainty is very personal too.  To tolerate uncertainty is not just an intellectual choice or an emotional choice, but a physical choice.

In 1817, the Romantic poet John Keats coined the term negative capability:

[At once it struck me what quality went to form a Man of Achievement, especially in Literature, and which Shakespeare possessed so enormously—I mean Negative Capability, that is, when a man is capable of being in uncertainties, mysteries, doubts, without any irritable reaching after fact and reason—Coleridge, for instance, would let go by a fine isolated verisimilitude caught from the Penetralium of mystery, from being incapable of remaining content with half-knowledge.

Part of why negative capability is so rare and difficult to cultivate is that uncertainty provokes a physical fear in so many of us.  So we try to escape the fear by leaving the situation, or reasoning ourselves out of it, or blaming something else for the fear, or trying to grit our way through it.  Negative capability is the decision to sit with that fear, to say, “I’m afraid, but I’m okay”.  And when you approach uncertainty with that kind of acceptance, it lets you view the world – and the uncertain issue or object – in a different way.

This way of approaching the world is not something Keats invented, of course.  From Hannah Arendt:

It is in [thinking’s] nature to undo, unfreeze as it were, what language, the medium of thinking, has frozen into thought – words (concepts, sentences, definitions, doctrines), whose “weaknesses” and inflexibility Plato denounces so splendidly in the Seventh Letter.  The consequence of this peculiarity is that thinking inevitably has a destructive, undermining effect on all established criteria, values, measurements for good and evil, in short on those customs and rules of conduct we treat of in morals and ethics.  These frozen thoughts, Socrates seems to say, come so handy you can use them in your sleep; but if the wind of thinking, which I shall now arouse in you, has roused you from your sleep and made you fully awake and alive, then you will see that you have nothing in your hand but perplexities, and the most we can do with them is share them with each other.

Hannah Arendt, Responsibility and Judgment, p. 177

In other words, Socrates sought to build a community of people with negative capability, people who could hold perplexities in their hands.

(Brief aside: I can’t help bringing up one of my favorite characters, Chidi Anagonye, again.  Chidi is a moral philosopher with severe anxiety and essentially zero negative capability, who I think would benefit enormously from having Socrates as a mentor.  Maybe I will write fanfiction about this.)

Negative capability is vital in so many endeavors:

It’s vital in scientific research, since you must tolerate uncertainty about how the world works and whether the hypotheses and theories you’re relying on are true.

It’s vital in technological innovation, since you must tolerate uncertainty about whether your inventions will work and what impact they’ll have on the world.

It’s vital in political coalition-building, since you must tolerate uncertainty about how to compromise and whose perspectives to favor.

And of course it’s vital in philosophy and art, as Socrates and John Keats would agree.

Some people argue that an inability to tolerate uncertainty predisposes people to authoritarianism and fascism.  Others link it to conservatism:

[R]ather than make an unvarnished demand for freedom to oppress he is more apt to present himself as the defender of certain values. It is not in his own name that he is fighting, but rather in the name of civilization, of institutions, of monuments, and of virtues which realize objectively the situation which he intends to maintain; he declares that all these things are beautiful and good in themselves; he defends a past which has assumed the icy dignity of being against an uncertain future whose values have not yet been won; this is what is well expressed by the label “conservative.”

Simone de Beauvoir, The Ethics of Ambiguity, p. 39.

There has not been very much research on the links between uncertainty tolerance and authoritarianism or conservatism; indeed, there has not been much research on uncertainty tolerance as a whole.

These are, fittingly, subjects for which we must tolerate a great deal of uncertainty.  So let us do as Socrates would do, and share our perplexities with each other.

Quick decisions

Last week I went to a rally to protest a series of raids by ICE in my city. The rally turned into an unplanned march through the streets, and I had to make two quick decisions: first, whether to join the march, and second, whether to remain in the street when the police started to give warnings.

When the march started and I had to decide whether or not to join, I had the following thoughts:

  • Shit, are we going to get arrested for this?
  • There’s several hundred of us, they probably won’t arrest us.
  • But it’s illegal, so we *could* be arrested. What would happen if I did?
  • I just got arrested for civil disobedience a few weeks ago, would there be extra consequences because of that?
  • Maybe, but there’d be *less* consequences because I’m white and a woman and a citizen and all my family are citizens.
  • And they’re probably not going to arrest people anyway.
  • Okay, let’s do this.

We marched for about thirty minutes, to the location where the raids had taken place, and then we stayed in the street. At one point, the police started to give warnings, and the leaders of the protest told anyone who didn’t want to risk arrest to move to the sidewalk.

  • Okay, they really might arrest us.
  • There are still a lot of us though. And the more of us stay in the street the less likely they are to arrest any of us.  And I’m still way less vulnerable than a lot of the people here.
  • But I haven’t even told anyone I might be arrested. I have no plan for this.
  • There will be other times to risk arrest later, when I’ve had a chance to research and plan. It’s not selfish to want to be prepared.

So I got up on the sidewalk. A lot of people stayed in the street, and no one ended up getting arrested.

I do feel a little bad that I didn’t stay in the street and risk arrest, but I think it was a reasonable decision. The context changed, and my actions changed, but my values stayed the same, and I stayed consistent with them.

There’s no one set of rules that can govern all of our decisions. There’s no “right choice”, only choices that are better or worse than others, and often you don’t even know what’s better or what’s worse until everything’s over.

One of my favorite fictional characters is Chidi Anagonye from The Good Place. Chidi has severe anxiety about making morally good decisions and I identify with him so much. I mean, my job used to be to stick people inside of magnets and give them moral dilemmas, of course I identify with Chidi. But his approach to morality is unhealthy. He’s obsessed with making the right decision, when the right decision doesn’t exist. His desire to be good actually makes him do less good.

I’m not always going to make the best decision, but I can be thoughtful about the decisions I do make. There will always be room to criticize, but I can learn from self-reflection and from the feedback of others without thinking that a better person would have done something different.

Judgment Above Principle, Judgment After Principle

Principles are really important, and by and large you should try hard to stick to them.  I have tremendous respect for those who have died or gone to jail for their principles.

That said, a principle is just a rule, albeit a highly abstracted and abnormally emotional rule.  And it is important not to follow rules blindly but to consider whether they apply in a given situation.  It is rare that a principle applies in all situations for a given person (how many people truly would never kill, even in self defense?) and it is impossible for a principle to apply in all situations for all people.  So you have to use your personal judgment.

This doesn’t mean “oh, whatever, just go with your gut”.  But it does mean “I have thought deeply about this and weighed the various principles and factors involved and this is what my gut says”.  Hence the title of this post*.  “Judgment Above Principle”: aka personal judgment is more important than sticking to principles.  “Judgment After Principle”: aka personal judgment requires a thoughtful consideration of principles.  It is not a rejection of principles but a transcendence of them.

This post on Emptywheel highlights a great example of putting judgment above principle:

Marcy’s post was not primarily about the investigation into the Russian interference in the 2016 election, though that is what has gotten a lot of the attention. What she was really talking about was the practice  — or should I say “malpractice”? — of journalism. Woven into the entire post, Marcy laid out how she wrestled with a very basic question: What do you do, as a journalist, when a confidential source lies to you?

I highly recommend reading both Marcy Wheeler’s original post, Putting a Face (Mine) to the Risks Posed by GOP Games on Mueller Investigation, and Peterr’s analysis (linked above).

* The post title is also a play on a paper I was co-author on.  It’s not really relevant to this post, except as a reminder that I’ve been obsessed with morality for a while.  😉


Money, and the violence of lost context

It is in the very nature of a question like “What do I owe my parents?” that there is not and can never be a final, numerically precise answer. It is a question that we re-visit and re-negotiate every minute we are with them; obligation and love form an endless Möbius strip, through which our complex interdependence on each other makes the idea of paying off that debt – and of thereby severing the relationship – a sort of bitter joke. Precisely because it is a non-monetary “debt,” its function is to be an unpayable and unbreakable bond, one whose dividends never end and one that could and will never default.

By contrast, Graeber argues that purely monetary debts – such as the $14k I owe in student debts to a variety of banks – legitimize violence and exploitation precisely because they take an otherwise irreducibly complex human relation and reductively simplify it into a number. When you quantify a debt with financial precision – and especially when you invest paying it off with profound moral gravity, making it a fundamental moral imperative – you take what was a human relationship of mutual imbrication and co-implication into a financial one based on a kind of moral dominance, and thereby subject the indebted party to the mechanisms of financial debt collection instead of the precepts of human morality. If my relationship to my parents was a financial one, then I could pay it off and be done with them (or they could forgive the debt and be done with me). Or (and here is where it gets interesting), they could present me with a bill, demand that I pay it, and throw me in jail if I failed to do so.

This is just a thought experiment, of course, but the point of it is to bring out and make explicit that contrast. While the perversity of paying off your debts to your parents hardly needs comment – or of them garnishing your wages to pay for the hospital costs of birthing you – it is just as unspeakably normal for our debts to banks to seem, always and forever, the first thing we need to honor and respect. Graeber argues that this contrast, and our failure to register it as such, demonstrate the conceptual constriction of possibility that has come to be built into the moral landscape of our present: it is because a quantifiable debt could be paid off, with numerical precision, that it can therefore be seen as an imperative to do so, and becomes a moral failing when it is not. More than that, it becomes not only a moral failing that is enforceable and punishable, but a moral reasoning which makes the violence of that constraint your own fault, your own choice: no one forced me to take on student debt, goes the reasoning; it was my own choice. And so, the violence of debt collection is just chickens coming to roost.

Let us, then, look with new eyes at the fact that when a dictator takes out a loan from a Western bank – pledging as his surety the future revenues produced by the people who he uses men with guns to rule — we can be utterly sure that long after he is dead and gone, that debt will live on. Banks will not only claim the right to be paid back, but the moral force of the world’s political and ruling classes will assent to the proposition that children unborn when their nation went into debt will somehow still be on hold as the debt’s guarantors. This will appear normal. This will not seem a monstrous perversity.

from Aaron Bady’s review of David Graeber’s Debt: The First 5,000 Years

Hannah Arendt on the role of reflection in political and moral behavior

Socrates, however, who is commonly said to have believed in the teachability of virtue, seems indeed to have held that talking and thinking about piety, justice, courage, and the rest were liable to make men more pious, more just, more courageous, even though they were not given either definitions or “values” to direct further conduct.  What Socrates actually believed in in such matters can best be illustrated by the similes he applied to himself. He called himself a gadfly and a midwife, and, according to Plato, was called by somebody else an “electric ray”, a fish that paralyzes and numbs by contact, a likeness whose appropriateness he recognized under the condition that it be understood that “the electric ray paralyzes others only through being paralyzed itself.  It isn’t that, knowing the answers myself I perplex other people. The truth is rather that I infect them also with the perplexity I feel myself.”

[…]

The trouble – and the reason why the same man can be understood and understand himself as gadfly as well as electric ray – is that this same wind, whenever it is aroused, has the peculiarity of doing away with its own previous manifestations.  It is in its nature to undo, unfreeze as it were, what language, the medium of thinking, has frozen into thought – words (concepts, sentences, definitions, doctrines), whose “weaknesses” and inflexibility Plato denounces so splendidly in the Seventh Letter.  The consequence of this peculiarity is that thinking inevitably has a destructive, undermining effect on all established criteria, values, measurements for good and evil, in short on those customs and rules of conduct we treat of in morals and ethics.  These frozen thoughts, Socrates seems to say, come so handy you can use them in your sleep; but if the wind of thinking, which I shall now arouse in you, has roused you from your sleep and made you fully awake and alive, then you will see that you have nothing in your hand but perplexities, and the most we can do with them is share them with each other.

Hence, the paralysis of thought is twofold: it is inherent in the stop and think, the interruption of all other activities, and it may have a paralyzing effect when you come out of it, no longer sure of what had seemed to you beyond doubt while you were unthinkingly engaged in whatever you were doing.  If your action consisted in applying general rules of conduct to particular cases as they arise in ordinary life, then you will find yourself paralyzed because no such rules can withstand the wind of thought. To use once more the example of the frozen thought inherent in the word “house”, once you have thought about its implied meaning – dwelling, having a home, being housed – you are no longer likely to accept for your own home whatever fashion of the time may prescribe; but this by no means guarantees that you will be able to come up with an acceptable solution for your own housing problems.  You may be paralyzed.

This leads to the last and, perhaps, even greatest danger of this dangerous and resultless enterprise.  In the circle around Socrates, there were men like Alcibiades and Critias – God knows, by no means the worst among his so-called pupils – and they turned out to be a very real threat to the polis, and this not by being paralyzed by the electric ray but, on the contrary, by having been aroused by the gadfly.  What they had been aroused to was license and cynicism. They had not been content with being taught how to think without being taught a doctrine, and they changed the nonresults of the Socratic thinking examination into negative results: if we cannot define what piety is, let us be impious – which is pretty much the opposite of what Socrates had hoped to achieve by talking about piety.

The quest for meaning, which relentlessly dissolves and examines anew all accepted doctrines and rules, can at every moment turn against itself, as it were, produce a reversal of the old values, and declare these as “new values”. This, to an extent, is what Nietzsche did when he reversed Platonism, forgetting that a reversed Plato is still Plato, or what Marx did when he turned Hegel upside down, producing a strictly Hegelian system of thinking in the process. Such negative results of thinking will then be used as sleepily, with the same unthinking routine, as the old values; the moment they are applied to the realm of human affairs, it is as though they had never gone through the thinking process.  What we commonly call nihilism – and are tempted to date historically, decry politically, and ascribe to thinkers who allegedly dared to think “dangerous thoughts” – is actually a danger inherent to the thinking activity itself. There are no dangerous thoughts; thinking itself is dangerous, but nihilism is not its product.  Nihilism is but the other side of conventionalism; its creed consists of negations of the current, so-called positive values to which it remains bound. All critical examinations must go through a stage of at least hypothetically negating accepted opinions and “values” by finding out their implications and tacit assumptions, and in this sense nihilism may be seen as an ever-present danger of thinking.  But this danger does not arise out of the Socratic conviction that an unexamined life is not worth living but, on the contrary, out of the desire to find results which would make further thinking unnecessary. Thinking is equally dangerous to all creeds and, by itself, does not bring forth any new creed.

However, nonthinking, which seems so recommendable a state for political and moral affairs, also has its dangers.  By shielding people against the dangers of examination, it teaches them to hold fast to whatever the prescribed rules of conduct may be at a given time in a given society.  What people then get used to is not so much the content of the rules, a close examination of which would always lead them into perplexity, as the possession of rules under which to subsume particulars.  In other words, they get used to never making up their minds. If somebody then should show up who, for whatever reasons and purposes, wishes to abolish the old “values” or virtues, he will find it easy enough provided he offers a new code, and he will need no force and no persuasion – no proof that the new values are better than the old ones – to establish it.  The faster men held to the old code, the more eager will they be to assimilate themselves to the new one; the ease with which such reversals can take place under certain circumstances suggests that indeed everybody is asleep when they occur. This century has offered us some experience in such matters: How easy it was for the totalitarian rulers to reverse the basic commandments of Western morality – “Thou shalt not kill” in the case of Hitler’s Germany, “Thou shalt not bear false testimony against thy neighbor” in the case of Stalin’s Russia.

Hannah Arendt, Responsibility and Judgment, pp. 173-178