Two different sources for this post; I am putting them together because of what I see as an overlap.
First off, since it is the oldest and has been clogging up my Google Keep, is Paul Nedelisky's Nothing Personal: How ideas made Derek Parfit. Parfit seems to be a source for Effective Altruism.
What, then, to make of Parfit, the immoral moral philosopher?
Edmonds, a philosopher himself and author of the renowned Wittgenstein’s Poker, hazards the hypothesis that Parfit’s callous behavior toward those close to him was the result of a sort of wager. Parfit came to believe that his philosophical work was deeply important and that anything that took him away from this work must be studiously avoided. Hence, he ate the same food and wore the same clothes every day. He avoided social engagements and nonphilosophical conversation. If he could make a significant and salutary impact on the intellectual landscape, then his resolute spurning of those close to him would be worth it. In light of his philosophical achievements, Edmonds judges that Parfit’s “gamble paid off.” But I would like to examine Parfit’s legacy from a different vantage point, one that leads to a different verdict.
***
On Parfit’s rather Humean anthropology, people aren’t a fundamental part of reality, but instead are collections of particular thoughts or experiences. These collections are grouped by degree of psychological similarity. As a result, there is no firm line where one person ends and another begins, and thus there is no firm line where one’s life ends—where one is considered dead. This indistinct threshold made death seem less terrifying to Parfit, and, indeed, he was self-aware in his therapeutic approach to metaphysics, claiming, Edmonds says, that “everything he had ever written was motivated by fear of death.”
People, then, aren’t the solid, primary entities we may have assumed; thoughts and experiences are more basic. This conclusion dovetails nicely with Parfit’s utilitarian inclination to give greater weight in philosophical evaluation to overall happier states of affairs and less weight to our duties to specific other people. Parfit spends several dozen pages in Reasons and Persons arguing against what he calls “Common-sense Morality,” the traditional ethical stance characterized by these sorts of person-specific duties. The arguments all turn on the point that there are cases in which it is possible that the people to whom we owe special duties—say, our children—would be materially better off or happier if we (and enough other people) ignored our duties and instead acted in ways that impersonally benefited the larger community. Never mind that it would usually be all but impossible to know whether enough others had opted in to such a scheme, let alone precisely how many would need to opt in, even in the abstract, for the scheme to succeed. Perhaps unsurprisingly, Parfit showed little concern about rejecting millennia-old and deeply felt moral codes in favor of recondite technical arguments grounded in culturally alternative if not alien values.
Parfit’s values were culturally alien at the time, but, in large part due to his influence, they are increasingly less so. One of his consistent claims was that spatial proximity should not matter in ethics: Whether people are next door or in Beijing, their happiness is equally important. This is the core intuition behind his work to undermine Common-sense Morality, with its special duties to family, friends, neighbors, and compatriots. Parfit extended the logic of this position to temporal proximity as well: Whether people live now or in a million years, their happiness is equally important.
***
If you look carefully at Parfit’s reasoning in Reasons and Persons, a common theme emerges: The general approach to moral life that has been taken for granted by most in the West (and, really, the world) is profoundly mistaken. Put simply, the mistake is that this approach has been too personal—too concerned with duties to those who are close to us, too preoccupied with the distinctness of individual people, too hung up on people having souls that unite our experiences, too concerned with who deserves what. Instead, as Parfit sums up at the end of the book, “Our reasons for acting should become more impersonal.”
***
Reading Edmonds’s biography, one naturally surmises that the reason Parfit’s philosophical work was insensitive to obligations to kith and kin was that he himself was insensitive in this way. His philosophy reflects his antisocial behavior. Indeed, Parfit’s friends sometimes suggested that he may have had an autism spectrum disorder. However, as Edmonds notes, ASD is not something you develop later in life. And those who knew Parfit in his youth deny that he had any such condition at that time. His unfeeling alienation of friends and family seemed instead to grow with time—and after he had begun to deeply explore the moral cosmology he would eventually introduce in Reasons and Persons. This should make us wonder if the causation here is not the other way around: The reason Parfit came to behave more impersonally toward those close to him was that he spent decades meditating on and arguing for the idea that we should behave more impersonally toward those close to us. His antisocial behavior reflected his philosophy. You might say he lived down to his principles.
Ordinarily, I believe that the order of influence is from character to ideas. But surely, what we think can affect how we act. Especially when, as in Parfit’s case, an intellectual project persists for decades and grows from fascination into monomania.
Not being the brightest person on the Internet (which is why I am still educating myself), I thought to put my own thoughts in the breaks. Then I realized I had only one good thought: what good is a philosophy that puts off duties towards others into the future? My mother called me a procrastinator, but this surpasses any procrastination of mine. Ethical procrastination: if we do not owe duties to present-day individuals, why do we owe duties to unborn individuals?
Now, this is not to say we do not owe duties to the future. I believe we do. I also believe that we can balance those with duties to the present day. If we do not act now to avoid eradicating our species, then there will be no future humans to worry about.
Then came Mary Townsend's Effective Altruism Is a Short Circuit by way of The Bulwark.
When the existentialist Jean-Paul Sartre was asked for life advice by an earnest young man in 1946, his response was characteristically rude and purposefully unhelpful: You are free, he said, so choose. That’s it, he implies: That’s all you get. But our simple allotment of radical freedom is also an unbearable weight, and so we look for advice, rubrics, rules, anything to lift the burden of that choice just a little bit.
It is to this problem at the aching heart of modern life that the “effective altruism community” has sought to address itself—through blogs, conventions, Substacks, online forums, grant-awarding organizations, several billion dollars, and the advocacy of one or two university professors—for going on thirteen years. The big draw of this movement is that they purport to have resolved the dilemma, and in terms that are seductively easy to understand. Want to do good? The best way, the EAs claim, is to donate money to charities that can be objectively proven to work—and, with televangelistic flair, they add that the more money sent in by viewers at home, the more goodness is brought into the world. Fin.
***
That’s stupid, and we know it. An anxiety for goodness of this sort—an anxiety for rules, for shibboleths—is different from the painstaking work of figuring out what on earth is best each time life forces us to make use of our terrible freedom. That work requires us to step into a more ambiguous world where goodness is harder to achieve and sometimes even harder to discern, but also a world where, one hopes, the language we use to position and understand goodness would not be so easily co-opted by frauds like Bankman-Fried. The FTX founder had built his public profile on a commitment to effective altruism, but as he explained to Vox shortly after the discovery of the fraud he allegedly perpetrated, he always secretly took the ethos of the philosophy to be “dumb shit.”
***
MacAskill’s body of ideas, known as “longtermism,” thrills our desire for practicality with the excitement of the unintuitive conclusion—one presented as an emergency obligation, at that. As essayist Phil Christman has remarked, while a concern for the effects of our actions on the future is laudable, “there’s a difference between taking responsibility for our actions and treating the future as our problem to solve.” Christman is laudably swayed by the promise of practical benefit, but he notes that with longtermism, we all too quickly find ourselves with “duties towards phantoms.” The fun, of course, of worrying about a distant future is that it gets us off the hook for worrying about the most pressing problems of AI: namely, why are we teaching children to write in ways that a cheap toy can imitate, and why are we pretending we don’t know about the already-present, far more consequential purpose of AI—to enable nation-states to wage ever more complicated war?
But the sense that longtermism plunges you in medias res into some larger and harder-to-trace story is accurate. At its heart, longtermism is only the dorky science-fiction version of the nineteenth-century classical utilitarianism that English philosophers John Stuart Mill (lifetime employee of the East India Company) and his dad’s best friend, Jeremy Bentham, propounded, and by which means they have managed to hijack public discourse on goodness for two hundred years and counting, at least amongst the self-professed elite. Utilitarianism argues that happiness is a matter of pleasure and material comfort, in the main, but also intellectual satisfaction, for those who can manage it. What distinguishes it from simple hedonism is three things: 1) that the amount of well-being, conceived as comfort in a given human life, can be measured; 2) that this comfort should be maximized, which is to say, aggressively increased; and 3) comfort should be aggressively increased not just for one human life, but for as many human lives as possible.
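If I follow the arithmetic Townsend is describing, the utilitarian program boils down to something like this (my own rough rendering, not hers, with the symbols being my inventions): measure each life's well-being, add it all up, and pick whatever action makes the sum biggest.

$$\text{choose the action } a \text{ that maximizes } \sum_{i=1}^{N} u_i(a)$$

Here $u_i(a)$ stands for the measured comfort of person $i$ if action $a$ is taken, and $N$ is stretched to cover as many lives as possible, which for the longtermists includes lives a million years out. Everything particular about the people behind each $u_i$ drops out of the calculation, which is exactly the math problem the next passage names.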
Oh, boy, do I ever agree with this:
To me, it seems that effective altruism has a math problem in the same way that classical utilitarianism does. The problem is: Math is cool. It is learnable and knowable, and from fairly early stages in human development. Therefore, if someone tells us that the solution to our longing to be good is simply to run the numbers well enough, in order to maximize the good (fun calculus metaphor!), then it looks like there is an easy and even sexy solution to the hole in your heart—sexy, that is, if you happen to be particularly good at math. (I’m haunted by the knowledge that a “common social overture” among EAs is “to tell someone that her numbers [are] wrong.”)
And this:
As Simone de Beauvoir puts it, we need to be on our guard any time a philosophical-moral stance signals its willingness to count the human lives who stare us in the face as nothing. Chillingly, it’s the inhumanity of depending on the consequence at all costs—backed by a wrongheaded faith in the goodness of one’s chosen project and so placing project and principle above real human lives—that appears “serious” to us, Beauvoir observes, and therefore good and worthy. This, she argues, is the essence of fanaticism, and it is anything, anything but good.
And because I cannot let this post get any longer, I say go read Ms. Townsend's most effective essay.
sch 11/18