My colleague Ian Pollock last week posted an interesting essay on Rationally Speaking (where I blog occasionally) on Daniel Kahneman’s new book, Thinking, Fast and Slow. In the book, Kahneman — who won the Nobel Memorial Prize in Economic Sciences in 2002 — distinguishes between two types of thinking:
… between a here-and-now preferrer — the experiencing self — that wants this pleasure to continue and this pain to cease, and a storyteller — the remembering self — that looks at an experience as a whole and evaluates its worth, with special attention paid to the beginning, climax and ending.
What Pollock then proceeds to do is explore the implications of these two different thought processes, these two selves, for moral decision-making:
One of the areas that it seems worth applying to is ethical philosophy; specifically the contrast between virtue ethical and consequentialist strains of thought.
For virtue ethics, the point of morality is to help you to be a better, happier person. Here, happiness is emphatically not understood in the popular modern way as a mere persistent good mood. On the contrary, happiness (or eudaimonia) involves living an ethically good life, with close ties to friends and family, and strong community involvement. A lifetime of good deeds and fine company could be undone by your child’s turning out to be a villain, even if it were not your fault — hence, Solon says “call no man happy until he is dead.”
Meanwhile, consequentialism (particularly its subspecies, utilitarianism) seeks to maximize welfare or utility across all beings. In utilitarianism this gets defined as the balance of pleasure over pain, or some such concept. The definition of utility is always vexatious, but needn’t concern us overmuch here — the point is that almost all plausible consequentialist theories care quite a lot about moment-to-moment mental states like pleasure and pain.
I suspect you may be able to see where I am going with this. Virtue ethics is speaking directly and pretty much exclusively to the remembering self, while utilitarianism is much more friendly to the experiencing self. Is this a defect in one, or in both of these theories?
Keep reading here.
Addicted to heroin? Alcohol? Meth? Some neurosurgeons in China claim to have a cure:
How far should doctors go in attempting to cure addiction? In China, some physicians are taking the most extreme measures. By destroying parts of the brain’s “pleasure centers” in heroin addicts and alcoholics, these neurosurgeons hope to stop drug cravings. But damaging the brain region involved in addictive desires risks permanently ending the entire spectrum of natural longings and emotions, including the ability to feel joy.
In 2004, the Ministry of Health in China banned this procedure due to lack of data on long-term outcomes and growing outrage in Western media over ethical issues about whether the patients were fully aware of the risks.
However, some doctors were allowed to continue to perform it for research purposes—and recently, a Western medical journal even published a new study of the results. … The November publication has generated a passionate debate in the scientific community over whether such research should be published or kept outside the pages of reputable scientific journals, where it may find undeserved legitimacy and only encourage further questionable science to flourish.
Carl Zimmer outlines all sides of the intense debate on this controversial method here.
(Hat tip to Kenan Malik).
I just came across an excerpt from what looks like an interesting new book, Soul Repair: Recovering From Moral Injury After War, by Rita Nakashima Brock and Gabriella Lettini. A central aim of the book, it seems, is to add a new concept, “moral injury,” to our understanding of post-war trauma, which is now treated mainly as a matter of either physical harm or Post-Traumatic Stress Disorder (PTSD). Take a look:
Moral injury is not Post Traumatic Stress Disorder (PTSD), but often overlaps with it. Many books on veteran healing confuse and conflate them into one thing. The difference between them is partly physical. Post-Traumatic Stress occurs in response to prolonged, extreme trauma and is a fear-victim reaction to danger. It produces hormones that affect the parts of the brain that are involved with responses to fear, the regulation of emotions, and the connection of fear to memory. A sufferer often has difficulty forming a coherent memory of a traumatic event or may even be unable to recall it.
The moral questions emerge after the traumatizing symptoms of PTSD are relieved enough for a person to construct a coherent memory of his or her experience. We organize emotionally intense memories into a story in the brain’s prefrontal cortex, where self-control, planning, reasoning, and decision-making occur. The brain organizes experiences and evaluates them, based on people’s capacity to think about moral values and feel empathy at the same time.
You can read the full excerpt here.
When [James Holmes] walked into court Monday morning, one thing was immediately obvious. Something was wrong with this guy. Which was weirder, the dazed expression he wore most of the 11 minutes of the hearing, or the sudden bursts of wild eyes, matching his ridiculous orange hair?
The obvious explanation, which many viewers and commentators embraced, was that he was out of his mind or, medically speaking, undergoing some sort of psychotic break. But a minority view pushed back, and hard: the hair, the eyes, the sensational getup for the attack were a little too cute: a cold-blooded killer, playing crazy.
You will never understand this man if you leap to either of these conclusions. Do not look for a unified theory of mass murder, a single coherent drive. It doesn’t exist. Examining all the mass murderers together yields a hopeless mass of contradictions.
Did his brain make him do it? Well, of course it did, say John Monterosso and Barry Schwartz. But what does that really mean?
As a general matter, it is always true that our brains “made us do it.” Each of our behaviors is always associated with a brain state. If we view every new scientific finding about brain involvement in human behavior as a sign that the behavior was not under the individual’s control, the very notion of responsibility will be threatened. So it is imperative that we think clearly about when brain science frees someone from blame — and when it doesn’t.
“Was the cause psychological or biological?” is the wrong question when assigning responsibility for an action. All psychological states are also biological ones.
A better question is “how strong was the relation between the cause (whatever it happened to be) and the effect?” If, hypothetically, only 1 percent of people with a brain malfunction (or a history of being abused) commit violence, ordinary considerations about blame would still seem relevant. But if 99 percent of them do, you might start to wonder how responsible they really are.
Are we quickly approaching a time when the act of remembering will become a choice? And, if so, what are the ethical implications of a pill or therapy that will erase the memories we don’t want to remember?
That’s the subject of an intriguing new article on Wired.com by Jonah Lehrer.
The problem with eliminating pain, of course, is that pain is often educational. We learn from our regrets and mistakes; wisdom is not free. If our past becomes a playlist—a collection of tracks we can edit with ease—then how will we resist the temptation to erase the unpleasant ones? Even more troubling, it’s easy to imagine a world where people don’t get to decide the fate of their own memories.
“My worst nightmare is that some evil dictator gets ahold of this,” [Columbia University neurologist Todd] Sacktor says. “There are all sorts of dystopian things one could do with these drugs.” While tyrants have often rewritten history books, modern science might one day allow them to rewrite us, wiping away genocides and atrocities with a cocktail of pills.
Last week, the Potomac Institute for Policy Studies sponsored a daylong conference in Virginia featuring neuroscientists and philosophers who discussed questions such as “how does morality operate in the brain?” and “can advances in the study of the brain tell us anything new about ethics?”
Fortunately for those who didn’t attend (including me), Ronald Bailey of Reason Magazine was there taking notes, and has since published a review of the event.
The Boston Globe has just posted an interesting interview with Liane Young, an associate professor of psychology at Boston College. Young recently won a prestigious national award for young scientists for her work, which she describes as follows:
I study human moral decision-making and behavior — both the psychological processes that support moral judgment and also the neural basis of moral judgment. So, what are the brain regions that help us make moral judgments and that help us think about other people and what they’re doing?
Take a look at the full interview here.
The European Union (EU) might soon consider tougher measures on banker bonuses, which go against “all reason, common sense and morality,” according to EU financial services commissioner Michel Barnier. From Bloomberg:
Barnier, who’s responsible for proposing laws that govern banks across the 27-nation EU, warned that he’ll seek extra legislation by the end of this year if lenders “carry on paying excessive bonuses.” Ideas being “worked on” include limiting the gap between minimum and maximum pay in a bank and also setting a ratio between fixed and variable pay, he said at a European Parliament hearing in Brussels today.
Most people on the right side of the political spectrum would call this socialism, but it’s interesting to see Barnier frame the issue not as one of political persuasion, but of decency. You can find previous postings on this subject here, here, and here.