The Psychologists Take Power
SOURCES DRAWN ON FOR THIS ARTICLE
The Righteous Mind: Why Good People Are Divided by Politics and Religion
by Jonathan Haidt
The Better Angels of Our Nature: Why Violence Has Declined
by Steven Pinker
Just Babies: The Origins of Good and Evil
by Paul Bloom
The Power of Ideals: The Real Story of Moral Choice
by William Damon and Anne Colby
Moral Tribes: Emotion, Reason, and the Gap Between Us and Them
by Joshua Greene
Report to the Special Committee of the Board of Directors of the American Psychological Association: Independent Review Relating to APA Ethics Guidelines, National Security Interrogations, and Torture
by David Hoffman and a team of investigators at Sidley Austin
The Senate Intelligence Committee Report on Torture: Committee Study of the Central Intelligence Agency’s Detention and Interrogation Program
Estimating the Reproducibility of Psychological Science
by Brian Nosek and 269 coauthors
Head Strong: How Psychology Is Revolutionizing War
by Michael D. Matthews
In 1971, the psychologist B.F. Skinner expressed the hope that the vast, humanly created problems defacing our beautiful planet (famines, wars, the threat of a nuclear holocaust) could all be solved by new “technologies of behavior.” The psychological school of behaviorism sought to replace the idea of human beings as autonomous agents with the “scientific” view of them as biological organisms, responding to external stimuli, whose behavior could be modified by altering their environment. Perhaps unsurprisingly, in 1964 Skinner’s claims about potential behavior modification had attracted funding from the CIA via a grant-making body called the Human Ecology Society.
Skinner was extremely dismayed that his promise of using his science to “maximize the achievements of which the human organism is capable” was derided by defenders of the entirely unscientific ideal of freedom. When Peter Gay, for instance, spoke of the “innate naïveté, intellectual bankruptcy, and half-deliberate cruelty of behaviorism,” Skinner, clearly wounded, protested that the “literature of freedom” had provoked in Gay “a sufficiently fanatical opposition to controlling practices to generate a neurotic if not psychotic response.” Skinner was unable to present any more robust moral defense of his project of social engineering.
In spite of the grandiosity of Skinner’s vision for humanity, he could not plausibly claim to be a moral expert. It is only more recently that the claims of psychologists to moral expertise have come to be taken seriously. Contributing to their new aura of authority has been their association with neuroscience, with its claims to illuminate the distinct neural pathways taken by our thoughts and judgments.
Neuroscience, it is claimed, has revealed that our brains operate with a dual system for moral decision-making. In 2001, Joshua Greene, a philosophy graduate student, teamed up with the neuroscientist Jonathan Cohen to analyze fMRI scans of people’s brains as they responded to hypothetical moral dilemmas. From the neural activity observed in different regions they inferred that moral judgment involves two distinct psychological processes. One of the processes, a fast and intuitive one, takes place by and large in areas of the brain associated with emotional processing, such as the medial prefrontal cortex and the amygdala. The other process, which is slow and rational, takes place by and large in regions associated with cognitive processing, such as the dorsolateral prefrontal cortex and the parietal lobe.
Greene interpreted these results in the light of an unverifiable and unfalsifiable story about evolutionary psychology. Since primitive human beings encountered up-close dangers or threats of personal violence, their brains, he speculated, evolved fast and focused responses for dealing with such perils. The impersonal violence that threatens humans in more sophisticated societies does not trigger the same kind of affective response, so it allows for slower, more cognitive processes of moral deliberation that weigh the relevant consequences of actions. Greene inferred from this that the slower mechanisms we see in the brain are a later development and are superior because morality is properly concerned with impersonal values—for example, justice—to which personal harms and goals such as family loyalty should be irrelevant. He has taken this to be a vindication of a specific, consequentialist philosophical theory of morality: utilitarianism.
But as the philosopher Selim Berker has pointed out in his important paper “The Normative Insignificance of Neuroscience,” Greene’s argument runs the other way around: it assumes that personal factors are morally irrelevant, and concludes on that basis that the neural and psychological processes that track such factors cannot be relied on to support moral propositions or guide moral decisions. The controversial philosophical claim is simply presupposed; it is in no way motivated by the findings of science. An understanding of the neural correlates of reasoning can tell us nothing about whether the outcome of that reasoning is justified. It is not the neuroscience but rather our considered moral judgments that do all the evaluative work, telling us which mental processes we should trust and which we should not.
Many of the psychologists who have taken up the dual-process model are dismissive of philosophical theories generally. They reject Greene’s inferences about utilitarianism and claim to restrict themselves to what can be proved scientifically. But in fact all of those I discuss here are making claims about which kinds of moral judgments are good or bad by assessing which are adaptive or maladaptive in relation to a norm of social cooperation. They are thereby relying on an implicit philosophical theory of morality, albeit a much less exacting one than utilitarianism. Rather than adhering to the moral view that we should maximize “utility”—or satisfaction of wants—they are adopting the more minimal, Hobbesian view that our first priority should be to avoid conflict. This minimalist moral worldview is, again, simply presupposed; it is not defended through argument and cannot be substantiated simply by an appeal to scientific facts. And its implications are not altogether appealing.
The path that has led prominent psychologists to claim a moral expertise of their own has been profoundly influenced by the Positive Psychology movement, which was founded in 1998 by Martin Seligman, then president of the American Psychological Association. Seligman wanted to promote the study of strengths and virtues in order to correct what he saw as an excessively restrictive focus on pathology in the field of professional psychology.
In the 1960s, Seligman devised a theory of “learned helplessness.” He found that a state of passivity could be induced in dogs by giving them repeated and inescapable shocks. This provided the basis for the theory that human beings, in the face of events that seem uncontrollable, experience disruptions in motivation, emotion, and learning that amount to a sense of helplessness. Seligman and other researchers applied the theory to depression, but also to social problems such as “demoralized women on welfare,” “helpless cognitions” on the part of Asian-Americans, and “defeatism” among black Americans.
In developing Positive Psychology one of Seligman’s core goals has been “to end victimology,” which, he claims, pervades the social sciences and requires us to “view people as the victims of their environment.” After September 11, 2001, he came to see the cultivation of positive strengths and virtues as an urgent task for America, shoring up its people and institutions by increasing their resilience.
Jonathan Haidt, a prominent social psychologist who has been closely involved in the Positive Psychology movement since its inception, has recently employed the new dual-process model of morality to suggest ways in which we might reshape our moral lives. He denies that reason ordinarily plays any part in motivating moral judgments, seeing it rather as a post hoc means of justifying the intuitions we form quickly and unreflectively. Since different people’s brains are “wired” to operate with different intuitions, and therefore to adopt different ideologies, we face a problem for cooperation, one that he sees reflected in the entrenched polarization of American politics.
In his 2012 book The Righteous Mind: Why Good People Are Divided by Politics and Religion, Haidt identifies six basic pairs of moral intuitions that ground the world’s moral systems. He describes them as care vs. harm, fairness vs. cheating, loyalty vs. betrayal, authority vs. subversion, sanctity vs. degradation, and liberty vs. oppression. He claims that whereas American conservatives employ each of these different moral foundations, liberals are disproportionately motivated by “care.” He tells us that “across many scales, surveys, and political controversies, liberals turn out to be more disturbed by signs of violence and suffering, compared to conservatives and especially to libertarians.” When they are motivated by concerns of liberty and oppression it is on behalf of “underdogs, victims, and powerless groups everywhere.” This one-dimensional concern makes them unable to comprehend the more complex moral concerns of conservatives. Haidt therefore recommends that liberals try to appreciate the richer set of moral resources employed by conservatives in order to build cooperation across the ideological divide. In offering this moral counsel he presupposes that the norm of cooperation should take precedence over the values that divide us.
Other psychologists, however, have argued that Haidt’s analysis of moral motivations involves too skeptical an account of the role of reason. One of the first major projects in Positive Psychology that Martin Seligman organized was an initiative on “Humane Leadership” in 2000. The aim was to study the peaceable as well as the warlike character of human beings and to examine the difference that leadership makes in the realization of these opposing tendencies. The Harvard psychologist Steven Pinker (who served as a member of the Senior Independent Advisory Panel for the project) stressed the role of rationality in the form of “nonzero-sum games”—i.e., forms of cooperation in which each party can gain—in fostering cooperative motives, an emphasis that came to play a very significant part in his 2011 book The Better Angels of Our Nature: Why Violence Has Declined.
In that extremely influential work Pinker argues that our rational, deliberative modes of evaluation should take precedence over powerful, affective intuitions. But by “rationality” he means specifically “the interchangeability of perspectives and the opportunity the world provides for positive-sum games,” rather than any higher-order philosophical theory. He allows that empathy has played a part in promoting altruism, that “humanitarian reforms are driven in part by an enhanced sensitivity to the experiences of living things and a genuine desire to relieve their suffering.” But nevertheless our “ultimate goal should be policies and norms that become second nature and render empathy unnecessary.”
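The notion of a positive-sum game can be made concrete with a simple payoff matrix. Pinker himself uses one in Better Angels, his “Pacifist’s Dilemma”; the schematic version below is only illustrative, with hypothetical numbers rather than his own:

                        Other side: Peace     Other side: Attack
    Our side: Peace         (+5, +5)             (−100, +10)
    Our side: Attack       (+10, −100)            (−50, −50)

Mutual peace leaves both sides better off than mutual war, but the temptation to attack a pacifist, and the catastrophic cost of being the victim, can make aggression seem individually rational. Pinker’s contention is that institutions such as government and commerce change these payoffs, penalizing aggression and rewarding exchange, until the cooperative outcome becomes the rational choice for each party.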
We find this view of the necessary, trumping role of reason echoed in a recent book, Just Babies: The Origins of Good and Evil, by Paul Bloom, a professor of psychology and cognitive science at Yale, who also serves, along with Haidt, as an instructor in Seligman’s graduate program in Applied Positive Psychology at the University of Pennsylvania. Bloom studies child development and informs us, through his accounts of various experiments involving babies, that our biology has equipped us with certain rudimentary capacities that are essential to the development of morality, such as empathy and a sense of fairness. But he stresses that our pro-social biological inheritance is limited, consisting in adaptive traits that motivate us to care for kin. Our affective moral responses, while they still have some part in our moral lives, are essentially infantile. Reason allows us to transcend them.
This might at first seem persuasive. Reasoned moral deliberation often does and should override our immediate affective reactions. But Bloom’s view of reasoning, like Haidt’s and Pinker’s, seems oddly restrictive: he equates it with impartiality in the sense of the development of “systems of reward and punishment that apply impartially within the community.” The norm of cooperation is again presupposed as the fundamental means for deciding which of our moral intuitions we should heed.
When discussing the more stringent moral principles that Peter Singer, for instance, takes to be rationally required of us concerning our duties to distant strangers, Bloom dismisses them as unrealistic in the sense that no plausible evolutionary theory could yield such requirements for human beings. But as with the claims of Joshua Greene, facts about our biological nature, as described by evolutionary psychology, cannot in themselves be a source of moral norms. Bloom is a subtler moral thinker than Haidt and less didactic in his prescriptions, but he still, like Haidt, seems to presuppose that the discipline of psychology has some special authority that provides us with moral guidance, pressing us toward social cooperation; and this guidance is superior to ordinary rational moral deliberation.
Seligman’s goal of producing a stronger, more virtuous population would seem to require more than written treatises on morality. He and his associates also take moral leadership to be indispensable. This is the dimension of moral expertise explored by William Damon, a professor of education at Stanford University, and Anne Colby, a consulting professor at Stanford, in The Power of Ideals: The Real Story of Moral Choice.
Damon and Colby appropriate for their argument the stories of moral leaders such as Eleanor Roosevelt and Nelson Mandela, analyzing the psychological qualities that enabled them to transcend both their biological makeup and their moral backgrounds. While Damon and Colby see a role for neuroscience in moral psychology, they claim that most of the current research is flawed because “thinking patterns of high-level experts or creative geniuses may have different characteristics” and these are not yet being examined. They draw on Seligman’s account of the virtues and his conception of a meaningful (as opposed to merely happy) life as a foundation for resilience in the face of moral challenges. In doing so, they aim to show us how moral expertise and leadership might be employed to end otherwise intractable moral conflicts.
Seligman is quoted on the book’s back cover as saying:
Psychology presently studies superficial morality and deep morality. Superficial morality can be found in the quick, unreflective moral intuitions that we have. Deep morality is the reflective choices we make that involve honesty, faith, humility, and ideals. Damon and Colby’s The Power of Ideals is the go-to book for deep morality.
Psychologists, on this view, can employ their expertise to discriminate between our misleading emotional instincts about moral issues and our higher moral insights. But as we have seen, it is a fallacy to suggest that expertise in psychology, a descriptive natural science, can itself qualify someone to determine what is morally right and wrong. The underlying prescriptive moral standards are always presupposed antecedently to any psychological research.
In spite of the rhetoric employed by Damon and Colby concerning the search for higher moral truths, the basic moral principle that is consistently employed in this psychological literature is the bare Hobbesian one of resolving disagreement, or promoting cooperation. In his book Moral Tribes, Joshua Greene warns that even those who seek pragmatic agreement need “an explicit and coherent moral philosophy, a second moral compass that provides direction when gut feelings can’t be trusted.” So in addition to questioning whether psychological research can vindicate moral norms, we also have to ask whether the minimal moral norm of cooperation employed by psychologists is sufficient to provide them with a reliable moral compass.
Recent developments in the profession of psychology have been discouraging in this respect. In July 2015 a team of investigators led by David Hoffman, a lawyer with the firm Sidley Austin, published a report, commissioned by the American Psychological Association in November 2014, on the collusion of APA officials with the Department of Defense and the CIA to support torture. The report details extensive evidence of that collusion. The APA revised its own ethics guidelines so as to place only very loose moral constraints on the participation of psychologists in interrogations, thereby facilitating both collusion in and participation in torture. In doing so, the APA’s leaders were apparently motivated by the enormous financial benefits that Department of Defense funding conferred on the profession. The episode is a stark demonstration of the fragility of morality.
The authors of the report say in their conclusion:
We have heard from psychologists who treat patients for a living that they feel physically sick when they think about the involvement of psychologists intentionally using harsh interrogation techniques. This is the perspective of psychologists who use their training and skill to peer into the damaged and fragile psyches of their patients, to understand and empathize with the intensity of psychological pain in an effort to heal it. The prospect of a member of their profession using that same training and skill to intentionally cause psychological or physical harm to a detainee sickens them. We find that perspective understandable.
It is easy to imagine the psychologists who claim to be moral experts dismissing such a reaction as an unreliable “gut response” that must be overridden by more sophisticated reasoning. But a thorough distrust of rapid, emotional responses might well leave human beings without a moral compass sufficiently strong to guide them through times of crisis, when our judgment is most severely challenged, or to compete with powerful nonmoral motivations.
From the Hoffman report and supporting documents we learn that following the attacks of September 11, the APA adopted an emergency “Resolution on Terrorism” in December 2001 to encourage collaborations that would help psychologists fight terrorism. Entirely understandably, many academics at the time wanted to help ensure that such a devastating atrocity could not occur again. The resolution resulted in a meeting that same month, at the home of Martin Seligman, of “an international group of sixteen distinguished professors and intelligence personnel” to discuss responses to Islamic extremism. Stephen Band, the chief of the Behavioral Science Unit at the FBI, later reported that “Seligman’s ‘gathering’ produced an extraordinary document that is being channeled on high (very high).”
Present at that initial meeting were Kirk Hubbard, the CIA’s chief of research and analysis in the Operational Assessment Division, and James Mitchell, one of the two psychologists, along with Bruce Jessen, who became the chief architects of the CIA’s torture program, which they helped to carry out. This program was substantially based on Seligman’s theory of “learned helplessness.” The theory played an important part in Mitchell’s intellectual formation. He cites it, for example, in a 1984 article (based on research for his undergraduate thesis) on the failures of cognition that result from a depressive mood. With Seligman’s help, Mitchell introduced the theory to the military personnel who studied harsh interrogation techniques.
Seligman was invited to speak on learned helplessness in 2002 in San Diego, at a conference sponsored by the Survival, Evasion, Resistance, and Escape (SERE) program run by the Joint Personnel Recovery Agency (JPRA), a military agency under the auspices of the chairman of the Joint Chiefs of Staff. SERE was originally established to train pilots in how to survive if they were captured, including how to resist torture; its instructors therefore needed specialist expertise in resisting torture, and this required knowledge of how to torture. SERE offered training to both the DOD and the CIA when they began their detention programs. Mitchell and Jessen served as psychologists in the SERE schools. Instruction in the theory of learned helplessness became a mandatory part of this training.
Seligman maintains that he believed Mitchell and Jessen were interested in the condition of learned helplessness solely in order to facilitate resistance to torture. The authors of the Hoffman report do not find this plausible. They write:
We think it would have been difficult not to suspect that one reason for the CIA’s interest in learned helplessness was to consider how it could be used in the interrogation of others.
Learned helplessness quickly proved ineffective as a means of obtaining human intelligence. This failure has been vividly described by the FBI interrogator Ali Soufan in his 2011 book The Black Banners: The Inside Story of 9/11 and the War Against al-Qaeda, which gives an account of the severe torture of a previously cooperative prisoner who had been wrongly identified and who yielded no useful information after the torture began.
As described in the Senate Select Committee on Intelligence Report on the Central Intelligence Agency’s Detention and Interrogation Program, many techniques were tried on this prisoner, Abu Zubaydah, including sleep deprivation, confinement in a freezing room, and waterboarding, inflicted on him eighty-three times, to the point that he was hysterical, vomiting, and ultimately “completely unresponsive, with bubbles rising through his open, full mouth.” The Senate committee’s report makes clear that these “enhanced interrogation techniques” yielded no information that could not have been obtained by other means and in many cases produced faulty information on crucial intelligence questions.
The Senate report also tells us that the CIA misrepresented the results of the program to policymakers and the Department of Justice, maintaining that it was obtaining “a high volume of critical intelligence.” In the case of two prisoners tortured by Mitchell—Abu Zubaydah and Khalid Sheikh Mohammed—the CIA attributed to them the statement that “the general US population was ‘weak,’ lacked resilience, and would be unable to ‘do what was necessary’ to prevent the terrorists from succeeding in their goals.” But the Senate report tells us: “There are no CIA operational or interrogation records to support the representation that KSM or Abu Zubaydah made these statements.”
In spite of the clear ineffectiveness of their “enhanced interrogation techniques,” Jessen and Mitchell continued to apply them and were eventually paid $81 million for doing so. When the involvement of psychologists in interrogations at Guantánamo Bay and in Iraq came to light in a New York Times article in late 2004, the APA assembled a task force to look into the matter and issue ethical guidelines. In the discussion of the task force’s report, one board member, Diane Halpern, insisted that it include a statement asserting that torture was ineffective. The task force did not pursue the question of effectiveness and included no such statement.
When the Senate Select Committee on Intelligence published its extensive report on official torture in December 2014, Jonathan Haidt tweeted a link to an article by Matt Motyl, his former Ph.D. student, claiming that the report would not change anyone’s views on the morality or effectiveness of torture, owing to the phenomenon of cognitive bias, which distorts people’s assessment of the relevant evidence. Motyl warned that none of us should assume that our beliefs about torture are based on facts. Nevertheless, there are established facts. One of them is that psychologists secured enormous financial gains by collaborating in official torture, while also having clear evidence that it was ineffective.
This should be an important lesson concerning our moral frailty, one that should make us wary of conferring moral authority on sources that have no plausible claim to it, such as scientists of human behavior. Psychological expertise is a tool that can be used for good or ill. This applies as much to moral psychology as to any other field of psychological research. Expertise in teaching people to override their moral intuitions is a moral good only if it serves good ends. Those ends should be determined by rigorous moral deliberation.
Psychologists have long been essential to the military, performing valuable and humane functions: treating returning veterans, devising selection procedures, and studying phenomena such as PTSD. But when the defense industry supplies hundreds of millions of dollars a year to support research (both basic and applied) related to military psychology, there is always a potential conflict of interest between supplying the results the military wants and producing objective science.
In August 2015, the psychologist Brian Nosek and 269 coauthors published a report, “Estimating the Reproducibility of Psychological Science,” on their attempts to replicate the conclusions of one hundred studies published in papers in three psychology journals. Only 39 percent of the replication attempts were successful. There has subsequently been widespread debate about both faulty and fraudulent methods used by psychologists. Such findings cannot have been encouraging for the Department of Defense if it has entrusted our national security to this branch of science.
And it appears that to a certain extent it has. In his 2014 book Head Strong: How Psychology Is Revolutionizing War, Michael Matthews—a professor of engineering psychology at the United States Military Academy and a former head of the APA’s Division 19, the Division for Military Psychology—describes the way in which psychology has come to be seen as a critical tool in the global war on terror. Social psychology and positive psychology, in particular, have become priorities for the military.
This is in part, Matthews tells us, for the purposes of “winning hearts and minds” both at home and in the field of operations. He also describes one of the most important behavioral goals of the military as the creation of “adaptive killing.” He suggests that
cognitive-based therapy techniques, which focus on eliminating irrational thoughts and beliefs, could be focused on changing a soldier’s belief structure regarding killing. These interventions could be integrated into immersive simulations to promote the conviction that adaptive killing is permissible.
A new initiative known as the Comprehensive Soldier Fitness program (CSF) was established in 2009 to explore ways of creating more resilient soldiers by helping them with the necessary psychological adjustments. Seligman devised for the military a metric for assessing “resilience,” the Global Assessment Tool (GAT). Positive Psychology thereby placed itself at the center of the military’s psychological programs. In 2010, the University of Pennsylvania’s Positive Psychology Center (founded by Seligman) was awarded a $31 million contract by the DOD. The Hoffman report tells us that critics argue that this was a reward for Seligman’s counterterrorism efforts. He denies this. According to Matthews’s book, high military officials believe that the skills of positive psychologists are invaluable in fostering soldier resilience and developing new forms of warfare.
But Matthews also reveals that the military is concerned about a “very liberal if not extremely leftist” orientation among academic psychologists. He presents the relevant scale of values in sharply dichotomized terms: academics are promilitary or antimilitary. Some, he claims, “may view assisting the army as tantamount to engaging in homicide.” He tells us that when he attended the APA convention in 2007, he discovered that anger was often turned “toward any psychologist who was perceived as promilitary,” and singles out the example (the book was written before the Hoffman report was made public) of Seligman himself being accused of “assisting the military in developing torture techniques by reverse-engineering his concept of learned helplessness.” Matthews describes these accusations as “personal attacks” rooted in “antimilitary sentiment.”
Matthews hopes, however, that the tools used by Positive Psychology, such as cognitive behavioral therapy and resilience training, can be employed to change the public culture more generally, including that of universities. Jonathan Haidt has repeatedly decried the lack of conservatives in the profession of social psychology. More recently, in an essay in The Atlantic, coauthored with Greg Lukianoff and entitled “The Coddling of the American Mind,” he recommended that students use therapies derived from cognitive behavioral therapy to foster personal resilience. Such resilience is needed, they argue, to combat the culture of victimhood that appears to them to lie at the basis of campus protests over racism and sexism. In an interview, Haidt elaborated:
With each passing year, racial diversity and gender diversity, I believe, while still important, should become lower priorities, and with each passing year political diversity becomes more and more important.
His priorities appear to align closely with those of the Department of Defense. And they are supported by his view of moral psychology. But we should be wary of accepting his prescriptions as those of an independent moral expert, qualified to dispense sound ethical guidance. The discipline of psychology cannot equip its practitioners to do that.
Similarly, when Paul Bloom, in his own Atlantic article, “The Dark Side of Empathy,” warns us that empathy for people who are seen as victims may be associated with violent, punitive tendencies toward those in authority, we should be wary of extrapolating from his psychological claims a prescription for what should and should not be valued, or inferring that we need a moral corrective to a culture suffering from a supposed excess of empathic feelings.
No psychologist has yet developed a method that can be substituted for moral reflection and reasoning, for employing our own intuitions and principles, weighing them against one another and judging as best we can. This is necessary labor for all of us. We cannot delegate it to higher authorities or replace it with handbooks. Humanly created suffering will continue to demand of us not simply new “technologies of behavior” but genuine moral understanding. We will certainly not find it in the recent books claiming the superior wisdom of psychology.
Letters
“Moral Psychology: An Exchange,” March 17, 2016