Symposium: Does Neuroscience Have Normative Implications?

April 15, 2016 - April 16, 2016

Friday, April 15th, 2016

9:30 Kurt Gray, University of North Carolina
Mind Perception and Morality

10:45 COFFEE BREAK

11:00 Maria Brincker, University of Massachusetts
The Power of the Fact-Value Dichotomy as Seen Through the Mirror Neuron Debate

11:30 Bongrae Seok, Alvernia University
Autistic Moral Agency and Integrative Neuroethics

12:00 Stephen Napier, Villanova University
Getting Our Moral Intuitions Right: Sympathetic Thinking and Moral Judgment

12:30 Ullica Segerstrale, Illinois Institute of Technology
Implications of Neuroscience and Sociobiology: Real and Perceived

2:00 James Giordano, Georgetown University
The Is and the Ought: On the Validity, Limits, Value—and Stewardship—of a Neuroscience of Human Ecology

3:15 COFFEE BREAK

3:30 Brett Karlan, Princeton University
Let a Thousand Methods Bloom: On the Neuroscience of Moral Judgment
Winner: Graduate Student Travel Award

4:00 Matthew Childers, University of Iowa
Naturalized Virtue Ethics and the Neuroscience of Self-Control

4:30 Matt Jeffers, Georgia State University
Moral Theory is About Reasons, Not Motivations

5:00 Geoff Holtzman, Illinois Institute of Technology
Three Goals for a Third Tradition of Neuroethics

Saturday, April 16th, 2016

9:00 Thomas Noah, University of Pennsylvania
Can Neuroscience Help Select the Correct Metamorality?

9:30 Theresa Lopez, Hamilton College
Why Targeted Debunking Arguments in Ethics Must Be More Targeted

10:00 Tommaso Bruni, Western University and Regina Rini, New York University
Archimedes in the Lab 

10:30 Isaac Wiegman, Texas State University
The Reactive Roots of Retribution: Normative Implications of the Neurobiology of Punishment 

11:00 COFFEE BREAK

11:15 Eddy Nahmias, Georgia State University 
Did My Brain Make Me Do It?: Free Will and Neuroscience

12:30 LUNCH

1:30 Rory Svarc, King’s College, London
Does Pain Asymbolia Undermine Classical Moral Hedonistic Utilitarianism?

2:00 Chris Zarpentine, Wilkes University
Neuroscience, Moral Motivation, and the Structure of Moral Psychology

2:30 Matthew Ruble, Appalachian State University
Nervous Morals: What the Mistakes of Psychiatric Ethics Can Teach Us About Neuroethics

3:00 Garrett Merriam, University of Southern Indiana
Neuromachean Ethics

Participation and attendance are free. Please let Geoff Holtzman know at NormativeNeuroscience@gmail.com if you would like more information.

Abstracts

Mind Perception and Morality
Kurt Gray, University of North Carolina

Decisions about animal rights, prison sentences and capital punishment are matters of life and death; yet, people base these decisions upon something ambiguous – the apparent mental capacities of others. Although we can never be certain of the exact nature of others’ minds, studies show that moral judgments hinge upon such mind perception. In this talk, I present evidence for the link between mind perception and morality. In particular, I explore dyadic morality – the idea that people understand good and evil as the combination of two perceived minds: an agent (the doer) and a patient (the recipient). Dyadic morality not only provides a template to unify morality, but also suggests two unique phenomena: dyadic completion and moral typecasting. Dyadic completion is the tendency to see blameworthy agents in response to suffering patients (and vice versa) and can help explain why people believe in God and the lasting sting of malicious harms. Moral typecasting is the tendency to see others as either moral agents or moral patients, and can help explain why we hurt the saintly, how best to escape blame, and why good deeds make people physically stronger.

The Power of the Fact-Value Dichotomy – As Seen Through the Mirror Neuron Debate
Maria Brincker, University of Massachusetts—Boston

The question of the moral responsibility of scientists was thrust to the fore during WWII and arises again today, e.g., regarding CRISPR/Cas9 gene editing research – but what is the moral burden of the researcher? How one receives this question will naturally be informed by one’s explicit or implicit view on the fact-value dichotomy. That is, do objective truth and knowledge exist in the absence of value? And if so, how could we understand the pursuit of such knowledge? I shall not in this presentation examine the individual motivations of neuroscientists. Rather, I highlight how paradigms within and around neuroscience respectively enforce and challenge the fact-value dichotomy itself.
My analysis focuses on the question of the role of action/emotion in perception, and the history of what Susan Hurley labeled the “classical sandwich” of perception-cognition-action. This division can be seen in many shapes throughout the history of neuroscience, and as a contemporary example I look at how theoretical interpretations of recent mirror neuron findings diverge on this very point. In other words, the question is whether we here have empirical evidence that establishes or undermines the perception-action division – and by implication the basis of fact-value dichotomy. Via this story I also highlight how a non-normative neuroscience is hard to find in reality – even as it pertains to the nature of findings themselves. Scientists, it seems, do see and are shaped by the anticipated theoretical consequences of their work.
In conclusion, I highlight that even if knowledge is never truly value-free, it can be powerful and useful to purposefully narrow one’s context and thus to actively ignore various contextual factors—including, sometimes, practical, ethical and political outcomes. However, this undeniable practicality of, e.g., relative bubbles and “ceteris paribus” heuristics is itself normative and must be weighed against broader values. Note thus that the fact-value dichotomy can be a convenient local fiction that supports the practical isolation of scientific work. However, it might also effectively insulate very powerful, influential and irreversible outcomes of science from moral evaluation. While I am an advocate of the former relative insulation for purposes of quality, integrity, academic freedom and the lessening of special interests, I see the latter isolation as morally corrupting.

Autistic Moral Agency and Integrative Neuroethics
Bongrae Seok, Alvernia University

In this paper, I will explore interdisciplinary relations between neuroscience and ethics and discuss three possible options (autonomous, interactive, and integrative) for interdisciplinary cooperation. Among these, I will pursue the interactive and integrative options. I will argue that neuroscience can contribute to constructive moral theorizing, particularly in the development of normative standards of moral agency and moral responsibility. I will use brain imaging studies of empathy to explain how brain imaging data can be integrated into moral theories of autistic moral agency. If we characterize moral agency as an ability that includes clear understanding of and appropriate reaction to others’ inner cognitive and emotional states, making moral decisions and developing moral judgments are particularly challenging to those individuals whose theory of mind ability is limited or impaired. Autistic individuals, according to many psychological studies, have great difficulty in assessing and judging others’ actions because of their limited understanding of others’ motivational, intentional, and emotional states. But these social cognitive difficulties do not imply that autistic individuals are immoral or amoral. Recently, several philosophers have argued for rule-based Kantianism as a model of autistic moral agency. Instead of assessing others’ inner intentions and desires, autistic individuals can behave morally by formulating and applying general rules of conduct.
Even though this Kantian model of autistic moral agency is compatible with autistic moral behavior, it does not explain the broad moral abilities of autistic individuals. Many psychologists report that people tend to, and are able to, initiate helping and caring behaviors independently of their mind-reading abilities when they observe others’ actual or potential pain and suffering. Specifically, recent brain imaging research in social neuroscience demonstrates the existence of an affective mirroring process, a basic form of empathy that functions interactively with but independently of the mind-reading ability of moral agents. By analyzing and interpreting recent studies in neuroscience, I will argue that neuroscience can contribute to the development of normative standards of autistic moral agency and that a model of moral agency based on affective mirroring is possible for autistic moral agents, independently of or in addition to the Kantian model of moral agency.

Getting Our Moral Intuitions Right: Sympathetic Thinking and Moral Judgment
Stephen Napier, Villanova University

The normative implication of much of the research on moral judgment is that we need a much more radical critique of our moral belief-forming practices. What I propose is a model of moral belief formation that follows the model of expertise in other domains, such as chess or physics. The picture I argue for is something akin to a feminist moral epistemology that is empirically informed by research on expertise. I consider what it is that confers expertise in these other well-studied domains (i.e., chess and creative problem solving) to extract lessons for moral belief formation. The result is that one must be conformed to the goods and values at stake in a morally charged situation in order to judge aright. The view is nothing new, but again, represents a feminist moral epistemology in the tradition of Murdoch (2013), Little (1995), Nussbaum (1990), and Parker (2005). What research on moral judgment confirms is that in order to conduct the critique required to correct our moral perception, we have to do something like what these philosophers recommend.
Comprehension of the moral goods at stake requires conformation to those very goods through “sympathetic” thinking (Little, 1995) or as Murdoch remarks, a “just and loving gaze directed upon an individual reality” (Murdoch, 2013, 34).

Implications of Neuroscience and Sociobiology: Real and Perceived
Ullica Segerstrale, Illinois Institute of Technology  

The Is and the Ought: On the Validity, Limits, Value—and Stewardship—of a Neuroscience of Human Ecology
James Giordano, Georgetown University

I will address what neuroscience actually studies and describes about human (interpersonal) cognition and behavior in environmental (viz. psychosocial) contexts – what is regarded as human ecology – as contributory to an understanding of what is construed to be morality and ethics (i.e., neuroethics’ “first tradition”). I will then posit that while this information is important and perhaps powerful in its capacity for imparting understanding and foci for intervention and effect, it is not explicitly normative at the social level.
Instead, interpretations and uses of such understanding – and its potential for targeted control – must be considered and are rendered on various social levels and contexts, and demand reflection, deliberation and guidance (i.e., neuroethics’ “second tradition”). In this light, I argue that neuroethics serves as a lens and mirror with which to view both human cognition and behavior, and the scope and uses of neuroscience.

Let a Thousand Methods Bloom: On the Neuroscience of Moral Judgment and the Reverse Inference Problem
Brett Karlan, Princeton University

The question of whether, and to what extent, cognitive neuroscience can inform our normative theories in ethics naturally breaks down into two separate, but related, questions: (1) is it in principle possible that neuroscientific evidence could have normative implications?; and (2) even if we answer (1) in the affirmative, in practice, do neuroscientific results actually provide us with suitable reasons to change or modify our ethical theories?
Philosophical inquiry into the neuroscience of moral judgment (NMJ) has mostly focused on (1). In this talk, I use a systematic philosophical approach to examine (2). I want to do this because, even if we can answer the in-principle question in the affirmative, I believe there is a looming methodological problem within current NMJ that threatens to undermine the enterprise in practice.
Much contemporary NMJ that makes use of functional imaging techniques, I argue, falls victim to a methodological concern known as the reverse inference problem. In short, the reverse inference problem refers to the inferential weakness that results when researchers move from low-level information about neural states to inferences about the cognitive states in subjects undergoing these neural patterns. In this talk, I argue (following Klein (2011)) that several prominent examples of NMJ fall victim to this worry (e.g. the work of Greene et al. (2001) and the “ideal experiments” proposed in Berker (2009)). This, I argue, makes the prospects for in-practice integration of ethics and neuroscience seem highly problematic.
I next examine four attempts to solve the reverse inference problem (Poldrack 2010; Klein 2010; Machery 2014; Glymour & Hanson forthcoming). I argue that each fails to “solve” the problem, but that all four offer us a way to partially close the inferential gap, as it were, and make NMJ results more likely to have cognitive implications. I call this integrative approach the methodological pluralism response to the reverse inference problem. In effect, I argue that neuroscientists should stop trying to “solve” the reverse inference problem, and should rather utilize the inferential strengths of many different cognitive neuroscientific methods in order to minimize the inferential uncertainty of NMJ results. I end the paper by expanding this methodological pluralism, noting a number of methods (especially the lesion method) that, taken in concert with the functional methods already employed by most researchers in the field, significantly reduce the amount of inferential uncertainty present when drawing conclusions about NMJ.

Naturalized Virtue Ethics and the Neuroscience of Self-Control
Matthew Childers, University of Iowa

Many social neuroscientists understand self-control and self-regulation according to a “resource” model wherein self-control is determined by a finite resource liable to exhaustion with use. Many studies seem to indicate that when this resource is depleted by subjects during various decision-intensive tasks, an absence of this hypothetical cognitive resource leads to a (decidedly akratic) refractory state called “ego depletion” which predicts a subject’s liability to failures of self-control (or “willpower”). While there are currently multiple competing models of, and challenges to, ego depletion, nonetheless the dominant model explains this phenomenon via a quantifiable absence (or presence) of blood glucose in the cortical regions of the brain, inter alia. The evidence supporting the model indicates that many “traditional” vices (e.g., irascibility, rashness, spitefulness, profligacy, etc.) are to a significant extent exacerbated by habitual states of ego depletion. Experiments which support the model indicate that infusions of carbohydrates metabolized in the brain have substantial restorative effects for a subject’s “willpower” in various situations wherein failures of self-control might otherwise be traditionally attributed to having a “vicious character.”
For those who aim to develop and defend an account of virtuous character consistent with the current neuroscience of self-control, the orthodox model of self-control affords naturalistic theories of virtue a tangible, empirically informed model with which to explain the normative causes and effects of virtue and vice. While the model is promising for developing a more “naturalized” virtue ethics in the spirit of the ancients, it may also be profitable for bolstering extant objections to virtue ethics on the basis of “situational” factors and influences which apparently undermine the supposed “stability” of virtuous character traits. In exploring these issues, I show that the normative implications of the current neuroscience of self-control are profitable for both defenders and skeptics of a naturalized virtue ethics.

Moral Theory is About Reasons, Not Motivations
Matt Jeffers, Georgia State University

My contention is that neuroscience can only have significance for ethics by (a) having action guiding implications, and (b) by describing our moral psychology, but (c) that it has no relevance at all to moral theorizing.
The hallmark of any given science is that it can conduct an experiment the results of which can either detract from, or provide evidence for, a given theory. To determine whether questions in moral theory are subject to this method we need only ask, “What experiment could be run that gives evidence for/against any normative principle or evaluative proposition?” For instance, what experiment could we run, the results of which would lend credibility or detract from the following propositions: “that humans should behave as to maximize pleasure,” or “that goodness is synonymous with pleasure,” or “that the consent of the governed gives rise to political legitimacy?” If there are no experiments which can say anything at all on questions of this sort, then science cannot contribute to normative theory.
However, science can be action guiding; indeed, it guides policy frequently. Yet the mere fact that the planet’s warming causes coastal flooding does not, by itself, say anything normative. We need to believe that flooding is harmful and that things being harmful is a reason to prevent them before we can do any normative work. Furthermore, many disciplines are action guiding, including ornithology and carpentry, but we do not think that these are specially connected with moral theorizing. Yet some contend that neuroscience might have a more intimate connection with ethics because it may be able to tell us facts about our mental lives.
However, in order for these experiments to say something about moral theory, they would not only need to tell us about the source of these moral judgments, they would need to be able to say something about the reasons for having these moral judgments. Even if our deontological judgments have emotional sources, as theorists like Joshua Greene contend, showing this says nothing whatsoever about whether the reasons we give for endorsing deontological judgments are sound or unsound. Even if Greene’s neuroscience is correct, all he has shown is that our moral beliefs have emotional causal antecedents, not that the reasons given for holding our deontological beliefs were false. If anything, such neuroscience could describe our psychological character, a descriptive fact, but it does not influence our moral theory.

Three Goals for a Third Tradition of Neuroethics
Geoff Holtzman, Illinois Institute of Technology

Neuroethics is typically conceived as consisting of two traditions: the normative ethical implications (or biomedical ethics) of neuroscientific practice, and the neuroscience of moral judgment. However, recent interest has emerged regarding the normative ethical implications of the neuroscience of moral judgment. Projects pursuing these implications are sufficiently different from those in the first two traditions to be considered exemplars of a third, distinct tradition. Recognizing this, we can begin to identify the specific goals, characteristic assumptions, and unique arguments of this tradition, and the sorts of objections they face. It is from this point of departure that I suggest three strategies that may help gird neuroethicists working within this third tradition against the sorts of objections they have typically faced.

Can Neuroscience Help Select the Correct Metamorality?
Thomas Noah, University of Pennsylvania

Psychologist Joshua Greene thinks that neuroscience (or, more properly, neuroscientific evidence) can help us to identify the correct metamorality. Metamorality, according to Greene, is a global morality that rationally resolves disagreements between competing local moralities.
According to Greene, any acceptable metamorality must satisfy what I call Possession: human beings with otherwise normal psychologies must possess the cognitive and motivational resources necessary to both understand and care about that which the candidate system says they must understand and care about. Greene thinks that only classical utilitarianism satisfies Possession. By contrast, rights-oriented deontology and neo-Aristotelian virtue ethics fail as metamoral candidates because they do not satisfy Possession. This kind of argument relies upon acceptance of the “ought implies can” principle and the belief that neuroscientific data can help identify the space of possibility. I provide evidence that if Possession is a matter of actual rather than possible understanding and valuing, then rights-based deontology and neo-Aristotelian virtue ethics do not satisfy Possession, but neither does classical utilitarianism. Further, Greene’s theory requires that we not read Possession in terms of actuality in order for the metamoral problem of conflicting local moralities to get off the ground.
However, if we move from actual to possible Possession, then, I argue, Greene is forced into a dilemma:
(D1) Either people have the cognitive and motivational resources such that they can “get” utilitarianism and also deontology and virtue ethics, or
(D2) People do not have the cognitive and motivational resources such that they can “get” deontology and virtue ethics and also utilitarianism. The dilemma is true on most readings of “can.” What Greene needs then, I argue, is a sense of “can” such that people can get utilitarianism but not deontology or virtue ethics.
I argue that he thinks that science will provide us with the relevant sense of “can.” There is something about the brain, he thinks, that makes utilitarianism quite attractive. However, neuroscience does not provide the right interpretation of the modality such that classical utilitarianism is uniquely selected as the correct metamorality.

Why Targeted Debunking Arguments in Ethics Must Be More Targeted
Theresa Lopez, Hamilton College

While global debunking arguments aim to undermine all of our moral beliefs, targeted arguments aim to undermine just some subset of them. Based on the growing body of scientific research on moral judgment in recent years, some have claimed we have reason to be suspicious of our intuitive moral judgments in particular. For example, Joshua Greene (2008) appeals to neuroscientific findings to argue that we should replace much of our intuitive “deontological” moral views with a utilitarian moral framework. I argue that this scientific case for utilitarianism is in error. Greene’s (2014) revised case against deontology employs an analogy with a digital camera: he likens our dual systems of moral judgment to a digital camera’s automatic and manual modes. He argues that in familiar situations automatic mode works well, efficiently but inflexibly, while unfamiliar situations demand the flexibility of the manual mode and hence utilitarian thinking. Unfamiliar situations are defined as those “with which we have inadequate evolutionary, cultural, or personal experience.” However, specifying which of our automatic responses are insufficiently shaped by relevant experience, without begging any normative questions, proves to be difficult. Indeed, there is reason to believe many of our automatic responses are attuned via experience to abstract moral considerations such as intentions, which are holistically assessed along with other factors like consequences (Kahane, 2014; Lopez et al., 2009; Railton, 2014; Sterelny, 2010). Scientific findings, I argue, are unlikely to settle debates among competing normative-ethical schools of thought. However, in contrast to those who claim science is of little relevance to questions of moral theory, I defend an important albeit more limited role for science in helping us discover blind spots and biases in moral thought that should prompt further reflection. Sometimes the path to revision is clear-cut; in many cases, difficult questions about how we ought to weigh competing moral considerations are brought to the fore. What we gain from a scientifically informed understanding of moral thought thus supplements, but cannot supplant, properly moral theorizing.

Archimedes in the Lab
Tommaso Bruni, Western University and Regina Rini, New York University

Moral disagreement is commonplace. Since it is notoriously difficult to resolve moral disagreements by using the tools and resources of moral philosophy, a possible strategy is to look for a solution outside of moral philosophy.
In particular, a very well-established body of work in cognitive science apparently shows that people tend to make predictable mistakes in certain reasoning domains. If we could show that some patterns of moral reasoning resemble defective non-moral reasoning, then we might have identified bad moral reasoning. Some ethicists actually adopt this strategy and try to settle moral disagreement by ruling out particular types of moral reasoning on the basis of cognitive scientific evidence.
If cognitive science could be used in this way, it would play an Archimedean role in ethics. In other words, it would act as a neutral perspective allowing resolution of disagreement. More specifically, cognitive science could be employed as an Archimedean cleaver: a criterion, expressed in neutral scientific terms, that allows us to divide the good moral reasoning from the bad.
An Archimedean cleaver must possess two essential features:
(1) to be Archimedean, the criterion really must be morally neutral and cannot assume any moral beliefs,
(2) to be a cleaver, it must demonstrate clearly that some forms of moral reasoning are good and some are bad.
We argue that the cognitive science of reasoning is not well-suited to this Archimedean role. Through discussion of several influential research programs, we show that such attempts tend to either fail to be Archimedean (by assuming controversial moral views) or fail to settle disagreement (by getting caught up in unsettled debates about rationality). Finally, we speculate that these outcomes reflect a fundamental sort of normative disagreement, which can be applied to the domains of both morality and rationality, but cannot be avoided.

The Reactive Roots of Retribution: Normative Implications of the Neurobiology of Punishment
Isaac Wiegman, Texas State University

I argue that knowledge of the neural processes underlying the phenomena of anger and punishment is likely to undermine the role of desert in decisions about punishment. I begin by arguing for a distinction between different motivational processes, based on research in motivational neuroscience: reactive processes and prospective processes. The latter select actions based on the value of their anticipated outcome, estimated by a causal model. By contrast, reactive processes select actions in relation to past or present occurrences, free of any causal model of the expected outcome. For instance, a revenge motive is reactive, because it selects actions that “pay back” past transgressions. In other words, the action is not selected because of the anticipated outcome but because of its “fit” with past actions of others, usually understood in terms of what someone else “deserves” given what they did. When reactive processes motivate action, the agent sometimes acts “as if” there is some reason to act aside from the consequences of the action (non-consequentialist reasons). For instance, payback seems fitting when it gives an offender what they deserve, and it is fitting independently of its other consequences. However, if reactive processes are a product of natural selection, they were selected for the good consequences that they bring about (e.g. increased fitness through deterrence). These processes are thus prone to mislead us. Their functional role creates the illusion of non-consequentialist reasons, even though the illusion was successful in our evolutionary past precisely because of its biological consequences. Common intuitions about punishment seem to be illusory in exactly this way. Moreover, emerging work in neuroeconomics and affective neuroscience can help to undermine these intuitions. First, this work suggests that punishment intuitions and behaviors are reactive rather than prospective. Second, this work coheres with a prominent evolutionary explanation for why these punishment strategies were adaptive in our recent evolutionary past.

Did My Brain Make Me Do It?: Free Will and Neuroscience
Eddy Nahmias, Georgia State University

‘Willusionists’ argue that science is discovering that free will is an illusion. Their arguments take a variety of forms, but they often suggest that if the brain is responsible for our actions, then we are not. And they predict that ordinary people share this view. I will discuss some evidence that most people do not think that free will or responsibility conflict with the possibility that our decisions could be perfectly predicted based on earlier brain activity. I will consider why this possibility might appear problematic but why it shouldn’t. Once we define free will properly, we see that neuroscience and psychology can help to explain how it works, rather than explain it away. Human free will is allowed by a remarkable assembly of neuropsychological capacities, including imagination, control of attention, valuing, and ‘self-habituation’.

Does Pain Asymbolia Undermine Classical Moral Hedonistic Utilitarianism?
Rory Svarc, King’s College, London

In this paper I will argue that neuroscientific evidence regarding the phenomenon of ‘pain asymbolia’ is normatively relevant, and represents a fruitful opportunity for interdisciplinary engagement between clinicians, neuroscientists, and philosophers. I will begin by outlining cases of pain asymbolia, where patients say things like ‘it hurts… but it doesn’t bother me’ (Pötzl and Stengel, 1937), alongside other counter-intuitive examples where subjects insist they feel pain, but that it is not ‘bad’ for them. Following Grahek (2001), I will show that this allows us to tease apart two concepts of pain – ‘being in pain’ and ‘feeling pain’. I will then give a brief overview of the neuroscientific evidence surrounding the cause of these unusual symptoms, trying to highlight what may be the neural structures underpinning these higher-level psychological concepts. Next, I will delve more deeply into normative theory, looking at many of the classic formulations of consequentialism that focus on the concept of ‘pain’ (Sidgwick, 1874; Popper, 2012; Singer and Lazari-Radek, 2014), going on to argue that the morally relevant concept that these thinkers were trying to latch onto is best described by ‘feeling pain’ rather than ‘being in pain’. I will claim that this makes ‘hedonic’ accounts of utilitarianism somewhat less plausible, and offer a tentative argument that a more neuroscientifically informed account collapses into a sophisticated variant on ‘preference utilitarianism’. I will conclude with a reflection on the relationship between neuroscience and moral philosophy. On the one hand, I will argue that neuroscience and clinical data have a role not just in the application of our moral theories, but in their underlying axiology as well. I will claim that it is too often ignored that, even if some concepts can in principle be teased apart by armchair analysis, it often requires empirical information to shock us into realising it is possible. Still, I will conclude by saying that we need philosophers to reflect on the new data, carefully looking out for the normatively relevant parts of neuroscience and how they fit into the broader historical debates in moral philosophy.

Neuroscience, Moral Motivation, and the Structure of Moral Psychology
Chris Zarpentine, Wilkes University

Many philosophers have been generally skeptical that neuroscience has any normative implications: while neuroscience might tell us how things do work, it doesn’t tell us how they ought to. Others have drawn on neuroscience to provide counterexamples to philosophical claims in ethics and moral psychology: neuroscience can have normative implications after all. Here, I take a different approach. In epistemology, Gilbert Harman has suggested that reasoning is like digestion: we are limited in the kinds of normative guidance that can be given by the structures and processes that are already in place for reasoning and digestion, respectively. I think a similar claim applies to moral psychology: when it comes to moral decision making, our normative theories must be informed by an understanding of the structures and processes that are already in place. While many philosophers pay lip service to such accounts, few have seriously taken up this approach. In my view, the best way to evaluate such an approach is to try it and see how well it works. That is what I do in this paper. I focus on the philosophical debate about the relation between moral judgment and motivation. Drawing on recent work in neuroscience and psychopathology, I argue that there is a structural dissociation between emotional or affective processes that are motivational and more ‘cognitive’ processes, which lack a direct connection to motivational structures. While some philosophical accounts seem to privilege the processes on one or the other side of this structural divide, I argue that it is a mistake to do so. Moral agency requires the appropriate interaction of both kinds of processes. This claim is supported by reflection on an empirically informed account of the structure of moral psychology. Not only does the account sketched here help to explain the persistence of this philosophical debate, it also provides practical, normative guidance about moral thought and action: how we can improve moral deliberation and how we can overcome the kinds of motivational failures to which, because of the way human moral psychology is structured, we are particularly prone. In this way, I aim to demonstrate one way in which neuroscience can contribute to normative theorizing.

Nervous Morals: What the Mistakes of Psychiatric Ethics Can Teach Us About Neuroethics
Matthew Ruble, Appalachian State University

Why should we think that ‘more facts’ entails ‘better morality’? We do think this way about a great number of contemporary moral issues. After all, forming a moral view in an evidence-free manner seems both morally and epistemically vicious. But failing to muster an empirically informed moral view is only one way to err regarding the relationship between empirical facts and moral norms. We might also make the mistake of over-relying on facts when engaging moral queries. Such is the mistake that psychiatric ethics commits when attempting to adopt the ‘facts first, then values’ approach employed in medical ethics. This methodology assumes that we begin moral inquiry well equipped with uncontroversial factual evidence, only after which we engage contested moral values. This methodology of arguing from undisputed facts to disputed values is doomed to fail psychiatric ethics simply because we do not have the luxury of undisputed facts, and thus cannot move from factual claims to moral claims as easily as we do in non-psychiatric medicine. This mistake has been well documented by KWM Fulford (1989, 1990, etc.). This chapter attempts to apply this mistake of psychiatric ethics as a moral and methodological lesson for future neuroethics. Nowhere is the mistake more evident than in the neural evidence alleged to establish the moral innocence of the psychopath. Any normative claim that rests on neural observation and empirical evidence faces two objections: first, we should not assume that alleged ‘neural facts’ are value-free; and second, we should not assume that the ‘facts first, then values’ approach of medical ethics is an appropriate methodology for neuroethics. We should avoid simultaneously over-valuing facts and under-valuing values in future inquiry into the normative implications of neuroscience. In so far as we are able to accomplish this, neuroethics stands to benefit from the hard-earned lessons of psychiatric ethics. Of course neuroscience has normative implications! The challenge is for us to do neuroethics well, and we cannot accomplish this by delaying moral analysis until the facts have departed the laboratory. This chapter closes with a plea for interdisciplinary teams, rather than isolated scholars in isolated pre-defined academic specialties, to be the fundamental unit of research into the normative implications of neuroscience.

Neuromachean Ethics
Garrett Merriam, University of Southern Indiana

There is a common view that science can only give us facts, and can say nothing about ethics or other value-laden fields. This ‘is/ought gap’ is starting to face challenges from developments in neuroscience. Close study of the human brain is starting to reveal novel insights into how moral reasoning works, and what makes it break down. In this paper I will argue for two central conclusions. First, that the ‘is/ought gap’ is not as capacious as it may seem; science can indeed help us derive a (moral) ought from a (neural) is. Second, an Aristotelian eudemonistic ethics offers a promising but underappreciated framework from which to understand both the methods and the substance of neuroethics. To the first point: moral reasoning is something that the brain does. It is a function that has evolved over time as the result of a variety of evolutionary pressures. Our brains have numerous moral ‘centers’ that operate in different ways, sometimes cooperatively, sometimes combatively. We understand that when the somatosensory cortex is damaged a variety of perceptual impairments result; why should we not, by parity of reasoning, understand that when these moral ‘centers’ are damaged, a variety of ‘moral impairments’ result?
Such a naturalistic approach to neuroethics fits comfortably with a broadly Aristotelian approach to moral thinking, both methodologically and substantively. Methodologically, Aristotle championed an approach that was pluralistic and pragmatic rather than principled. Aristotle would no doubt find himself right at home using neuroimaging to better understand moral reasoning and the good life.
The substance of Aristotle’s moral thinking brings us to our second point: a eudemonistic theory of ethics offers a promising framework for thinking about many recent findings in neuroscience and cognitive science more broadly. There has been an Aristotelian vein running through psychology for over a century; Jung, Maslow, and Frankl all had decidedly Aristotelian affinities. More recently, the positive psychology of Seligman and Csikszentmihalyi has also stressed ‘flourishing’ as a core concept, and neuroscience is starting to follow suit (e.g., work by Gary Lewis and Britta Holzel). Such work implies a neuroscientifically objective framework for understanding human wellbeing that is based in a pluralistic value system that is broadly Aristotelian in nature.
In closing, I suggest that the over-emphasis on ‘trolleyology’ cases in neuroethics may lead to a neglect of broader questions of ‘living well.’ Given that human beings with healthy, functioning brains flourish best when they are responsive to a plurality of moral values, we should be skeptical of theories that offer straightforward solutions to complex moral dilemmas.

Venue

Hermann Hall, Alumni Lounge (Lower Level), Illinois Institute of Technology
3241 S. Federal Street
Chicago, IL 60616 United States