Sunday, April 13, 2008

Sylviane--Week 11 Blog (Decision Making and Neuroethics)

A theme that I found particularly interesting, and one that resonated in many of this week’s readings on decision making and neuroethics, was the idea of evolutionarily conserved mechanisms. In his article on neuroeconomics and cross-currents in research on decision making, Sanfey discusses a “growing tradition in neuroscience in which optimal performance is defined for a given behavioral domain, and is then used for constructing theories about underlying neural function.” He goes on to comment that while this technique has its merits, and while complex behavior can be optimal, “simpler evolutionarily conserved mechanisms might prove to be closer to optimal, or at least to have been so in the environment in which they evolved.” This idea intrigued me, for it seems that a great deal of human behavior, and therefore likely the neural foundations of these behaviors, takes the simplest form that has benefited mankind from the earliest generations. A number of the other articles conveyed this idea as well; Daw’s article on cortical substrates for exploratory decisions in humans mentioned that the classic “exploration-exploitation” dilemma is “far from representing idle curiosity” and that “such exploration is often critical for organisms to discover how best to harvest resources such as food and water.” Greene’s article on moral judgment states that intuitions such as reciprocity, loyalty, purity, and suffering are shaped as much by natural selection as they are by cultural forces. Finally, Grimes’s article on human trust discusses the evolutionary advantages of trusting one another: “Our social brain is also adapted to be cooperative. Individuals can benefit by working together. But that requires trust, which is why, according to Zak, we have a biological urge to trust one another.”

I never cease to be amazed by molecular biology, but this is one of the first times I have been so fascinated by evolutionary biology. Morality, and specifically trust, is something whose origin I had never truly considered, seeing as I have encountered both trusting and untrusting people in my life. Grimes’s explanation of the simple evolutionary advantage behind this human trait appears so obvious after reading his article that it leads me to think many other human behaviors are likely the result of similarly simple biological adaptations. It would be extremely interesting if there were some way to compare the brains, both in structure and in functioning, of the earliest humans with those of humans today, to see how they have evolved over time, or whether they even have.

The other article on morality (Zimmer’s “Whose Life Would You Save?”) was also interesting to read, but for different reasons. In his brief recap of the history of the study of morality, Zimmer mentions the philosopher David Hume, who argued that people call an act good not because they rationally determine it to be so but because it makes them feel good. Similarly, an act is deemed bad if it fills someone with disgust, and these ideas led Hume to propose that moral knowledge comes partly “from an immediate feeling and finer internal sense.” This reminded me of countless conversations concerning moral issues in which someone said that something was wrong “because it just was.” I am curious about the neural mechanisms that could potentially support Hume’s theory. In the article, Greene uses fMRI to examine brain activity while subjects ponder moral dilemmas. Are there specific regions of the brain, present in all humans, that allow not only for a general sense of morality but also for a similar sense of what is right and wrong? Further, is empathy the key to this? Later in the article, Greene mentions studies finding that while criminal psychopaths can acknowledge that other people have emotions, they often have trouble actually recognizing those emotions. Finally, Greene argues that “different cultures produce different kinds of moral intuition and different kinds of brain.” This view, which I suppose is a sort of cultural morality, seems to suggest that morality and moral development are guided more by social and cultural factors than biological ones. I am curious about how brain activation patterns would compare in individuals from a variety of cultures.

8 comments:

Oliver Edwards said...

I also found Zimmer's comments on morality very interesting, and I think that examining the subject on a neurological level is a fairly novel and important approach to the subject. Hume was revolutionary in that he transformed our notions of right and wrong by suggesting that we actually make our decisions based on indirect personal gain. He questioned the power of rationality in the process of our act of deciding. The acceptance of the fact that decision making could be heavily influenced by emotion may partly be due to the work of philosophy, and only now are we discovering the neural substrates that prove this abstract proposition.

I am also reminded of LeDoux's assertion that almost all cognitive appraisals of our environment are influenced by a mutual interaction of emotion and reason. We could think of decision making, even in its most banal form, as an interaction between the high and low roads of our brain. When deciding whether or not to do a good deed, our brain may experience a wrestling match between the amygdala and the cortex. And when the amygdala wins, we are conspicuously unaware of why we choose the way we do.

Kevin Goldstein said...

I’m suddenly fascinated by the anterior cingulate cortex or ACC, not least of all because it opens up a number of our persistent questions about the instability of the reason-emotion dichotomy. To a certain extent we find in moral ambivalence / anguish a distancing effect, whereby spontaneous decision making gives way to a relatively extended evaluative procedure. One would think this problem solving type of procedure would lead toward abstraction, or a cognitive-philosophical decision, the “Kantian” or “utilitarian” position, but is this the case? How is the ACC affected by so-called impersonal versus personal moral evaluations, as explained in the Greene and Haidt article?

The subject seems to return to an impersonal stance, a hypothetical, even after this sequence of deliberation. But would the subject act in the same way outside of these controlled conditions, when the hypothetical becomes the palpable? I think even in these controlled experiments the subjects are experiencing a highly emotionally charged evaluative process; in accordance with other readings this week, the emotional component of this process cannot be overlooked, bringing us back to the Greene et al. article.

Evolutionary history, as we have discussed a great deal, represents an underlying stratum of affective evaluation. It is fairly obvious that reason as such, whether utilitarian or Kantian, emerges from this well of experience. The variety of moral responses, as the Zimmer article highlights, reflects rapidly changing environments in which to make these moral decisions. More than ever it is evident that the cognitive-philosophical-social functions on a continuum with the evolutionary-emotive.

Molly Moody said...

I thought that Bechara’s “Neuroethics” paper was one of the most practical and easily relatable topics that I’ve read in this class so far. Economics brings choice and judgment to neuroscience’s “specialized neural systems of the brain [which] coordinate their activities to solve complex and often novel problems.” This puts the entire subject of neuroscience into a greater context for learning through these economic behaviors. In other words: we can learn so much about the brain through some of the big-picture processes that occur in government and economics. I personally feel like I would have found my economics class a lot less boring if I could have looked at it from a neuroethical view. I found Martino’s “Frames” article interesting and relatable in the way subjects chose the more rational “sure” answer over an unforeseeable emotional gamble. The idea that valence, or greater emotional decision-making, seems more stressful is backed up by the amygdala’s role in decision making and the enhanced ACC activity that this paper describes. This idea ties in so perfectly with Zimmer’s “Whose Life” paper. Zimmer says that the feeling of “good” comes not from rational thought but from emotional thought. This raises a lot of questions about the validity of moral judgments and the legal system as a whole. Why is right, right? I always felt that in the movie “The Good Son” the mother probably would have saved her own son (Macaulay Culkin) from falling off the cliff rather than her distant nephew (Elijah Wood). Obviously her decision to save the child that wasn’t her own (no matter how evil her son was) was too rational to ever happen.

Sarah Reifschneider said...

I had never before thought of moral judgment as a genetic outcome. Though, as you comment on Greene’s final argument, he “seems to suggest that morality and moral development are guided more by social and cultural factors than biological ones.” It is one of the most intriguing questions whether our moral judgment in a given moment is the unconscious result of emotions influencing cognition, or of social rules that have become so intrinsically part of our natural thought process. Take, for example, prejudices against people; I doubt we are born with them. Then again, as you say, sometimes there is the undeniable intuition that something is wrong because we somehow just know it is. How come?
The neuroanatomical table in the article was very helpful to me in relating the brain to the idea of right and wrong judgment. However, knowing that these regions do not participate only and specifically in moral judgment leaves room for wonder. Here is where, for me, the ancient idea of a soul becomes manifest in the moral brain.
I really enjoyed your ideas about looking at brains from different cultures and comparing these regions; further, I wonder how they look developmentally. By this I mean whether the regions become more receptive or adapt as one matures, or whether they are as such from birth. I also wondered about people who constantly seem to make bad choices for themselves and are caught in vicious circles: do some people simply have ‘superior’ orbitofrontal cortices? And animals, could they make moral choices if the process were merely based on emotions rather than intellect?
I also liked the neuroeconomic idea and the value of cross-field learning, a sort of self-congratulatory achievement in this class; perhaps the only way to try and untangle such complex issues is by looking at the problem from all the different angles.

Endira said...

I also found intriguing Greene's argument that moral intuition is driven more predominantly by social factors than by biological ones. I also wonder, if this is the case, how relevant plasticity is, or to what extent the brain absorbs the cultural or social norms that provide the basis for a moral system. Do these cultural factors shape the brain during the primary developmental years? The fact that Greene emphasizes the variety of processes involved in the formation of moral judgment and intuition reveals to me that it is perhaps impossible to determine the extent to which social and biological factors influence moral development. As some of the examples also reveal, instinctive moral judgment seems to be based so much on situation, context, and individual experience, which influence whether impersonal or personal judgment is used.

Molly Esp said...

This week's intersection of economics, psychology, and biology fascinated me. I especially appreciated the application of economic terms such as uncertainty and game theory to more psychological ones such as reward, motivation, and morality, and then the application of these principles to the study of brain images in order to conclude something greater about how emotion affects rationality.
I had never before considered how economic theory could affect neuroscience. The third article's section on "How neuroscience can inform economics: the benefits of a multiple system approach" encompassed a greater theme of this week's readings: decisions are not informed by one thought process alone, and the study of decision-making cannot best be undertaken through one academic discipline. I suppose this way of thinking appeals to me because it promotes moderation and balance.