Sunday, April 13, 2008

Sylviane--Week 11 Blog (Decision Making and Neuroethics)

A theme that I found particularly interesting, and one that resonated in many of this week's readings on decision making and neuroethics, was the idea of evolutionarily conserved mechanisms. In his article on neuroeconomics and cross-currents in research on decision making, Sanfey discusses a "growing tradition in neuroscience in which optimal performance is defined for a given behavioral domain, and is then used for constructing theories about underlying neural function." He goes on to comment that while this technique has its merits, and while complex behavior can be optimal, "simpler evolutionarily conserved mechanisms might prove to be closer to optimal, or at least to have been so in the environment in which they evolved." This idea intrigued me, for it seems that a great deal of human behavior, and therefore likely the neural foundations of these behaviors, takes the simplest form that has benefited mankind from the earliest generations. A number of the other articles also conveyed this idea. Daw's article on cortical substrates for exploratory decisions in humans mentioned that the classic "exploration-exploitation" dilemma is "far from representing idle curiosity" and that "such exploration is often critical for organisms to discover how best to harvest resources such as food and water." Greene's article on moral judgment states that intuitions such as reciprocity, loyalty, purity, and suffering are shaped as much by natural selection as they are by cultural forces. Finally, Grimes's article on human trust discusses the evolutionary advantages of trusting one another: "Our social brain is also adapted to be cooperative. Individuals can benefit by working together. But that requires trust, which is why, according to Zak, we have a biological urge to trust one another."
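To make the exploration-exploitation dilemma concrete, here is a toy "forager" in Python using the epsilon-greedy rule, one of the simplest strategies in that literature (Daw's study compared richer models against it, and all the numbers here are invented for illustration). The basic tension it shows: mostly exploit the patch you believe yields the most, but occasionally explore the alternative in case the world is better elsewhere.

import random

# Toy epsilon-greedy forager: two patches with hidden average payoffs.
# All numbers are invented for illustration.
true_payoffs = {"patch A": 1.0, "patch B": 3.0}   # hidden from the forager
estimates = {"patch A": 0.0, "patch B": 0.0}      # the forager's beliefs
counts = {"patch A": 0, "patch B": 0}

random.seed(0)
for trial in range(200):
    if random.random() < 0.1:                     # explore 10% of the time
        choice = random.choice(list(true_payoffs))
    else:                                         # otherwise exploit the best-looking patch
        choice = max(estimates, key=estimates.get)
    reward = random.gauss(true_payoffs[choice], 1.0)
    counts[choice] += 1
    # incremental average: a running estimate of each patch's yield
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print(estimates)   # the estimates drift toward the true payoffs of 1.0 and 3.0

Even this crude strategy "harvests resources" quite well, which is the point Sanfey raises: a simple conserved mechanism can be close to optimal in the environment it evolved for.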

I never cease to be amazed by molecular biology, but this is one of the first times I have been so fascinated by evolutionary biology. Morality, and specifically trust, is something whose origin I had never truly considered, seeing as I have encountered both trusting and untrusting people in my life. Grimes's explanation for the simple evolutionary advantage behind this human trait appears so obvious after reading his article, leading me to think that many human behaviors are likely the result of similarly simple biological adaptations. It would be extremely interesting if there were some way to compare the brains, both in structure and in functioning, of the earliest humans with those of humans today to see how they have evolved over time, or whether they even have.

The other article on morality (Zimmer's "Whose Life Would You Save?") was also interesting to read, but for different reasons. In his brief recap of the history of the study of morality, he mentions the philosopher David Hume, who argued that people call an act good not because they rationally determine it to be so but because it makes them feel good. Similarly, an act is deemed bad if it fills someone with disgust, and these ideas led him to propose that moral knowledge comes partly from an "immediate feeling and finer internal sense." This reminded me of countless conversations concerning moral issues in which someone said that something was wrong "because it just was." I am curious about the neural mechanisms that could potentially support Hume's theory. In the article, Greene uses fMRI to examine brain activity while participants ponder moral dilemmas. Are there specific regions of the brain, present in all humans, that allow not only for a general sense of morality but also for a similar sense of what is right and wrong? Further, is empathy the key to this? Later in the article, Greene mentions studies finding that while criminal psychopaths can acknowledge that others have emotions, they often have trouble actually recognizing those emotions. Finally, Greene argues that "different cultures produce different kinds of moral intuition and different kinds of brain." This view, which I suppose is a sort of cultural morality, seems to suggest that morality and moral development are guided more by social and cultural factors than biological ones. I am curious how brain activation patterns would compare across individuals from a variety of cultures.

To be, or not to be.... a proponent of the multi-system view?

Super corny title, yes.

For anyone that has iTunes, and that's probably everyone, I found a Stanford podcast that relates directly to the topic we're discussing, though I haven't had a chance to listen to it yet. Get on iTunes, select "iTunes Store" from the toolbar on the left > search "Stanford U" > select "Stanford U" > go to "Health and Medicine" > choose "Mental Health" > the podcast is "Perception, Decision and Reward: Toward a Neurobiology of Decision-making" by William T. Newsome. Hope you enjoy!

I'd like to start the body of my post with a quick summary of the Somatic Marker Hypothesis, which I feel the Bechara et al. article did not quite explain. (This is a mix of my own limited knowledge supplemented by Wikipedia.) Essentially, the SMH proposes the existence of a mechanism through which emotional processes may either guide or bias behavior, particularly in the realm of decision-making. This proposal indicates that the view held by the authors is one of a multi-system decision-making process. Oftentimes, one has to make a decision between conflicting alternatives, at which point cognitive processes may become overloaded and unable to provide an informed option. It is here that somatic markers come into play; somatic markers are affective states that have been induced by reinforcing stimuli from the environment.
On a superficial level, I was highly entertained by the way Bechara et al. went about defending their hypothesis, and given what little I do know about the matter, I adamantly support them in their defense. Though I am not certain that I wholeheartedly agree with their hypothesis, they present compelling points. For instance, Bechara et al. studied patients with VMPC damage, whereas Maia et al. studied regular, healthy participants. Moreover, Bechara et al. point out that Maia et al.'s study "undermines traditional methods for identifying implicit knowledge" (159). The reason for this accusation is that Maia et al. simply questioned their participants about what they knew, undercutting the idea of implicit knowledge, which may be unconscious. In the end, Maia et al. show that even normal participants without damage to the VMPC, and with adequate knowledge, are not guaranteed to make the correct decision.
Moving on, while reading the article by De Martino et al., I found myself thinking, "Aha! This sounds like the SMH." However, while De Martino et al. present two systems through which information is processed in decision-making, I feel that they put forth the idea that the overriding system is one of "simple heuristics," or trial and error, while Bechara et al. (though I may be mistaken) propose that emotions, or affective states, are the deciding factor. To simplify, both believe that decisions are not made in the brain by only one system; when the fast and dirty system falls through, another one comes up to take the slack and make the final decision. The difference between the two papers is that the authors differ on which of the two systems is the overriding one.
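For anyone who thinks better in code, here is a toy sketch of that shared multi-system skeleton. To be clear, this is my own made-up illustration, not a model from any of the papers; the options, numbers, and threshold are all invented. It only shows the common shape: a fast system answers first, and a second system steps in when the first one falls through.

def system1(options):
    """Fast and dirty: take the first option that produces a positive gut feeling."""
    for option, gut_feeling in options:
        if gut_feeling > 0:
            return option, gut_feeling
    return options[0]   # nothing feels good; return the first with its (weak) feeling

def system2(options):
    """Slow and deliberate: weigh every option and pick the best."""
    return max(options, key=lambda pair: pair[1])

def decide(options, confidence_threshold=0.5):
    choice, confidence = system1(options)       # the fast system answers first
    if confidence < confidence_threshold:       # a second system monitors the answer...
        choice, confidence = system2(options)   # ...and overrides when it falls through
    return choice

# (option, affective value) pairs -- purely made-up numbers
options = [("take the gamble", 0.2), ("play it safe", 0.9)]
print(decide(options))   # System 1's weak hunch is overridden: "play it safe"

Which function plays the role of the "emotional" system is exactly where the papers seem to disagree, so don't read this sketch as taking a side.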

When I read the title of the Sanfey et al. article, I was a bit taken aback to see the term "Neuroeconomics". However, once I began to read the article, it made perfect sense to me. An idea that caught my attention right off was that behavior can be interpreted as choosing alternatives with the goal of maximizing utility. To me, this seems intuitively true, and as such, I thought it might be interesting to discuss this idea in class and see how other people feel. Later, Sanfey et al. provide two processes for decision-making (as do Bechara et al. and De Martino et al.): automatic processes and controlled processes. To my understanding, automatic processing, true to its name, is quick, or "fast and dirty," and can be compared to the low road in the brain system, while controlled processing, as it is flexible and can support many goals, resembles the high road. Included in the functions of controlled processing are introspection, reasoning, etc., and so it would be reasonable to fit emotions into this category rather than that of automatic processing. This model for the interaction between the two systems, at least to me, resembles that presented by Bechara et al., though I do not wish to oversimplify the complexities of each of the different models. So my question is simple: Have I gotten it all wrong?
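As a concrete (and completely made-up) illustration of "choosing alternatives with the goal of maximizing utility," here is a minimal sketch: score each alternative by its expected utility, the probability-weighted average of its payoffs, and pick the highest. The options and numbers are invented for the example.

# option -> list of (probability, payoff) outcomes; all values invented
alternatives = {
    "safe job": [(1.0, 50)],
    "risky startup": [(0.2, 300), (0.8, 10)],
}

def expected_utility(outcomes):
    """Probability-weighted average payoff of one alternative."""
    return sum(p * payoff for p, payoff in outcomes)

for name, outcomes in alternatives.items():
    print(name, expected_utility(outcomes))       # safe job 50.0, risky startup 68.0

best = max(alternatives, key=lambda a: expected_utility(alternatives[a]))
print("choose:", best)                            # choose: risky startup

The economic framework essentially treats all behavior as some version of this calculation; the automatic, emotional processes are what it leaves out.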
I would really love to hear others weigh in on the similarities and differences among all these models of multi-system processing proposed by the different authors we read.

Automatic and Controlled Systems

Frances Clayton

Radio Lab - http://www.wnyc.org/shows/radiolab/episodes/2006/04/28

This is a fascinating Radio Lab episode that talks about exactly what we have been looking at this week. The episode is called Morality, and the link above should take you right to it. Not only do they talk to Dr. Joshua Greene and Frans de Waal (both discussed in the "Whose Life Would You Save?" article), but they also look at morality in a play group of 3-year-olds and at some of the roots of our early penitentiary system. Some of the examples we read about (the M.A.S.H. reference, for example) are mentioned and taken a step further. It is cool to hear some of the folks we are reading about talk about their own work.


The Greene and Haidt article looks at the differences between moral and non-moral decisions and between personal and impersonal experiences. In looking at brain area activation, Greene and Haidt align impersonal moral decisions more closely with other non-moral conditions than with personal moral choices. The main difference they point to is the activation of social/emotional areas in personal moral decision-making. It seems, then, that impersonal moral decisions are not processed in the brain much differently than other decision-making is. The areas of the brain shown in Table 1 of the Greene and Haidt article that are activated in impersonal moral judgment are also activated when doing other cognitive tasks.
These two different processes of decision-making can be looked at in light of automatic and controlled processing as described in the Neuroeconomics article. (112) Emotions are highly automatic (System 1) and contrast with the deliberative manner of controlled processing (System 2). It could also be said that the automatic process is in line with Kantian philosophy and controlled processing with Mill/utilitarian philosophy. (To be clear, I know that these systems are best seen on a continuum, as stated in the Neuroeconomics article.) The field of economics is based on the idea that "behavior can be interpreted as choosing alternatives with the goal of maximizing utility" (108), which can be seen as part of controlled processing. It is the automatic/System 1 kinds of processes that the economic framework does not take into account.
It is the controlled system that people are using in both non-moral and impersonal moral decision-making. It seems that it is the "personal" aspect of the moral questioning that does not allow the controlled system to override the automatic one. We do see a few people allowing the override – those few who are willing to push the man over the footbridge. The Neuroeconomics model clearly says that controlled/System 2/utilitarian processing "monitors the quality of the answer provided by System 1 and sometimes corrects and overrides these judgments." (111)
Greene suspects that the areas of the brain that light up with personal moral decisions and automatic processing are "part of a neural network that produces the emotional instinct behind many of our moral judgments." (Zimmer, 4) There will be times when System 2 disagrees with the emotional intuitions of System 1, and at that point the ACC acts as mediator – as the scales deciding whether moral intuition or rationality overrides. The Ultimatum Game, which comes up in both of the articles I have mentioned, looks at how this balance works much of the time – with the evolutionarily instilled sense of fairness outweighing reason. *
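To see why the Ultimatum Game is such a nice probe of this balance, here is a bare-bones sketch (my own toy illustration, not a model from the readings; the pot size and the ~30% fairness cutoff are invented, though rejections of low offers in roughly that range are the classic finding). A purely rational responder should accept any positive offer, since something beats nothing, yet a fairness-driven responder rejects lowball splits.

POT = 10   # dollars to divide (made-up stakes)

def rational_responder(offer):
    """Pure deliberation: accept any positive offer, since something beats nothing."""
    return offer > 0

def fairness_responder(offer, threshold=0.3):
    """Fairness intuition: reject offers below ~30% of the pot (invented cutoff)."""
    return offer >= threshold * POT

for offer in [1, 2, 3, 5]:
    print(f"offer ${offer}: rational accepts={rational_responder(offer)}, "
          f"fairness accepts={fairness_responder(offer)}")
# Offers of $1 and $2 are accepted by the rational rule but rejected by the
# fairness rule -- the sense of fairness outweighing reason.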
According to "Whose Life Would You Save?", it could be that evolution is not the only way the instincts of the automatic system are formed. Towards the end of the article, Greene uses Haidt to aid his proposition that culture may also have significant influences on a person's sense of morality. "All human societies share certain moral universals, such as fairness and sympathy. But Greene argues that different cultures produce different kinds of moral intuition and different kinds of brains." (Zimmer, 5) This concept is taken even further, with the suggestion that many of the great conflicts among humans may be rooted in brain circuitry.

* Could one not argue that rationality is at the basis of this dominance of fairness – a way to prevent a person from setting him or herself up to be "taken advantage of"? How does time play into this? What may not be reasonable in the moment could be argued to be reasonable over time. If this is the case, however, it seems to be the opposite side of the 'hyperbolic time discounting' coin. This question may not make much sense – simply mental ramblings on my part….
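For anyone unfamiliar with the term, hyperbolic time discounting is usually written as V = A / (1 + kD): the present value V of an amount A falls off steeply at short delays D and more gently at long ones. Here is a quick sketch (the standard textbook form, not taken from the readings, and the amounts and k value are invented) of the preference reversal this produces, which has exactly the "unreasonable in the moment, reasonable over time" flavor of the question above.

def hyperbolic(amount, delay, k=0.5):
    """Hyperbolic discounting: V = A / (1 + k*D). The k value is invented."""
    return amount / (1 + k * delay)

# $50 available immediately vs. $100 available 4 time units later
for shift in [0, 10]:   # evaluate now, then with both options pushed 10 units out
    v_small = hyperbolic(50, 0 + shift)
    v_large = hyperbolic(100, 4 + shift)
    print(f"shift={shift}: $50 is worth {v_small:.1f}, $100 is worth {v_large:.1f}")
# shift=0:  $50 is worth 50.0, $100 is worth 33.3  -> grab the $50 now
# shift=10: $50 is worth 8.3,  $100 is worth 12.5  -> wait for the $100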

Moral Questions

Suzanne Ardanowski

Feeling Brain

4-11-08


            Whose life would you save?  Those moral questions were always so impossible to answer.  I can follow the logic of moral judgments occurring on a neuronal level, but I think Zimmer is a little confusing/misleading when he states “if right and wrong are nothing more than the instinctive firing of neurons, why bother being good?” (p.5).  He then goes on to say, “by the time we become adults, we’re wired with emotional responses that guide our judgments for the rest of our lives” (p.5).  This statement sounds like the pathways are a result of learned behavior. Maybe he was referring to the firing as instinctual, but saying “nothing more than” is confusing to me. It is more, a lot more, as he recognizes when he discusses genes, culture, and personal experience. Furthermore, it is possible to change these pathways. The firing may be instinctual, but the pathway isn’t.

            While we may wish for the decision-making process to be "understood in terms of unitary evaluative and decision making systems" (Sanfey et al., p.111), as the economic approach suggests, I think this is a tall order for the subjective, multifaceted human experience. I think science, and people in general, want a clear, definitive answer, with clearly predictable and measurable results. Sanfey et al. also speak to the "assumption of optimality" and the desire for formal theory (p.109). They question the possibility of a single system, noting how different systems can compete, causing different dispositions toward the same information (p.111). The descriptions of System 1 (automatic) and System 2 (controlled) reminded me of LeDoux's low road/high road comparison. It is amazing how much of our functioning is the combination of the unconscious/automatic and the conscious/cognitive. We have discussed this theme a lot. I think we often tend to minimize the intuitive System 1 in favor of the mighty System 2, because as a culture we devalue things we cannot measure. But haven't we all had those times when we say, "I knew it, but I didn't say it, do it…etc."? I think you can become more in tune with System 1 if you give it more value. It makes sense to me that "strategic interactions between individuals involve an interplay between emotion and deliberation" (p.113). It makes me think of the cartoon angel/devil on your shoulder.

            Speaking of consciousness, the Bechara et al. article states, "pure cognitive processes unassisted by emotional signals do not guarantee normal behavior in the face of adequate knowledge" (p.160). I think this is strong support for the value I place on System 1.

            I remember learning about the Kohlberg moral reasoning scale last semester. We spoke a lot about how biased this scale was, and how it valued certain kinds of reasoning while not even considering others. So I am not convinced that it is a good measure of moral judgment. I thought it was really interesting that despite having preserved IQ, cognitive function, and abstract social knowledge, patients with prefrontal damage had "disastrous real life judgment" (Greene & Haidt, p.518). The whole idea of emotions influencing moral judgment makes sense. It makes me wonder: if there is "no specifically moral part of the brain" (Greene & Haidt, p.522), then is it possible to really be objective? When we give advice, are on a jury, or work with children and families, can we ever truly be impartial? I don't think so, even if we think we are.

            I am also curious whether antipsychotic drugs target the areas discussed in the Greene & Haidt article, and I would like to know whether drug treatment can improve moral behavior.