Sunday, April 13, 2008

Automatic and Controlling Systems

Frances Clayton

Radio Lab - http://www.wnyc.org/shows/radiolab/episodes/2006/04/28

This is a fascinating Radio Lab episode that covers exactly what we have been looking at this week. The episode is called Morality, and the link above should take you right to it. Not only do they talk to Dr. Joshua Greene and Frans de Waal (both discussed in the "Whose Life Would You Save?" article), but they also look at morality in a play group of 3-year-olds and at some of the roots of our early penitentiary system. Some of the examples we read about (the M.A.S.H. reference, for example) are mentioned and taken a step further. It is cool to hear some of the folks we are reading about talk about their own work.


The Greene and Haidt article looks at the differences between moral and non-moral decisions and between personal and impersonal experiences. Looking at brain-area activation, Greene and Haidt align impersonal moral decisions more closely with other non-moral conditions than with personal moral choices. The main difference they point to is the activation of social/emotional areas in personal moral decision-making. It seems, then, that impersonal moral decisions are not processed in the brain much differently than other decision-making. The areas of the brain shown in Table 1 of the Greene and Haidt article that are activated in impersonal moral judgment are also activated during other cognitive tasks.
These two different processes of decision-making can be looked at in light of automatic and controlled processing as described in the Neuroeconomics article (112). Emotions are highly automatic (System 1) and contrast with the deliberative manner of controlled processing (System 2). It could also be said that the automatic process is in line with Kantian philosophy and controlled processing with Mill/utilitarian philosophy. (To be clear, I know that these systems are best seen on a continuum, as stated in the Neuroeconomics article.) The field of economics is based on the idea that "behavior can be interpreted as choosing alternatives with the goal of maximizing utility" (108), which can be seen as part of controlled processing. It is the automatic/System 1 kinds of processes that the economic framework does not take into account.
It is the controlled system that people use in both non-moral and impersonal moral decision-making. It seems that it is the "personal" aspect of the moral question that does not allow the controlled system to override the automatic one. We do see a few people allowing the override – those few who are willing to push the man over the footbridge. The Neuroeconomics model clearly says that controlled/System 2/utilitarian processing "monitors the quality of the answer provided by System 1 and sometimes corrects and overrides these judgments" (111).
Greene suspects that the areas of the brain that light up with personal moral decisions and automatic processing are "part of a neural network that produces the emotional instinct behind many of our moral judgments" (Zimmer, 4). There will be times when System 2 disagrees with the emotional intuitions of System 1, and at that point the ACC acts as mediator – as the scales deciding whether moral intuition or rationality wins out. The Ultimatum Game, mentioned in both of the articles I have cited, looks at how this balance works much of the time – with the evolutionarily instilled sense of fairness outweighing reason. *
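The Ultimatum Game logic itself can be sketched in a few lines. This is only an illustration of the experiment's structure: the fairness threshold and dollar amounts below are my own illustrative assumptions, not values from the articles.

```python
# A minimal sketch of the Ultimatum Game: a proposer offers a split of a
# fixed pot, and a responder accepts (both get the split) or rejects
# (both get nothing). A purely utility-maximizing responder (System 2)
# should accept any positive offer; in practice, people reject offers
# they judge unfair (the System 1 fairness intuition).

POT = 10  # dollars to split (illustrative)

def responder_accepts(offer, fairness_threshold=0.3):
    """Reject offers below a threshold share of the pot (illustrative fairness intuition)."""
    return offer >= fairness_threshold * POT

def payoffs(offer):
    """Return (proposer, responder) payoffs for a given offer."""
    if responder_accepts(offer):
        return POT - offer, offer
    return 0, 0

# A $1 offer is still "free money," yet the fairness intuition rejects it:
print(payoffs(1))  # -> (0, 0): unfair offer rejected, both get nothing
print(payoffs(4))  # -> (6, 4): fair enough, accepted
```

The point of the sketch is the gap between the two systems: rejecting the $1 offer is irrational in the narrow utility-maximizing sense, but it is exactly what the automatic fairness response produces.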
According to "Whose Life Would You Save?", evolution may not be the only way the instincts of the automatic system are formed. Towards the end of the article, Greene draws on Haidt to support his proposition that culture may also have a significant influence on a person's sense of morality. "All human societies share certain moral universals, such as fairness and sympathy. But Greene argues that different cultures produce different kinds of moral intuition and different kinds of brains." (Zimmer, 5) This concept is taken even further, with the suggestion that many of the great human conflicts may be rooted in brain circuitry.

* Could one not argue that rationality is at the basis of this dominance of fairness – a way to prevent a person from setting him or herself up to be "taken advantage of"? How does time play into this? What may not be reasonable in the moment could be argued to be reasonable over time. If this is the case, however, it seems to be the opposite side of the 'hyperbolic time discounting' coin. This question may not make much sense – simply mental ramblings on my part….
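For anyone unfamiliar with the 'hyperbolic time discounting' idea the footnote alludes to, here is a small sketch contrasting it with standard exponential discounting. The parameter values are illustrative assumptions, chosen only to show the characteristic preference reversal.

```python
# Hyperbolic vs. exponential time discounting. Hyperbolic discounting is
# steep near the present and shallow far from it, which can reverse a
# preference between a smaller-sooner and a larger-later reward as the
# rewards draw near. Rates/delays below are illustrative.

def exponential_discount(amount, delay, rate=0.2):
    """Constant per-period discounting: the textbook 'rational' model."""
    return amount / (1 + rate) ** delay

def hyperbolic_discount(amount, delay, k=0.2):
    """Hyperbolic discounting: value falls off as 1 / (1 + k * delay)."""
    return amount / (1 + k * delay)

# $50 at delay d vs. $100 at delay d + 10, hyperbolically discounted:
# up close, the smaller-sooner reward wins...
print(round(hyperbolic_discount(50, 0), 1),
      round(hyperbolic_discount(100, 10), 1))   # -> 50.0 33.3
# ...but viewed from 30 periods away, the larger-later reward wins.
print(round(hyperbolic_discount(50, 30), 1),
      round(hyperbolic_discount(100, 40), 1))   # -> 7.1 11.1
```

The reversal in the second comparison is the signature of the hyperbolic curve: what looks reasonable from a distance stops looking reasonable in the moment, which is the mirror image of the footnote's point about fairness judgments becoming reasonable only over time.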

4 comments:

Tessa Noonan said...

This week's readings focused on a very palpable tension between "automatic" and "controlled" systems (or, as you like, emotion/deliberation, exploiting/exploring, etc.) that we have seen several times now in different manifestations. However, in terms of decision-making and moral distinctions, time after time these articles pointed to our illusions of purely "rational" thought and decision. Although "controlled" systems served to monitor the "automatic" systems in most models, the two had to work together, switching off for optimal action selection. Amygdala input to decision-making schemas seems invaluable to these processes, and, as Greene and Haidt observed, "normal decision-making is more emotional and less reasoned than many have believed" (p. 518).

However, besides this very informative revelation, I think several of the articles posed a really interesting question about the nature of our decisions and the processes that govern them. At the end of De Martino et al.'s article, they state: "in modern society, which contains many symbolic artifacts and where optimal decision-making requires skills of abstraction and decontextualization, such mechanisms may render human choices irrational." Greene puts it more simply in Zimmer's article: "We're using our brains to make decisions about things that evolution hasn't wired us for." Certainly an interesting predicament, and one that much more research should tackle.

Amy Fleischer said...

I also found Greene's observation of the difference between personal and impersonal moral decisions very interesting (Zimmer 4). He suggests, like several other researchers we have learned about (e.g., Bruce Perry), that technology has evolved faster than our capacity to use it. The moral decisions required by our present surroundings may well exceed our ability to make sound moral decisions. However, the relative nature of morality reminds me of our previous discussion about cross-cultural studies in the field of emotion.

Another interesting aspect of decision making and morality is the time involved in these processes (Zimmer 5). Should this quality affect how we perceive the outcome? In other words, what comes quickly may not be the best decision... but over time most of us aim to incorporate our intuition with our reasoning. I'm still somewhat mystified as to how these two systems interact beyond negotiations in the ACC. Finally, Greene raises the point that understanding dysfunctional neurology can incite pity, which is meaningful since pity is not necessarily the most positive or useful reaction to disease. Meanwhile, it is clear that studying the chemical makeup of the brain can inspire meaningful progress through a more thoughtful or reflective understanding of the human condition.

Mikal Shapiro said...

It seems like the brain has thought of everything. Every time I run across the study of a particular anatomical response to a very particular stimulus (e.g., the anterior insula and dlPFC activation during "unfair offers"--Sanfey, et al., p. 113), I am amazed at the micro-specialization of our hard-to-distinguish mushy grey matter clumps. Even more amazing is how these clumps, which have been conserved for millennia under very different circumstances, continue to respond to a modern world far removed from the stimuli that for so long motivated their conservation.
In response to Frances' blog: Frances recaps the neuroeconomics article by writing, "It is the automatic/System 1 kinds of processes that the economic framework does not take into account." This automatic system deals in the realm of the emotions and their impact on decision-making, drawing into question the neuroeconomic notion of "utility." I am not that familiar with economics, but according to Dictionary.com, economics is "the science that deals with the production, distribution and consumption of goods and services, or the material welfare of humankind." The authors of the article (Sanfey, McClure, and Cohen) compare the pursuit of new endeavors and subsequent decision-making to the economics-driven goal acquisition of corporations (p. 109). Though the authors seem to acknowledge the shortcomings of this top-down, hierarchical, utilitarian approach, they believe neuroscience would benefit from further developing this metaphor. I do not disagree, but I also think economic theory was designed by a certain modern cultural paradigm--not quite the universal science Dictionary.com declares as concerning the "material welfare of humankind." I wonder if the study of certain non-industrial, tribal economic systems would shed a more accurate light on neuroscience by providing a paradigm that functions closer to the natural environment of our genetic upbringing. If we are to assume that the macro-organization of collective decision-making reflects the micro processes of the brain, then by looking at the competitive/collaborative goal acquisition of tribal societies we could perhaps create a metaphor more grounded in the bulk of our evolutionary history.

Katie Moeller said...

Greene's discovery that the degree to which a moral decision can be classified as personal or impersonal affects which part of the brain is activated when the decision is being made prompted me to think back to our discussions of flashbulb memory, and the role that personal meaning plays in how vividly a memory is retained. It seems that for us humans there is something endlessly significant and powerful about being able to put ourselves (or some abstract, projected idea of our "self") into the equation. This makes sense from an evolutionary standpoint: it is smart for the lengths we go to and the risks we take to protect ourselves and those in our community who are personally tied to us (and to our ability to survive), as well as the memories we carry of events related to these people, to be calculated differently than those less personally threatening or significant. I guess my question, then, in pulling together these connections between the role of the personal in decision-making and in memory, is whether the memories we form of the personal moral decisions we make are stronger after the fact than when these decisions are nonmoral or impersonal. Can people more accurately go back and explain why they made a particular decision when it was one imbued with some sense of personal meaning? Does the fact that some of these decisions are made quickly, in the heat of the moment (e.g., pushing someone off a footbridge to stop an oncoming train from hitting five other people), have any effect on what we later remember about our reasoning for doing what we did?