ST. PETERSBURG CENTRE FOR INTERDISCIPLINARY NEUROSCIENCE

Neuromorality

Articles in PDF

Moral Thinking, in Nina Slanevskaya, Brain, Mind and Social Factors, 2014 (in English)

Neuroscience about Moral Thinking, in Nina Slanevskaya, Brain, Mind and Society, part 2, 2014 (in Russian)

Moral Thinking and the Struggle for Its Realization, in Nina Slanevskaya, Brain, Mind and Society, part 2, 2014 (in Russian)

 

The Clash of Social and Inborn Moral Values

Neither a materialist nor a non-materialist neuroscientist will deny that people have social thinking by nature. It can be considered a widely recognized fact (Heberlein et al., 1998). Some neuroscientists insist that it is possible to describe the particular brain structures that participate in social thinking. According to Adolphs, the "social" brain consists of the ventromedial prefrontal cortex (social reasoning and decision-making), the amygdala (fear, distrust, reading information from faces), the right somatosensory cortex (the body state during social interaction; the empathic reaction), the insula (which shares functions with the somatosensory cortex), the cingulate cortex (error detection), and visual association areas in the temporal cortex (which participate in emotions and influence the body state). Some structures in the hypothalamus, thalamus, and brain stem also participate in social thinking (Adolphs, 1999). People often and involuntarily imitate the behaviour of others in their social surroundings and guess what is expected from them. Neuroscientists explain this by the presence of mirror neurons, the empathic reaction, and theory of mind (ToM). People have brain mechanisms which help them play the social role assigned to them by society.

The psychological experiment carried out by Philip Zimbardo at Stanford University in 1971 shows the readiness of people to play their social roles. Students were divided into "prisoners" and "warders" at random (Zimbardo et al., 2000). A "prison" was set up in the basement of the psychology department at Stanford University. Soon the students began to feel and behave like warders and prisoners. Many "warders" began to display sadism, and many "prisoners" became passive and depressed, although there was no reason why the "prisoners" could not have withdrawn from the experiment if they felt humiliated. Instead, they diligently played their social roles. None of them had been a criminal or a sadist before the experiment. Social factors made them feel like criminals and sadists.

A similar dependence on the social framework was seen in the series of socio-psychological experiments conducted by Stanley Milgram in the 1960s and 1970s (Milgram, 2009). Milgram conceived his experiment to assess the readiness of people to submit to authority when given a task that went against their conscience. He wanted to understand why so many Germans had agreed to commit cruelties under Hitler, and what the psychology of mass immoral behaviour was. Many of Milgram's colleagues did not believe that normal people (the "teachers" in the experiment) would agree to continue participating if required to increase the punishment (electric shocks of up to 450 volts) given to other people (the "learners") for their mistakes. To their surprise, the majority of "teachers" continued and increased the punishment up to 450 volts irrespective of their real professions, gender, and age: about 65% of "teachers" did so, instead of the 1-2% Milgram's colleagues had predicted, and it made no difference whether the groups of "teachers" consisted only of women or only of men. Although the "teachers" showed psychological discomfort (nervous laughter, sweating) while increasing the punishment (voltage), they could not bring themselves to interrupt the experiment and rebel against the social authority of the scientist, who in their eyes had legitimate power to carry out such experiments. However, they did not increase the voltage if the scientist left the room for a few minutes.
This means that they did not want to hurt other people when the authority was not pressing them to behave immorally. But placed in a certain social framework, they started to play their social role in the name of "scientific progress and mankind" or some similar justification.

The criteria of moral behaviour in a society depend on that society's system of values, which rests on something axiomatic that is not questioned. Such axioms form social values, define the development of social institutions and scientific research, and influence the building of the socioeconomic and political system in which people live. Darwin's conclusion that in nature only the strongest survive, and Malthus's advice to make peace with this idea, served as an excuse for economic inequality, death from hunger, poverty, political violence, and so on. It was morally acceptable to society at that time. As soon as people abandoned the idea that only the strongest human beings deserve to live and adopted the axiom of equality and care for the weakest, they created either welfare capitalist states or socialist states (in which the social value of individual accumulation of capital and property was no longer considered praiseworthy at all).
When people are provided with ideological and social legitimacy, and when they have institutional support, the majority of them seem to prefer following social norms to their own moral reflection. The pressure of an immoral social environment makes them doubt or suppress their inborn moral values. Social norms are usually expressed in moral terms, so it becomes rather difficult to separate moral norms from social ones and to judge on a moral basis.

Nevertheless, Victoria McGeer asserts that people are capable of distinguishing the violation of moral norms from the violation of social norms (McGeer, 2008). Analyzing the neurological basis for the possibility of moral behaviour in psychopaths and people with autism, McGeer distinguishes socially approved behaviour from morally approved behaviour. A moral action is not determined by prescribed social norms and does not depend on the permission or approval of official authorities. Social norms of behaviour, on the contrary, have a temporary character and are defined by the norms existing in a particular society. If the norms change, the previously condemned behaviour is no longer considered immoral (McGeer, 2008). In other words, on the one hand, there is a set of absolute moral rules, which hold in all centuries and for all people, and, on the other hand, there are temporary social norms presented as moral ones, which exist in a particular society. McGeer, Kennett, and Fine state that adult psychopaths and children with psychopathic symptoms do not perceive the difference between actions based on the conventional norms of behaviour in their society and those based on universal moral norms (Kennett, Fine, 2008; McGeer, 2008). Frédérique de Vignemont and Uta Frith think that this division between conventional social norms and universal moral norms is a great discovery in the study of moral thinking (Vignemont, Frith, 2008). The difference is understood even by three- to four-year-old children, and it is a cross-cultural phenomenon. Vignemont and Frith agree with Nichols and Folds-Bennett that people usually consider something "moral" if it has universal and permanent moral value, and "social" if it depends on the social context and on power (Nichols, Folds-Bennett, 2003).

For those who doubt what is moral, the famous German philosopher Immanuel Kant (1724-1804) suggested a formula based on three principles, which leads to moral actions irrespective of epoch and social system. The principles are the following: 1. Good will (no selfish interest in the moral action); 2. The universalizability of an action (a chosen action should be capable of becoming a universal law applied to others and to oneself); 3. A human being is an end in itself (respect for the intrinsic worth of a human being).

Human beings, unlike animals, can create different social organizations to their taste, irrespective of the natural surroundings, and each particular organization of social life will demand special rules of behaviour. Thus, we can create any social system we like and think good for us. The matter seems to depend on our knowledge of human nature, and on whether there is a powerful group which can enforce its principles of social organization on the rest of us.

References
- Adolphs, R. (1999) "Social Cognition and the Human Brain" in Trends in Cognitive Sciences, Vol. 3, No. 12: 469-479.
- Heberlein, A.S., Adolphs, R., Tranel, D., Kemmerer, D., Anderson, S., Damasio, A.R. (1998) “Impaired Attribution of Social Meanings to Abstract Dynamic Geometric Patterns Following Damage to the Amygdala” in Society for Neuroscience Abstracts, 24: 1176–7.
- Kennett, J., Fine, C. (2008) “Internalism and the Evidence from Psychopaths and ‘Acquired Sociopaths’” in Walter Sinnott-Armstrong (ed.) Moral Psychology, the Neuroscience of Morality: Emotion, Brain Disorders, and Development, Massachusetts, the MIT Press, Vol. 3: 173-190.
- McGeer, V. (2008) “Varieties of Moral Agency: Lessons from Autism (and Psychopathy)” in Walter Sinnott-Armstrong (ed.) Moral Psychology, the Neuroscience of Morality: Emotion, Brain Disorders, and Development, Massachusetts, the MIT Press, Vol. 3: 227-258.
- Milgram, S. (2009) Obedience to Authority. An Experimental View, New York, Harper Perennial Modern Thought.
- Nichols, S., Folds-Bennett, T. (2003) "Are Children Moral Objectivists? Children's Judgments about Moral and Response-dependent Properties" in Cognition, 90(2): B23-B32.
- Vignemont, F., Frith, U. (2008) “Autism, Morality and Empathy” in Walter Sinnott-Armstrong (ed.) Moral Psychology, the Neuroscience of Morality: Emotion, Brain Disorders, and Development, Massachusetts, the MIT Press, Vol. 3: 273-280.
- Zimbardo, P., Maslach, C., Haney, C. (2000) “Reflections on the Stanford Prison Experiment: Genesis, Transformation, Consequences” in T. Blass (ed.) Obedience to Authority: Current Perspectives on the Milgram Paradigm, Mahwah, NJ, Lawrence Erlbaum Associates: 193-237.

(in Nina Slanevskaya, "Brain, Mind and Social Factors", St. Petersburg, Centre for Interdisciplinary Neuroscience, 2014)

Ethical theories

There are ethical theories of the first order (how we should behave) and ethical theories of the second order (meta-ethics, i.e. theorizing about ethical theories). Among the first-order theories we can discern three main groups: (1) duty-based (e.g. Kant's ethics); (2) consequentialist (e.g. Bentham's utilitarian ethics); (3) virtue theories (e.g. Aristotle's ethics).

(1) Duty-based ethical theories assert that acting morally means acting according to our duties (we ought to perform certain actions regardless of the consequences that might follow them). The motives for actions must be "pure" and cannot include any calculated benefits. The word "duty" here means a morally necessary thing to do, which you also want other people to do to you, and which can be regarded as a universal law for all: all people will behave like that. Happiness cannot be a universal moral principle because a person may want to become happier at the expense of other people's unhappiness. When, for example, Georgians, in their ethnic conflict with South Ossetians, attacked and killed South Ossetians who wanted autonomy (the conflict of August 2008) for the sake of their own greater future happiness and territorial integrity, solving a problem in such a way was, according to Kant, an immoral act. Nobody wants to be killed for any reason: a human being is an end in itself.
(2) Consequentialist ethical theories are based on the principle of the most beneficial consequences of an action: "good" is what brings the greatest total happiness. Thus, it was morally right for Georgians to attack South Ossetians because it could have brought the greatest total happiness to Georgians, who are the majority in Georgia and who want territorial integrity.
(3) Virtue ethical theories focus on the character of an individual and his personal life, unlike the previous two groups, which focus on the rightness or wrongness of particular actions. Happiness comes from coping with life's problems morally, which is possible thanks to acquired virtues. So, if all Georgians and South Ossetians had been brought up in the right way and had really developed moral virtues individually, no killings would have taken place on the territory of Georgia at all. Georgians and South Ossetians would have respected each other mutually and lived peacefully.

The ethical theories of the second order (meta-ethical theories) can be divided into two broad groups: ethical realism and ethical anti-realism.
Ethical realism presupposes the existence of objective moral truths.
Ethical anti-realism, on the contrary, claims that there are no objective moral truths at all.
There are two main groups of ethical theories belonging to realism: ethical naturalism and ethical intuitionism. And there are three main groups of ethical theories belonging to anti-realism: subjectivism (moral statements are not objectively but subjectively true), non-cognitivism (moral statements are neither false nor true), and nihilism (moral statements are false).

(1) Ethical subjectivism holds that moral values are subjective: they reflect the individual's or the group's attitude of considering something "good". Value facts are reduced to psychological preferences. If I say, "The Russian President is good", it shows only my attitude to him. If someone else says, "The Russian President is bad", it shows his attitude. No objective truth is possible.
(2) Ethical non-cognitivism claims that evaluative statements cannot explain what the world is. They express only a speaker’s emotions, or can be treated as imperatives. If I say, “The Russian President is not good”, for a non-cognitivist it sounds like, “Boo to the Russian President!” or “Do not deal with the Russian President!”
(3) Ethical nihilism (also called "the error theory") claims that evaluative statements are generally false because they assert things that do not exist in reality. If I say, "The Russian President is good", the statement is false because there is no such property as "goodness"; there is only the Russian President out there.
(4) Ethical naturalism argues that objective moral properties exist, and they are reducible to non-evaluative terms. If I say, “The Russian President is good”, he is good objectively but only if he improves the well-being of his citizens, etc. Moral statements must be expressed either in non-evaluative terms or justified empirically on the basis of observation.
(5) Ethical intuitionism claims that moral properties are objective: there are such objective properties as "goodness" or "evilness", and they do not depend on anyone's attitude. They are irreducible (we cannot help using evaluative language such as "good", "evil", or "desirable" when speaking about value facts). If I say, "The Russian President is good", other people will understand me because they know what the word "good" means. I do not need to use any other, non-evaluative words.

 

Ethical naturalism and ethical intuitionism in neuroscience

Ethical naturalism in social neuroscience should be understood as an ethics asserting that healthy brain structures are equivalent to moral behaviour, whereas ethical intuitionism asserts that a human being has an inborn mental (and/or spiritual) quality called moral intuition.
Ethical intuitionism is based on inborn moral intuitions. Intuitions are defined by Huemer as "mental states in which something appears to be the case upon intellectual reflection (as opposed to perception, memory, or introspection), prior to argument" (Huemer, 2005: 232). We have intuitions ("intellectual appearances") about certain abstract truths, similar to our perceptual experiences ("sensory appearances") of the physical world. Our intuitions are simply a form of our awareness: we are directly aware of moral facts. This can be compared to our awareness of the physical world through sense perception: we are directly aware of physical objects. Moral intuitions can conflict with our moral theories or with fixed moral beliefs resulting from culture, religion, and ideology. But our sensory experience can be affected by bias as well (Huemer, 2005).
The main objection against ethical intuitionism is that you cannot be certain of moral truths based on intuition unless you find a way to show that intuitions are reliable and can be verified. Yet no such verification is demanded of sensory perception or memory. Huemer states, "it appears, then, that the present objection relies on an epistemological double-standard: the objector imposes demands on intuition that would not be placed on any other fundamental source of knowledge" (Huemer, 2005: 236). He asks why this process of cognition should require a second cognitive process, and remarks that even a utilitarian will use his intuition and say that killing a healthy human being in order to distribute his organs to five other people is not good, in spite of the basic principle of the utilitarian (consequentialist) theory: the greatest total happiness. Intuition should be considered a good and reliable source of moral knowledge. Intuition may fail sometimes, because it can be affected by cultural, ideological, and religious indoctrination, but human beings are subject to making mistakes in all fields of activity, and intuition is no exception to the rule.
The interpretation of moral intuition by the neuroscientist and ethical naturalist Joshua Greene is quite different from Huemer's. According to Greene, moral intuition is based on emotions and basic instincts and is genetically inherited: "The emotions most relevant to morality exist because they motivate behaviors that help individuals spread their genes within a social context" (Greene, 2008: 59). "The theory of reciprocal altruism explains the existence of a wider form of altruism: Genetically unrelated individuals can benefit from being nice to each other as long as they are capable of keeping track of who is willing to repay their kindness" (Greene, 2008: 59). The materialist neuroscientist Greene thinks within the framework of Darwin's theory and insists that Kant's deontology cannot be considered moral philosophy: when people give deontological answers while their brains are being scanned, the brain structures responsible for emotional reaction are activated, and they have no time for the moral reflection necessary for philosophy (Greene, 2008). The consequentialist decision of a moral dilemma, on the other hand, shows brain activity in the areas responsible for cognitive thinking. Greene comes to the conclusion that Kant invented his deontological theory in an attempt to rationalize moral emotions. Greene suggests that deontology is a kind of moral talk caused by a strong feeling that certain things must not be done (Greene, 2008).

Greene points out two reasons why deontology and moral emotions are inseparable (Greene, 2008):
(1) moral emotions provide a natural solution to certain problems occurring in social life. Moral emotions are the creation of nature: they give a reliable, quick and effective answer to recurring situations, whereas moral reasoning does not;
(2) deontological philosophy provides us with a cognitive interpretation of natural moral emotions.

Greene emphasizes that answers given from the deontological position come much quicker than those given from the consequentialist position, because consequentialist decisions demand more time and cannot be made on the intuitive and emotional level. To support his point of view, Greene presents neurobiological results from scanned brains, which also demonstrate people's different attitudes to personal and impersonal moral dilemmas. Personal moral dilemmas cause greater activity in three areas connected with emotions: the posterior cingulate, the medial prefrontal cortex, and the amygdala; there is also greater activity in the superior temporal sulcus. Moral dilemmas which do not concern the person himself are accompanied by greater activity in two classically cognitive areas of the brain: the dorsolateral prefrontal cortex and the inferior parietal lobule.

John Mikhail does not agree with Greene and sees another basis for moral intuition (Mikhail, 2008). Mikhail is sure that the human brain works with a computational "moral grammar", which is similar to the other "mental grammars" in other spheres of human activity, such as language, music, and the recognition of faces. A quick moral answer is caused by cognitive dissonance in the brain due to this computational moral grammar (Mikhail, 2008). So Greene's conclusion that quick deontological answers have no cognitive background is, in Mikhail's view, wrong.

That we have inborn moral intuition (some call it "conscience") is stated by many neuroscientists. The neuroscientist Svyatoslav Medvedev, Director of the Institute of the Human Brain in St. Petersburg, Russia, says, "Conscience is not an abstract concept, it is quite a real one and, if you wish, a universal mechanism, which nature has given to both a righteous person and a sinner", and although "conscience does not prevent us from doing evil, it prevents us from enjoying it"; he presents data and graphs showing a jump at 150-200 milliseconds on the electroencephalogram when a person is lying (Zernova, 2007).

Perhaps moral action, in principle, cannot involve the calculation of advantages, even for the majority's benefit. There is a fundamental difference between a rational decision (the calculation of advantages for the majority's benefit) and a moral decision, which disregards any advantage for anyone except the moral duty itself.
First comes moral intuition, then moral emotion, which raises the motivational force to perform a moral action, and only later do we rationalize our moral action, trying to explain why we acted so. If we start by rationalising what is moral for us to do, then either something is wrong with our moral intuition, or we want to suppress it in order to gain some advantage for ourselves.

Materialist neuroscientists put forward different theories concerning moral thinking and specify the areas of the brain involved in this process. Moll lists the areas which, if damaged, impair moral thinking (Moll et al., 2005): the anterior prefrontal cortex (aPFC), dorsolateral prefrontal cortex (DLPFC), ventral sectors of the prefrontal cortex (vPFC), ventromedial sectors of the prefrontal cortex (vmPFC), lateral orbitofrontal cortex (lOFC), medial orbitofrontal cortex (mOFC), posterior superior temporal sulcus (pSTS), anterior temporal lobe (aTL), hypothalamus, septal nuclei, basal nuclei and neighbouring structures, and other limbic and paralimbic structures.
Moll relates specific problems of moral thinking and behaviour to the damaged area of the brain. For example, if the ventromedial sectors of the prefrontal cortex are damaged, a human being lacks feelings of pride, embarrassment, and regret.

The following hypotheses of ethical naturalism are worth mentioning: Pfaff's hypothesis of the Golden rule (Pfaff, 2007); Moll's hypothesis of the Event-feature-emotion complex framework (Moll et al., 2008); Greene's hypothesis of Conflict processing in moral judgments (Greene, 2008); Moll, de Oliveira-Souza and Eslinger's hypothesis of Moral sensitivity (Moll, de Oliveira-Souza, Eslinger, 2003); Blair and Cipolotti's hypothesis of Social response reversal (Blair, Cipolotti, 2000); Wood and Grafman's hypothesis of the Structured-event-complex framework (Wood, Grafman, 2003); and Lough, Gregory and Hodges's hypothesis of the Impairment of the theory of mind mechanism in sociopathy (Lough, Gregory, Hodges, 2001).

Moll is, perhaps, the most prominent researcher in this field. His hypothesis of the Event-feature-emotion complex framework proposes a connection between cognitive social activities (events) and emotional states (emotions), in which social characteristics (features) are interwoven into one whole (complex) (Moll et al., 2008). Hence the name of the hypothesis, "event-feature-emotion complex framework". Moll and his co-authors believe that moral knowledge should be considered as a whole consisting of three components that bear the construction (framework) (Moll et al., 2005):
(1) context-dependent structured knowledge of events, with activation of the prefrontal cortex. This is knowledge about the essence of an event, its course, and its possible result (for example, looking at a child whose parents have died, a man begins to imagine the child's future);
(2) context-independent knowledge of social perception and functional features, with activation in the anterior and posterior parts of the temporal cortex (social perception refers to the ability to see sadness on people's faces and understand their body language, or the ability to understand the concept of "helplessness", for example);
(3) context-independent ability to have motivational and emotional states, with activation in the limbic and paralimbic structures of the brain (the ability to feel attachment, anxiety, and sadness).
All three components taken together allow a man to feel moral emotions, such as sympathy when he is looking at a child who has lost his parents.
Moll points out the difference between his hypothesis and Greene's. Greene asserts that the prefrontal cortex performs cognitive control over emotional reactions, which leads to a more rational moral choice (the consequentialist choice). Moll is sure that the participation of the prefrontal cortex is only one aspect of social knowledge, and that this knowledge is always connected with relevant emotions (Moll et al., 2008).

Pfaff, in his Golden rule hypothesis, speaks about an instant loss of one's own personality and entry into the state of the other person, which is the basis for moral actions towards that person, because you think more about him than about yourself (Pfaff, 2007). Such a moral state is also caused by increased hormone release, namely of oxytocin, the hormone released when a man and a woman love each other, or when parents love their child. This hormone release makes a person take care of the other, i.e. behave morally (Pfaff, 2007). The Golden rule (do to others what you want them to do to you) is the principle of reciprocal altruism, which developed in the course of evolution and helped people to survive. Pfaff reminds us that Kant also used this principle in his deontological philosophy. (My objections are given below.)

Laurence Tancredi, a representative of ethical naturalism, thinks that a human has a "moral" brain, which consists of two broad regions: (1) the "emotional" brain (the limbic system, or our "old brain") and (2) the "rational" brain (the frontal lobes) (Tancredi, 2005). The emotional brain includes four main parts: the amygdala, hippocampus, hypothalamus, and anterior cingulate cortex. The rational brain is the frontal lobes. The prefrontal cortex is the brain's "command post" (near the forehead, above the eyes). It is supposed to be the centre of personality and identity and of the integration of emotions and thoughts. Virtually every functional part of the brain is directly or indirectly connected to this cortex and controlled by it (Tancredi, 2005). Special social conditions are needed to activate an inherited ability present in the genes. Tancredi is sure that in some cases a human cannot control himself under certain social conditions because his neurobiological deficiency gains the upper hand in the struggle for moral behaviour. Thus a criminal becomes a victim of his brain deficiency: he must be treated in hospital.
Tancredi declares that a mortal sin is the pathological functioning of the brain, which does not correlate with a conscious choice to commit a sin (Tancredi, 2005). For example, laziness and apathetic listlessness result from depression, when the main neurotransmitters - serotonin, dopamine, and norepinephrine - decrease in quantity in the synapses of neurons in the limbic structures. Such a sin as lust is caused by the release of too much testosterone, and so on. Tancredi asserts that moral choice is biologically motivated, and that this is a revolutionary hypothesis which contradicts religious doctrines and social traditions that consider a human being a free agent responsible for his thoughts and actions. Tancredi is convinced that the brain directs the mind, and that moral thinking became genetically fixed in human beings in the course of human cooperation to survive and to bring up children (Tancredi, 2005). Inborn moral thinking is confirmed by clinical cases: children with an inborn brain deficiency, or whose brain was damaged in childhood, are incapable of moral thinking and social consciousness (Tancredi, 2005; Chayer, Freeman, 2001). Tancredi thinks that neurobiological factors influence not only the depth of moral thinking, or how the brain processes information, but also the content of moral thinking (Tancredi, 2005).

Summary: Materialist researchers such as Tancredi, Greene, Moll, and Pfaff belong to ethical naturalism and insist that moral thinking is the product of evolutionary pressure. Evolutionary pressure formed social cognitive and motivational mechanisms. Reciprocal altruism developed under this pressure and became the basis for moral thinking. Moral behaviour resulted from the feeling of love (love between a man and a woman, parental love), which later turned into the feeling of care for an unknown person. Moral thinking, as they suppose, is based on the activation of certain brain structures and on a special hormone release.

Objections:
(1) Sometimes attachment and love for someone pushes a person to commit an immoral action towards other people. Hynes draws our attention to the fact that attachment can bring about immoral behaviour: nepotism, racism, and sexism (Hynes, 2008). Aggressiveness is considered a bad quality of character, but it can support a moral action, whereas the lack of it can lead to passive cooperation with an immoral power and an immoral social system.

(2) If a person expects reciprocity, his action is not altruistic by definition. Besides, only the participants can define whether an action is altruistic. Thus, a human altruistic act cannot be altruistic in the evolutionary meaning.
De Waal explains the difference between the biological and the social understanding of altruism (De Waal, 2008). Biologists classify behaviour as selfish or altruistic according to its effect, whether it is good for others or only for the performer of the action; the classification is not based on motivation and intention (De Waal, 2008). When a bee stings someone who intrudes into the beehive and dies saving the others, it is called an altruistic act, and it does not matter whether the bee is conscious of its altruistic action or not. However, the bee's behaviour can simply be an act of aggression: it stings anyone, anywhere, who gets in its way, without any purpose of saving other bees. De Waal agrees with Trivers that if you start studying something where motivation is present, you immediately step outside evolutionary theory and have to use the concepts and theories of psychology instead of biology. The study of motivation automatically excludes explanation based on a biological theory. Motivation in humans is a force in itself. Biologists ignore such motivation (De Waal, 2008). De Waal distinguishes evolutionary altruism (as in the example with the bee), which the majority of animals display, from psychological altruism, which is typical of people and which is socially motivated as an answer to the needs, distress and requests of others, when the effect of the action is anticipated (De Waal, 2008).
(3) Pfaff assures us that reciprocal altruism in his "Golden rule" hypothesis (do to others what you want them to do to you) is Kant's moral principle. However, Kant's moral principles (categorical imperatives) are based not on the principle of reciprocity but on moral "duty", which does not presuppose any expected benefit for oneself. The principle of reciprocity - "I have done something for you and you will do the same for me" - is the basis of all corrupt schemes and has nothing in common with Kant's deontology. It is "good will" that means you will act morally because you rationally want to do so, without any benefit for yourself (Kant, 1995b). You have a strong motivation to perform an altruistic moral act, and it is decided a priori and consciously, without the involvement of feelings and emotions. Unlike Pfaff, Kant distrusts feelings and emotions (attachment, love, happiness) as criteria of moral behaviour: circumstances can change, and a person can become unhappy and lose his natural inclinations, feelings and desire to behave morally towards others. We see that Kant's own description of the mechanism of his moral principles fits neither Pfaff's nor Greene's understanding of Kant's philosophy.
(4) While studying the brain during moral decisions, many neuroscientists, and Moll in particular, assume that moral behaviour is equal to obedience to the social norms of behaviour. However, this is a doubtful proposition, as can be seen in Stanley Milgram's series of psychological experiments on obedience to authority. Milgram's experiment showed that conformity to social rules and norms is not in itself necessarily morally praiseworthy (Milgram, 1963; Milgram, 2009). Milgram reminds us that in the course of history we can find many examples in which conformity to social norms and obedience to authority caused much more trouble and mass immoral behaviour than the disobedience of individuals.
Moll and colleagues have developed a naturalist ethical theory that uses evolutionary theory to explain moral emotions. Their ontology is fundamentalist: people have a genetic predisposition to moral behaviour owing to their cooperation in collectives; morality is acquired while learning the moral norms of society; moral thinking gets fixed in the genes. "Morality is a product of evolutionary pressures that have shaped social cognitive and motivational mechanisms" (Moll et al., 2005: 799). In other words, morality is proposed to be the set of social norms, habits, and values of a particular society, which must be learnt by an individual as guidance for his moral behaviour.
But we all know that social norms and moral norms do not necessarily coincide. It is interesting that psychopaths and those suffering from autism cannot distinguish them (McGeer, 2008). Social norms change, but moral values are permanent for humans. Moll, however, seems to deny absolute moral values.
It is interesting to read Kant's reply, made in the 18th century, in his preface to The Critique of Practical Reason: "A reviewer who wanted to find some fault with this work has hit the truth better, perhaps, than he thought, when he says that no new principle of morality is set forth in it, but only a new formula. But who would think of introducing a new principle of all morality and making himself as it were the first discoverer of it, just as if all the world before him were ignorant of what duty was or had been in thorough-going error? But whoever knows of what importance to a mathematician a formula is, which defines accurately what is to be done to work a problem, will not think that a formula is insignificant and useless which does the same for all duty in general" (Kant, 1995a: 127). In the Foundations of the Metaphysic of Morals Kant compares moral rules with "universal laws of nature" (Kant, 1995b: 96). In other words, Kant believes in permanent moral values:
“Two things fill the mind with ever new and increasing admiration and awe, the oftener and the more steadily we reflect on them: the starry heavens above and the moral law within. (…) The former begins from the place I occupy in the external world of sense, and enlarges my connection therein to an unbounded extent with worlds upon worlds and systems of systems, and moreover into limitless times of their periodic motion, its beginning and continuance. The second begins from my invisible self, my personality, and exhibits me in a world which has true infinity, but which is traceable only by the understanding, and with which I discern that I am not in a merely contingent but in a universal and necessary connection, as I am also thereby with all those visible worlds. (…) The second, on the contrary, infinitely elevates my worth as an intelligence by my personality, in which the moral law reveals to me a life independent of animality and even of the whole sensible world, at least so far as may be inferred from the destination assigned to my existence by this law, a destination not restricted to conditions and limits of this life, but reaching into the infinite” (Kant, 1965: 498).
(5) Though the research of ethical naturalists in neuroscience concerning brain functioning and moral thinking is valuable, it can be misleading and take us away from moral concepts altogether. They study the interconnection between moral behaviour and disorders of the brain with the hidden assumption that human behaviour is fully determined by the functioning of brain structures and their neurobiological characteristics. If that is true, then we cannot speak about moral behaviour at all, in principle, because moral behaviour is traditionally understood as what we ought to do, not as what simply is.
(6) All genetic theories, in fact, presuppose the absence of human free will, which is essential for moral choice. Non-materialist neuroscientists argue fiercely with materialist neuroscientists and give crucial examples from their medical practice in which the treatment of patients is based on free will and the mind, which control and direct psychological and physiological processes.

Neurons never stop learning, owing to the neuroplasticity of the brain, and it depends on our free will to teach them. The physical state of neurons and the neurochemistry of the brain change when we regularly repeat actions or reprogramme our mind of our own free will.
The neuroscientists Newberg and Waldman confirm that every emotion or thought causes a change in blood flow to certain brain structures and a change in their electro-chemical activity (Newberg, Waldman, 2009). The scanning of meditating people shows that meditation helps to stimulate some structures to function better and to control emotions (Newberg, Waldman, 2009). Meditation practice helps to overcome anger at will. Anger releases a cascade of neurochemical substances which practically block the human ability to control emotions (Newberg, Waldman, 2009).
Thus, the expression of genes is partially dependent on the human free will to meditate and develop certain structures of the brain.
Comparing contending hypotheses of ethical naturalism and ethical intuitionism (both belonging to ethical realism), we can foresee a certain impact of their conclusions on our social life.

Summary:
(1) Neuroscientists working within ethical naturalism explain the objective existence of human moral thinking by inborn neurobiological characteristics of the healthy brain, which “directs” the mind, and believe that moral thinking developed in the evolutionary process because social moral norms helped people living in groups to survive.
Neuroscientists preferring ethical intuitionism consider moral thinking an inborn mental and/or spiritual quality of a human being. Unlike ethical naturalists, they assert that the mind "directs" the brain, and that the brain changes under the influence of the thinking process, so a moral choice depends on the mind and free will, and not on a deficiency of the brain, unless the brain is considerably damaged.
(2) Both parties agree that moral thinking is an objective process and that the social surroundings influence human moral thinking.
Having defined their positions on physical and mental substances (ethical naturalists are materialists and monists, whereas the majority of ethical intuitionists are dualists), neuroscientists suggest hypotheses in which one can see different understandings of human responsibility for immoral actions.
(3) If moral thinking is defined by the neurobiological work of the brain, then a person cannot be responsible for his immoral actions. The concept of free will disappears due to the fixed way of behaviour determined by the peculiarities of a particular brain.
(4) If moral thinking depends on mind and free will, then a person bears all responsibility for his actions.
Having obtained such different scientific conclusions, based on different ontologies of brain and mind, society has a choice about which scientific recommendations to implement. What are the social implications in each case?
(5) If we choose the conclusions of ethical naturalism, the tendency will be to develop medicated correction of an "immoral" brain; to improve the genetic characteristics of a human; to implant devices into the brains of criminals to control them; to scan people's brains for inborn neurobiological deficiencies before admitting them to certain jobs, i.e. to find "immoral" individuals whose brain structures react unusually when they answer the questions.
However, social norms are not necessarily moral norms. There are neither perfect people nor perfect social systems. The political disobedience of citizens is, as a rule, a moral phenomenon, because they consider it their moral duty to oppose the authorities in order to improve the existing economic and political system for all people. Those individuals who were against slavery were much more moral than the majority of people enjoying the slaves' work in a slave-owning society. Who will define the moral criteria? Who will define the moral state of the brain? Who will keep this information? Evidently, it will be in the hands of power elites, which will never allow any criticism. To criticize the power will be immoral.
(6) If we believe in two substances, as ethical intuitionists do, the picture will be different. Ethical intuitionists would recommend the improvement of thinking abilities through education, religion, art, literature, and meditation. However, there is another danger: if people enter spiritual practices deeply enough, they may prefer staying at the individual level to being involved in the economic and political life of society, because the aim of spiritual practices is to liberate oneself from one's social and individual problems.

References
- Blair, R.J., Cipolotti, L. (2000) “Impaired Social Response Reversal. A Case of ‘Acquired Sociopathy’” in Brain 123: 1122-1141.
- De Waal, F.B.M. (2008) "How Selfish an Animal? The Case of Primate Cooperation" in Paul J. Zak (ed.) Moral Markets. The Critical Role of Values in the Economy, Princeton and Oxford, Princeton University Press: 63-76.
- Greene, J. (2008) “The Secret Joke of Kant’s Soul” in Walter Sinnott-Armstrong (ed.) Moral Psychology, the Neuroscience of Morality: Emotion, Brain Disorders, and Development, Massachusetts, the MIT Press, Vol. 3: 35-79.
- Huemer, M. (2005) Ethical Intuitionism, Palgrave Macmillan.
- Hynes, C. (2008) “Morality, Inhibition, and Propositional Content” in Walter Sinnott-Armstrong (ed.) Moral Psychology, the Neuroscience of Morality: Emotion, Brain Disorders, and Development, Massachusetts, the MIT Press, Vol. 3: 25-30.
- Kant, I. (1965) The Critique of Practical Reason, Moscow, Mysl.
- Kant, I. (1995a) The Critique of Practical Reason, St.Petersburg, Nauka.
- Kant, I. (1995b) Foundations of the Metaphysic of Morals, St.Petersburg, Nauka.
- Lough, S., Gregory, C., Hodges, J.R. (2001) “Dissociation of Social Cognition and Executive Function in Frontal Variant Frontotemporal Dementia” in Neurocase 7: 123-130.
- McGeer, V. (2008) “Varieties of Moral Agency: Lessons from Autism (and Psychopathy)” in Walter Sinnott-Armstrong (ed.) Moral Psychology, the Neuroscience of Morality: Emotion, Brain Disorders, and Development, Massachusetts, the MIT Press, Vol. 3: 227-258.
- Mikhail, J. (2008) “Moral Cognition and Computational Theory” in Walter Sinnott-Armstrong (ed.) Moral Psychology, the Neuroscience of Morality: Emotion, Brain Disorders, and Development, Massachusetts, the MIT Press, Vol. 3: 81-91.
- Moll, J., Zahn, R., De Oliveira-Souza, R., Krueger, F., Grafman, J. (2005) “The Neural Basis of Human Moral Cognition” in Nature Reviews Neuroscience, Vol. 6: 799-809.
- Moll, J., De Oliveira-Souza, R., Eslinger, P.J. (2003) "Morals and the Human Brain: a Working Model" in Neuroreport, 14: 299-305.
- Moll, J., De Oliveira-Souza, R., Zahn, R., Grafman, J. (2008) “The Cognitive Neuroscience of Moral Emotions” in Walter Sinnott-Armstrong (ed.) Moral Psychology, the Neuroscience of Morality: Emotion, Brain Disorders, and Development, Massachusetts, the MIT Press, Vol. 3: 1-17.
- Newberg, A., Waldman, M. (2009) How God Changes Your Brain, New York, Ballantine Books.
- Pfaff, D. (2007) The Neuroscience of Fair Play. Why We (Usually) Follow the Golden Rule, New York, Washington, Dana Press.
- Tancredi, L. (2005) Hardwired Behavior: What Neuroscience Reveals about Morality, Cambridge University Press.
- Wood, J.N., Grafman, J. (2003) “Human Prefrontal Cortex: Processing and Representational Perspectives” in Nature Reviews Neuroscience, 4: 139-147.
- Zernova, T. (2007) The interview “Games with conscience” in the newspaper “24 hours”, No. 50, p. 6, and “The Russian Newspaper”, No. 201 (the title translated from Russian).

(in Nina Slanevskaya, "Brain, Mind and Social Factors", St. Petersburg, Centre for Interdisciplinary Neuroscience, 2014)

 

Socio-politico-economic system which suppresses natural neuromorality

Neuroscience confirms that a human with a normal brain cannot help thinking about whatever he comes across in social and moral terms, and that inborn morality and adjustment to social norms compete. Moral judgments permeate our whole life, including international politics. We react empathically to events at the other end of the world, feeling the suffering of unknown people while watching them on TV and getting angry at the unfairness done to them. Moral anger can drive people to extremes.
Analyzing present-day economic practices, the neuroeconomist Paul Zak gives the example of the suicide of Clifford Baxter, the former vice chairman of Enron, then the seventh-largest corporation in the USA. Towards the end of the 1990s Clifford Baxter started complaining to Jeff Skilling, the chief executive officer, about the inappropriateness of their business practices. In 2001 he resigned, and in 2002 he committed suicide. Baxter was known to be a successful man and a man of high morality: he had a happy family life, he had served with distinction in the military, and he consistently criticized the company's ethical transgressions and legal abuses. Considering why the majority of employees at Enron kept silent and did not support Baxter, Zak puts forward the following ideas (Zak, 2008: 260-261):
1. “...the process of economic exchange values greed and self-serving behaviors, and inadvertently produces a society of rapacious and perhaps evil people”. Modern economies are dehumanized;
2. “…there could be a selection bias in which amoral greedy people were hired in key posts, and this behavior filtered down to other employees”;
3. Most people behave ethically most of the time, “nevertheless, in the right circumstances, many people can be induced to violate what seems to be an internal representation of values that holds unethical behavior in check”.
Unethical senior management introduced such a system of compensation, incentives and other corporate procedures that employees were encouraged to behave unethically towards one another and towards clients. The institutional environment encouraged immoral behaviour.

What Zak writes about Enron is true of any social group, be it a company, a university, a non-governmental organization or the government itself. Social adjustment competes with the internal representation of moral values that holds unethical behaviour in check, and it is much more difficult for inborn morality to win if the head of the social structure is an immoral person who chooses people like himself to manage the company or the government. Unfortunately, conformity to immoral senior management is not punished by law in modern society.

Moral or immoral behaviour can spread in society owing to the human ability to imitate and thus to learn new things.
Materialist neuroscientists have found that moral thinking is connected with empathy, which involves the activation of mirror neurons. The mirror neurons in the human brain automatically mirror the activation of another person's neurons when that person is observed directly (Gazzola, Aziz-Zadeh, Keysers et al., 2004; Wicker et al., 2003). The mirror neurons are located in those areas of the brain where visual, motor and emotional states merge. The networks of mirror neurons are considered to lie in the parietal lobe, Broca's area, the premotor cortex of the frontal lobe and the superior temporal sulcus of the temporal lobe (Christian, 2008; Rizzolatti, Fogassi, Gallese, 2006).

Experiments show that if a person is hurt, the same areas of the brain automatically become activated in the brain of the observing person, especially if the two are in a good relationship (Singer et al., 2004a). Empathy is considered to be an inborn, automatic and unconscious process (Christian, 2008; Gallese, 2003; Botvinick, Jha, Bylsma, Fabian, Solomon, Prkachin, 2005).

Some neuroscientists include conscious attitude in the definition of empathy. They consider that empathy has two components:
(1) automatic affective reaction,
(2) the cognitive ability to take the perspective of the observed person while at the same time being aware that he is not you (Christian, 2008; Jackson, Meltzoff, Decety, 2005). In the social perspective, empathy is connected with the desire to help or support another person. Social interrelations would be difficult without human empathy.

Neuroscientists draw our attention to the slight difference between the areas activated when an observing person is looking at a man suffering pain and the areas activated when the person is hurt directly.

The neuroscientists Tania Singer and Nikolaus Steinbeis consider that there are two very important motivations in decision making: fairness and sympathy. The disregard of social fairness makes people angry and provokes the desire to punish the violator of the moral value of social fairness. Sympathy, on the contrary, makes them forgive him (Singer, Steinbeis, 2009).

In an experiment on the empathic neurobiological reaction connected with fairness, Tania Singer and her colleagues found that the empathic reaction was noticeably lower when a dishonest partner from the game was subjected to pain in the subsequent experiment on empathy (Singer et al., 2004b). This lower reaction was even accompanied by the activation of brain structures responsible for reward and pleasure, and this was especially typical of men as compared with women (Singer et al., 2006). In other words, instead of the expected empathic reaction, the participants felt pleasure that their dishonest partners were in pain. These data are consistent with other researchers' experiments in which participants showed an inclination towards the altruistic punishment of moral violators: they were ready to lose their financial reward for the pleasure of punishing dishonest partners, just to satisfy their moral anger. So it is no wonder that we find such cases in the mass media as those described further on.

The ultimatum game is popular in neuroeconomics, and the results are similar in all experiments made by researchers in different countries. Glimcher describes the ultimatum game in the following way: "Two players in different cities, who have never met and who will never meet, sit at computer monitors. A scientist gives one of these players $10 and asks her to propose a division of that money between the two strangers. The second player can then decide whether to accept the proposed split. If she accepts, the money is divided and both players go home richer. If she rejects the offer, then the experimenter retains the $10, and the players gain nothing. What is interesting about this game is that when the proposer offers the second player $2.50 or less the second player rejects the offer. The result is that rather than going home with $2.50, the second player goes home with nothing. Why does she give up the $2.50?" (Glimcher, 2008).
Camerer states that 40-50% of the sum is considered fair, and that if the offer is 20%, the money is rejected and the game stops (Camerer et al., 2005). However, if the rules of the game are changed and the players are told that they will compete for the greatest sum of money, their behaviour changes: the offered sums become smaller, and the refusals to accept amounts smaller than 40-50% become fewer as well (Chorvat et al., 2004).
Chorvat and colleagues assert that there is a fundamental difference in the working of the brains of trustworthy and untrustworthy partners during this game. The activation of the brain structures of an untrustworthy partner resembles the activation seen when people think they are playing against a computer (Chorvat et al., 2004). If participants play against a computer, they know that a computer cannot be ascribed a guilty mind or a violation of the moral code, so they do not refuse to accept amounts smaller than 40-50% (Chorvat et al., 2004).
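To make the payoff logic of the game concrete, here is a minimal sketch in Python of a single ultimatum-game round under a simple threshold model of the responder. The function name and the threshold value are illustrative assumptions introduced here, chosen only to echo the figures cited above (offers around 20-25% of the pot or less tend to be rejected); they are not taken from any of the cited studies.

# Minimal sketch of one ultimatum-game round, assuming a simple
# threshold model of the responder: offers at or below a "fairness
# threshold" fraction of the pot (illustratively 25%) are rejected,
# which leaves both players with nothing.

def ultimatum_round(pot: float, offer: float, fairness_threshold: float = 0.25):
    """Return (proposer_payoff, responder_payoff) for one round."""
    if offer / pot > fairness_threshold:
        # Offer judged fair enough: the proposed split is accepted.
        return pot - offer, offer
    # Offer judged unfair: the responder punishes at a cost to herself.
    return 0.0, 0.0

if __name__ == "__main__":
    for offer in (5.0, 4.0, 2.5, 2.0):
        p, r = ultimatum_round(pot=10.0, offer=offer)
        print(f"offer ${offer:.2f}: proposer gets ${p:.2f}, responder gets ${r:.2f}")

The sketch only illustrates why rejection is "irrational" in the narrow economic sense: the responder gives up her own payoff in order to punish an offer she judges unfair.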

The social norms of the present socio-economic structure force people to compete in greed. Inborn moral values are distorted by an economic system based on competitiveness, greed, and incentives that encourage immoral behaviour in companies. The conflict between the socio-politico-economic system and inborn moral values provokes irrational behaviour and social clashes.
Here are some examples from everyday life:

- Germany.
Scenery: Germany.
Main character: Erika Schmidt, 62, a manager in a savings bank, where she had worked for decades, rising from counter clerk to manager.
Plot: Starting from 2003, Schmidt had been taking cash from the accounts of her rich customers to lend money to the poorer customers of her bank. She took no money for herself. She allowed overdrafts for customers who would not normally qualify for them and used the money from richer customers to temporarily disguise the loans during the bank's monthly audit of overdrafts. The woman knew most of the clients of her small rural branch and said that she could not bear to see her less fortunate customers go hungry. She met them personally to make sure that they were "needy cases" and insisted that they should pay the money back once they were back on their feet. Unfortunately, not all her clients gave the money back in time, and in 2009 Erika Schmidt was arrested.
Epilogue: Erika Schmidt lost her job, was arrested and had to return all the money. However, instead of four years in prison she was given a 22-month suspended sentence.
Reaction of society: Erika Schmidt was compared with Robin Hood, the heroic outlaw of English folklore who robbed the rich and gave to the poor. She was called "Die Robin Hood Bankerin". She was also compared with the Brecht character who believed she could do good in a bad world. The woman could have faced a four-year prison sentence, but the court decided on leniency, and the judge said, "It's difficult to find an appropriate punishment here. On the one hand, we have big losses. But on the other hand we have here this altruistic behaviour, which makes the case very different from the norm."
Conclusion: People are characterized by the empathic reaction. Schmidt's inborn human morality and empathic reaction outweighed her fear and rational calculation. In spite of the official condemnation of Robin Hood, people composed ballads about his noble behaviour. Social moral norms dictated by the powerful and supported by the economic and political system were in conflict with inborn human moral norms in old times too (News: 'Robin Hood' bank manager accused of stealing to help poor, 2009; News: German banker admits transferring money from rich to help poorer, 2009).

- The USA.
Scenery: The USA, a Florida beach.
Main character: Tomas Lopez, 21, a lifeguard.
Plot: Tomas Lopez was patrolling part of Hallandale Beach north of Miami when he was told that a swimmer was in trouble in an unguarded area of the beach. Lopez and an off-duty nurse ran to help the swimmer, who had already been pulled out of the water by beachgoers by their arrival. The swimmer was taken to hospital.
Epilogue: Tomas Lopez was sacked because he had broken the company’s rules. His boss at “Jeff Ellis and Associates” said, “We have liability issues and can’t go out of the protected area.”
Reaction of the society: A colleague, on learning why Lopez had been fired, immediately radioed his manager to say that he was leaving the company. Two other colleagues also resigned in protest.
Conclusion: When Lopez was interviewed, he said that he could not help running to a drowning man even though it was not his duty: “I think it’s ridiculous, honestly, that a sign is what separates someone from being safe and not safe” (News: Florida lifeguard fired for helping drowning man, 2012).
If a person is not constrained by immoral instructions and laws that put financial profit first, he usually makes the right moral decision automatically and very quickly.

- Great Britain.
Mark Duggan, 29, was shot dead by police in Tottenham, north London, after they stopped the minicab he was travelling in on 4 August 2011. The Independent Police Complaints Commission (IPCC) revealed that there was no evidence that Duggan had opened fire at police before being shot dead by a firearms officer. The shooting of Mark Duggan by police caused widespread public disorder, including looting, arson, and violence across London and other English cities, in which many young people participated. Within a year, a total of 1,292 offenders had been jailed for their part in the unrest (News: Mark Duggan death: Timeline of events, 2011).
Camila Batmanghelidjh, who has spent decades working with poor and disenfranchised youth, states that “the insidious flourishing of anti-establishment attitudes is paradoxically helped by the establishment”, and that the police unjustly bear the consequences of a much wider social dysfunction in Great Britain (News: Batmanghelidjh: Caring costs – but so do riots, 2011). She blames social exclusion and deprivation. Social care agencies are too under-resourced to compete with the illegal drug economy, which sustains a parallel subculture of violence. If the community is perceived not to care for an individual, and he is “repeatedly humiliated and continuously dispossessed in a society rich with possession”, young, intelligent citizens of the ghetto will seek an explanation for why their humanity is not valued enough to be helped, and the acquisition of this community’s goods through violence becomes justified in their eyes. In the end, they develop the dark side of their nature, as many of us might under permanently humiliating circumstances. Camila Batmanghelidjh draws our attention to the false morality of the British economic system. She says that “the perverse insidious violence delivered through legitimate societal structures” is less visible than riots and is not condemned. She concludes that though caring costs a lot, the price of failing to care is higher. Other journalists also identified poverty, high youth unemployment, illiteracy, drug abuse, government spending cuts, and the growing gap between rich and poor as causative factors. High youth unemployment, combined with the government’s decisions to cancel the education maintenance allowance, reduce university places, close youth centres, and treble university tuition fees, alienated and angered the young population (News: Young people have no right to riot, but they have a right to be angry, 2011). Saci Lloyd, a college teacher, says, “But ask me if I think young people have the right to be angry as all hell and I will give you an unequivocal yes. And what we saw last week was simply that: an outpouring of their blind rage against the system” (News: Young people have no right to riot, but they have a right to be angry, 2011).
Thus, moral anger brought about violent behaviour. The real cause of the riots lies in the faults of economic and social policy. Humiliation and social exclusion constitute prolonged social stress and deprive a person of the motivation to behave pro-socially (Hartling, 2007). Adolescents are particularly vulnerable to social exclusion, and when rejected they behave less pro-socially (Sebastian et al., 2010; Twenge et al., 2007). The amygdala and the right ventromedial prefrontal cortex begin to respond differently (Taylor et al., 2006).

After studying the mechanisms of genetic involvement in the development of psycho-emotional disorders caused by social stress, neuroscientists have concluded that social factors affect gene expression: under social stress the biological cell undergoes changes, the neurochemistry of the brain is altered, and behaviour changes (Kudryavtseva, Avgustinovich, 2006; Filipenko, Alekseyenko, Beilina, et al., 2001). Prolonged social stress results in depression, anxiety, pathological aggression, and other abnormal manifestations of altered gene expression (Kudryavtseva, Avgustinovich, 2006). The neuroscientist Damasio says that social factors interact with biological ones, and that sociopathy can be caused not only by an inborn anomalous neurophysiology of the brain but also by sociocultural factors (Damasio, 2006).
Neuroscientists show that the violation of the principle of fairness is unbearable for people. People try to take revenge if society pursues a policy of unfairness towards them (Glimcher, 2008; Singer et al., 2004a; Camerer et al., 2005), and the satisfaction of moral anger is rewarding for them, with the empathic reaction being blocked (Singer et al., 2006).

- Spain.
In 2011-2012, indignant people in Spain organized a series of protests demanding a radical change in the Spanish political and economic system (News: Indignados en la calle, 2011). They called for the nationalization of banks and demanded the basic rights to work, housing, education, health care, and support for culture (News: Miles de personas exigen dejar de ser ‘mercancías de políticos y banqueros’, 2011). They started the international march of the “Indignados” (“Indignant People’s March”) from Madrid to Brussels in July 2011 to say that they were fed up with the way the economic crisis was being dealt with in Europe (welfare cuts, job losses, and privatizations) while those who had caused the recession remained unaffected (News: Spanish Indignants start long protest march to Brussels, 2011).

- The USA.
In the USA, Americans also rebelled against bankers under the slogan “Occupy Wall Street” (News: OccupyWallSt.org, 2012; News: Occupy Prescott protesters call for more infrastructure investment, 2011). This mass movement arose in August-September 2011 and spread to more than 100 US cities, and similar actions took place in more than 1,500 cities around the world in 2011-2012 (News: OccupyWallSt.org, 2012). The protesters announced that they expressed the opinion of 99% of the population and that they were against unemployment, welfare cuts, the dictatorship of big corporations, and the policy of the authorities, financial institutions, and the rich, and they demanded an end to war and food for the poor (News: Occupy Prescott protesters call for more infrastructure investment, 2011).
Alexei Kudrin, the former minister of finance of Russia, has often complained that it is difficult for the Russian mentality to adjust to the values of capitalism: Russians do not like their oligarchs, do not respect the rich, dislike the market economy, and so on (News: Do not cry for Kudrin, 2011). How would he explain the behaviour of Americans, whose mentality has never been “spoilt by socialism” but who nevertheless dislike the same things?
The mass media reported that American veterans supported the “Occupy Wall Street” movement and took to the streets with the slogans “We are veterans! We are the 99%!” They demanded that the government stop the wars the USA is waging all over the world: “We did not want to believe that our presence in the Middle East was to ensure an oil supply, or to deepen the pockets of the financial elites. Many…lost their life out there, and the suggestion that their sacrifice was for profits, or oil, is unbearable” (News: Veterans Occupy Wall Street, 2011).

People in many other countries supported the Americans and organized similar demonstrations. Does this not reveal that the socio-politico-economic system of highly developed countries fails to satisfy the population?
Inborn moral values and moral assessment are an indispensable part of human mentality, in which fairness is the basis of social moral norms. Social conflicts are unavoidable if we ignore the laws of human mentality.

References
- Botvinick, M., Jha, A.P., Bylsma, L.M., Fabian, S.A., Solomon, P.E., Prkachin, K.M. (2005) “Viewing Facial Expressions of Pain Engages Cortical Areas Involved in the Direct Experience of Pain” in Neuroimage, 25: 312-19.
- Camerer, C., Loewenstein, G., Prelec, D. (2005) “Neuroeconomics: How Neuroscience Can Inform Economics” in Journal of Economic Literature, Vol. XLIII: 9-64.
- Chorvat, T., McCabe, K., Smith, V. (2004) “Law and Neuroeconomics” in George Mason University, School of Law, Law and Economics Working Paper Series, Social Science Research Network Electronic Paper Collection.
- Christian, D. (2008) “The Cortex: Regulation of Sensory and Emotional Experience” in Noah Hass-Cohen and Richard Carr (eds.) Art Therapy and Clinical Neuroscience, London and Philadelphia, Jessica Kingsley Publishers: 62-75.
- Damasio, A. (2006) Descartes’ Error, London, Vintage.
- Filipenko, M.L., Alekseyenko, O.V., Beilina, A.G., Kamynina, T.P., Kudryavtseva, N.N. (2001) “Increase of Tyrosine Hydroxylase and Dopamine Transporter mRNA Levels in Ventral Tegmental Area of Male Mice under Influence of Repeated Aggression Experience” in Molecular Brain Research, 96(1-2): 77-81.
- Gallese, V. (2003) “The Roots of Empathy: The Shared Manifold Hypothesis and the Neural Basis of Intersubjectivity” in Psychopathology, 36: 171-180.
- Gazzola, V., Aziz-Zadeh, L., Keysers, C. (2006) “Empathy and the Somatotopic Auditory Mirror System in Humans” in Current Biology, 16: 1824-1829.
- Glimcher, P.W. (2008) “The Neurobiology of Individual Decision Making, Dualism, and Legal Accountability” in C. Engel and W. Singer (eds.) Better Than Conscious? Implications for Performance and Institutional Analysis, Strüngmann Forum Report 1, Cambridge, MA, MIT Press, retrieved 01.02.2012.
- Hartling, L. (2007) “Humiliation: Real Pain, a Pathway to Violence” in Brazilian Journal of Sociology of Emotion 6(17): 466-479.
- Kudryavtseva, N.N., Avgustinovich, D.F. (2006) Molecular mechanisms of social behavior: comments to the paper of Berton et al., in the journal “Neurosciences”, 4(6): 33-35 (the title translated from Russian).
- Rizzolatti, G., Fogassi, L., Gallese, V. (2006) “Mirrors in the Mind” in Scientific American, 295 (5): 54-61.
- Sebastian, C., Viding, E., Williams, K., Blakemore, S. (2010) “Social Brain Development and the Affective Consequences of Ostracism in Adolescence” in Brain and Cognition, 72(1): 134-145.
- Singer, T., Kiebel, S., Winston, J., Dolan, R.J., Frith, C.D. (2004b) “Brain Responses to the Acquired Moral Status of Faces” in Neuron, 41(4): 653-62.
- Singer, T., Seymour, B., O’Doherty, J., Kaube, H., Dolan, R., Frith, C. (2004a) “Empathy for Pain Involves the Affective but Not Sensory Components of Pain” in Science, Vol. 303, No. 5661: 1157-1162.
- Singer, T., Seymour, B., O’Doherty, J., Stephan, K., Dolan, R., Frith, C. (2006) “Empathy Neural Responses Are Modulated by the Perceived Fairness of Others” in Nature, Vol. 439: 466-469.
- Singer, T., Steinbeis, N. (2009) “Differential Roles of Fairness – and Compassion-Based Motivations for Cooperation, Defection, and Punishment” in Values, Empathy, and Fairness Across Social Barriers, Annals of the New York Academy of Sciences, 1167: 41-50.
- Taylor, S., Eisenberger, N., Saxbe, D., Lehman, B., Lieberman, M. (2006) “Neural Responses to Emotional Stimuli Are Associated With Childhood Family Stress” in Biological Psychiatry, 60: 296-301.
- Twenge, J., Ciarocco, N., Baumeister, R., DeWall, C.N., Bartels, J.M. (2007) “Social Exclusion Decreases Prosocial Behavior” in Journal of Personality and Social Psychology, Vol. 92, No. 1: 56-66.
- Wicker, B., Keysers, C., Plailly, J., Royet, J., Gallese, V., Rizzolatti, G. (2003) “Both of Us Disgusted in My Insula: The Common Neural Basis of Seeing and Feeling Disgust” in Neuron, Vol. 40: 655-664.
- Zak, P.J. (2008) “Values and Value” in Paul J. Zak (ed.) Moral Markets. The Critical Role of Values in the Economy, Princeton University Press, Princeton and Oxford: 259-279.

(in Nina Slanevskaya, "Brain, Mind and Social Factors", St. Petersburg, Centre for Interdisciplinary Neuroscience, 2014)

 

 
