
Social dilemmas, bargaining and other-regarding preferences1

Antonio Cabrales (University College London)
Daniel Hojman (Universidad de Chile)






First Draft: April 2014
Preliminary and Incomplete

DO NOT CITE OR DISTRIBUTE WITHOUT PERMISSION


1 These notes were originally produced in preparation for the INET CORE curriculum.


The Stern Review, commissioned by the British Chancellor of the Exchequer from a
distinguished group of economists led by Lord Stern to assess the evidence on and build
understanding of the economics of climate change, begins its executive summary rather
bluntly: "The scientific evidence is now overwhelming: climate change presents very
serious global risks, and it demands an urgent global response."

The costs of inaction in the face of this challenge are immense. With 5-6°C of warming,
which is a real possibility for the next century, existing models that include the risk of
abrupt and large-scale climate change estimate an average 5-10% loss in global GDP, with
poor countries suffering costs in excess of 10% of GDP. But something can, and probably
should, be done: "The benefits of strong, early action on climate change outweigh the
costs." More recent reports, such as the 2013 Fifth Assessment Report (AR5) by the
Intergovernmental Panel on Climate Change (IPCC), are in broad agreement with these
numbers, as the following figure of a variety of estimates reported in the AR5 shows.

Figure 1: Global warming: Temperature and the impact on welfare (IPCC, 2013)

The Stern Review was published in 2006, when the problem was already very well
understood. Given this evidence, and the fact that CO2 is well documented scientifically to
be the main driver of climate change, you will perhaps find Figure 2 surprising.
The chart plots annual emissions of CO2-equivalents by the 10 largest emitters (the
European Union is lumped together as a single area because of its integrated carbon
trading scheme), sorted by their 2010 contributions and based on the EU's EDGAR
database. The legend also records each emitter's treaty status: China (party, no binding
targets); United States (non-party); European Union (party, binding targets); India (party,
no binding targets); Russia (party, binding targets 2008-2012); Indonesia (party, no
binding targets); Brazil (party, no binding targets); Japan (party, no binding targets);
Congo (DR) (party, no binding targets); Canada (former party, binding targets 2008-2012);
and other countries.

Figure 2: Global warming: CO2 emissions for different countries, 1990-2010 (EU EDGAR database)

Indeed, CO2 emissions have grown sharply over the last two decades, especially since the
year 2000. You would be forgiven for thinking that the world is consciously committing
suicide, and that someone must be guilty and put in jail. But not so fast, please. We will
deal specifically (and at length) with the problem of climate change later in this course;
for now, what we want to emphasize is that this very hard problem is not unique in
humanity's experience (except perhaps in the size and urgency of the challenge). Homo
sapiens as a species, and others before us, have dealt with qualitatively similar problems
for a long time, sometimes successfully. Our aim in this chapter is to understand both the
difficulties involved in such problems and the ways in which we can deal with them.
On a more general level, the ecologist Garrett Hardin published in the journal Science a
famous article, rather dramatically called "The Tragedy of the Commons," in which he
argued that commonly owned resources, such as the Earth's atmosphere or the stocks of
particularly valuable species of fish, can easily be overexploited. The consumers and
fishermen of the world as a whole would be better off not consuming as much tuna, or
emitting less CO2, but refraining unilaterally from consuming or emitting may do little to
solve the problem. This is what we will call a social dilemma: what is good for the
collective need not be good for the individual from a purely material and egoistic point of
view.

The following figure shows the results of laboratory experiments (Herrmann, Thöni, and
Gächter 2008). Participants are asked to contribute to a common pool that benefits all
participants but is costly for the contributor. Each line represents the evolution over time
of average contributions in a different locale. The difficulty (or tragedy) is plainly in sight:
contributions to the common good decrease over time universally. But the figure also
shows that there is significant variation across societies, and that most of them preserve
significant contribution levels even towards the end of the experiment.

Figure 3: Worldwide public good experiments: contributions over 10 periods (Herrmann,
Thöni, and Gächter [2008])
The good news about this difficult problem is that humans have invented or inherited
various ways to deal with social dilemmas and other mutual interaction problems, and we
will review some of them in this chapter. To begin with, there are reasons to believe that
some of the ability to solve the problem is innate, internalized in preferences. For example,
one particularly difficult social dilemma is illustrated by Aesop's fable "Belling the Cat," in
which a community needs to sacrifice, or at least seriously endanger, one of its members to
defend itself. This situation, sometimes called the "volunteer's dilemma," is in fact
common in many populations of social animals, which use alarm calls to warn others of
predators. The alarm is useful to the community, but dangerous to the caller, as it makes it
more conspicuous to the predator. Human history is also full of examples of quite
extraordinary, as well as common and recurrent, altruistic behavior towards strangers.

But altruism is not the only way to solve the problem. Humans also create institutions to
deal with it. Irrigation communities in Valencia, Spain, have used an informal arbitration
court (the Tribunal de las Aguas) since the Middle Ages to solve conflicts between farmers
about resource overuse; its only power comes from social respect (its decisions are not
legally enforceable), yet its decisions are almost universally followed. Pastures and forests
in the Italian Alps have been successfully managed since the 13th century by community
contractual systems. The recovery of whale stocks in recent times derives from
international agreements. Human societies have achieved notable levels of efficiency in
managing the tragedy of the commons throughout history through clever and diverse
institutional arrangements. Even modern global environmental problems have sometimes
been tackled effectively. The Montreal Protocol to phase out and eventually ban the
chlorofluorocarbons threatening to destroy the ozone layer (which protects us against
harmful UV radiation) has been remarkably successful.

Let us now lay out a roadmap for the unit. In section 1 we discuss in a more analytical
manner some examples of social dilemmas like the ones described in this introduction: to
fix ideas, a dilemma of farmers who may invest in activities that benefit a common
irrigation facility, and the well-known Prisoner's Dilemma. We first study how a world
populated by rational egoists would behave in those situations, so that the dilemma
becomes more apparent. In section 2 we investigate how the dilemma's bad consequences
can be ameliorated when individuals have social (or other-regarding) preferences, i.e.
preferences that take into consideration the welfare of others. We have mentioned in this
introduction that a common way to solve the dilemmas is through institutions and
contracts. Both to arrive at a contract and within institutions, agreements are reached
through negotiations, so in section 3 we analyze a very simplified negotiation situation,
first when the participants are rational egoists and then when they have social preferences.
In section 4 we provide an account of how the pro-sociality that humans exhibit
(including its heterogeneity) may have arisen as the result of our history and evolution. We
finish the unit with section 5, which shows that the institutions and pro-sociality which
humans use to solve their collective action problems interact in both positive and negative
ways in different situations.
1. Cooperation in Social Dilemmas

In South East Asia, many farmers rely on a shared irrigation facility to produce. This
common-pool resource requires constant maintenance and investment. Each farmer faces
the decision of how much to contribute to these activities, which benefit the entire
community of farmers. To fix ideas, consider five farmers who are deciding whether or not
to contribute to a common irrigation project, a public good in this community. Suppose
that the individual cost of maintaining the project is $10, and that each individual
contribution raises agricultural productivity by $8 for every farmer. That is, if Ana incurs
the cost of maintaining the project, all five farmers receive a benefit of $8 each. We
summarize Ana's earnings in each situation in the following table.

Table 1. Material payoffs matrix in the Public Good Game

The others choose

                  CCCC  CCCN  CCNN  CNNN  NNNN
Ana chooses C      30    22    14     6    -2
Ana chooses N      32    24    16     8     0

while the earnings for the whole group are

Decisions        CCCCC  CCCCN  CCCNN  CCNNN  CNNNN  NNNNN
Group earnings     150    120     90     60     30      0

The table summarizes the farmers' interaction in the public good game (PGG). We use C
for the decision to contribute, the cooperative or pro-social action, and N for no
cooperation. Each available choice for Ana corresponds to a row, while each combination
of choices by the rest of the group is a column. Each box in the matrix contains the
monetary payoff obtained by Ana for the given profile of actions.

Observe that the outcome in which all farmers cooperate attains the highest aggregate
payoff. Do we expect them to cooperate?

As we discuss below, the answer to this question is subtle. It depends on the efficacy of
institutions that foster cooperation and on the motivations and expectations that drive each
individual.

The Rational Egoist

As a benchmark, consider the assumption that individuals are rational egoists. This means
that they care exclusively about maximizing their monetary payoffs and have no other-
regarding motives. What should the rational egoist incarnation of Ana choose? Let's see.
Suppose that Ana thinks the others will choose CCCC. In this case, from Table 1, if Ana
chooses C her monetary payoff is 30, while choosing N gives her a payoff of 32. Thus,
egoist Ana would prefer not to cooperate. Suppose instead that Ana believes the others
will play CNNN. In this case, if Ana plays C she gets a payoff of 6, while choosing N gives
her 8. Once again, not cooperating yields a higher monetary payoff. In fact, you can check
that for every profile of actions of the other farmers, Ana obtains a higher monetary payoff
by not contributing.

It follows that regardless of what the other community members choose, or of Ana's beliefs
about their behavior, it is always better for Ana not to cooperate if she is only interested in
maximizing her material wellbeing. We say that action N is dominant for Ana.

Since the game is completely symmetric, the same argument applies to every other farmer:
N, no cooperation, is a dominant action for each of them.

Dominant strategy: A strategy that yields a player a higher payoff than any of her other
strategies, regardless of the other players' behavior.

If a player has a dominant action, it seems reasonable to assume she will use it. Given our
previous analysis, our predicted outcome for this situation is that none of the (rational
egoist) farmers will contribute. The profile of actions in which all players play N is an
equilibrium in dominant strategies.

Does this prediction make sense, given that they would all be much better off if they
decided to cooperate? It does. Intuitively, the private benefit to a farmer of her own
contribution to the public good is $8, which is less than the private cost of $10. Thus, no
farmer has an individual incentive to cooperate. On the other hand, the social benefit of
her investment is $40 (8×5), which is larger than the cost. As a result, private incentives
are at odds with the common good. Free-riding is a dominant strategy even though
everyone is better off when all are pro-social.
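The dominance argument can be checked mechanically. The following is a minimal Python sketch of the five-farmer game, built only from the parameters stated in the text (a $10 cost and an $8 per-farmer benefit per contribution); the function name and layout are ours, not part of the original example.

```python
# Five-farmer public good game: each contribution costs the contributor $10
# and pays $8 to every one of the five farmers.
COST, BENEFIT, N_OTHERS = 10, 8, 4

def payoff(my_action, others_contributing):
    """Ana's material payoff given her action ('C' or 'N') and how many
    of the other four farmers contribute."""
    total_contributors = others_contributing + (1 if my_action == "C" else 0)
    return BENEFIT * total_contributors - (COST if my_action == "C" else 0)

# N is dominant: whatever the others do, N beats C by the $2 net private loss.
for k in range(N_OTHERS + 1):
    assert payoff("N", k) - payoff("C", k) == COST - BENEFIT  # = 2

print([payoff("C", k) for k in range(4, -1, -1)])  # row C: [30, 22, 14, 6, -2]
print([payoff("N", k) for k in range(4, -1, -1)])  # row N: [32, 24, 16, 8, 0]
```

The loop makes the dominance argument explicit: the $2 gap between N and C is constant, because it is just the private cost minus the private share of the benefit.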

Prisoner's Dilemma. An even simpler canonical representation of this type of social
dilemma is the Prisoner's Dilemma (PD). The story goes like this. Two suspects of a major
crime, Thelma and Louise, are held in separate rooms. There is enough evidence to convict
each of them of a minor offense, but not enough to convict either of them of the major
crime unless one of them testifies against the other (confesses). If they both stay silent, each
will be convicted of the minor offense and spend one year in prison. If one and only one of
them confesses, she will be freed and used as a witness against the other, who will spend
four years in prison. If they both confess, each will spend three years in prison. Each
prisoner has to decide what to do without consulting the other. What will each decide?

We can summarize the game using a payoff matrix with the punishments associated with
each action profile:

Table 2. Prisoner's Dilemma

                       Louise
                Silence      Rat
Thelma Silence   -1, -1    -4, 0
       Rat        0, -4    -3, -3

Each available choice for Thelma corresponds to a row, while each choice of Louise's is
a column. Each box in the matrix has two numbers: the first corresponds to the sentence
given to Thelma for a given profile of actions (with a negative sign, since she will not
particularly enjoy her time in prison), while the second is Louise's sentence.

As in our farmers' example, both Thelma and Louise have a dominant strategy: to Rat.
Indeed, regardless of what Louise does, Thelma is better off confessing, and no matter
what Thelma does, Louise is better off confessing.

The PD captures an important tension present in many social situations. It has two defining
features. First, each player has a dominant strategy. Second, there is an outcome of the
game that would make both players better off (both staying silent). However, this outcome
is not self-sustaining: if it were proposed as an agreement, each party would have an
incentive to unilaterally deviate and confess instead. Each individual acting in her own
interest leads to an outcome that makes everyone worse off. If the individuals were able to
sign a "binding contract" to cooperate, both would be better off.
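The same mechanical check works for the PD. Here is a short sketch with the payoff dictionary transcribed from Table 2 (the variable names are ours):

```python
# Prisoner's Dilemma of Table 2; payoffs are minus the years in prison.
# First payoff: Thelma (row player); second payoff: Louise (column player).
payoffs = {
    ("Silence", "Silence"): (-1, -1),
    ("Silence", "Rat"):     (-4,  0),
    ("Rat",     "Silence"): ( 0, -4),
    ("Rat",     "Rat"):     (-3, -3),
}

# "Rat" is dominant for Thelma: strictly better against either choice of Louise.
for louise in ("Silence", "Rat"):
    assert payoffs[("Rat", louise)][0] > payoffs[("Silence", louise)][0]
# By symmetry, the same holds for Louise.
for thelma in ("Silence", "Rat"):
    assert payoffs[(thelma, "Rat")][1] > payoffs[(thelma, "Silence")][1]

print("Rat is dominant for both; equilibrium outcome:", payoffs[("Rat", "Rat")])
```

Note that the equilibrium outcome (-3, -3) is worse for both players than mutual silence (-1, -1), which is exactly the tension described above.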

Of course, the PD is not interesting in itself as a story about prisoners; what matters is that
many important social dilemmas share its features. Examples include global warming, tax
collection, teamwork, partnerships, voluntary contributions to a public good as in our
example, contract compliance, and driving in traffic, among others.

In principle, if we believe these predictions, cooperation in a social dilemma relies on the
power of external and internal mechanisms that enforce cooperation. External mechanisms
could be formal institutions, such as contracts and laws, that rely on state coercion.
[footnote: External mechanisms could also involve social sanctions by other members of a
community.] Indeed, an important function of the law and order system is to enforce social
cooperation. It might be privately convenient to evade taxes and free ride on the
contributions of others who pay the taxes that finance public goods; if everyone did this,
the system would unravel. Most advanced democracies spend significant resources on
agencies devoted to collecting taxes and imposing penalties on those who don't pay their
share.

However, many countries in the world do not have well-functioning institutions. Still, even
in countries with relatively weak institutions we observe people who do not cheat on their
taxes and who contribute voluntarily to public goods (think of Wikipedia). Moreover,
empirical research (both field and experimental work) has consistently shown significant
levels of cooperation, and large variations in pro-social behavior, in social dilemmas like
the ones just mentioned. The observed levels of cooperation and community organization
in the absence of governmental controls are hard to reconcile with the rational egoist model
and are consistent with individuals having other-regarding motives. In what follows we
explore some of these internal mechanisms.

2. Cooperation and other-regarding preferences

In this section we explore how different types of other-regarding or social motives can help
sustain cooperation in social dilemmas.

2.1 Norm compliance

Perhaps the simplest form of other-regarding preferences is associated with norm
compliance. Basically, your mum, or your dad, or some other authority figure, told you
when you were very young that some types of behavior were unacceptable, and it is just
hard for you to do otherwise. In principle, different cultures, social groups, or communities
(or even the same community at different points in time) may have different pro-social
attitudes. In some societies being cooperative is the norm, while in others being selfish is
the prevalent behavior. Individuals may feel cognitive discomfort, face social disapproval,
or even suffer social sanctions (e.g. ostracism, gossip) if they fail to follow a norm. Norm
compliance is a strong motivator for most individuals, as it is associated with a sense of
belonging.

Going back to our social dilemma: in a society with strong cooperative norms, even though
monetary motives may push someone to behave selfishly, complying with a norm of
cooperation may counterbalance this drive.

A simple model of norm compliance is to assume that preferences have two terms: on the
one hand, the individual cares about monetary payoffs; on the other, an action that deviates
from the norm carries a cost of non-compliance. That is,

Utility = material payoff - cost of non-compliance.

To make things interesting, suppose there are only two farmers, Ana and Beatriz ("Bea"),
that each contribution now yields a benefit of $8 to both farmers at a cost of $10 to the
contributor, and that the norm is to cooperate. Ana's total payoff when she cooperates and
Bea chooses any action a is then UA(cooperate, a) = m(cooperate, a).

In this case her overall payoff coincides with the material payoff: since Ana conforms to
the norm, there is no cost of non-compliance. In contrast, if Ana does not cooperate she
bears a penalty of K. The payoff matrix when the norm is to cooperate is then
Table 3. Payoff matrix with a social norm

                 Bea
            C          N
Ana   C    6, 6     -2, 8-K
      N   8-K, -2   -K, -K

We see that cooperation is an equilibrium outcome if the payoff when both cooperate,
6, exceeds the payoff of a unilateral deviation in which the deviator chooses not to
cooperate in violation of the norm. This latter payoff is 8 - K. Hence, as long as 6 > 8 - K,
or K > 2, cooperation arises as an equilibrium. Furthermore, for these values of K,
cooperation is a dominant strategy. Intuitively, if the cost of contradicting the cooperation
norm is large enough relative to the material advantage of not cooperating, cooperation
will prevail.
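The condition K > 2 can be verified numerically. A minimal sketch, using the material payoffs of Table 3 (the utility function and its name are ours):

```python
# Two-farmer game with a cooperation norm: playing N costs the deviator K
# in psychological terms, on top of the material payoffs.

def utility(my_action, other_action, K):
    """Total payoff = material payoff minus the norm-violation cost K (if any)."""
    material = {("C", "C"): 6, ("C", "N"): -2, ("N", "C"): 8, ("N", "N"): 0}
    return material[(my_action, other_action)] - (K if my_action == "N" else 0)

# For K > 2, C is better than N against either action of the opponent,
# so cooperation is a dominant strategy.
for K in (2.5, 5, 10):
    assert utility("C", "C", K) > utility("N", "C", K)   # 6 > 8 - K
    assert utility("C", "N", K) > utility("N", "N", K)   # -2 > -K

# At K = 2 the deviation exactly breaks even: 6 == 8 - 2.
assert utility("C", "C", 2) == utility("N", "C", 2)
```

The two assertions inside the loop correspond to the two columns of Table 3: deviating against a cooperator, and deviating against a defector.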

2.2 Unconditional Altruism

Some people actually like and genuinely care about others. They are plain altruists, in the
sense that they are directly motivated by other people's wellbeing. Readers who think this
is too rosy a perspective on human beings may perhaps feel relieved to learn that plain
altruists are observationally equivalent to plain selfish individuals who gain utility by
showing off, or by improving their self-image, when they are nice to someone else.

Unconditional altruists have a utility that increases with their own material payoff but also
with others' material wellbeing. For example, suppose that Ana's utility is given by UA =
mA + l·mB, where mA is Ana's material payoff, mB is Bea's material payoff, and the
constant l is a measure of altruism. If l = 0 then Ana is a rational egoist; instead, if l = 1
Ana gives the same weight to her own wellbeing as she gives to Bea's. Similarly, suppose
Bea's utility is given by UB = mB + l·mA. In our social dilemma, aggregate material
payoffs are maximized by the cooperative outcome. Hence, we expect that for l large
enough, cooperation will emerge.

The table below summarizes the payoffs for each action profile with these preferences.
Once again, we compare the payoff of the cooperative outcome with the one in which one
of the parties deviates unilaterally. The cooperative outcome yields a higher payoff than
the deviation if 6 + 6l > 8 - 2l, that is, when l > 1/4. As expected, if the weight on the
other farmer's material wellbeing is high enough, we might expect cooperation to prevail.

Table 4. Payoff matrix with unconditional altruism

                   Bea
              C                N
Ana   C   6+6l, 6+6l     -2+8l, 8-2l
      N   8-2l, -2+8l        0, 0
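The threshold l > 1/4 can also be checked in a few lines. A sketch assuming the material payoffs of our two-farmer dilemma (6, -2, 8, 0) and the altruistic utility U = m_own + l·m_other; the helper names are ours:

```python
# Altruistic preferences in the two-farmer public good game.
from fractions import Fraction  # exact arithmetic, so the threshold is exact

def utility(my_material, other_material, l):
    """Altruistic utility: own material payoff plus l times the other's."""
    return my_material + l * other_material

def cooperate_is_stable(l):
    """Cooperation is stable if 6 + 6l (both C) beats 8 - 2l (deviating to N)."""
    both_c = utility(6, 6, l)        # mutual cooperation
    deviate = utility(8, -2, l)      # defect while the other cooperates
    return both_c > deviate

assert not cooperate_is_stable(Fraction(1, 5))   # l = 0.2 < 1/4: deviation pays
assert cooperate_is_stable(Fraction(1, 3))       # l = 1/3 > 1/4: cooperation stable

# The threshold solves 6 + 6l = 8 - 2l, i.e. l = 1/4: exact indifference.
assert utility(6, 6, Fraction(1, 4)) == utility(8, -2, Fraction(1, 4))
```

Using Fraction rather than floats makes the knife-edge case l = 1/4 an exact equality instead of a rounding question.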

2.3 Reciprocity

Reciprocation is sometimes referred to as conditional altruism. The basic idea is that
individuals are nice to someone else (they behave pro-socially) if the others have been nice
to them, or if they expect them to be nice. Thus, pro-social behavior by one individual is
conditional on the behavior of the other individuals with whom she interacts. Observe that
for reciprocation to unfold, we need multiple interactions over time. For example, going
back to our farmers' public good game, it seems natural that the villagers will interact
repeatedly and that future cooperation will be determined by behavior in previous
interactions.

To understand how this might work, we consider the simplest possible model. Suppose that
Ana and Beatriz play two repetitions of the two-farmer public good game, with the material
payoffs used in section 2.1: 6 each if both cooperate, -2 for a lone contributor, 8 for a lone
defector, and 0 if neither contributes. The overall payoff is the sum of the first- and
second-period payoffs. From the perspective of the first period, the second-period payoff is
discounted by a time-discount factor d, which reflects some amount of impatience or
simply the chance that the farmers do not interact again in the next period (they could die,
move away, or a flood could drastically change the nature of the interaction).

We consider the following reciprocity rule: the actions of the second period are
conditional on the first-period choices. In particular, a farmer cooperates (chooses C) in the
second period only if both cooperated in the first period. Otherwise, the farmer does not
cooperate (chooses N).

This rule can easily be enforced if people have norm-compliance preferences as in section
2.1, where the norm is such that a C action in period 2 creates strong discomfort (K is
really high) if the opponent chose N in period 1, and, vice versa, choosing N after mutual
cooperation in period 1 inflicts K on you.

Given these period-2 preferences, the farmers only need to think about their first-period
actions. We have three possible cases:

If both players start by cooperating, each of them obtains 6 in the first period and
reciprocation leads both to cooperate in the second period. From the perspective of period
1, the overall payoff for each farmer is 6 + 6d.

Suppose instead that neither farmer cooperates in the first period. In this case,
reciprocation calls for both farmers not to cooperate in period 2 and their overall payoff is
zero: they get zero in period 1 due to the lack of investment, and since they do not invest in
period 2 either, the same goes for the second period.

Finally, suppose that only one of the farmers cooperates in period 1. To fix ideas, suppose
that Ana starts by cooperating and Beatriz does not. This gives Ana a period-1 payoff of -2
while Bea receives 8. Given the reciprocity rule, no one cooperates in period 2, and both
receive 0 there. Thus, from the perspective of period 1, Ana gets -2 and Beatriz 8. The case
in which Beatriz starts by cooperating and Ana does not is analogous.

The table below summarizes the overall two-period payoffs, from the perspective of period
1, for each profile of period-1 choices.

Table 5. Payoff matrix with reciprocation

                Beatriz
            C             N
Ana   C  6+6d, 6+6d    -2, 8
      N    8, -2        0, 0

What would we predict in this situation? Can cooperation emerge as a stable outcome? It is
easy to verify that if d > 1/3 neither player has a dominant strategy: since 6 + 6d > 8,
cooperating is the best response when the other player cooperates, while N remains the
best response when the other plays N.

We will define a stable outcome as a profile of choices (one for each farmer) such that no
individual would unilaterally want to choose something different. In other words, if the
individuals agreed to behave according to a stable outcome, no one would have an
incentive to renege on this agreement after the fact.

Equilibrium: An equilibrium is a profile of choices (sA, sB), one for each farmer, such that
a) Keeping the behavior of Beatriz, sB, fixed, Ana is doing the best that she can.
b) Keeping the behavior of Ana, sA, fixed, Beatriz is doing the best that she can.

To check whether the outcome in which both Ana and Beatriz start by cooperating in
period 1 is an equilibrium, observe that this agreement gives each farmer a payoff of
6 + 6d. Let's see if Ana has an incentive to unilaterally change her decision. If she decides
not to cooperate while Beatriz cooperates, her total payoff is 8. Thus, Ana has no incentive
to unilaterally deviate from the cooperative outcome if 6 + 6d > 8, or d > 1/3. The same
argument holds for Beatriz. Hence, if d > 1/3, the cooperative outcome is an equilibrium.

Intuitively, the temptation to break the agreement is driven by achieving a higher payoff in
period 1 (8 rather than 6). However, by not cooperating in the first period, the deviator
triggers a non-cooperative response in period 2. This leads to a payoff of 0 in period 2 and
forgoes the second-period benefit of cooperation (that is, 6d: 6 times the time-discount
factor). If the farmer is patient enough, the future gains from cooperation exceed the
short-term temptation.
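The equilibrium check can be sketched in a few lines of Python, using the two-farmer material payoffs of this section as the stage game (function names are ours):

```python
# Two-period game with the reciprocity rule of Table 5.
from fractions import Fraction

# One-period (stage) payoffs: (Ana, Beatriz).
STAGE = {("C", "C"): (6, 6), ("C", "N"): (-2, 8), ("N", "C"): (8, -2), ("N", "N"): (0, 0)}

def two_period_payoffs(a1, b1, d):
    """Overall payoffs when period-2 play follows the reciprocity rule:
    both cooperate in period 2 only if both cooperated in period 1."""
    p1 = STAGE[(a1, b1)]
    p2 = STAGE[("C", "C")] if (a1, b1) == ("C", "C") else STAGE[("N", "N")]
    return (p1[0] + d * p2[0], p1[1] + d * p2[1])

d = Fraction(1, 2)  # patient enough: d > 1/3
coop = two_period_payoffs("C", "C", d)      # (9, 9)
deviate = two_period_payoffs("N", "C", d)   # (8, -2)
assert coop[0] > deviate[0]                 # Ana gains nothing by deviating

d = Fraction(1, 4)  # too impatient: d < 1/3, deviation now pays
assert two_period_payoffs("N", "C", d)[0] > two_period_payoffs("C", "C", d)[0]
```

The two cases bracket the threshold d = 1/3 from the text: above it the period-2 loss outweighs the period-1 temptation; below it the temptation wins.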

An interesting fact about reciprocation is that for d > 1/3 there are two equilibria. Indeed,
the outcome in which both agents do not cooperate (both choose N) is still an equilibrium.
The farmers' expectations prior to the game determine whether play ends up in the
cooperative equilibrium rather than the non-cooperative one. In particular, as emphasized
by the empirical literature, if expectations are grounded in trust and history, communities
with lower trust levels or a poor history of pro-social behavior may settle into the
non-cooperative equilibrium.

2.4 Cooperation and punishment opportunities

We argued that in some cases social dilemmas are solved by external mechanisms that
enforce cooperation. These mechanisms include formal laws and the threat of state
coercion, as well as social sanctions. In principle, any mechanism that makes defection
sufficiently costly can help sustain cooperation.

Consider an experimental public goods game similar to the social dilemma previously
analyzed. Subjects are given an initial endowment of $20 and play twenty rounds of a
public goods game with voluntary contributions in groups of four. In each round, the
groups are changed at random, and each subject decides how much to contribute to a
common pool. For each dollar invested, every individual in the group receives $0.80. As
before, the private investment carries a net loss of $0.20 for each dollar contributed
(0.8 - 1 = -0.2), even though all agents would be better off contributing all their money
(the social benefit, 4×0.8 = 3.2, exceeds the private cost of 1). Once again, for a rational
egoist the dominant strategy is not to contribute and to free ride. The experimental twist is
that, in some rounds, subjects can use part of their endowment to inflict a costly
punishment on someone else. The cost of punishment is high and, since subjects are rarely
matched together more than once, the incentive to teach someone else a lesson in the
expectation of future cooperation is weak. Thus, a population of rational egoists should
neither cooperate nor punish.
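The arithmetic of this linear public goods game can be made concrete in a short sketch; the parameters are those just described, while the function names are ours:

```python
# Linear public goods game: endowment $20, groups of four, and each dollar
# contributed returns $0.80 to every group member (the marginal per-capita return).
ENDOWMENT, MPCR, GROUP_SIZE = 20, 0.8, 4

def round_payoff(my_contribution, others_contributions):
    """A subject's material payoff in one round (before any punishment)."""
    pool = my_contribution + sum(others_contributions)
    return ENDOWMENT - my_contribution + MPCR * pool

# Contributing a dollar costs the contributor 20 cents net...
assert round(round_payoff(1, [0, 0, 0]) - round_payoff(0, [0, 0, 0]), 10) == -0.2
# ...but raises the group's total earnings by 4 * 0.8 - 1 = 2.2 dollars.
full = GROUP_SIZE * round_payoff(20, [20, 20, 20])  # everyone contributes everything
none = GROUP_SIZE * round_payoff(0, [0, 0, 0])      # nobody contributes
assert full > none  # 256 vs 80: full cooperation more than triples group earnings
```

This is exactly the structure of the five-farmer example, with the marginal per-capita return of 0.8 playing the role of the $8 benefit against the $10 cost.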

Figure 4 below illustrates the experimental results for 20 rounds of play (Fehr and Gächter
2000). The first 10 rounds allow for punishment; the final 10 rounds do not. In the absence
of punishment, we observe that contributions decrease sharply, although they are never
zero, as in Figure 3 in the introduction. With the punishment technology, by contrast,
contributions are high, averaging more than $12. Further, individuals consistently punish
those who do not contribute.

More generally, experiments that allow for costly punishment find that people are willing
to sacrifice material wellbeing to punish free-riders. This provides strong evidence of
negative reciprocity: people who consider that others have been unfair may retaliate even
if the costs of doing so are high.

Figure 4: Cooperation with and without punishment opportunities (Fehr and Gächter [2000]).


3. Bargaining

Humans commonly resort to negotiation to solve their economic and social problems. Even
a competitive market with many participants can be understood as the result of many
simultaneous, interacting negotiations (something you may be able to experience
experimentally in chapter 8). We will now study a very simplified negotiation to see how
the outcomes depend on preferences and expectations.

Consider the following bargaining situation, known as the Ultimatum Game (UG). There
are two individuals, a Proposer and a Responder. The Proposer is given an initial
endowment of $10 and offers a split of this amount. A split takes the form "x for me, y for
you," where x + y = 10. After observing the offer, the Responder accepts or rejects it. If the
offer is rejected, both individuals get nothing; if it is accepted, the split is realized.
Consider the following experimental questions.

Experiment 1, Part 1

Suppose that you are the Responder in this game. What is the minimum offer you are
willing to accept?

Experiment 1, Part 2

Suppose now that you are the Proposer. What split would you offer to the Responder?

A rational egoist Responder should accept any positive offer. The reason is that something,
no matter how small, is always better than nothing. Note that in a world of rational egoists
the Proposer would anticipate that the Responder will accept any offer and could thereby
offer the minimum possible amount: to fix ideas, a penny. Does this prediction match the
experimental data?

The figure below shows experimental results for a population of Chilean undergraduates.

[FOOTNOTE] The subjects were undergraduate students at the Faculty of Economics and
Business of the Universidad de Chile, during 2013-2014. The amount to split was the
equivalent of 10 dollars. The results are qualitatively similar to those obtained in
populations of students at American and European universities. [END FOOTNOTE]
Figure 5: Distribution of minimum acceptable offers in the Ultimatum Game (Responder).
The histogram plots the number of students against their minimum acceptable offer.

The main stylized fact is that most individuals do not accept offers that are too low. In this
population, the average minimum acceptable offer is 34% of the total, and three quarters of
the subjects only accept offers of a 30-70 split or better. The results also show significant
heterogeneity in the population. A minority of students is willing to take offers below 20%
of the total amount, which might be consistent with individuals who, in this experiment,
care only about their material outcome. On the other hand, the modal individual not only
demands a relatively high offer; she demands an equal split.

This evidence is hard to reconcile with individuals who care exclusively about maximizing
material outcomes. It is consistent with individuals having different kinds of social
motivations. For example, the rejection of a low offer could be a sign of negative
reciprocity: an individual who deems an offer abusive could be willing to sacrifice material
wellbeing to punish the Proposer. It is also consistent with people who care about fairness
and reject offers that are too far from an equal split. Inequity aversion is consistent with the
equal-split norms that seem to be deeply engrained in human societies: in many cultures, if
a parent divides a loaf of bread or a chocolate among their children, what you expect is an
equal split.

We can rationalize this behavior using preferences similar to the norm-compliance preferences we mentioned in the previous section, sometimes called social (or interdependent) preferences. Consider a responder who dislikes unfair outcomes (for example, because she thinks uneven outcomes violate a social norm):

Utility = Material payoff − Psychological cost of unfair outcomes.

In this example, her material payoff is the amount of money she receives. Call the proposer's share x and the responder's share y, so that x + y = 10; the responder's payoff is zero if the offer is rejected and y if it is accepted, so that

Material payoff = y.

Suppose that the norm-compliance part is associated with an equal-split norm (inequity aversion). It is only natural to assume that the cost of non-compliance increases with the distance between x and y, that is, the difference between what each party gets. Hence,

Cost of unfair outcomes = C|x − y|/2,

where C is a cost parameter (the factor 1/2 is a convenient normalization: with x + y = 10, the cost equals C|y − 5|, the distance from the equal split). The larger C is, the more the responder cares about the equal-split norm relative to her material payoff. If C = 0, she is a rational egoist who does not care at all about the norm.

If the offer is accepted, the responder's utility is UA = y − C|x − y|/2 = y − C|y − 5|, where we used that x + y = 10. If the offer is at most y = 5 (no more than the equal split), in line with the data, we have that

UA = (1 + C)y − 5C.

If instead the offer is rejected, then both x and y are zero, so the material payoff is zero, and so is the fairness-norm term, since x = y. Her utility is then UB = 0.
Hence, the responder will accept an offer if UA ≥ UB = 0, or equivalently, (1 + C)y − 5C ≥ 0. This means that the offer y she receives must be high enough to satisfy

y ≥ 5C / (1 + C).

The right-hand side is precisely the minimum offer the responder is willing to accept. If C = 0 (rational egoist), this quantity is zero. As C becomes larger (a strong norm complier), the quantity approaches $5, the equal split.

The distribution of offers shows, as before, significant variation. However, offers are quite generous: most offers are between 70-30 and 50-50, almost 40% of the subjects offered an equal split, and the average offer leaves the responder 40% of the total pie.
[Figure: bar chart titled "Experiment 1, Part 2: Distribution of Offers in the Ultimatum Game". Vertical axis: number of students; horizontal axis: offer, from 0% to 100% in 5% increments.]

Figure 6: Distribution of offers in the Ultimatum Game (Proposer).

Other-regarding preferences and Moral Sentiments

An important finding is that the generosity of the offers observed in distributive dilemmas depends on the social distance among individuals. In a UG, if offers are randomly generated by a computer, the minimum acceptable offer is much lower: in the population of Chilean students, the average minimum acceptable offer falls from 34% to 19%. At the same time, the evidence suggests that we tend to be more pro-social with friends than with strangers, and with people who belong to our social group (defined by ethnicity, religion or nationality) than with those outside the group.

This has important practical implications. For example, pro-social behavior across races or ethnic groups could be deeply influenced by the flow of interactions across groups or the degree of segregation. It may also explain why firms sometimes outsource negotiations with workers or layoffs, or why a politician may resort to a personal testimony (as opposed to an anonymous fact) to persuade an electorate about a distributive policy.

But it also illuminates the nature of other-regarding preferences. These seem to embody deeply ingrained moral sentiments. In contrast to humans and other animals, we do not expect current computers to display moral reasoning or intentions. Humans seem to have a hard time being selfish towards other humans whom they can see, and are willing to give up material resources on others' behalf.

History: Adam Smith is often regarded as one of the fathers of modern economics. He is best known for describing the advantages of the division of labor and free markets soon after the rise of the industrial revolution in his book The Wealth of Nations (a more detailed discussion of Smith's insights on free markets is in chapter 8). However, prior to this work, in his essay The Theory of Moral Sentiments, he identified other-regarding motives, including empathy and justice, as passions central to civilization. The first sentence of this essay reads: "How selfish soever man may be supposed, there are evidently some principles in his nature, which interest him in the fortune of others, and render their happiness necessary to him, though he derives nothing from it except the pleasure of seeing it." On the desire for justice and its function, Smith wrote: "All men, even the most stupid and unthinking, abhor fraud, perfidy, and injustice, and delight to see them punished. But few men have reflected upon the necessity of justice to the existence of society, how obvious soever that necessity may appear to be." [END SURPRISE ME]

Ultimatum game with multiple respondents and competitive market outcomes

We have mentioned that social preferences can explain behavior in ultimatum games that departs from what a rational egoist would do. But, as usual, things can be more complicated. Compare the offers in Figure 6 above with the ones in the following figure:
Figure 7: UG offers accepted with various numbers of responders.

The circles represent the mean offers in experiments like the one described so far in this section. The crosses represent behavior in experiments with a similar protocol but with two proposers and just one responder, who may take either proposer's offer. The triangles and squares represent situations with one proposer and three (triangles) or five (squares) responders. It is clear that competition moves the observations closer to those that would obtain if the world were populated with rational egoists. One possible explanation is that competition (and, by extension, markets) changes people's preferences (some would say it corrupts them).

An alternative explanation uses the fact that, as the other experiments clearly show, people's preferences are very diverse to begin with. Some would like the market or contractual outcome to be the "fair" price or contract, and would be prepared to withdraw their custom from an offending party who offers an unfair one. But others are not so fussy, and are prepared to take the offer that is unfair to others. This means that, in the end, the outcome will be unfair and the sacrifice of the fair-minded person would have been useless. It is one thing to be fair-minded, and quite another to look silly.

This is important because, when the time arrives, we can, and will, study competitive market interactions as if people were selfish egoists. This does not necessarily mean that they are, but since the behavior of selfish and reciprocal agents is hardly distinguishable in competitive settings, at that point we do not need to concern ourselves with the interdependence of preferences.
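The "diverse preferences plus competition" logic above can be sketched with a small simulation. The distribution of norm weights below (uniform draws of C) is invented for illustration; the point is only qualitative: when several responders compete, the proposer needs just one of them to accept, so the binding threshold is the lowest one, and it falls as responders are added.

```python
import random

random.seed(42)

PIE = 10.0

def min_acceptable(C):
    # Threshold from the inequity-aversion model above: y* = 5C / (1 + C).
    return (PIE / 2) * C / (1 + C)

def binding_threshold(n_responders):
    """With several responders, an offer passes if ANY of them accepts,
    so the lowest individual threshold is the one that binds."""
    weights = [random.uniform(0, 2) for _ in range(n_responders)]  # invented distribution
    return min(min_acceptable(C) for C in weights)

def average_binding_threshold(n_responders, trials=20000):
    return sum(binding_threshold(n_responders) for _ in range(trials)) / trials

# The expected binding threshold falls as responders are added,
# pushing accepted offers towards the rational-egoist prediction.
t1 = average_binding_threshold(1)
t3 = average_binding_threshold(3)
t5 = average_binding_threshold(5)
assert t1 > t3 > t5
```

Even though no individual's preferences change, the fair-minded responders' thresholds stop binding once a less fussy responder is present, which matches the pattern in Figure 7 without any "corruption" of preferences.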

4. Social preferences, nature and culture

Many species show altruistic behaviors, including monkeys and dogs. This suggests that
there might be a biological basis for social preferences.

[SURPRISE ME] [VIDEO]

This fairness experiment with capuchin monkeys, originally devised by Brosnan and De Waal (2003), is quite telling. https://www.youtube.com/watch?v=gOtlN4pNArk

[END SURPRISE ME]

However, other-regarding behavior in humans seems to extend much further than in other species. In non-humans, altruism is usually restricted to a very small group (the family), a behavior consistent with the preservation of those with a similar (sometimes almost identical) genetic pool. The fact that humans are capable of other-regarding behavior beyond family ties, and even with strangers, suggests that extensive social preferences could be a distinctive feature of human beings.
