Rational choice theory provides a way of thinking about collective action problems. There is great potential for choices made by individuals to have an adverse societal effect when there is an absence of trust, obligation, or other incentives to cooperate. People may have collective aims that require cooperation, but individual incentives to defect. While the action of one individual makes little difference, the sum total of individual actions may be catastrophic.
Simple ‘games’ provide a way to think about these issues logically, by limiting analysis to very specific situations under rather unrealistic conditions, before we consider possible solutions under more realistic conditions. For example, in simple games we assume that individuals pursue the best means to fulfil their preferences: they are able to act ‘optimally’ by processing all relevant information to rank-order their preferences consistently.
Go with it just now, and then we can consider what to do next.
The ‘prisoner’s dilemma’
Two people are caught red-handed and arrested for a minor crime, placed in separate rooms and invited to confess to a major crime (they both did it and the police know it but can’t prove it). The payoffs are structured so that each prisoner receives a shorter sentence by confessing, whatever the other does: a lone confessor goes free while the silent partner serves the longest term, and mutual confession brings a longer sentence for both than mutual silence.
Also assume that they take no benefit from the shorter sentence of the other person (a non-cooperative game).
It demonstrates a collective action problem: although the best outcome for the group requires that neither confesses (both would go to jail for a total of 2 years), the actual outcome is that both confess (16 years). The latter represents the ‘Nash equilibrium’, since neither would be better off by changing their strategy unilaterally. Think of it from an individual’s perspective: whatever the other prisoner does, you receive a shorter sentence by confessing, so confessing is the dominant strategy for both.
The effect of Paul and Linda acting as individuals is that they are worse off collectively. Both ‘defect’ (confess) when they should ‘cooperate’ (stay silent).
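Each prisoner’s calculation can be sketched in a few lines of Python. The one-sided sentences (a lone confessor goes free, the silent partner gets 10 years) are illustrative assumptions; the mutual outcomes match the totals above (2 years in total if both stay silent, 16 if both confess).

```python
# A minimal sketch of the prisoner's dilemma. The one-sided sentences
# (0 years for a lone confessor, 10 for the silent partner) are
# illustrative assumptions; the mutual outcomes match the text.
SENTENCES = {
    ("silent", "silent"): (1, 1),
    ("silent", "confess"): (10, 0),
    ("confess", "silent"): (0, 10),
    ("confess", "confess"): (8, 8),
}

def best_reply(other_choice):
    """The choice that minimises my own sentence, given the other's choice."""
    return min(["silent", "confess"],
               key=lambda mine: SENTENCES[(mine, other_choice)][0])

# Whatever the other prisoner does, confessing is the better reply,
# so (confess, confess) is the Nash equilibrium.
print(best_reply("silent"))   # confess
print(best_reply("confess"))  # confess
```

The group outcome of mutual confession (16 years served) is worse than mutual silence (2 years), yet neither player can improve things alone.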
The ‘logic of collective action’
Olson argues that, as the membership of an interest group rises, so does:
(a) the belief among individuals that their contribution to the group would make little difference and
(b) their ability to ‘free ride’.
I may applaud the actions of a group, but can – and will try to – enjoy the outcomes without leaving my sofa, paying them, or worrying that they will fail without me or punish me for not getting involved.
The ‘tragedy of the commons’
The scenario is that a group of farmers share a piece of land that can only support so many cattle before deteriorating and becoming useless. Although each farmer recognizes the collective benefit to an overall maximum number of cattle, each calculates that the marginal benefit she takes from one extra cow for herself exceeds the extra cost of overgrazing to the group. Individuals place more value on the resources they extract for themselves now than the additional rewards they could all extract in the future.
The tragedy is that if all farmers act on the same calculation then they will destroy the common resource. The group is too large for members to monitor each other’s behaviour, individuals place more value on current over future consumption, and there is low mutual trust, with minimal motive and opportunity to produce and enforce binding agreements.
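The farmers’ calculation can be sketched as a toy simulation. Every number here (ten farmers, land that supports 100 cows, five cows each to start) is an illustrative assumption:

```python
# A toy model of the commons. All numbers are illustrative assumptions:
# ten farmers, land supporting 100 cows, five cows per farmer to start.
FARMERS = 10
CAPACITY = 100  # cows the land can support before it deteriorates
herd = {farmer: 5 for farmer in range(FARMERS)}

season = 0
while sum(herd.values()) <= CAPACITY:
    season += 1
    for farmer in herd:
        # The gain from one extra cow accrues to this farmer alone, while
        # the grazing cost is spread across the group, so each adds a cow.
        herd[farmer] += 1

total = sum(herd.values())
print(f"Commons exhausted after season {season}: "
      f"{total} cows on land that supports {CAPACITY}.")
```

Because no farmer bears the full cost of her own extra cow, the loop only stops once the land is already overstocked.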
This ‘tragedy’ sums up current anxieties about one of the defining problems of our time: global ‘common pool resources’ are scarce and the world’s population and consumption levels are rising; there is no magic solution; and collective action is necessary but not guaranteed. We may value sustainable water, air, energy, forests, crops, and fishing stocks, but find it difficult to imagine how our small contribution to consumption will make much difference. As a group we fear climate change and seek to change our ways but, as individuals, contribute to the problem.
Overall, these scenarios suggest that individuals have weak incentives to cooperate even if it is in their interests and they agree to do so. This problem famously prompted Hardin to recommend ‘mutual coercion, mutually agreed upon’ to ensure collective action.
What happens when there are many connected games?
In real life, it is almost impossible to find such self-contained and one-off games. In many repeated – or connected – games, the players know that there are wider or longer-term consequences to defection.
Evolutionary game theory explores how behaviour changes over multiple games to reflect factors such as (a) feedback and learning from trial and error, and (b) norms and norm enforcement.
For example, player 2 may pursue a ‘tit-for-tat’ strategy. She cooperates at first, then mimics the other player’s previous choice: defecting, to punish the other player’s defection, or cooperating if the other player cooperated. Knowledge of this strategy could provide player 1 with the incentive to cooperate. Further, norms develop when players enforce and expect sanctions for non-cooperation, foster socialisation to discourage norm violation, and some norms become laws.
In other words, this focus on the rules of repeated games gives us more hope than the tragedy of the commons. Indeed, it underpins Ostrom’s famous analysis of the conditions under which people can govern the commons more effectively.
This post is one of four updates to the post Policy Concepts in 1000 Words: Rational Choice and the IAD
See also this tweet – and many others paying homage to it – to explain the title of the post.

See also this tweet thread on Hardin.