Special Topic: Systemic Risk as a Perspective for Interdisciplinary Risk Research

Contributing Factors to the Emergence of Systemic Risks

by Belinda Cleeland, International Risk Governance Council, Geneva[1]

IRGC’s emerging-risks project explores the origins of emerging systemic risks, and, in ongoing work, is developing guidance for practitioners on how to improve their anticipation of and response to these risks. This article describes the IRGC’s concept of “contributing factors” to risk emergence: generic factors that can affect the likelihood that a new risk will emerge, or the severity of its consequences. We explore here the factors that are particularly pertinent to systemic risks, because they derive largely from interactions and interdependencies, and relate to the properties of complex systems. We also emphasise the importance of taking a systems perspective and of understanding traits common to complex systems.[2]

1     Introduction

The International Risk Governance Council (IRGC) defines an “emerging” risk as one that is new, or a familiar risk that becomes apparent in new or unfamiliar conditions. Of particular interest to the IRGC are emerging risks of a systemic nature, which typically span more than one country and more than one economic sector, and may have effects across natural, technological and social systems. These risks may be relatively low in frequency, but they have broad ramifications for human health, safety and security, the environment, economic well-being and the fabric of societies.

In its latest report – the outcome of phase 1 of its ongoing project on emerging risks – the IRGC explores the origins of emerging systemic risks, describing and illustrating twelve generic contributing factors that can affect the likelihood of a new risk emerging, or the severity of its consequences. Rather than simply listing and describing important emerging risks, the aim of the IRGC’s project is to examine how these risks eventuate, and to provide risk practitioners with insights that can help them better anticipate and deal with systemic risks in the early phase of their development. Being aware of the twelve contributing factors and appreciating their potential impacts can provide new perspectives on risks and inform risk management decisions.

In order to understand the role of these factors in risk emergence, consider the following metaphor of a plant emerging from fertile ground: once a seed is sown, there is a key set of factors that affect the probability that a plant will grow, including, for example, nutrient and mineral content, pH, soil structure, drainage and micro-organism content. In the same way, the contributing factors described by the IRGC can combine to create “fertile ground” from which new risks can emerge and be amplified.

The twelve contributing factors below are highly interdependent, and may be ordered or prioritised in many different ways (see Fig. 1). The numbers do not by any means indicate an order of importance – indeed, such an assessment could only be usefully made with a specific situation in mind. One possible way to conceptualise the list of factors is to view them as operating at three different “levels”: factors 1–4 are more structural in nature, and have to do with the properties of the complex systems often implicated in systemic risk emergence, or elements (e.g., geography, genetics) that interact with these properties. Factors 5–7 operate more at the level of human society, and deal with aspects that derive from human nature, behaviour and actions, with a focus on social and cultural relations and advancement. Moving from the broader societal level to the level of individual actors, factors 8–12 deal with the impact that personal or institutional decisions can have on risk emergence. Of course, these categories are not air-tight, and some factors, communication in particular, could be interpreted as influencing many of the others.

Fig. 1:   IRGC’s 12 contributing factors to risk emergence

Source:   Own compilation

While some of the twelve contributing factors may be considered to be relevant for all risks (ordinary and systemic alike), there are others that are particularly pertinent to systemic risks, because they derive largely from interactions and interdependencies, and relate to the properties of complex systems. We will discuss some of these factors below, namely: “loss of safety margins”, “positive feedback” and “varying susceptibilities to risk”.

Before turning to discussion of these factors, however, it is first important to think about perspective and context – risks do not emerge in an isolated manner, and the emergence of risks that exhibit systemic character is particularly unlikely to be a straightforward process, where cause and effect are easily identifiable. For these reasons, the IRGC stresses the significance of taking a systems perspective and recognising complexity: understanding what is meant by “complex system” and some of the traits commonly associated with complexity.

2     The Systems Perspective

This “systems perspective” refers to a school of thought that is based on the work of systems theorists, and it may be applied to any type of system, whether biological (the human heart), engineered (the electric power grid), mechanical (transport and logistics systems), ecological (a forest), economic (the stock market), social (a neighbourhood) or geopolitical (the Middle East). When considering the parts of a system, systems theorists are particularly interested in how the parts relate to each other and to their context within the larger system. While a single dominant cause may sometimes explain an emerging risk, it is more common that multiple interacting factors are at work, with interactions occurring both within the system and between systems (system-system interactions). Professionals responsible for anticipating the emergence of risks can therefore benefit from a systems perspective.

The systems perspective advocates viewing systems in a holistic manner, meaning that the system is seen as representing more than just the sum of its parts, and that the whole influences how the parts behave. Describing the system as a whole can stimulate insights about emerging systemic risks and about how they should be addressed. For example, in a recent safety scandal that damaged one of the most successful companies in the automobile industry, Toyota found that it was not sufficient to thoroughly test the individual parts that make up the automobile. As acknowledged by Toyota’s chief quality officer at a news conference: the company did not look carefully enough at “how vehicle parts perform as a whole inside the car under different environmental conditions” (Linebaugh et al. 2010).

In contrast, reductionism (which proposes that the behaviour of a system can be explained by breaking it down into its component parts) can be useful for understanding the emergence of simple risks, but it is usually unable to fully explain or anticipate risks that emerge with a systemic character.

The science of ecology provides many examples of how a systems perspective is useful in understanding the complexity of interactions between elements of a whole, as well as system-system interactions. For example, climatic cues such as water availability and temperature affect the timing of pollination and the life cycles of pollinators, as do invasive species and local and regional chemical pollution. The interaction of some or all of these elements could lead to a dangerous decline in the frequency and rate of pollination, which, through system-system interactions, could pose environmental risks (loss of plant and animal biodiversity), climate risks (loss of vegetative cover could further influence climate change), and social and economic risks (production of fruit, vegetables, meat and milk could be diminished, and many diverse industrial interests harmed, from pharmaceuticals to perfume to bioenergy) (IRGC 2009).

3     Recognising Complexity

A systems perspective is especially relevant when considering complex systems, as it is from complex systems that emerging risks (especially systemic ones) often arise.

Complex systems may be defined scientifically as systems “composed of many parts that interact with and adapt to each other” (OECD 2009). In most cases, the behaviour of such systems cannot be adequately understood by studying their component parts alone, because it arises through the interactions among those parts. When considering the factors that contribute to the emergence of risks, a discussion of the role of complexity and the traits of complex systems is a useful place to start, because complexity can encompass, or at least strongly influence, many of these factors. It can be, in many cases, part of the background conditions or context within which these factors operate.

The behaviour of complex systems may involve random variation, and is therefore often unpredictable and hard to control (Helbing 2009).[3] Complex adaptive systems (CAS) are of particular relevance: they are special cases of complex systems with the capacity to change and learn from experience. When a CAS is perturbed, it tries to adapt. If the system fails to adapt, this may undermine its resilience and sustainability, potentially resulting in collapse (or a flip to a new equilibrium). Examples include ecosystems, ant colonies, the immune system and political parties.

The following traits, common to many complex systems, are relevant to emerging risks. They have the effect of increasing the unpredictability of the system’s future behaviour and, as a result, risk anticipation becomes more difficult.

The above characteristics of complex systems demonstrate why it is difficult for risk managers to anticipate system behaviour, or to attempt any control of it. However, the IRGC believes that an understanding of these key traits can nevertheless inform and improve risk governance. Furthermore, other traits common to complex systems may act to make risk emergence less likely. Adaptability and self-organisation are examples of such traits.

A first step for risk managers is to examine the system closely, to determine whether or not it is “complex” (in the scientific sense). If this is the case, then the next step is to determine which of the common traits described above could apply, and therefore, which actions could be most effective.

This background information – this context of systems complexity – should be kept in mind as we move on to examine some of the generic factors that can contribute to the emergence of systemic risks.

4     Endogenous Factors of Systems that Can Influence Risk Emergence

Of the twelve contributing factors described by the IRGC, the three discussed below are among those most dependent on complex system dynamics. As the number of components, actors and interdependencies in society’s functional systems continues to increase, these factors can push the degree of complexity in a system across the threshold from “high, yet functional” to “dysfunctional and susceptible to emerging risks”.

4.1   Loss of Safety Margins

Increasing interconnectedness is evident in today’s globalised world. Greater (and faster) connectivity is appealing, because it can boost communication, economic production and societal innovation. The connectivity of social systems allows people to exchange experiences and knowledge on an international scale, which can act as an important attenuator of risk. However, as systems become more interdependent, faster and more complex, they may also become more tightly coupled: the links between the components in the system become very short, so that each component can have an almost immediate and major impact on one or more other components in the system (see Perrow 1999). This tight coupling is synonymous with a loss of safety margins, which leaves the system more vulnerable to surprises – even a small mechanical failure or accident can have grave consequences, perhaps even leading to a system breakdown (Homer-Dixon 2006).

A system’s safety margin can be understood to be its buffering capacity or slack. But perhaps the most useful way to grasp the concept is to compare the stress a system is exposed to with its coping capacity. Once increasing stress exceeds the coping capacity, the system has lost its safety margin, and enters a state of overload, which can precipitate a breakdown, or other kind of non-linear shift in behaviour (see Fig. 2).

Fig. 2:   Relationship between system stress and risk, holding system coping capacity constant

Source:   Own compilation
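
To make the relationship sketched in Fig. 2 concrete, the following minimal Python sketch (not taken from the IRGC report; the numbers and the warning threshold are purely hypothetical) treats the safety margin as the gap between a system’s coping capacity and the stress placed on it, and flags the overload state in which non-linear shifts in behaviour become possible.

def safety_margin(stress: float, coping_capacity: float) -> float:
    """Remaining buffer before overload (negative means the margin is gone)."""
    return coping_capacity - stress

def system_state(stress: float, coping_capacity: float) -> str:
    margin = safety_margin(stress, coping_capacity)
    if margin > 0.2 * coping_capacity:       # hypothetical 20% warning threshold
        return "comfortable safety margin"
    if margin > 0:
        return "safety margin nearly exhausted"
    return "overload: breakdown or other non-linear shift possible"

if __name__ == "__main__":
    coping_capacity = 100.0                  # hypothetical units of absorbable load
    for stress in (40.0, 85.0, 110.0):
        print(f"stress = {stress:>5}: {system_state(stress, coping_capacity)}")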

Tight coupling and the corresponding loss of safety margins are features that characterise many emerging systemic risks, whether in financial, environmental or technological systems. Policy responses to these risks must themselves operate in a context of high and increasing connectivity, which creates an environment in which interventions to mitigate one risk can inadvertently reduce safety margins elsewhere, and thereby exacerbate other risks in unforeseeable ways.

There are two key situations that can arise in coupled systems, both of which may result in the emergence of systemic risks:

First, there is an increased risk of unanticipated interactions occurring among previously separated system components (or even among previously separated whole systems) (see Vespignani 2010). Thus, if two or more failures affecting different system components occur independently, these failures may interact in an unexpected way, resulting in an unforeseeable, undesirable outcome.

Second, there is an increased risk of cascading failures, where the failure of one component in a system causes failures or other disturbances in other components. The more tightly the components in the system are connected, the faster and further a shock or failure can propagate throughout the system.

Illustrations of cascading effects abound: the failure of one major financial institution can cause others to fail; one malfunction in an electrical system can trigger massive widespread blackouts; or when the leader of a political party suffers a popularity setback, the adverse effects can extend to the entire party. An ecological example is that of the collapse of the Barents Sea capelin fishery in 1986 (Hamre 2003).
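
As a purely illustrative sketch of the cascade mechanism described above (the component names and dependency structure are hypothetical, not drawn from any of the cited cases), the following Python fragment propagates a single initial failure through a set of coupled components: the more downstream dependencies each failed component has, the further the shock travels.

from collections import deque

def cascade(dependents: dict[str, list[str]], initial_failure: str) -> set[str]:
    """Return the set of components that have failed once the cascade has run its course."""
    failed = {initial_failure}
    queue = deque([initial_failure])
    while queue:
        component = queue.popleft()
        for downstream in dependents.get(component, []):
            if downstream not in failed:     # each downstream dependency fails in turn
                failed.add(downstream)
                queue.append(downstream)
    return failed

if __name__ == "__main__":
    # Tightly coupled: every component feeds directly into the next, with no buffer.
    tight = {"A": ["B"], "B": ["C"], "C": ["D"], "D": []}
    # More loosely coupled: a buffer/firewall isolates C and D from upstream failures.
    loose = {"A": ["B"], "B": [], "C": ["D"], "D": []}
    print("tight coupling:", sorted(cascade(tight, "A")))   # failure of A reaches D
    print("loose coupling:", sorted(cascade(loose, "A")))   # failure of A stops at B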

Fortunately, risk managers have several options for minimising undesirable outcomes that can result from tight coupling and the loss of safety margins. In some systems, firewalls can be added to limit the spread of damage among the components (e.g., they are used to protect electrical systems, or to defend computer systems against malicious intrusion). Building system structures with more redundancy and resilience (where each component in the system has not only the ability to draw on other components for support, but also, crucially, a degree of self-sufficiency to fall back on in case of emergency) can limit cascading effects. However, specific incentives are often needed to encourage these measures, which may be costly to put in place, and provide no benefit, except in case of emergency (Homer-Dixon 2006). Making investments such as these can be problematic, as it involves resisting pressure from shareholders or taxpayers to reduce what is seen as unnecessary spending. Such pressures often lead organisations to reduce their safety margins to dangerously low levels.

4.2   Positive Feedback

A system exhibits positive feedback when, in response to a perturbation, the system reacts in such a way that the original perturbation is amplified. A perturbation that is initially small can therefore grow to become so large as to destabilise the whole system. In this context, the term “positive” does not refer to the desirability of the outcome, but only to the direction of change (amplification of the perturbation). Because positive feedback tends to be destabilising, it can potentially increase the likelihood or consequences of the emergence of a new systemic risk. In contrast, negative feedback is fundamentally stabilising as it counteracts the initial change. For example, many systems in the human body use negative feedback to maintain system parameters within a narrow functional range (e.g., regulation of blood pressure or body temperature).
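
A minimal numerical sketch (with hypothetical gain values, not a model of any particular system) can make the distinction concrete: the same simple iteration either blows a small perturbation up or shrinks it back towards equilibrium, depending only on the sign of the feedback gain.

def evolve(perturbation: float, gain: float, steps: int) -> list[float]:
    """Iterate x(t+1) = x(t) + gain * x(t), starting from a small perturbation."""
    trajectory = [perturbation]
    for _ in range(steps):
        trajectory.append(trajectory[-1] * (1.0 + gain))
    return trajectory

if __name__ == "__main__":
    amplified = evolve(perturbation=0.01, gain=+0.5, steps=10)   # positive feedback
    damped = evolve(perturbation=0.01, gain=-0.5, steps=10)      # negative feedback
    print(f"positive feedback: {amplified[-1]:.4f} (the 0.01 perturbation has grown roughly 58-fold)")
    print(f"negative feedback: {damped[-1]:.6f} (the perturbation has shrunk towards zero)")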

Positive feedback occurs in both natural and social systems. With regard to climate change, for example, various positive feedback dynamics within the carbon cycle are well known. The warming of the atmosphere that is occurring due to increased anthropogenic emissions of carbon dioxide and other greenhouse gases is causing (among other things) permafrost melt and tropical forest dieback. Melting permafrost releases trapped methane, which is a powerful greenhouse gas, and tropical forest dieback reduces the strength of an important carbon sink, which results in less carbon dioxide uptake from the atmosphere – both of these processes further amplify global warming, and are thus instances of positive feedback (Frame, Allen 2008).

A financial panic or a stock market collapse is a classic example of positive feedback within a social system. In this case, if some market actors become nervous and sell stocks, this behaviour makes others more fearful, and they sell, too. As fears are further amplified, panic selling ensues, resulting in plummeting prices and financial losses. Because of the high degree of connectivity in today’s financial markets (allowing for fast communication and transactions), positive feedback can cause a crisis to spread quickly, thus greatly amplifying the financial consequences (Homer-Dixon 2006).

Although, as the previous example demonstrates, the occurrence of positive feedback is related to the level of connectivity in a system – in that a more connected system offers more possibilities for feedback, both positive and negative – powerful positive feedback dynamics can nevertheless occur in relatively simple systems. For this reason, risk managers should look specifically for the presence of feedback, and not simply at connectivity. Sharp flips of system behaviour, or, more generally, disproportionality of cause and effect, are both strong indicators that positive feedback dynamics may be operating.

The presence of feedback in systems is common, and does not necessarily lead to systemic risks or even to a negative outcome. On the contrary, both positive and negative feedback can be essential for the proper functioning of systems, and it is the interplay of both kinds of feedback that gives rise to the system’s ultimate behaviour. It is therefore important for analysts to identify feedbacks (both positive and negative) occurring in a system, and assess their function and their relative balance (if either positive or negative dominates), in order to anticipate better when risks might emerge or be amplified.

4.3   Varying Susceptibilities to Risk

Risk does not affect all individuals or populations in an equal manner. Contextual factors, such as geographical location, genetic makeup (biological fitness), resource availability or prior experience all affect susceptibility, which in turn impacts the probability, scale and severity of the risk and its consequences. Neglect of varying (or differential) susceptibilities – or of changing susceptibilities over time – can therefore lead to over- or underestimation of the emergence and possible impacts of a systemic risk, as well as miscalculation of the risk’s projected future development.
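
A minimal worked example (with hypothetical population shares, susceptibilities and an assumed convex dose-response relationship) illustrates how this bias can arise: if harm rises non-linearly with susceptibility, a calculation based on the “average person” understates the population-level impact.

def harm(exposure: float, susceptibility: float) -> float:
    """Assumed convex dose-response: highly susceptible groups are harmed disproportionately."""
    return exposure * susceptibility ** 2

def expected_harm(groups: list[tuple[float, float]], exposure: float) -> float:
    """Population-weighted harm; each group is a (population share, susceptibility) pair."""
    return sum(share * harm(exposure, susceptibility) for share, susceptibility in groups)

if __name__ == "__main__":
    exposure = 1.0
    groups = [(0.8, 0.5), (0.2, 3.0)]        # a 20% minority is six times more susceptible
    average_susceptibility = sum(share * s for share, s in groups)   # equals 1.0 here

    print(f"assuming uniform (average) susceptibility: {harm(exposure, average_susceptibility):.2f}")
    print(f"accounting for differential susceptibility: {expected_harm(groups, exposure):.2f}")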

Many weather-induced risks – drought, hurricanes, ice storms – affect only limited parts of the world and a minority of the world’s population. The impacts of climate change will be felt all over the world, but they will vary locally: coastal areas will not be affected equally by rising sea levels, depending on factors such as coastal slope, the built infrastructure, the occurrence of storms and surges, and the ability of coastal ecosystems to adapt to sea level changes and storm damage.

Indeed, the same phenomenon that places susceptible people at risk of harm may benefit others. Most people view the melting of Arctic sea ice as an event with only adverse consequences, but it has already opened up a summer shipping route north of Russia that can shorten some voyages and will offer some commercial benefits.

Evolution is an ongoing process and is, for example, the natural phenomenon behind the emergence of new viruses and bacteria and the ability of bacteria to mutate and to develop resistance to antibiotics. Natural selection, a key mechanism of evolution, explains why some human populations are less susceptible to some diseases than others – for example, some populations living in areas where malaria is endemic show greater resistance to the parasite (Fortin et al. 2002). But the genetic variation that is a driving force of evolution can also create gene variants that predispose individuals to disease; specific gene variants are known, for example, to contribute to obesity, some cancers and other diseases.

When it comes to risks arising from personal behaviour, psychology also plays a central role. Due to what has been called “optimism bias”, people often see themselves as being less susceptible to risks than others, with this “risk denial” being stronger when people feel they have a degree of control over the hazard (e.g., smoking, alcohol) (Sjöberg 2000). At the personal level, therefore, perceived variability in susceptibility may not match real variability.

Where there is real variability in susceptibility at the personal level, this is frequently a result of people adapting their behaviour in response to risk as they learn from past experiences: the experienced skier or sailor is less at risk than a beginner, particularly in difficult conditions, and in Japan the knowledge of what to do in case of an earthquake is widespread in the population. But people and governments differ in their capacities for responding to risks, whether due to differing resources, traditions or other factors. History suggests that, for many risks, it is low-income households and countries that are both more susceptible and less able to respond.

Thus, as susceptibility varies between individuals, groups or locations, and as it increases or decreases over time as a result of physical changes (e.g., of climate or genetic makeup) or behavioural changes (e.g., via learning or changing norms), the consequences of an emerging systemic risk may be amplified or attenuated, and its future trajectory may be altered.

5     Conclusion

The three factors that we have outlined here can all – either alone or, more likely, in combination with the remaining nine factors described by the IRGC (IRGC 2010) – contribute to the emergence or amplification of systemic risks. For the first two factors, the focus is on complexity, as both describe mechanisms and interactions that are endogenous to complex systems: tight coupling erodes a system’s safety margins, and positive feedback can amplify small perturbations until they destabilise the system as a whole.

For the third factor, however, the focus is on the systemic dimension of the risk: susceptibility varies across the individuals, groups and locations that make up a system, and this variation shapes the probability, scale and severity of an emerging risk’s consequences and how they are distributed.

As important as the twelve contributing factors are, they are not necessarily exhaustive, and they are certainly not a substitute for detailed subject knowledge of each emerging risk. Rather, managers may find it useful to work through the implications of their subject knowledge by thinking through the factors and determining which ones are relevant to the emerging risk in question.

In formulating and describing the contributing factors, the IRGC has drawn on insights from concepts and applications in systems theory, especially on recent advances in understanding how complex systems give rise to unexpected risks. While the concept of contributing factors is useful for better understanding the “mechanisms of systemic risk production”, it does not immediately suggest many concrete solutions for risk managers who must address the challenge of how better to anticipate and respond to emerging risks. Overcoming obstacles such as uncertainty, knowledge gaps, conflicting values and interests, and cognitive biases will require not only enhanced capabilities (e.g., for surveillance and data collection, understanding human decision-making, regularly reviewing communication and decision-making processes, increasing organisational flexibility, and building robustness, redundancy and resilience), but also an organisational risk culture that can put these capabilities to proper use. This risk culture – which embodies the organisation’s risk “appetite”, reflects its goals and strategies, and informs how its risk-related decisions are made – should strive to establish a climate of openness and humility during the early phases of identifying and responding to emerging risks. Such a change in risk culture will be difficult, but it may be a necessary precondition for truly adaptive approaches to emerging systemic risks.

The importance of risk culture, together with insights into how to overcome some of the key obstacles to changing it and to building the capabilities mentioned above, is the focus of phase 2 of the IRGC’s emerging-risks project. The phase 2 Concept Note (IRGC 2011) presents eleven themes, each derived from a commonly encountered obstacle to effective emerging risk management, and describes and illustrates them in such a way as to provide clarity to risk managers and to set forth ideas for more proactive emerging risk management. The next steps in phase 2 of the project will involve the development of an emerging risk protocol aimed at providing practical guidance on how to manage risks upstream of the conventional processes (where the IRGC’s Risk Governance Framework can be applied).

Notes

[1]  This text was compiled by Belinda Cleeland, project manager at the International Risk Governance Council.

[2]  This article is based on the IRGC report “The Emergence of Risks: Contributing Factors”. The principal authors of this report are Dr. John D. Graham (Dean, Indiana University School of Public and Environmental Affairs, USA) and the participants in the IRGC’s December 2009 workshop on Emerging Risks: Dr. Harvey Fineberg (President of the Institute of Medicine, United States National Academy of Sciences), Prof. Dirk Helbing (Chair of Sociology, in particular of Modeling and Simulation, Swiss Federal Institute of Technology (ETH) Zurich, Switzerland), Prof. Thomas Homer-Dixon (CIGI Chair of Global Systems, Balsillie School of International Affairs, Director of the Waterloo Institute for Complexity and Innovation, and Professor of Political Science, University of Waterloo, Canada), Prof. Wolfgang Kröger (Director, Laboratory for Safety Analysis, Swiss Federal Institute of Technology (ETH) Zurich, Switzerland), Dr. Michel Maila (Vice President, Risk Management, International Finance Corporation, USA), Jeffrey McNeely (Senior Scientific Advisor, IUCN – The International Union for Conservation of Nature, Switzerland), Dr. Stefan Michalowski (Executive Secretary, Global Science Forum, OECD, France), Prof. Erik Millstone (Professor of Science and Technology Policy, University of Sussex, UK) and Dr. Mary Wilson (Associate Professor in the Department of Global Health and Population, Harvard School of Public Health, Harvard University, USA), with support from Martin Weymann (Vice President, Risk Management, Swiss Reinsurance Company) and the IRGC staff members Belinda Cleeland and Marie Valentine Florin.

[3]  In contrast, complicated systems may have numerous components, but these components will always interact in a predictable way, making them much more controllable.

References

Fortin, A.; Stevenson, M.M.; Gros, P., 2002: Susceptibility to Malaria as a Complex Trait: Big Pressure from a Tiny Creature. In: Human Molecular Genetics 11/20 (2002), pp. 2469–2478

Frame, D.; Allen, M.R., 2008: Climate Change and Global Risk. In: Bostrom, N.; Cirkovic, M.M. (eds.): Global Catastrophic Risks. Oxford, pp. 265–286

Hamre, J., 2003: Capelin and Herring as Key Species for the Yield of North-east Atlantic Cod. In: Scientia Marina 67 (2003), pp. 315–323

Helbing, D. (ed.), 2008: Managing Complexity: Insights, Concepts, Applications. Berlin

Helbing, D., 2009: Systemic Risks in Society and Economics. Geneva. Paper prepared for the International Risk Governance Council Workshop on Emerging Risks; http://irgc.org/IMG/pdf/Systemic_Risks_Helbing2.pdf (download 26.7.11)

Homer-Dixon, T., 2006: The Upside of Down: Catastrophe, Creativity and the Renewal of Civilisation. London

IRGC – International Risk Governance Council, 2009: Risk Governance of Pollination Services. Geneva

IRGC – International Risk Governance Council, 2010: The Emergence of Risks: Contributing Factors. Geneva

IRGC – International Risk Governance Council, 2011: Improving the Management of Emerging Risks: Risks from New Technologies, System Interactions and Unforeseen or Changing Circumstances. Geneva; http://www.irgc.org/IMG/pdf/irgc_er2conceptnote_2011.pdf (download 22.11.11)

Linebaugh, K.; Mitchell, J.; Shirouzu, N., 2010: Toyota’s Troubles Deepen. In: Wall Street Journal 03 February (2010)

OECD – Organisation for Economic Cooperation and Development, 2009: Applications of Complexity Science for Public Policy: New Tools for Finding Unanticipated Consequences and Unrealized Opportunities. Paris

Perrow, C., 1999: Normal Accidents: Living with High-Risk Technologies. Princeton

Scheffer, M.; Bascompte, J.; Brock, W.A. et al., 2009: Early-warning Signals for Critical Transitions. In: Nature 461/7260 (2009), pp. 53–59

Sjöberg, L., 2000: Factors in Risk Perception. In: Risk Analysis 20/1 (2000), pp. 1–11

Vespignani, A., 2010: The Fragility of Interdependency. In: Nature 464/7291 (2010), pp. 984–985

Contact

Belinda Cleeland
International Risk Governance Council (IRGC)
Chemin de Balexert 9, 1219 Châtelaine, Geneva, Switzerland
Phone: +41 22 7951737
Email: belinda.cleeland@irgc.org