Variability: Threat or asset?

Ben J.M. Ale, Des N.D. Hartford, David H. Slater

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review


Abstract

In the philosophy of SAFETY-I, variability is seen as a threat, because it brings with it the possibility of an unwanted outcome. Variability of hardware is curtailed by precise specifications and by controlled manufacturing and installation. Variability of human behaviour is curtailed by training and selection of personnel and by regulations, prescriptions and protocols. In the philosophy of SAFETY-II, variability is seen as an asset. In SAFETY-II, humans are seen as able to cope with the variability and imperfections of technology and the variability of circumstances in order to keep systems working. In SAFETY-II this capacity for coping has often been designated as resilience. Recently the meaning of resilience has been stretched further to include the ability to restore the operational state after an excursion into the realm of inoperability, or failure. Artificial intelligence allows systems to evolve by processing information acquired by sensing the results of their actions and the variable environment in which they operate. This makes such systems intrinsically more variable than deterministic systems and therefore less predictable. For operators of these systems it is essential that they understand and are able to deal with this variability, in order to keep systems operational and adaptive on the one hand and to prevent excursions into unwanted territory on the other. The SAFETY-II philosophy seems better suited to such an environment. At the same time it increases uncertainty about potential future states. The belief that humans will cope should an unexpected situation arise reduces the emphasis on defensive, preventive measures that can limit the probability that the system behaves in an unwanted, unsafe manner. The stretched meaning of resilience exacerbates this problem, because there is no real limit to what systems, or the society using these systems, may bounce back from. A highway bridge that collapses can be rebuilt; thus society is resilient against bridge collapses. The question, however, is whether society should accept as safe, or safe enough, a situation in which there is a significant probability that such a bridge collapses. The philosophies behind SAFETY-II and resilience engineering promote safety by exploiting self-correcting mechanisms in technology and the ingenuity of humans to keep systems within the desired operating envelope. In this approach, a form of trial, error and correction, the prior occurrence of the error, or deviation, is essential. Unfortunately the error may also be fatal or catastrophic: perhaps not for society as a whole, but certainly for an individual, a group of individuals or a company. With an increasing tendency to evaluate every decision in terms of costs and benefits, preferably monetized, striking a balance between a SAFETY-I, a SAFETY-II and a resilience approach is not made easier by the inherent vagueness of the definition of success and the essentially qualitative nature of the latter two concepts. In this paper we explore how SAFETY-I, SAFETY-II and resilience can be cast in a way that leverages the strengths of each to compensate for the weaknesses of the others.

Original language: English
Title of host publication: Institution of Chemical Engineers Symposium Series
Publisher: Institution of Chemical Engineers
Volume: 2019-May
Edition: 166
ISBN (Electronic): 9781510889781
Publication status: Published - 2019
Event: 29th Institution of Chemical Engineers Symposium on Hazards 2019, Hazards 2019 - Birmingham, United Kingdom
Duration: 22 May 2019 - 24 May 2019

Conference

Conference: 29th Institution of Chemical Engineers Symposium on Hazards 2019, Hazards 2019
Country/Territory: United Kingdom
City: Birmingham
Period: 22/05/19 - 24/05/19
