Golem, Inc.: A Comment on Certain Points Where Cybernetics Impinges on Religion

Edgar A. Whitley (Information Systems Department, London School of Economics and Political Science. E-mail: E.A.WHITLEY@LSE.AC.UK. Web page: http://is.lse.ac.uk/edgar)

Information Technology & People

ISSN: 0959-3845

Article publication date: 1 June 1999


Citation

Whitley, E.A. (1999), "Golem, Inc.: A Comment on Certain Points Where Cybernetics Impinges on Religion", Information Technology & People, Vol. 12 No. 2, pp. 1-5. https://doi.org/10.1108/itp.1999.12.2.1.1

Publisher

Emerald Group Publishing Limited


According to Jewish legend, the Golem is a person made from clay and given the breath of life. Once created it follows orders diligently, but if it isn't kept under control, its enthusiasm may cause it to end up working against the interests of its masters rather than for them. The Golem is not, in and of itself, bad, but rather it is unaware of its own strengths and therefore needs to be carefully guided in its actions. Perhaps unsurprisingly, therefore, the Golem has been used as a metaphor for discussing the limitations of man‐made artifacts, especially science and technology; they are not good or bad, but we need to carefully control them so that we use them for ourselves rather than have them work against us.

Of the three books reviewed here, one, from the 1960s, was written by Norbert Wiener, the founder of the field of cybernetics. The other two, by Harry Collins and Trevor Pinch, are more recent and draw on themes from the social studies of science and technology. Despite being written in very different times and drawing on very different theoretical perspectives, their stories are remarkably similar and relate to important issues in information systems that are being widely discussed at this time.

Wiener's book is subtitled “A Comment on Certain Points where Cybernetics Impinges on Religion”. It is not, however, a religious text, but raises moral and ethical issues, traditionally addressed by religion, that arise from the development of technological artifacts. In particular, Wiener is concerned that the scope of new technology means that it now has the opportunity to wreak far more havoc than older technologies could do: “In the past, a partial and inadequate view of human purpose has been relatively innocuous only because it has been accompanied by technical limitations that made it difficult for us to perform operations involving a careful evaluation of human purpose. This is only one of the many places where human impotence has hitherto shielded us from the full destructive impact of human folly” (p. 64). This ties in very closely with Ulrich Beck's notion of risk society (1992) which argues that we are no longer exclusively concerned with “making nature useful, or with releasing mankind from traditional constraints, but also and essentially with problems resulting from techno‐economic development itself” (p. 19).

Collins and Pinch adopt a slightly different perspective. Coming from a background in science and technology studies, they have a strong empirical awareness of what actually goes on in scientific and technological work. Their books emphasise the messiness of the worlds of science and technology and the complex activities involved in bringing some structure and stability to this work. To be sure, the aim of the authors is not to discredit the work of scientists and technologists, whom they acknowledge as skilled craftspeople in their areas of expertise, but rather to make us aware that, as craftspeople, they are not infallible.

One of their books is devoted to exploring what is involved in scientific work, the other to issues associated with technology. Each book has the same structure, namely the discussion of seven detailed case studies in a variety of areas, all of which are fascinating in their own right, but some of which do not relate directly to questions of concern to information systems researchers.

In their work on science, what is of most interest to information systems researchers is their discussion of experiments that proved the theory of relativity (their emphasis) and their description of the problems of doing empirical work and the underdetermination of results by data.

Thus, in order to test Einstein's theory of relativity against Newton's theory, scientists tried to observe the displacement of light from stars as the light passed near the sun. However, since the angles involved are very small (1.7 seconds of arc compared with 0.8 seconds of arc; a second is 1/3600 of a degree), these observations can only be made during solar eclipses, which are typically viewable only from remote locations. Observers need cloud‐free days and have to adjust for the rotation of the earth, the effects of temperature variations on the equipment, and so on. As Collins and Pinch show, an awful lot of work had to be done to obtain any usable results at all, and even then the results were hardly clear‐cut.
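To get a sense of just how small these displacements are, the following short calculation (my own illustration in Python, not something from Collins and Pinch; the two deflection figures are simply those quoted above) converts the angles into degrees and radians:

```python
import math

# Deflection of starlight passing near the sun, using the figures quoted
# above: roughly 1.7 seconds of arc (Einstein) against 0.8 (Newton).
ARCSEC_PER_DEGREE = 3600  # one second of arc is 1/3600 of a degree

for label, arcsec in [("Einstein", 1.7), ("Newton", 0.8)]:
    degrees = arcsec / ARCSEC_PER_DEGREE
    radians = math.radians(degrees)
    print(f"{label}: {arcsec} arcsec = {degrees:.6f} deg = {radians:.2e} rad")

# The difference the eclipse expeditions had to resolve is under one
# second of arc, i.e. less than 0.0003 of a degree.
print("difference:", round((1.7 - 0.8) / ARCSEC_PER_DEGREE, 5), "degrees")
```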

This problem of the underdetermination of results also arises in one of their technology cases, where they analyse the effectiveness of Patriot missiles in the Gulf War. The case is further complicated by the absence of any single, clear measure of success for the Patriot, with claims ranging from all, or nearly all, Scud warheads being dudded (the warhead fails to explode on landing because of the action of the Patriot), through all, or nearly all, Scuds being intercepted (the Patriot approached the Scud and its own warhead fired, but it is not clear whether the Scud's warhead was damaged), to the new anti‐tactical missile programme simply being given credibility. Given this range of possible ways in which the Patriot could be counted a success, it soon becomes apparent that there is plenty of scope for disagreement about the effectiveness of the system, and the Golem raises its head. It is not that we cannot trust the experts on the question of the Patriot's success, but rather that the role they play has become less straightforward. The experts we once trusted implicitly to give us the answers are now just as likely to be raising issues of perplexity (Is this a success?) as they are to be resolving and institutionalising their results (This was a success).
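To make the point concrete, consider the toy scoring exercise below (my own sketch; the engagement records and the two criteria are entirely invented for illustration and are not figures from Collins and Pinch, still less from the Gulf War). The same hypothetical data yield very different success rates depending on which definition of success is adopted:

```python
# Hypothetical engagement records, each recording whether the Patriot
# approached the Scud, whether the Patriot's warhead fired, and whether
# the Scud's warhead was dudded. All values are invented.
engagements = [
    {"approached": True,  "patriot_fired": True,  "warhead_dudded": True},
    {"approached": True,  "patriot_fired": True,  "warhead_dudded": False},
    {"approached": True,  "patriot_fired": False, "warhead_dudded": False},
    {"approached": False, "patriot_fired": False, "warhead_dudded": False},
]

def success_rate(records, criterion):
    return sum(1 for r in records if criterion(r)) / len(records)

# Strict criterion: the Scud warhead was actually dudded.
strict = lambda r: r["warhead_dudded"]

# Loose criterion: the Patriot approached the Scud and fired,
# whatever happened to the Scud's warhead afterwards.
loose = lambda r: r["approached"] and r["patriot_fired"]

print("strict:", success_rate(engagements, strict))  # 0.25
print("loose: ", success_rate(engagements, loose))   # 0.5
```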

Norbert Wiener makes a similar point in his discussion about machines that can play games. For him a game‐playing machine can only succeed if there are “some objectively recognizable criterion of the merit of the performance of this effort. Otherwise the game assumes the formlessness of the croquet game in Alice in Wonderland, where the balls were hedgehogs and kept unrolling themselves, the mallets were flamingoes, the arches cardboard soldiers who kept marching about the field, and the umpire the Queen of Hearts, who kept changing the rules and sending the players to the Headsman to be beheaded. Under these circumstances, to win has no meaning, and a successful policy cannot be learned, because there is no criterion of success” (Wiener, 1964, pp. 25‐6).
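A small sketch (again my own, not drawn from either set of books) makes Wiener's point concrete: a trivially simple learner does well when the criterion of success stays fixed, but can learn nothing useful when, like the Queen of Hearts, the umpire keeps changing the rules:

```python
import random

def play(rounds, fixed_criterion=True, seed=0):
    rng = random.Random(seed)
    counts = [0, 0]   # times each action has been tried
    wins = [0, 0]     # times each action was judged a success
    total = 0
    for _ in range(rounds):
        # Try each action once, then greedily pick the one with the
        # best observed success rate.
        if 0 in counts:
            action = counts.index(0)
        else:
            action = 0 if wins[0] / counts[0] >= wins[1] / counts[1] else 1

        if fixed_criterion:
            winning_action = 0                   # the rules never change
        else:
            winning_action = rng.randint(0, 1)   # the umpire keeps changing them

        success = (action == winning_action)
        counts[action] += 1
        wins[action] += success
        total += success
    return total / rounds

print("fixed rules:   ", play(10_000, fixed_criterion=True))   # close to 1.0
print("shifting rules:", play(10_000, fixed_criterion=False))  # close to 0.5
```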

Another chapter of direct relevance to information systems researchers uses the issues of nuclear fuel flask transportation and anti‐misting kerosene to explore the difference between experiments and demonstrations. Experiments, in their vocabulary, are undertaken to investigate issues that are still controversial, where there is still perplexity. Experiments aim to develop new knowledge about the phenomena; they have the capacity to surprise us and as a result require specialist expertise to evaluate the outcome properly. Demonstrations, in contrast, are explicitly designed not to reveal any new knowledge. Demonstrations are prepared and rehearsed; they are intended to provide a convincing performance and typically have unequivocal results.

This is an important lesson for information systems, where the distinction between experiment and demonstration is not so clearly maintained. Is the small‐scale trial of a new system intended as an experiment, to enable the organisation to learn about the capabilities of the technology, or as a demonstration, to convince the organisation to take up the system? Many new technology initiatives fail because these two roles are not kept distinct. A system may start life as a demonstration, but end up being an experiment as its use introduces new issues into the organisation.

The chapter on the science of economics is also particularly fun, not just because of my own institutional affiliation, but also because it raises important questions about abstraction and modelling which can be put to many systems modellers and analysts as well. Collins and Pinch examine the predictions about the UK economy made by a number of leading economists and show how much they vary. For example, in 1993, predictions for growth ranged from 0.2 to 2.0 per cent and for inflation from 3.1 to 4.8 per cent. The actual values for this period were 2.0 per cent growth and 2.7 per cent inflation, suggesting that as a whole the economists were poor predictors of the economy. Moreover, no single economist made good predictions for both measures. The question then arises: was this a problem with the particular predictions, or is it a more fundamental problem with the models that were being used? Collins and Pinch call this the experimenter's regress and suggest that it arises when it is not possible to decide what the outcome of an experiment should be, or how accurate it should be, and so the outcome cannot be used to determine whether the experiment worked or not. One of the economists they quote came up with a novel answer to this problem: “The fact that virtually all the models, all the sort of formal fully developed models failed to predict, suggests that it was not that our model was particularly bad, but that the underlying economy had changed”.
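The arithmetic behind that judgement can be laid out in a few lines (a simple restatement of the figures quoted above, not an analysis taken from Collins and Pinch):

```python
# Figures as quoted above: 1993 forecasts for the UK economy ranged from
# 0.2 to 2.0 per cent for growth and from 3.1 to 4.8 per cent for inflation;
# the outturn was 2.0 per cent growth and 2.7 per cent inflation.
forecast_ranges = {"growth": (0.2, 2.0), "inflation": (3.1, 4.8)}
actual = {"growth": 2.0, "inflation": 2.7}

for measure, (low, high) in forecast_ranges.items():
    errors = [abs(low - actual[measure]), abs(high - actual[measure])]
    print(f"{measure}: best-case error {min(errors):.1f} pts, "
          f"worst-case error {max(errors):.1f} pts")

# growth:    best-case error 0.0 pts, worst-case error 1.8 pts
# inflation: best-case error 0.4 pts, worst-case error 2.1 pts
# Even the most optimistic inflation forecast overshot the outturn,
# while growth forecasts spanned errors from zero to nearly two points.
```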

Again, Wiener has also spoken about this problem of relating economic models to the world: “An econometrician will develop an elaborate and ingenious theory of demand and supply, inventories and unemployment, and the like, with a relative or total indifference to the methods by which these elusive quantities are observed or measured ... Very few econometricians are aware that if they are to imitate the procedure of modern physics and not its mere appearances, a mathematical economics must begin with a critical account of these quantitative notions and the means adopted for collecting and measuring them” (Wiener, 1964, p. 90).

The final chapter in the book on technology addresses questions of lay expertise in relation to AIDS. It describes how grass‐roots activism changed the way that scientific studies of medicine were undertaken, to make them more reliable given the messiness of the real world. For example, traditional clinical trials involved comparing two groups of patients: those taking the new drug and those given a placebo. Even ignoring the ethical problems associated with denying half of the study group the potential benefits of the new drug, the clinical trials for AIDS drugs were fraught with practical problems. People dying of AIDS were not prepared simply to wait for the formal studies of new drugs to be completed, and they would try to obtain any drugs that might help them with their illness. In particular, patients taking part in studies were rumoured to be trying to reduce their risk of being given a placebo by sharing pills with other participants. The idea of a control group in such circumstances becomes meaningless, as nothing is being controlled. Issues like these resulted in changes to the protocols for testing these drugs which more realistically addressed the messiness of the world, where systematicity is the result of the work of many actors rather than a given starting point.
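A toy simulation (my own sketch; all the probabilities are invented for illustration) shows why pill-sharing is so corrosive to a placebo-controlled design: once exposure to the drug no longer follows the random assignment, the contrast between the two arms collapses:

```python
import random

def trial(n_per_arm, p_respond_drug, p_respond_placebo, sharing, seed=0):
    rng = random.Random(seed)

    def arm(gets_drug_officially):
        responses = 0
        for _ in range(n_per_arm):
            # With pill-sharing, exposure no longer tracks assignment:
            # everyone effectively has a 50/50 mix of drug and placebo.
            if sharing:
                exposed_to_drug = rng.random() < 0.5
            else:
                exposed_to_drug = gets_drug_officially
            p = p_respond_drug if exposed_to_drug else p_respond_placebo
            responses += rng.random() < p
        return responses / n_per_arm

    return arm(True) - arm(False)   # observed treatment "effect"

print("no sharing:  ", round(trial(5000, 0.6, 0.3, sharing=False), 2))  # roughly 0.3
print("with sharing:", round(trial(5000, 0.6, 0.3, sharing=True), 2))   # roughly 0.0
```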

Norbert Wiener also talks about the effects of technology on medicine, although with a different focus. He cites the example of machines to help doctors perform their diagnosis. As he states, “Such machines are very much in vogue in plans for the medicine of the future. They may help pick out elements that the doctor will use in diagnosis, but there is no need whatever for them to complete the diagnosis without the doctor. Such a closed, permanent policy in a medical machine is sooner or later likely to produce much ill health and many deaths” (Wiener, 1964, p. 81).

All three books are well worth reading and I recommend them all to you, both as general interest books and as texts with specific lessons for researchers in information systems. After reading them, it is apparent that we need a better understanding of what is involved in science and technology, so that we can make informed choices about the evidence that is presented before us and so that we can keep our Golem under control. We cannot, therefore, simply leave science and technology to their own devices but must act on them and with them. As Norbert Wiener argues: “The future offers very little hope for those who expect that our new mechanical slaves will offer us a world in which we may rest from thinking. Help us they may, but at the cost of supreme demands upon our honesty and our intelligence. The world of the future will be an ever more demanding struggle against the limitations of our intelligence, not a comfortable hammock in which we can lie down to be waited upon by our robot slaves” (Wiener, 1964, p. 69).

References

Beck, U. (1992), Risk Society: Towards a New Modernity, Sage, London.

Wiener, N. (1964), God and Golem, Inc.: A Comment on Certain Points Where Cybernetics Impinges on Religion, MIT Press, Cambridge, MA.
