Humanist Discussion Group

              Humanist Discussion Group, Vol. 37, No. 569.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2024-04-27 07:28:01+00:00
        From: Willard McCarty <willard.mccarty@mccarty.org.uk>
        Subject: the infallible computer: the British Post Office scandal and beyond

There is a crucial point in what follows, relevant to all of us in or
near digital humanities. Kindly read on.

> The British Post Office scandal, also called the Horizon IT scandal,
> involved Post Office Limited pursuing thousands of innocent
> subpostmasters for shortfalls in their accounts, which had in fact
> been caused by faults in Horizon, accounting software developed by
> Fujitsu. Between 1999 and 2015, more than 900 subpostmasters were
> convicted of theft, fraud and false accounting based on faulty
> Horizon data, with about 700 of these prosecutions carried out by the
> Post Office. Other subpostmasters were prosecuted but not convicted,
> forced to cover Horizon shortfalls with their own money, or had their
> contracts terminated. The court cases, criminal convictions,
> imprisonments, loss of livelihoods and homes, debts and bankruptcies,
> took a heavy toll on the victims and their families, leading to
> stress, illness, family breakdown, and at least four suicides. In
> 2024, Prime Minister Rishi Sunak described the scandal as one of the
> greatest miscarriages of justice in British history.
>
(<https://en.wikipedia.org/wiki/British_Post_Office_scandal>. A softer 
version of the above is available on the Post Office website :-).)

Of the many, often daily reports on the injustices suffered by British
posties, a remark in one report grabbed my attention. I paraphrase from
memory of a radio interview in which a managerial employee of the Post
Office said that he never thought to look to the Horizon software
system because computers don't make mistakes. Yes, a commonplace
misunderstanding, but what makes it serious is that it attests to a
widespread ignorance of the relation between computing systems and
real life.

In 1985 Brian Cantwell Smith, with the near miss of 5 October 1960[1] in
mind, made the point to a conference on unintended nuclear warfare:

> The point is that even if we could make computers reliable, they
> still wouldn't necessarily always do the correct thing. People
> aren't provably "correct", either: that's why we hope they are
> responsible, and it is surely one of the major ethical facts that
> correctness and responsibility don't coincide. Even if, in another
> 1,000 years, someone were to devise a genuinely responsible computer
> system, there is no reason to suppose that it would achieve "perfect
> correctness" either, in the sense of never doing anything wrong.
> This isn't a failure in the sense of a performance limitation; it
> stems from the deeper fact that models must abstract, in order to be
> useful. The lesson to be learned from the violence inherent in the
> model-world relationship, in other words, is that there is an
> inherent conflict between the power of analysis and
> conceptualization, on the one hand, and sensitivity to the infinite
> richness, on the other. [2]

The 'near miss' of 1960 was due to a computer error in the North
American Aerospace Defense Command (NORAD) software, which 
mistook a moon rise for Soviet nuclear missiles coming over the 
horizon. Cantwell Smith wraps up his point thus:

> But perhaps this is an overly abstract way to put it. Perhaps,
> instead, we should just remember that there will always be another
> moon-rise.

On how many occasions, when the opportunity is right, do we remind our
audiences that smart machines act according to models of the world, 
not the reality that is modelled? Many posties would have been spared 
disgrace, fines, ruination and, in some cases, prison had their 
managers understood the inherent fallibility of the smart machines 
designed and built by fallible humans. And that's not a nuclear 
consequence.
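
To make Cantwell Smith's point concrete, here is a minimal, purely
illustrative sketch in Python. Nothing in it is drawn from the actual
NORAD or Horizon systems; the names, data and threshold are invented
for illustration. The classifier is "correct" with respect to its
model, a single threshold over radar echo strength, and still wrong
about the world, because the moon was never part of the model:

    # A deliberately toy "early warning" classifier. Its model of the
    # world: any strong, low-elevation radar echo is an incoming
    # missile. The code is perfectly faithful to that model, and still
    # wrong about the world, because the model omits the moon.
    from dataclasses import dataclass

    @dataclass
    class RadarReturn:
        echo_strength: float   # arbitrary units
        elevation_deg: float   # degrees above the horizon

    MISSILE_ECHO_THRESHOLD = 50.0  # invented tuning constant

    def is_missile(r: RadarReturn) -> bool:
        # "Correct" relative to the model: strong, low echo => missile.
        return (r.echo_strength > MISSILE_ECHO_THRESHOLD
                and r.elevation_deg < 10.0)

    # A rising moon produces exactly the signature the model
    # classifies as a missile.
    moonrise = RadarReturn(echo_strength=80.0, elevation_deg=2.0)
    print(is_missile(moonrise))  # True: the code did nothing wrong;
                                 # the model did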

Comments welcome.

Yours,
WM


-----
[1] See John G. Hubbell, "'You Are Under Attack!' The Strange Incident
of October 5", Reader's Digest, April 1961. According to Donald
MacKenzie, "Hubbell's article... remains the best available account of
the incident." Mechanizing Proof: Computing, Risk, and Trust, Inside
Technology series (MIT Press, 2001), p. 340, n. 4.
[2] Brian Cantwell Smith, "The limits of correctness", ACM SIGCAS
Computers and Society, Vols. 14-15, Nos. 1-4 (January 1985), p. 25.

--
Willard McCarty,
Professor emeritus, King's College London;
Editor, Interdisciplinary Science Reviews; Humanist
www.mccarty.org.uk


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php