Humanist Discussion Group, Vol. 37, No. 469.
Department of Digital Humanities, University of Cologne
Hosted by DH-Cologne
www.dhhumanist.org
Submit to: humanist@dhhumanist.org


        Date: 2024-02-29 06:28:35+00:00
        From: Willard McCarty <willard.mccarty@mccarty.org.uk>
        Subject: asking questions


I suspect everyone here, or nearly so, knows the story of Joseph
Weizenbaum's secretary, but I will summarise it as a reminder. In
Computer Power and Human Reason (1976, pp. 6f), he describes reactions
to the DOCTOR version of his ELIZA software, which he wrote in order to
show everyone how simple it is to fake natural-language conversation.
He was surprised, indeed shocked, at the reactions:

> I was startled to see how quickly and how very deeply people
> conversing with DOCTOR became emotionally involved with the computer
> and how unequivocally they anthropomorphized it. Once my secretary,
> who had watched me work on the program for many months and therefore
> surely knew it to be merely a computer program, started conversing
> with it. After only a few interchanges with it, she asked me to leave
> the room. Another time, I suggested I might rig the system so that I
> could examine all conversations anyone had had with it, say,
> overnight. I was promptly bombarded with accusations that what I
> proposed amounted to spying on people's most intimate thoughts; clear
> evidence that people were conversing with the computer as if it were
> a person who could be appropriately and usefully addressed in
> intimate terms. I knew of course that people form all sorts of
> emotional bonds to machines, for example, to musical instruments,
> motorcycles, and cars. And I knew from long experience that the
> strong emotional ties many programmers have to their computers are
> often formed after only short exposures to their machines. What I had
> not realized is that extremely short exposures to a relatively simple
> computer program could induce powerful delusional thinking in quite
> normal people. This insight led me to attach new importance to
> questions of the relationship between the individual and the
> computer, and hence to resolve to think about them.

Let us suppose, all these many years later, that we were designing
something like ELIZA, but that, rather than the psychotherapeutic
ambitions ELIZA stirred at the time, we wanted something that would
help you or me shed our assumptions about a research problem, say.
Within a society where a body of wisdom literature is the go-to
authority, such as the 'praise poetry' of certain African societies, we
might come up with software based on Italo Calvino's description of the
storyteller who endlessly recombines folkloric elements to produce
combinations that will sometimes jolt the listener into a new awareness
of his or her situation. The recombinant potential of our machine comes
to mind; a toy sketch of that idea follows below.

Google and later software use the random mass of available text. They
do fairly well at helping to locate stuff and people, but there is
simply too much noise to come close to the imagined effectiveness of
Calvino's storyteller. Or is 'noise' not a problem but an opportunity?

What would the design of 21st-century software for clearing the mind or
actually coming up with good advice look like? How would it manage to
address the individual asking the questions? If 'clearing the mind' or
'actually coming up with good advice' does not nail the objective well
enough, how would you describe an objective worth the candle?

Comments?
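By way of illustration only, and making no claim to reproduce
Weizenbaum's actual DOCTOR script: the following Python sketch, with
invented fragment lists standing in for Calvino's folkloric elements,
suggests how little machinery ELIZA-style reflection plus random
recombination requires.

    import random
    import re

    # Toy illustration only: a few ELIZA-style reflection rules in the
    # spirit of Weizenbaum's DOCTOR, not his actual script.
    REFLECTIONS = [
        (re.compile(r"\bI am\b", re.IGNORECASE), "you are"),
        (re.compile(r"\bmy\b", re.IGNORECASE), "your"),
        (re.compile(r"\bI\b"), "you"),
    ]

    # Invented stand-ins for Calvino's folkloric elements: stock fragments
    # to be recombined at random in the hope of unsettling an assumption.
    OPENERS = ["Suppose", "Imagine", "What if"]
    TWISTS = [
        "the opposite were true",
        "your evidence answered a different question",
        "the problem belonged to someone else entirely",
    ]
    PROBES = [
        "what would you then have to abandon?",
        "which of your assumptions would survive?",
        "who would disagree, and why?",
    ]

    def reflect(statement: str) -> str:
        """Echo the user's statement with pronouns reversed, ELIZA-fashion."""
        for pattern, replacement in REFLECTIONS:
            statement = pattern.sub(replacement, statement)
        return statement.rstrip(".!? ")

    def respond(statement: str) -> str:
        """Recombine stock fragments around the reflected statement."""
        return (f"You say {reflect(statement)}. "
                f"{random.choice(OPENERS)} {random.choice(TWISTS)}: "
                f"{random.choice(PROBES)}")

    if __name__ == "__main__":
        print(respond("I am certain my corpus is representative."))
        # One possible output:
        # You say you are certain your corpus is representative. What if
        # the opposite were true: which of your assumptions would survive?

The point of the sketch is only that the conversational surface is
cheap; the hard part lies in choosing elements worth recombining and in
addressing the particular individual, which is the design question
posed above.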
Yours,
WM
--
Willard McCarty,
Professor emeritus, King's College London;
Editor, Interdisciplinary Science Reviews; Humanist
www.mccarty.org.uk

_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php