21.310 the difference now

From: Humanist Discussion Group (by way of Willard McCarty <willard.mccarty_at_kcl.ac.uk>)
Date: Wed, 24 Oct 2007 06:03:03 +0100

               Humanist Discussion Group, Vol. 21, No. 310.
       Centre for Computing in the Humanities, King's College London
                     Submit to: humanist_at_princeton.edu

         Date: Wed, 24 Oct 2007 05:56:54 +0100
         From: Geoffrey Rockwell <georock_at_mcmaster.ca>
         Subject: Re: 21.306 what's different now?

Dear Willard,

I would connect this to Searle's Chinese Room thought experiment and
its place in the discussion of AI. If I were addressing digital
humanists I would point out how an early experiment of the late 1940s
in the digital humanities by Andrew Booth and company actually tested
an algorithm for machine translation with "a human untutored in the
languages concerned, who applied only those rules which could
eventually be performed by a machine" (_Mechanical Resolution of
Linguistic Problems_, 1958). Booth (who, it could be argued, preceded
Busa in applying computing to humanities problems) set up the
experiment so that the human would not learn, but would behave as the
machine, in order to test a hypothesis about the machine when he
didn't have access to the machine. He was, to borrow from your work,
modeling the machine through the human. Searle's thought experiment
to illustrate the difference between AI and HI sounds as if it were
inspired by what was an actual practice in the days when you couldn't
afford computing time.

Did Booth's humans learn the languages they were pretending to
translate as a machine? Did they learn something other than the
language? And ... how do we know that Russell's machine isn't
learning just because it does things the same way? Perhaps the wear
and tear of the machine is its learning.


Geoffrey R.

On 23-Oct-07, at 1:31 AM, Humanist Discussion Group (by way of
Willard McCarty <willard.mccarty_at_kcl.ac.uk>) wrote:

> Humanist Discussion Group, Vol. 21, No. 306.
> Centre for Computing in the Humanities, King's College London
> www.kcl.ac.uk/schools/humanities/cch/research/publications/humanist.html
> www.princeton.edu/humanist/
> Submit to: humanist_at_princeton.edu
> Date: Wed, 17 Oct 2007 19:03:48 +0100
> From: Willard McCarty <willard.mccarty_at_kcl.ac.uk>
>
> In "Mind and Matter", Bertrand Russell writes as follows:
>
> > When you offer a coin to an automatic machine, it reacts precisely
> > as it has done on former occasions. It does not get to know that the
> > offer of a coin means a desire for a ticket, or whatever it may be,
> > and it reacts no more promptly than it did before. The man at the
> > ticket office, on the contrary, learns from experience to react more
> > quickly and to less direct stimuli. This is what leads us to call
> > him intelligent. It is this sort of thing which is the essence of
> > memory. You see a certain person, and he makes a certain remark. The
> > next time you see him you remember the remark. This is essentially
> > analogous to the fact that when you see an object that looks hard,
> > you expect a certain kind of tactile sensation if you touch it. This
> > is the sort of thing that distinguishes an experience from a mere
> > happening. The automatic machine has no experience; the man at the
> > ticket office has experience. This means that a given stimulus
> > produces always the same reaction in the machine, but different
> > reactions in the man. (Portraits from Memory, 1958/1956, p. 148)
>
> My question is this: if you were writing this passage today and
> wanted to make the sort of distinction Russell is making, what would
> you write?
>
> Willard McCarty | Professor of Humanities Computing | Centre for
> Computing in the Humanities | King's College London |
> http://staff.cch.kcl.ac.uk/~wmccarty/. Et sic in infinitum (Fludd
> 1617, p. 26).
Received on Wed Oct 24 2007 - 01:41:59 EDT

This archive was generated by hypermail 2.2.0 : Wed Oct 24 2007 - 01:41:59 EDT