              Humanist Discussion Group, Vol. 38, No. 173.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2024-10-04 08:29:57+00:00
        From: Tim Smithers <tim.smithers@cantab.net>
        Subject: Re: [Humanist] 38.163: a paradox (?) commented

Dear Jim,

Thank you for your generous response and your further remarks.
The Wordsworth lines you quote are jewels, and these words of
yours are brilliant.

   "...  All of the nonsense in the world about AI proceeds
   from black box models of both: human consciousness is an
   electrical black box, computers are electrical black boxes,
   they're parallel!  But that's nonsense.  We know more about
   both than that.  ..."

And, you make a challenge.  This, I think, sets the question
anybody working in AI should be able to answer: how do you
justify saying your machine knows, understands, and reasons?

I'll try to give my answer to this, and try to keep it short,
but it does take some doing.

I'm not going to say how the Wolfram Mathematica system knows,
understands, and reasons.  It's a commercial system which I
don't know the insides of in detail.  (I knew more a long time
ago when I knew some people who worked at Wolfram.)  Instead,
I want to treat your challenge in a more general way, but a
way that I believe does include Mathematica.  It's how I think
AI research should be practiced.  AI is a research field.
It's not a thing, and it makes no sense to me to call anything
an AI.

AI is the investigation of intelligent behaviour by trying to
build and investigate it in the artificial, in contrast to
investigating it in the natural, as Cognitive Scientists do,
for example.  Usually, to keep the AI research doable and
productive, we investigate particular kinds of intelligent
behaviour in particular settings and conditions.  Often, but
not always, the kind of intelligent behaviour we study is
inspired by, perhaps informed by, a kind of intelligent human
behaviour.  Chess playing is an example, and a kind of
intelligent behaviour long studied in AI.

To investigate intelligent behaviour in the artificial we have
to design and build systems, machines, if you prefer, that, to
some observable degree, at least, display the intelligent
behaviour we say we are investigating.  But that's not enough.
We must also show that what we decide to design and build into
our systems is the causal mechanism, and the only causal
mechanism, that generates the observed intelligent behaviour,
and we must show these designed and built mechanisms are
properly described as kinds of knowing, understanding, and
reasoning.  In AI research black boxes are not allowed, and
any admission that [some aspect of] our system is a black box
is, I would say, an admission of failure.  And, in AI research
just saying our machines know, understand, and reason, is not
allowed: we must show how they do this, and do this in a way
others can see and appreciate, but not necessarily agree with.
The test we need for this is not the so-called Turing Test,
it's what I call the Artificial Flower / Artificial Light
test: artificial flowers can look very like real flowers, but
they are not any kind of real flower; artificial light,
despite being generated by artificial means, is made of
photons.  The artificial is in the artificial way of making
the real, not in the artificial way of making it look like
the real.

Designing and building systems in AI, just as in any domain we
design and build things in, is not some kind of dreaming up of
something to try because we like the idea, and think it might
generate behaviour that looks like intelligent behaviour.
It's a strongly disciplined activity which depends upon
clearly specified foundations, or, at least it should be.
This means if we intend to design and build a machine which
behaves in a way that can properly be described in terms of
knowing, understanding, and reasoning, we must first define
what we take knowledge, understanding, and reasoning, to be,
and then design and build our machine using these definitions,
and show that what we have built really does implement these
definitions in a way that all can see and appreciate.

In AI research these foundational definitions are "working
definitions," they are hypotheses: if we define knowing,
understanding, and reasoning as X, Y, and Z, and design and
build a working machine which can be shown to correctly
implement these notions in the way we define them, do we get
the intelligent behaviour we assumed or presumed can be
achieved in this way, and which can fairly and accurately be
described in these terms?  If yes, then our hypothesised
definitions of knowledge, understanding, and reasoning, now
have some empirical support for this way of understanding
these notions in the context of the intelligent behaviour we
are investigating.  If not, then we need to re-think how we
might define knowledge, understanding, and reasoning, or, it
could be we need better ways to implement them.  Further work
is required to decide between these two diagnoses.

All research has to have this kind of starting position.  If
we investigate naturally occurring intelligent behaviour, as
Cognitive Scientists do, for example, we still need to set out
what it is that makes the behaviour intelligent behaviour,
thus a proper example of the phenomenon we seek to study.
And, we need to set out the best characterisations, if not
definitions, of the concepts we use to both guide our
investigations and talk about the outcomes.  If you're a
historian working on what happened in a region we call Europe
during a period of time, and you describe times of war, you
do, I would say, need to say what, for you at least, war is,
and you need to say this with sufficient precision to make
your history making useful, meaningful to, and understandable
by, others.

This is already long, but I want to complete this
justification with examples of how knowledge, understanding,
and reasoning, are defined and used in AI: definitions I use,
and thus investigate, in the AI in Design research I have done
since 1984, often with others, and still do.  To do this we
must return to Old Fashioned symbol processing AI, and I'll
only sketch things here.  (But I'll add more details if people
would like more.)

Why symbols?  The hypothesis that supports this idea was first
set out by Newell and Simon in 1976 as the physical symbol
system hypothesis (PSSH).  It says:

    A physical symbol system has the necessary and sufficient
    means for general intelligent action.

This did not come out of the blue; there's lots of history to
this idea, some of it going back thousands of years, but I'll
leave out these details.

What is knowledge?  The idea for this has been around in AI
since the beginnings, at Dartmouth in 1955, but Newell
presented a definition of this in his "Knowledge Level" paper
published in 1982.  Newell defined knowledge this way:

    Knowledge is a capacity for rational action, where an
    action is rational if the outcome of the action changes
    the state of the agent executing the action to one nearer
    the agent's goal state, in the current conditions.

Notice, this is very different from the classical definition
of knowledge as justified true belief.  Newell's definition
has proved to be both practical and usable -- in knowledge
modelling, a theory of designing, and approaches to knowledge
management -- and does not suffer from the continuing
difficulties of agreeing on what "justified" is, what "true"
must mean, and what a "belief" is.

There is no widely accepted and published definition of
understanding that I know of in the AI literature, so, in our
AI in Design work, we added the following working definition
of understanding:

     Understanding is a capacity to form rational explanations
     of rational actions, where an explanation is composed of
     atomic [the smallest] inference actions.

This is not, of course, the only way to define understanding,
and is perhaps not an obvious one, but it suited what we
needed in the AI systems we were building to support
designing, and which needed to deliver explanations of the
reasoning they were supposed to do to support someone doing
some designing.

Similarly, there is no single, always-cited definition of
reasoning in symbol processing AI, but what everybody accepts
and uses is this, or something very like it:

     Reasoning is the execution of rational actions according
     to well defined sound logical inference.

Thus, using different logics gives us different kinds of
reasoning, and this is what we see in [symbol processing] AI
research.

The way implementations of these definitions were put together
was usually by designing and building some kind of Knowledge
Representation scheme, together with inference-making
mechanisms that used the represented knowledge to reason, by
making logical inferences from it.  And, as you'd expect,
these knowledge representation schemes were symbol using.
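
To give a flavour of what such an implementation looks like,
here is a minimal sketch, in Python, of a symbolic knowledge
representation and a forward-chaining inference mechanism
working over it.  The facts, rules, and names are invented toy
examples, not taken from any real system, and real systems of
this kind (production systems, logic programming systems, and
so on) are far richer; the sketch only shows the shape of the
thing: knowledge represented as symbolic structures, and sound
inference steps that derive new symbolic structures from it.

    # A tiny symbolic knowledge base.  Facts are symbolic structures
    # (tuples); each rule says: if all these premises hold for some
    # individual X, conclude this about X.

    facts = {
        ("bird", "tweety"),
        ("small", "tweety"),
    }

    rules = [
        # (premises, conclusion)
        ([("bird", "X")], ("has_wings", "X")),
        ([("has_wings", "X"), ("small", "X")], ("can_fly", "X")),
    ]

    def bind(pattern, x):
        """Substitute the individual x for the variable "X"."""
        return tuple(x if term == "X" else term for term in pattern)

    def forward_chain(facts, rules):
        """Apply the rules to the facts, adding each newly
        inferred fact, until nothing new can be inferred.  Each
        rule application is one atomic inference action."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                inds = {t for f in derived for t in f[1:]}
                for x in inds:
                    if all(bind(p, x) in derived for p in premises):
                        new_fact = bind(conclusion, x)
                        if new_fact not in derived:
                            derived.add(new_fact)
                            changed = True
        return derived

    print(forward_chain(facts, rules))
    # ("has_wings", "tweety") and ("can_fly", "tweety") are now
    # among the derived facts: new symbolic structures inferred
    # from the represented knowledge.

The point is not the toy content, but that every new fact has
an explicit, inspectable causal route back to the represented
knowledge, through the inference mechanism we built.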

But there's one last definition, one last hypothesis, needed
here, to capture the idea that sufficient symbol processing
based knowledge, understanding, and reasoning, can be
implemented in a way that does result in the intelligent
behaviour we are investigating; real, not just look-alike.
This hypothesis was presented by Brian Smith in 1982, and it
says:

    Any mechanically embodied intelligent process will be
    comprised of structural ingredients that a) we as external
    observers naturally take to represent a propositional
    account of the knowledge that the overall process
    exhibits, and b) independent of such external semantic
    attribution, play a formal but causal and essential role
    in engendering the behaviour that manifests that knowledge.

These definitional hypotheses are supposed to be quite precise
-- and this precision is needed if we want useful research
outcomes -- but they give plenty of scope for what to design
and build, and this variation and variety shows up in the
symbol processing AI literature.  And it gives room for
arguments, sometimes strong ones, like the one between the
Neats and the Scruffies over how to do proper symbolic
knowledge representation in the 1970s and 80s (see Wikipedia:
Neats and scruffies).

What we see in AI systems built this way is computationally
implemented symbolic structures, described by us as knowledge
representations, used in a causal chain by the logical
inference mechanisms we implement to infer new symbolic
structures, or state descriptions -- that is, to execute
rational actions relevant to, and generative of, the
intelligent behaviour our systems are supposed to exhibit.
And, using the same symbolic structures and inference
mechanisms, such systems can execute further rational actions
to generate what is, for us, an explanation of how certain
inferred outcomes were arrived at.  This is what, I think,
justifies saying such systems know, understand, and reason.
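
To show what I mean here by an explanation, here is one last
minimal sketch, again in Python and again with invented toy
content (it has nothing to do with any actual system of ours):
each atomic inference action is recorded as it is made, and
the recorded chain is then played back as the system's
explanation of how a conclusion was arrived at, in the sense
of the working definition of understanding given above.

    # Each atomic inference action is recorded as a triple:
    # (conclusion, name of the rule used, premises it used).

    trace = []

    def infer(conclusion, rule_name, premises):
        """Perform one atomic inference action and record it."""
        trace.append((conclusion, rule_name, premises))
        return conclusion

    # A hand-run chain of inferences over invented design knowledge.
    f1 = infer("bracket must carry 40 kg", "load-spec", ["the brief"])
    f2 = infer("bracket must be steel", "material-rule", [f1])
    f3 = infer("bracket needs two bolts", "fixing-rule", [f1, f2])

    def explain():
        """Play the recorded atomic inference actions back as an
        explanation of how the final conclusion was arrived at."""
        return "\n".join(
            f"{concluded!r} follows from {premises!r} by {rule!r}"
            for concluded, rule, premises in trace
        )

    print(explain())

Nothing clever is going on there; the point is only that,
because every inference action is an explicit, recorded step,
an explanation comes almost for free, which is exactly what a
black box cannot give you.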

I am not asking anybody here to accept these hypothesised
working definitions, nor like them, nor agree that they make
good sense, or warrant serious research effort, but, if you
want to do symbol processing AI these are the definitions of
knowledge, understanding, and reasoning, you could work with,
and would need to commit to, and show how well they turn out
to work or not.  And, therefore, could use to justify saying
that the AI system you have designed and built for your
investigations does know, understand, and reason.  This is not
a claim that our system knows, understands, and reasons, in
the same way humans do these things, or that other animals do.
This is not what AI claims, nor can it.  It may, with good
progress, come to collaborate with work in the Cognitive
Sciences and Animal Behaviour, to compare this AI way of
understanding these notions, with the ways these other
researchers have of defining these notions for their work, and
they may, or may not, be made to map into each other in a well
defined way.  But we're still a long way from doing this, I
would say.

An admission is required here.  I would be the first to admit
that much, too much, work in symbol processing AI has failed
to show that, and how, it is done with respect to these
fundamental working definitions, and thus fails to show how,
if at all, the outcomes of this research sustain these
hypothesised ways of understanding our notions of knowing,
understanding, and reasoning.  It's embarrassing, certainly,
and it means AI research is often poorly practiced.
Nonetheless, taking these foundational hypotheses seriously
does, I think, give us a way to justify claims that our AI
machines do, in well defined ways, know, understand, and
reason, about certain particular things, such as playing
chess, or some of what happens in designing.

Connectionism -- alias Artificial Neural Networks -- is no
better.  Indeed it is worse, I would say.  It has no
hypotheses, no working definitions of knowing, understanding,
and reasoning, just a dogma: call your computer implemented
matrix arithmetic a "neural network" and Hey Presto you have
intelligence ...  "because it's a neural network what does
it."  And if you make the matrices big enough, and I mean
ginormous, you can make it look like you have human level
intelligence.  ChatGPT, in terms of the matrix arithmetic it
implements, is not a black box: we do understand and can
explain what it does in these well defined terms.  In terms of
notions like knowing, understanding, and reasoning, it is,
necessarily, a black box: nobody has any definitions of these
notions that can be used properly to show that and how they
are implemented by ChatGPT.  ChatGPT is an Artificial Flower
type of AI: it just looks like intelligent behaviour, but
isn't, not really!  (And, of course, it's sold as "intelligent
behaviour.")

Before I go, finally, I want to add: I very much like your
insistence, Jim, on embodiment being important for human, and
other animal, intelligent behaviour, cognition, and for
consciousness too, perhaps.  I also do work on what's called
Behaviour Based robotics, a la Rod Brooks, and have done since
the mid 1980s.  This makes radically different hypotheses
about the nature and mechanisms of intelligent behaviour, and
it takes building real robots that work in the real world to
properly investigate these.  You can see some useful outcomes
of this kind of AI in the floor cleaning robots from iRobot:
commercially, the most successful robots, so far.

I'm sorry for the length of my reply, but does this symbol
processing way of doing AI work for you as a way to justify
describing the machines we build as knowing, understanding,
and reasoning machines, albeit in terms we define and
investigate?

-- Tim


References

Allen Newell and Herbert A Simon, 1976.  Computer Science as
Empirical Inquiry: Symbols and Search, Communications of the
ACM, 19 (3), pp 113–126, <doi:10.1145/360018.360022>.

Allen Newell, 1982.  The knowledge level, Artificial
Intelligence, Volume 18, Issue 1, pp 87-127,
<doi:10.1016/0004-3702(82)90012-1>.

Brian C Smith, 1982.  Prologue to "Reflection and Semantics
in a Procedural Language," in Ronald J Brachman and Hector J
Levesque (eds), 1985, Readings in Knowledge Representation,
chapter 3, pp 31-39.  (This first appeared in Brian Smith's
PhD dissertation and Tech Report MIT/LCS/TR-272, MIT,
Cambridge, MA, 1982.)

Wikipedia: Neats and scruffies
<https://en.wikipedia.org/wiki/Neats_and_scruffies>




> On 29 Sep 2024, at 08:10, Humanist <humanist@dhhumanist.org> wrote:
>
>        Date: 2024-09-26 15:32:15+00:00
>        From: James Rovira <jamesrovira@gmail.com>
>        Subject: Re: [Humanist] 38.157: a paradox (?) commented
>
> Many thanks to Tim, Willard, and Jerry for their recent responses to my
> post. Willard's and Tim's responses to me illustrate that the conversation
> moves forward as we get increasingly more precise with our language --
> that  precision allows us all to hone in more on the real object of our
> query. And Jerry, as a Romanticist who taught me (indirectly, through his
> books) about Romanticism in grad school, I hoped would agree. The idea of
> the organic body being essential to human consciousness permeates Romantic
> poetry, including Wordsworth's "Expostulation and Reply":
>
> "The eye--it cannot <https://www.definitions.net/definition/cannot> choose
> but see;
> We cannot <https://www.definitions.net/definition/cannot> bid the ear be
> still;
> Our bodies <https://www.definitions.net/definition/bodies> feel, where'er
> they be,
> Against or with our will."
>
> Those lines articulate a fundamental way in which human cognitive processes
> are forever and inextricably bound up with our external environments via
> unending and inescapable sensory input. There is no machine in a box that
> experiences the world in that way.
>
> I could respond to Tim by justifying my claim, "calculators do math," in a
> very generic sense. They take inputs and produce outputs. That could also
> be very superficially extended to the human mind. But he's right -- the
> human mind and calculators do not do math the same way, as he explains very
> clearly. The point to me is that we should move away from generalities and
> start getting into the details of human cognitive functioning and machine
> "intelligence." All of the nonsense in the world about AI proceeds from
> black box models of both: human consciousness is an electrical black box,
> computers are electrical black boxes, they're parallel! But that's
> nonsense. We know more about both than that. So I appreciate Tim's critique
> of my language. That's a way I need to get more precise.
>
> I do have a question for Tim: how can you possibly justify this claim?
>
> "If you want a good example of some real AI take a look at
> the Wolfram Mathematica system.  This does do math.  Lots of
> different kinds of math, and lots of hard to do math: it knows
> and understand lots of math and does lots of mathematical
> reasoning."
>
> How can anyone know that the machine *knows and understands* lots of math
> in any way comparable to a human being? I will confess my complete
> ignorance of that particular machine, but I think the people working with
> it know more about the machine than about human consciousness, and they may
> be making broader claims than they justifiably can. To me, "knowing and
> understanding" requires a certain degree of self-consciousness about the
> activity while the activity is being carried out, which is certainly (at
> least potentially) human, but I think would be impossible to detect in any
> machine environment. A million or billion subroutines followed after
> extensive machine training isn't quite the same thing, I suspect.
>
> Thank you all for a great discussion, and I hope I receive further replies.
>
> Jim R
>
> --
> Dr. James Rovira <http://www.jamesrovira.com/>
>
>   - *David Bowie and Romanticism
>   <https://jamesrovira.com/2022/09/02/david-bowie-and-romanticism/>*,
>   Palgrave Macmillan, 2022
>   - *Women in Rock, Women in Romanticism
>   <https://www.routledge.com/Women-in-Rock-Women-in-Romanticism-The-
> Emancipation-of-Female-Will/Rovira/p/book/9781032069845>*,
>   Routledge, 2023
>
>


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php