Humanist Discussion Group

Humanist Archives: Sept. 20, 2024, 6:04 a.m. Humanist 38.147 - a paradox (?) discussed

				
              Humanist Discussion Group, Vol. 38, No. 147.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org


    [1]    From: James Rovira <jamesrovira@gmail.com>
           Subject: Re: [Humanist] 38.143: a paradox? (31)

    [2]    From: maurizio lana <maurizio.lana@uniupo.it>
           Subject: Re: [Humanist] 38.145: a paradox (?) discussed (135)

    [3]    From: Bill Pascoe <bill.pascoe@unimelb.edu.au>
           Subject: Re: [Humanist] 38.145: a paradox (?) discussed (82)


--[1]------------------------------------------------------------------------
        Date: 2024-09-19 19:57:16+00:00
        From: James Rovira <jamesrovira@gmail.com>
        Subject: Re: [Humanist] 38.143: a paradox?

I'm not sure what you mean by "holding the two in mind" in a way that
excludes a "point-by-point comparison"? When we hold two similar things
together in our minds without comparing them, what exactly are we doing?

To me, the purpose of modeling is to create a simulation that's more
controllable than the real thing - to better understand the primary object.
We can do things with the model that we can't do with the primary object.
The process you're describing, however, might be a way to come up with
something entirely new, to consider third possibilities. I'm not sure,
though.

Jim R

On Wed, Sep 18, 2024 at 1:12 AM Humanist <humanist@dhhumanist.org> wrote:

> Here's a question I am pondering and would like some help with.
>
> Much is written about modelling, a bit of it by me. But I am bothered by
> the built-in assumption that the role of the machine in this instance is
> to imitate the modelled object or process as closely as possible or
> practical. If, however, we juxtapose the computational machine as we
> know it to a human process or practice, neither to model the latter by
> the former nor to do a point-by-point comparison but to hold the two in
> mind in order to see what happens, what happens then? Where might one
> find a way to think about this situation?
>
> Comments welcome.
>
> Yours,
> WM
>

--[2]------------------------------------------------------------------------
        Date: 2024-09-19 07:27:45+00:00
        From: maurizio lana <maurizio.lana@uniupo.it>
        Subject: Re: [Humanist] 38.145: a paradox (?) discussed

dear Tim

you wrote
> Models are built from well chosen simplifications and idealisations of the
> subject, as we know and understand it.

I agree, because the frame, for me, is that "reality" (things, the world, the
subject, ...) is far more complex than we can ever perceive and
understand. All of our knowledge is only an asymptotic approach to
[things, the world, the subject, ...].
Hence any model cannot but be a well chosen simplification
and idealisation of the subject. Without this simplification and
idealisation no knowledge is possible, because [things, the world, the
subject, ...] is always ‘more’ than the mind which wants to know it.

We shall not cease from exploration / And the end of all our exploring /
Will be to arrive where we started /
And know the place for the first time

Otherwise we would have to imagine a time when nothing
remains to be known, because everything is already known.
Maurizio

Il 19/09/24 06:58, Tim Smithers <tim.smithers@cantab.net> ha scritto:
> Dear Willard,
>
> I'd like to take exception to your assertion that when we
> build and use a model, we make ...
>
>      "...  the built-in assumption that the role of the
>       machine in this instance is to imitate the modelled
>       object or process as closely as possible or practical."
>
> Making and using a good model is not, in my experience, nor in
> what I teach PhDers, about imitating as closely as possible or
> practical the subject to be modelled.
>
> Making a model is about making modelling decisions, and these
> involve deciding what of the subject you need to, or want to,
> model, and further decisions about how to implement these
> chosen [observable] aspects, qualities, or features, of your
> subject, so that your model is good enough for your purpose;
> for what you want to use your model for.
>
> The goodness of your model is a matter of how fit for purpose
> it is, not how closely it imitates the subject being modelled.
> Without having clear what the purpose of your model is, there
> is no way to do good model making, or using.  And, anyway,
> there is usually no satisfactory way of knowing how closely
> something imitates something else, even if it's supposed to.
> Deciding this well takes lots of knowledge and understanding
> of both the subject and thing that's supposed to imitate it,
> and if we had all this knowledge and understanding, we
> probably would not need a model of it, not for research at
> least.
>
> Models, for research, are instruments of investigation,
> "epistemologically equivalent to the microscope and the
> telescope," as Marcel Boumans (2012) nicely puts it.  Models
> are built from well chosen simplifications and idealisations
> of the subject, as we know and understand it.  Models are not
> made from "similarities."  Even if you think you have a good
> definition of what "similarity" is, and, better, one with
> which other people agree, it still takes lots of verified
> knowledge and understanding of your subject to apply any tests
> of your "similarity" notion, knowledge and understanding we
> don't usually have.  But to make and use good models we don't
> need to do any of this similarity checking.  What we do need
> to do is to show that, and how, our model satisfies the
> Modelling Relation well enough for our purpose.  This is what
> it takes properly to have a model of what we say it is a model
> of for our purpose.
>
> Here’s one way of putting all this which I like.
>
>     "A model is a representation of something by someone for
>      some purpose at a specific point in time.  It is a
>      representation which concentrates on some aspects —
>      features and their relations — and disregards others.  The
>      selection of these aspects is not random but functional:
>      it serves a specific function for an individual or a
>      group.  And a model is usually only useful and only makes
>      sense in the context of these functions and for the time
>      that they are needed." -- Fotis Jannidis (2018)
>
> Making and using a model is, in my experience, always an
> iterative business: earlier modelling decisions, and
> implementation decisions, are revised on the basis of what we
> learn from verifying, validating, and using our model, and
> thereby gradually discover what we need to simplify and
> idealise of our subject, and how, to have ourselves a good
> enough model for our investigations.  It's a conversation with
> our subject enabled by trying to model it in some useful way,
> rather than by trying to observe it in some way, using a
> microscope or telescope, for example.  Which also involves
> a conversation.
>
> This conversation, enabled by our model making and model
> using, is like, I would say, your idea
>
>       "...  to hold the two [your subject and your model] in
>       mind in order to see what happens ..."
>
> There's no paradox here that I see.
>
> -- Tim
>
>
> References
>
>   1.  Marcel J Boumans, 2012: Mathematics as Quasi-matter
>       to Build Models as Instruments, in: Dieks D, Gonzalez W,
>       Hartmann S, Stöltzner M, Weber M (eds), Probabilities,
>       Laws, and Structures.  The Philosophy of Science in a
>       European Perspective, Vol 3, Chapter 22, pp 307–316,
>       Springer, Dordrecht.
>
>   2.  Fotis Jannidis, 2018: Modeling in the Digital Humanities:
>       a Research Program?, a chapter in Historical Social
>       Research Supplement 31, pp 96-100, published by GESIS
>       DOI: 10.12759

------------------------------------------------------------------------

at this point I must make a confession:
like my friend Erri De Luca, I am an extremist pro-European.
This means that, for me, a united Europe is the only reasonable political
utopia we Europeans have coined.
Javier Cercas, opening of the Salone del Libro, Turin 2018

------------------------------------------------------------------------
Maurizio Lana
Università del Piemonte Orientale
Dipartimento di Studi Umanistici
Piazza Roma 36 - 13100 Vercelli

--[3]------------------------------------------------------------------------
        Date: 2024-09-19 06:58:32+00:00
        From: Bill Pascoe <bill.pascoe@unimelb.edu.au>
        Subject: Re: [Humanist] 38.145: a paradox (?) discussed

Hi Willard,

The history of AI might be a good example of what you are talking about. Let's
say that in AI the aim is to model (human) intelligence. Sometimes this has been
done based on some assumption about how intelligence works, and what it is. This
has been so from the very beginning: Ada Lovelace's notes cautiously consider
the tantalising prospect that by scaling up logical processing capacity we
could make some sort of artificial mind, because for her and her time, I
suppose, what made humans humans and minds minds was the capacity for reason,
and the assumption was that we go about reasoning our way through the world. In
that sense people thought a large collection of logic circuits might have the
capacity for thought; the model of mind was a very complex logic machine.
The Turing Test (or the Turing Fallacy) is important because it set in motion a
focus on imitating intelligent behaviour, rather than on trying to figure out
what intelligence is: in this test it doesn't matter whether a machine is
'really' intelligent, only whether a person can tell the difference between a
human intelligence and a machine intelligence. This frees researchers up to try
all kinds of models which do not represent or reflect the inner workings of
intelligence but nonetheless produce intelligent-seeming outcomes, such as
chatbots and chess-playing machines, etc.

By contrast, there has been development of neural networks intended to
represent, and function similarly to, the inner material workings of
human and animal intelligence. However, these still tend to be focused either on
mimicking human- and animal-like behaviour (such as walking without running into
things), or on things which are simply useful to humans, like automatically
categorising images, rather than on actually being intelligent, or on figuring
out what 'intelligence' even is.

These models attempt to mimic intelligence either a) as a behaviourist black
box, whose inner workings may be anything like or unlike those of organic
intelligence, or b) by replicating the material functioning of organic
intelligence in some artificial (software or mechanical) form. By comparing
ourselves with them, we inevitably find the result is not quite human, or not
quite intelligent. In considering in what way it falls short, or excels in some
area while lacking in others, and why, we learn something more about ourselves
by comparison, and about what intelligence is, and so we try something else.
Humans who play chess are intelligent, but we have learned that computers that
play chess aren't intelligent; they are just machines playing chess. Neural
networks are good at fuzzy categorisation. Humans can do that well, but a
neural net trained to categorise images is not intelligent, etc.
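[For readers unfamiliar with the machinery under discussion: the simplest
possible "neural net" is a single perceptron, and a minimal sketch of one
makes the categorisation point concrete. This is a generic textbook example
with an invented toy task, data, and learning rate; it is not the network
Bill describes.]

```python
# A single perceptron -- the smallest "neural net" -- learning a
# categorisation task. Generic textbook sketch; the task, data,
# and learning rate are invented for illustration only.

# Toy task: classify grid points (x, y) by whether y > x.
data = [((0.1 * i, 0.1 * j), 1 if j > i else 0)
        for i in range(10) for j in range(10) if i != j]

w = [0.0, 0.0]   # one weight per input
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(p):
    # Fire (output 1) if the weighted sum crosses the threshold.
    return 1 if w[0] * p[0] + w[1] * p[1] + b > 0 else 0

# Classic perceptron rule: nudge the weights toward every
# misclassified example; repeat until the boundary settles.
for _ in range(100):
    for p, target in data:
        err = target - predict(p)
        w[0] += lr * err * p[0]
        w[1] += lr * err * p[1]
        b += lr * err

accuracy = sum(predict(p) == t for p, t in data) / len(data)
```

The net is never told what "above the diagonal" means; it is only nudged,
example by example, until its behaviour matches the labels. Whether that
adjusted behaviour amounts to anything like intelligence is exactly the
question at issue in the paragraph above.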

Another analogy is the development of artificial flight. People tried to model
flight by making wings that look like bird or bat wings, perhaps even flapping
them. In time they learned to make wings that in some ways were like birds' wings
and in other ways not. In this way they learned more about what flight is, what
causes it, how it works, how to control it, etc. And ultimately, artificial
flight in some ways can look like a bird, planes have wings, but in other ways
it doesn't, planes don't flap their wings but have propellers. Some things are
necessary to flight and some are incidental. You don't need feathers and a beak.
You do need some kind of forward propulsion and aerofoils.

We should not expect artificial intelligence to be exactly human, but by
comparing ourselves to its not-quite-human intelligence, one day we should have
a pretty clear understanding of intelligence, and so a better understanding of
ourselves. We're not there yet.

More than a decade ago I was working on this question, and developed a
particular kind of neural net, meant to be a working demonstration of the
simplest possible (or stupidest) model of genuine intelligence. The first test
worked. The hope was to purposefully go through this iterative philosophical-
technical process of testing theory with implementation, and retheorising based
on the flaws in the implementation. Rather than get confused by the vast array
of things that very intelligent animals do, which seems to be what most people
do in trying to emulate humans, I hoped to arrive at the simplest possible
essential principles of intelligence, upon which everything else would be an
elaboration or advancement, etc. Let's try to get something as intelligent as a
lobster before we try to teach it chess. The simplest definition of
intelligence I arrived at at the time, grounded in thermodynamics, information
theory, evolutionary biology, philosophy, psychology, ethics, literature, the
history of AI, etc., was 'Intelligence is doing more good.' But explaining that
would take many thousands of words - so hopefully some other time.

It's possible that someone else has also figured this stuff out by now, and gone
further, as I haven't been watching, and it seemed to me years ago that all the
pieces of the puzzle were there and you just needed to put them together - but I
feel like everyone is a bit preoccupied with ChatGPT and drones to think about a
simple model for genuine intelligence.

Bill Pascoe



_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php