Humanist Discussion Group

Humanist 36.560 (April 29, 2023) - weakness of 'strong AI'; chatbots & specialisation

              Humanist Discussion Group, Vol. 36, No. 560.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org


    [1]    From: Manfred Thaller <manfred.thaller@uni-koeln.de>
           Subject: Re: [Humanist] 36.557: some naive musings about AI / artificial intelligence (104)

    [2]    From: Robert A Amsler <robert.amsler@utexas.edu>
           Subject: A question about differences between artificial and human intelligence (58)


--[1]------------------------------------------------------------------------
        Date: 2023-04-28 07:50:31+00:00
        From: Manfred Thaller <manfred.thaller@uni-koeln.de>
        Subject: Re: [Humanist] 36.557: some naive musings about AI / artificial intelligence

Dear James,

Let me start with your second point, as I think our opinions are quite
close here.

> I think we've already wasted too much time talking about what you identify
> as "strong AI." I prefer to ask this question: On what basis do we
> believe strong AI is even possible?

I could not agree more. Indeed, that is the very reason why I wanted to
remind us of this difference. I am simply tired of "experts" and "public
intellectuals" who every five years or so encounter a highly specialized
success of the weak paradigm and immediately start discussing the day on
which the human race will be superseded by artificial intelligences. And
I notice that, at least in the German press and in those parts of the US
and UK press I try to follow - to say nothing of Elon Musk & Co. - we
are in full stampede again. Why a program recombining chains of symbols
should mutate tomorrow into one that decides on its own to replace
humanity beats me; admittedly, I am irritated that so few people
emphasize the difference. And I would consider it a noble challenge for
the Humanities to keep their own share of such "public intellectuals" a
bit more closely in check.

On the other point you raise, we disagree more. Yes, of course, writing
a legible and meaningful summary IS a valuable qualification. Generally
speaking, though, I have the feeling that in the last ten years the
notion of extending Humanities knowledge in its Digital variety has been
downplayed a bit too much in favor of mustering digital technologies for
publishing, communicating, visualizing, etc.

So I really would be happy if we could agree that gaining new knowledge
is at least as important as polishing its presentation, particularly
when we see that the polishing seems to be quite suitable for automation.

And maybe confronting students at the earliest possible stage with the
task of extracting meaning from material or a question for which no such
synthesis yet exists might encourage them to look a bit less for an easy
way out?

Best regards,
Manfred

On 28.04.23 at 08:26, Humanist wrote:
>                Humanist Discussion Group, Vol. 36, No. 557.
>
>          Date: 2023-04-27 13:34:30+00:00
>          From: James Rovira <jamesrovira@gmail.com>
>          Subject: Re: [Humanist] 36.553: Some naive musings about AI / artificial intelligence
>
> Thanks very much, Dr. Thaller, for your very engaging post. To speak to
> item 7 below, as a writing instructor, I regularly assign at least one
> summary writing assignment in my first year writing courses. We should keep
> in mind that writing instruction occurs at all levels. ChatGPT is very good
> at summary writing. Students who simply copy and paste a ChatGPT summary of
> anything will tend to be easily detectable, even without software, because
> it does indeed have its own voice. It's easily recognizable once you've
> read a little bit of it, especially if at any point the student has
> submitted some of his or her own writing. But either way, I think it's
> valid for instructors at some levels to be concerned about AI-generated
> text being substituted for student writing.
>
> I think we've already wasted too much time talking about what you identify
> as "strong AI." I prefer to ask this question: On what basis do we
> believe strong AI is even possible?
>
> Jim R
>
> On Thu, Apr 27, 2023 at 1:16 AM Humanist <humanist@dhhumanist.org> wrote:
>
>> (7) (At least if you are a historian:) If you are afraid that a
>> weak-paradigm tool like ChatGPT can produce a valid student's paper,
>> maybe you should change the assignments?
>>
>> Apologies for being loquacious,
>> Manfred
>>
>
> --
> Dr. James Rovira <http://www.jamesrovira.com/>

--
Prof.em.Dr. Manfred Thaller
formerly University at Cologne /
zuletzt Universität zu Köln

--[2]------------------------------------------------------------------------
        Date: 2023-04-28 07:44:54+00:00
        From: Robert A Amsler <robert.amsler@utexas.edu>
        Subject: A question about differences between artificial and human intelligence

Long ago (in the late 1960s), when I started studying artificial
intelligence, particularly the question-answering capabilities of
computers with access to machine-readable human text resources, a
question that bothered me was that educated humans divide themselves
into experts in certain professions, and those professions each have
"their literature", in which professionals use language specific to
their discipline to discuss peer-to-peer questions. Human experts decide
during their education what professions they will "go into" and often
label themselves thereafter with their discipline's job titles. They
become lawyers, doctors, or scientists, and as the amount of recorded
knowledge has accumulated, we have continued subdividing the fields of
knowledge into finer and finer professional categories. Scientists
became "astronomers" and then "radio-astronomers", until today Wikipedia
notes fields such as "infrared astronomy", "optical astronomy",
"ultraviolet astronomy", "X-ray astronomy", and "gamma-ray astronomy",
and goes on to describe fields not based on the electromagnetic
spectrum, such as "neutrino astronomy" and "gravitational-wave astronomy".

If we create artificial intelligence to read everything and access it to
answer questions, are we going to be faced with deciding "which"
professional expert will answer, or can an artificial intelligence be
developed to encompass all of human knowledge at once? I'd suspect that,
since we have recorded our knowledge by disciplines, current AI will be
limited by those subdivisions. What seems to happen when you talk to a
chatbot is that it works through answers pitched at different
educational levels, perhaps keyed to the vocabulary you use in your
questions. If your questions become more detailed, it resorts to using
Wikipedia texts; but what happens if you use language so specific that
only an expert at the highest level of human knowledge in one very
specific field can even understand what you're asking?
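
One way to make that experiment concrete: the minimal Python sketch
below sends the same question to a chat-completions-style service under
two different expert personas and prints both answers. It assumes an
OpenAI-style HTTP API; the endpoint, model name, and personas are
illustrative assumptions rather than a reference implementation, and
nothing about the service's internal behaviour is claimed here.

    # Probe how an expert "persona" changes a chatbot's answer.
    # Assumptions: an OpenAI-style chat-completions HTTP API; the
    # endpoint, model name, and personas below are illustrative only.
    import os
    import requests

    QUESTION = "What limits the angular resolution of a radio telescope?"

    PERSONAS = [
        "You are a patient teacher explaining to a secondary-school student.",
        "You are a professional radio astronomer answering a peer, "
        "using the field's own terminology.",
    ]

    for persona in PERSONAS:
        resp = requests.post(
            "https://api.openai.com/v1/chat/completions",  # assumed endpoint
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            json={
                "model": "gpt-3.5-turbo",  # assumed model name
                "messages": [
                    {"role": "system", "content": persona},
                    {"role": "user", "content": QUESTION},
                ],
            },
            timeout=60,
        )
        resp.raise_for_status()
        print(persona)
        print(resp.json()["choices"][0]["message"]["content"])
        print()

Comparing the two outputs side by side makes the vocabulary-keyed
behaviour easy to inspect for any question one cares to try.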

Can we build an AI that can do more than what humans have been able to
do by subdividing knowledge into separate, very specific disciplines of
ever-smaller scope? If we don't know how to do such a task, will the AIs
we build be limited to taking on a particular "hat" when answering our
questions? There is a matter of creativity to be achieved here: a fusion
of multiple fields of knowledge marks innovation in human knowledge,
resulting in the creation of entirely new fields. How do we design AIs
to exceed what we can do? Sure, they can study more data; but how do we
build AIs to have "eureka" moments?

But if we build chatbots that have read all of the machine-readable text
we've managed to accumulate, from every discipline, in every language,
how will we construct them to respond to questions whose answers could
be formulated as different expert humans from different disciplines
would formulate them?
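
To make that "hat-choosing" step tangible, here is a deliberately naive
sketch, with all keyword lists and personas invented for illustration:
it assigns a question to a disciplinary persona by counting
discipline-specific vocabulary, which is one crude way a system might
decide which expert should answer.

    # Hypothetical "disciplinary router": pick an expert persona for a
    # question by counting discipline-specific vocabulary. All keyword
    # lists and personas are invented for illustration.
    DISCIPLINES = {
        "law": ({"tort", "statute", "plaintiff", "precedent"},
                "Answer as a lawyer writing for other lawyers."),
        "medicine": ({"etiology", "prognosis", "contraindication"},
                     "Answer as a physician writing for other physicians."),
        "astronomy": ({"redshift", "interferometry", "neutrino"},
                      "Answer as an astronomer writing for other astronomers."),
    }

    def choose_persona(question: str) -> str:
        words = set(question.lower().replace("?", "").split())
        # Take the discipline whose vocabulary overlaps the question most.
        keywords, persona = max(DISCIPLINES.values(),
                                key=lambda entry: len(entry[0] & words))
        # Fall back to a generalist when no disciplinary vocabulary matches.
        return persona if keywords & words else "Answer as a well-read generalist."

    print(choose_persona("How does interferometry affect angular resolution?"))
    print(choose_persona("What is the capital of France?"))

Of course, a keyword count is exactly the kind of boundary-bound
mechanism the question above worries about; it routes, but it cannot
fuse disciplines.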

In short, did humans decide it was impossible to know "everything" from
"everywhere" all at once, and start subdividing recorded knowledge into
categories tailored to human professions, because one couldn't be an
expert on everything, everywhere, all at once -- while now we're faced
with creating artificial intelligence software that can access all the
stored information the human race has created to answer any question? Is
human knowledge at the most expert level so inherently limited by the
boundaries we've created to store and study it that it may be impossible
for one artificial "mind" to answer without taking on the persona of
experts in each discipline, answering as those experts would?


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php