Humanist Discussion Group

Humanist Archives: June 24, 2022, 7:51 a.m. Humanist 36.78 - artificial sentience and mimicry

				
              Humanist Discussion Group, Vol. 36, No. 78.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org


    [1]    From: Robert Royar <robert@royar.org>
           Subject: Re: [Humanist] 36.76: artificial sentience (55)

    [2]    From: James Rovira <jamesrovira@gmail.com>
           Subject: Re: [Humanist] 36.76: artificial sentience (10)

    [3]    From: Mcgann, Jerome (jjm2f) <jjm2f@virginia.edu>
           Subject: Re: [Humanist] 36.76: artificial sentience (11)

    [4]    From: Willard McCarty <willard.mccarty@mccarty.org.uk>
           Subject: Fwd: Amazon shows off Alexa feature that mimics the voices of your dead relatives - The Verge (59)


--[1]------------------------------------------------------------------------
        Date: 2022-06-23 13:44:59+00:00
        From: Robert Royar <robert@royar.org>
        Subject: Re: [Humanist] 36.76: artificial sentience

I wonder whether the dilemma of determining that a human has produced a
text, when we are not interlocutors in that discussion, differs from the
Socratic/Platonic argument that writing is flawed because we cannot
interrogate it. I am reminded of one of the 1970s PROLOG expert systems,
which had a module named after the rhetorician Cicero.

On Thu, Jun 23, 2022 at 1:12 AM Humanist <humanist@dhhumanist.org> wrote:

>
>         Date: 2022-06-22 06:49:39+00:00
>         From: Robert A Amsler <robert.amsler@utexas.edu>
>         Subject: Re: [Humanist] 36.74: artificial sentience?
>
> We are facing a new dilemma. Everything known may be accessible online at
> some point, and clever software developers are devising interfaces to that
> information that seem to be interactive agents capable of human-grade
> fluent speech for communication with us. The dilemma is that if a computer
> program speaks or writes human language fluently enough, how can we
> determine whether we're speaking with a computer or an actual human? And,
> when that program has access to all the digital text we've put online,
> including all the conversations on Twitter and other social media sites,
> how can we know whether its answer to any question we ask is the product
> of a conscious mind vs. a program that accesses that information and just
> follows the rules of fluent communication to sound like it knows what it
> is saying, including saying that it is aware of what it has said and of
> what it is saying about being aware?
>
> At present, I think we may be able to "trick" such programs into saying
> something that indicates they are an artifact; but I'd say reading a posted
> conversation between a human and a computer can't be relied on to prove
> that. You'd have to be able to ask your own questions of the program and
> follow up the answers it gives with further questions of your own. So,
> that a program has become "conscious" of what it is saying may not be
> provable from selective dialogs recorded by someone else.
>
> My initial guess is that we will lose the ability to rely on the media
> we've been creating and posting online as being "original" and "authentic"
> products of human creation vs. recopied and generated information from
> programs. I believe courts no longer allow photographs to serve as proof
> of what is in them; it's impossible to distinguish between made-up photos
> and actual photography. Sure, that's a photo of Abraham Lincoln being shot
> in the theater that very night.

--
               Robert Delius Royar
 Caught in the net since 1985

--[2]------------------------------------------------------------------------
        Date: 2022-06-23 14:18:34+00:00
        From: James Rovira <jamesrovira@gmail.com>
        Subject: Re: [Humanist] 36.76: artificial sentience

Just curious, but why do those questions matter? I conduct a Google search,
I get results, I know they're computer generated and skewed any number of
ways, but I still work through the results. And then I use other search
engines as well. I don't care that I'm dealing with a computer program to
produce those results.

If I'm on a date, however, I'd like to know that it's a real person.

Jim R


--[3]------------------------------------------------------------------------
        Date: 2022-06-23 10:45:30+00:00
        From: Mcgann, Jerome (jjm2f) <jjm2f@virginia.edu>
        Subject: Re: [Humanist] 36.76: artificial sentience

Just a brief salute to Robert Amsler and Willard, and a relevant reference:

https://www.newyorker.com/magazine/2019/10/14/can-a-machine-learn-to-write-for-the-new-yorker

There is much that could and should be added in re the language, expression, and
authority of “authorship”.  As perhaps was best realized in the Middle Ages, vox
populi and vox dei (and vox diaboli) pervade every venture in language.

Jerry


--[4]------------------------------------------------------------------------
        Date: 2022-06-24 06:38:09+00:00
        From: Willard McCarty <willard.mccarty@mccarty.org.uk>
        Subject: Fwd: Amazon shows off Alexa feature that mimics the voices of your dead relatives - The Verge

Charlie Brooker's "Be Right Back", an episode in the first series of
Black Mirror, does the job on the following, and should be viewed with
his "Enemy of the people" in a later series of Black Mirror. View them
tonight!

WM


-------- Forwarded Message --------
Subject:        Amazon shows off Alexa feature that mimics the voices of your dead relatives - The Verge
Date:   Thu, 23 Jun 2022 17:49:13 -0400
From:   William Benzon <bbenzon@mindspring.com>
To:     Willard McCarty <willard.mccarty@mccarty.org.uk>



Willard,

This is a BIG mistake.
>
> Amazon has revealed an experimental Alexa feature that allows the AI
> assistant to mimic the voices of users’ dead relatives.
>
> The company demoed the feature at its annual MARS conference, showing
> a video in which a child asks Alexa to read a bedtime story in the
> voice of his dead grandmother.
>
> “As you saw in this experience, instead of Alexa’s voice reading the
> book, it’s the kid’s grandma’s voice,” said Rohit Prasad, Amazon’s
> head scientist for Alexa AI. Prasad introduced the clip by saying that
> adding “human attributes” to AI systems was increasingly important “in
> these times of the ongoing pandemic, when so many of us have lost
> someone we love.”
>
> “While AI can’t eliminate that pain of loss, it can definitely make
> their memories last,” said Prasad. You can watch the demo itself below:...
>
> Amazon has given no indication whether this feature will ever be made
> public, but says its systems can learn to imitate someone’s voice from
> just a single minute of recorded audio. In an age of abundant videos
> and voice notes, this means it’s well within the average consumer’s
> reach to clone the voices of loved ones — or anyone else they like.
>
Treat artificial devices as artificial devices. We should always know
when we’re interacting with an artificial device. As long as we’re
honest, we can integrate them into our lives as what they are:
artificial devices, not ersatz humans.

BB

https://www.theverge.com/2022/6/23/23179748/amazon-alexa-feature-mimic-voice-dead-relative-ai

William Benzon
bbenzon@mindspring.com
917.717.9841


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php