Humanist Discussion Group

Humanist Archives: Jan. 29, 2023, 8:12 a.m. Humanist 36.362 - ChatGPT as author

              Humanist Discussion Group, Vol. 36, No. 362.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2023-01-28 19:03:34+00:00
        From: Tim Smithers <tim.smithers@cantab.net>
        Subject: Re: [Humanist] 36.360: ChatGPT as author

Hello

If we are to believe the fan[atico]s of ChatGPT, and its ilk,
we should consult ChatGPT about the matter of whether it
should be named as a coauthor.

So I did.

This is what happened.  It takes a bit to sort things out,
but, in the end, it's clear what the answer is, I think.

=============================================================

 An interaction with ChatGPT on 2023.01.28

 TS: Should ChatGPT be named as a coauthor when it is used in
 some way to write something?

 ChatGPT:

  It depends on the context and the level of contribution of
  the model.  If the model was used to generate a significant
  portion of the content, or if the content would not have
  been possible without the model, then it may be appropriate
  to list the model as a co-author.  However, if the model was
  only used as a tool to assist in the writing process, it may
  not be necessary to list the model as a co-author.
  Ultimately, the decision should be made by the authors and
  the publisher based on the specific circumstances of the
  project.

 TS: What would be a "significant portion of the content"?

 ChatGPT:

  A significant portion of the content would depend on the
  context and the nature of the project.  In some cases, it
  could be a large percentage of the text, while in others it
  could be a smaller amount that is considered crucial to the
  overall work.  It would ultimately be up to the authors and
  the publisher to determine what constitutes a significant
  portion of the content in a particular project.

 TS: How best could authors and publishers decide on this
 "large percentage" or "crucial" content?

 ChatGPT:

  Authors and publishers could use a variety of methods to
  determine the level of contribution of a language model in a
  particular project.  One way would be to quantify the amount
  of text generated by the model as a percentage of the total
  text.  For example, if a model generated 50% of the text in
  a paper, it could be considered to have made a significant
  contribution to the work.

  Another way would be to evaluate the content generated by
  the model for its relevance and importance to the overall
  work.  For example, if the model generated a particularly
  important or novel idea or insight that forms a central part
  of the work, it could be considered to have made a crucial
  contribution even if the percentage of text generated is
  relatively small.

  Ultimately, the authors and publishers should use their best
  judgement and consider the specific circumstances of the
  project when determining the level of contribution of a
  language model.

 TS: So, is it reasonable to consider ChatGPT to be an author
 of words in the same way as a person can properly be
 understood as the author of the words they write?

 ChatGPT:

  ChatGPT and other language models are tools that can be used
  to assist in the writing process, and they are capable of
  generating text.  However, they are not sentient beings and
  they do not have the capacity to conceptualize, create, or
  intend the written work.  Therefore, it may not be
  appropriate to consider the model as an "author" in the
  traditional sense.  The authorship should be attributed to
  the person or persons who have used the model as a tool to
  generate the text and have actively curated, edited and
  contributed to the final product.  It's important to note
  that there are also ethical and legal considerations, for
  instance, copyright laws which protect the authorship rights
  of natural persons and not artificial intelligence.

 TS: This last reply seems to contradict the previous
 responses which talk about under what conditions it is
 reasonable to count ChatGPT as a coauthor.

 ChatGPT:

  I apologize if my previous responses were unclear.  You are
  correct that in some cases, it may be appropriate to list a
  language model as a co-author if it has made a significant
  or crucial contribution to the work.  However, it is
  important to note that the model is not a sentient being and
  does not have the capacity for authorship in the traditional
  sense.  Therefore, it would not be appropriate to consider
  the model as an "author" in the same way that a human author
  would be considered.  Instead, the authorship should be
  attributed to the person or persons who have used the model
  as a tool to generate the text and have actively curated,
  edited and contributed to the final product.

 TS: Right.  So, in what new sense of "author" does it make
 sense to call ChatGPT an author?

 ChatGPT:

  When referring to a language model like ChatGPT as a
  "co-author", it is in the sense that the model has made a
  significant or crucial contribution to the work, not in the
  traditional sense of authorship where the authors have the
  capacity for conceptualizing, creating, or intending the
  written work.  In this sense, authorship would be attributed
  to the person or persons who have used the model as a tool
  to generate the text and have actively curated, edited and
  contributed to the final product.  It would be more accurate
  to say that the model has been a collaborator, a tool or an
  assistance in the writing process, but not an author.

 TS: A collaborator?  The usual use of this term refers to a
 person, someone who can help with what needs to be done, and
 take responsibility for what they do in helping to get it
 done.  Is that how ChatGPT can work as a collaborator?

 ChatGPT:

  When referring to a language model like ChatGPT as a
  "collaborator", it is in the sense that the model is able to
  assist the human authors in creating written content.  The
  model can help by generating text, providing suggestions,
  and even providing insights that the human authors may not
  have thought of.  In this sense, the model can work as a
  collaborator, however it is important to remember that the
  model is not a sentient being and does not have the capacity
  for decision making, intentions or taking responsibility for
  what it does, it is simply a tool that can help with the
  creation of the written work.  The final decision and
  responsibility for the content still lie with the human
  authors, who have the capacity to curate, edit and
  contribute to the final product.

 TS: It sounds like ChatGPT is being described here as a
 writing assistant.  Is that right?

 ChatGPT:

  Yes, that is correct.  ChatGPT and other language models can
  be used as a writing assistant to help generate text,
  provide suggestions, and even provide insights that the
  human authors may not have thought of.  The model can help
  the authors in different stages of the writing process, from
  generating ideas and structuring the text to proofreading
  and editing.  However, it's important to note that the model
  is not capable of having creative intent or decision-making
  abilities and the final decision and responsibility for the
  content still lie with the human authors.  This means that
  the authors have the final say on what gets included in the
  final product and how it's presented.

 TS: These replies are all heavily qualified.  Wouldn't it be
 clearer to call ChatGPT just a new kind of writing tool: a
 tool that can be used, by a person, to write with?

 ChatGPT:

  Yes, that is an accurate way to describe ChatGPT and other
  language models.  They are tools that can be used by human
  authors to assist in the writing process.  They can generate
  text, provide suggestions, and even provide insights that
  the human authors may not have thought of.  However, it is
  important to note that the model is not capable of having
  creative intent or decision-making abilities, and the final
  decision and responsibility for the content still lie with
  the human authors.  Therefore, it would be accurate to call
  ChatGPT a new kind of writing tool, and it's important to
  remember that like any other tool, it's only as good as the
  person using it.

 TS: So, if ChatGPT is accurately described as a tool, it
 would not be accurate to describe it as a coauthor when it is
 used in the writing of something, right?

 ChatGPT:

  Yes, that is correct.  ChatGPT is a tool that can assist in
  the writing process, but it is not a sentient being and does
  not have the capacity for authorship in the traditional
  sense.  Therefore, it would not be accurate to consider the
  model as a "co-author" or "author" when it is used in the
  writing of something.  The authorship should be attributed
  to the person or persons who have used the model as a tool
  to generate the text and have actively curated, edited and
  contributed to the final product.  It's important to
  remember that ChatGPT is a tool and the final decision and
  responsibility for the content lies with the human authors.

=============================================================

Thus, the answer, despite the somewhat repetitive and winding
route to it, is no: ChatGPT is not a coauthor, not unless we
want to change the meaning of the terms 'coauthor' and
'author' so that they no longer carry the human qualities we
take them to carry, such as responsibility for what has been
written, and an understanding of what has been written.

I also notice that ChatGPT's response patterns have changed
since I first started playing with it.  It now uses "I" much
less often, which, for me, is a big improvement.  I strongly
object to these systems using "I".  The exchange above
contains only one example, when it "apologises" for being
unclear, which is, I would say, a return to a poor
presentation style.  I don't think ChatGPT really knows what
an apology is, and it would be better if it didn't try to
give the impression that it does.  Accepting this kind of
thing just empties out the meaning of what an apology is.

Best regards,

Tim



> On 26 Jan 2023, at 07:20, Humanist <humanist@dhhumanist.org> wrote:
>
>
>              Humanist Discussion Group, Vol. 36, No. 360.
>        Department of Digital Humanities, University of Cologne
>                      Hosted by DH-Cologne
>                       www.dhhumanist.org
>                Submit to: humanist@dhhumanist.org
>
>
>    [1]    From: Christian-Emil Smith Ore <c.e.s.ore@iln.uio.no>
>           Subject: Re: [Humanist] 36.358: ChatGPT as co-author (145)
>
>    [2]    From: Robert A Amsler <robert.amsler@utexas.edu>
>           Subject: Re: [Humanist] 36.358: ChatGPT as co-author (32)
>
>    [3]    From: Fabio Ciotti <fabio.ciotti@uniroma2.it>
>           Subject: Re: [Humanist] 36.358: ChatGPT as co-author (25)
>
>
> --[1]------------------------------------------------------------------------
>        Date: 2023-01-25 11:08:47+00:00
>        From: Christian-Emil Smith Ore <c.e.s.ore@iln.uio.no>
>        Subject: Re: [Humanist] 36.358: ChatGPT as co-author
>
> Hi,
>
> The most correct scholarly practice is to put chatbots and similar tools in
> the reference list, explain the use of the tool, and mark text output from
> such tools as citations in quotes if used in the text of a paper.
>
> Best,
>
> Christian-Emil
>
>
> ________________________________
> From: Humanist <humanist@dhhumanist.org>
> Sent: 25 January 2023 11:05
> To: Christian-Emil Smith Ore
> Subject: [Humanist] 36.358: ChatGPT as co-author
>
>
>              Humanist Discussion Group, Vol. 36, No. 358.
>        Department of Digital Humanities, University of Cologne
>                      Hosted by DH-Cologne
>                       www.dhhumanist.org
>                Submit to: humanist@dhhumanist.org
>
>
>    [1]    From: maurizio lana <maurizio.lana@uniupo.it>
>           Subject: Re: [Humanist] 36.354: ChatBot as co-author? (31)
>
>    [2]    From: Henry Schaffer <hes@ncsu.edu>
>           Subject: Re: [Humanist] 36.354: ChatBot as co-author? (50)
>
>    [3]    From: William Benzon <bbenzon@mindspring.com>
>           Subject: Re: [Humanist] 36.354: ChatBot as co-author? (13)
>
>
> --[1]------------------------------------------------------------------------
>        Date: 2023-01-22 14:04:07+00:00
>        From: maurizio lana <maurizio.lana@uniupo.it>
>        Subject: Re: [Humanist] 36.354: ChatBot as co-author?
>
> hi Willard,
> if anyone understood that
>> ChatGPT produces 'finite' text by combining strings of text whose
>> meaning it knows nothing about, in a manner statistically consistent
>> with the use of these strings in the texts with which it was programmed.
>> The text produced therefore has no informational value or
>> communicative intent.
>> ChatGPT is not programmed to produce correct answers, but sequences of
>> words that are statistically similar to those in the texts chosen by
>> the programmers.
> the insertion of ChatGPT as coauthor wouldn't happen, I think; unless
> one wants to draw attention to what it has written.
> Maurizio
>
> Il 21/01/23 08:50, Willard McCarty <willard.mccarty@mccarty.org.uk> ha
> scritto:
>> An article in Nature might interest some here: "ChatGPT listed as author
>> on research papers: many scientists disapprove", at
>> <https://tinyurl.com/4jc3j7ve>
>
> -----
> I do not believe in any liberation, either individual or collective,
> that is obtained without the cost of self-discipline,
> of self-construction, of effort
> i. calvino
>
> -----
> Maurizio Lana
> Università del Piemonte Orientale
> Dipartimento di Studi Umanistici
> Piazza Roma 36 - 13100 Vercelli
>
> --[2]------------------------------------------------------------------------
>        Date: 2023-01-21 21:24:57+00:00
>        From: Henry Schaffer <hes@ncsu.edu>
>        Subject: Re: [Humanist] 36.354: ChatBot as co-author?
>
> I read that Nature article earlier - wow, there is a lot of discussion of
> the use of AI/ML text generation in academia - and what struck me was
> this part of the article:
>
> The ethics of generative AI
>
> There are already clear authorship guidelines that mean ChatGPT shouldn’t
> be credited as a co-author, says Matt Hodgkinson, a research-integrity
> manager at the UK Research Integrity Office in London, speaking in a
> personal capacity. One guideline is that a co-author needs to make a
> “significant scholarly contribution” to the article — which might be
> possible with tools such as ChatGPT, he says. But it must also have the
> capacity to agree to be a co-author, and to take responsibility for a study
> — or, at least, the part it contributed to. “It’s really that second part
> on which the idea of giving an AI tool co-authorship really hits a
> roadblock,” he says.
>
>  I think that settles the issue.
>
> --henry
>
>
> On Sat, Jan 21, 2023 at 2:50 AM Humanist <humanist@dhhumanist.org> wrote:
>
>>
>>              Humanist Discussion Group, Vol. 36, No. 354.
>>        Department of Digital Humanities, University of Cologne
>>                      Hosted by DH-Cologne
>>                       www.dhhumanist.org
>>                Submit to: humanist@dhhumanist.org
>>
>>
>>
>>
>>        Date: 2023-01-20 20:30:07+00:00
>>        From: Willard McCarty <willard.mccarty@mccarty.org.uk>
>>        Subject: ChatBot as co-author?
>>
>> An article in Nature might interest some here: "ChatGPT listed as author
>> on research papers: many scientists disapprove", at
>> <https://tinyurl.com/4jc3j7ve>
>>
>> Yours,
>> WM
>> --
>> Willard McCarty,
>> Professor emeritus, King's College London;
>> Editor, Interdisciplinary Science Reviews;  Humanist
>> www.mccarty.org.uk
>
>
> --[3]------------------------------------------------------------------------
>        Date: 2023-01-21 08:41:42+00:00
>        From: William Benzon <bbenzon@mindspring.com>
>        Subject: Re: [Humanist] 36.354: ChatBot as co-author?
>
> Willard,
>
> Meanwhile Karsten Wenzlaff, Sebastian Spaeth, and Christopher Loreck have
> asked their colleagues to make suggestions for guidelines to be followed
> when crediting ChatGPT in a paper. They have compiled a list of relevant
> articles:
>
>
> <https://docs.google.com/document/d/1mg5uHT3KXyAbNDo200EdQgYqs7JLg-yf-oCEzLbenP8/edit#heading=h.5nqtknt597v9>
>
> Bill B
>
>
> --[2]------------------------------------------------------------------------
>        Date: 2023-01-25 11:06:06+00:00
>        From: Robert A Amsler <robert.amsler@utexas.edu>
>        Subject: Re: [Humanist] 36.358: ChatGPT as co-author
>
> ChatGPT can't qualify as an "author" or "co-author" of an article, any more
> than the use of a spelling corrector or grammar error detector program can
> qualify as equivalent to a human "editor" of the work. Even human editors
> of individual articles are not mentioned in most newspaper or
> periodical articles. Translators are mentioned and the fact of a work being
> a translation (whether or not the translator is mentioned by name) is
> almost always mentioned because it can affect the choice of semantic
> meanings.
>
> The dilemma comes when "statistical" likelihood in context becomes
> "knowledge". The sentence "The first president of the United States was
> George Washington"  is a statement of fact--yet it is also a statement of
> statistical probability in the context of the beginning words "The first
> president of the United States was". A program remembering ever-longer
> sequences of words as "context" would likely remember "George Washington"
> as more likely in that "context". So, as programs remember more context for
> subsequent words, knowledge gets confused with probability.
>
> The problem gets compounded when the "text" fed to programs as
> "statistical" knowledge includes multiple types of text, such as "fiction"
> being included as "training material" for a program. The program will
> become more "fluent" in how to compose the language, but extremely likely
> to make false (fictional) statements. To some degree, we ought to be told
> about the source program used in the production of the text and the source
> corpora used to train the program.
>
> The human practice of granting degrees to individuals is one method society
> has used to make these distinctions for authors. Without such a system in
> place for software, we are not being given knowledge of the author's
> qualifications for writing the text.
>
>
>
> --[3]------------------------------------------------------------------------
>        Date: 2023-01-25 10:20:01+00:00
>        From: Fabio Ciotti <fabio.ciotti@uniroma2.it>
>        Subject: Re: [Humanist] 36.358: ChatGPT as co-author
>
> Dear Colleagues,
>
> I would be much more deflationist about the set of requirements for
> defining an agent as truly [conscious | intelligent | agentive | ... ]. I
> think it's worth listening to Dave Chalmers' thoughts on this, since he is
> notoriously one of the most acute philosophers to have conceptualized
> consciousness as the hard problem and, nonetheless, is very liberal about
> the possibility that LLMs can at some point be said to be conscious in a
> strong sense (although not necessarily in the same way human beings are).
> Cf. https://bit.ly/3wycN2c
>
> FC
>
>
>
> --
>
> Professor Fabio Ciotti
> Dept. "Studi letterari, filosofici e di storia dell’arte"
> University of Roma "Tor Vergata"
> Chair, European Association for Digital Humanities (EADH)
> Editor-in-Chief, "Umanistica Digitale" https://umanisticadigitale.unibo.it/
> @Fabio_Ciotti @fabiociotti@fedihum.org
> f.ciotti@pec.it



_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php