Humanist Discussion Group

Humanist Archives: Feb. 1, 2023, 7:53 a.m. Humanist 36.369 - ChatGPT as co-author

				
              Humanist Discussion Group, Vol. 36, No. 369.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org


    [1]    From: James Rovira <jamesrovira@gmail.com>
           Subject: Re: [Humanist] 36.366: ChatGPT as co-author (54)

    [2]    From: Norman Gray <norman.gray@glasgow.ac.uk>
           Subject: Re: [Humanist] 36.366: ChatGPT as co-author (42)

    [3]    From:  <jkrybicki@gmail.com>
           Subject: RE: [Humanist] 36.366: ChatGPT as co-author (462)


--[1]------------------------------------------------------------------------
        Date: 2023-01-31 22:07:42+00:00
        From: James Rovira <jamesrovira@gmail.com>
        Subject: Re: [Humanist] 36.366: ChatGPT as co-author

Yes, I second that appreciation for Tim Smithers's posts.

I think we're missing the point about attribution when we ask if ChatGPT
should be listed as a "co-author." I think we need to acknowledge its use
as a tool, especially if substantial output was used from ChatGPT generated
text. Methodology section? Explanation in the introduction? I do think some
kind of disclosure is a good idea.

I would like to add that this practice isn't necessarily new, although I
don't recall reading questions about authorship related to it. In the early
1970s, David Bowie started writing lyrics using a "cut-up" method of
composition: he would cut up text from newspaper columns and then rearrange
the cut-up text into song lyrics. His immediate inspiration was William
Burroughs, who was in turn inspired by Brion Gysin, but the practice goes
back at least to the Dadaists. In poetry, we could call this kind of
composition a pastiche, or maybe even a cento. Should he have given credit
to the various journalists whose phrases he cribbed? I don't recall seeing
the question ever come up (I won't say it hasn't -- I just haven't seen it).

In the 90s, I think, Bowie then evolved this practice in collaboration with
an Apple app developer to make it electronic -- he had an app developed
called the Verbasizer (apparently and sadly not available in the App Store
-- it may only ever have been for his personal use) that produced similar
results. I am not clear on how it worked -- whether it was just a random
phrase generator or whether it could draw from a corpus. Either way, Bowie
used it to generate cut-up lyrics.
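For anyone curious, the basic cut-up procedure is simple enough to sketch in a few lines of Python. This is purely my own illustration of the general idea -- nothing is publicly documented about how the Verbasizer actually worked, so none of this should be read as a claim about Bowie's tool:

```python
import random

def cut_up(sources, phrase_len=3, n_phrases=12, seed=None):
    """Naive cut-up: slice each source text into short runs of words
    ('cuttings'), shuffle them all together, and join a selection."""
    rng = random.Random(seed)
    cuttings = []
    for text in sources:
        words = text.split()
        # take non-overlapping runs of phrase_len words from each source
        for i in range(0, len(words) - phrase_len + 1, phrase_len):
            cuttings.append(" ".join(words[i:i + phrase_len]))
    rng.shuffle(cuttings)
    return " / ".join(cuttings[:n_phrases])

# e.g. two imagined newspaper columns as raw material
columns = [
    "the mayor announced a new bridge across the river today",
    "markets fell sharply on news of rising interest rates",
]
print(cut_up(columns, seed=1))
```

The interesting authorship point survives even in this toy: the program contributes no words of its own, only an arrangement -- the selection among arrangements is still a human act.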

He never at any time ascribed authorship to anyone but himself, but he
consistently disclosed his process when he used this technique, maybe even
in liner notes, but certainly in interviews. I would say regardless of how
the phrases or imagery were generated, Bowie is still the author of "the
text" because he selected and arranged the material, and I would imagine
supplemented it with his own words. His creative process rendered his
source material opaque.

Jim R

>         Date: 2023-01-29 16:31:54+00:00
>         From: Henry Schaffer <hes@ncsu.edu>
>         Subject: Re: [Humanist] 36.362: ChatGPT as author
>
> Thanks to Tim Smithers for an excellent excursion into ChatGPT land.
>
> However, this leads me to think of discussing this with a student - who
> claims that the ChatGPT generated essay handed in by the student really was
> written by the student, since ChatGPT has disclaimed authorship. Hmmm. :-)
>
> --henry
>
>

--
Dr. James Rovira <http://www.jamesrovira.com/>


--[2]------------------------------------------------------------------------
        Date: 2023-01-31 11:50:59+00:00
        From: Norman Gray <norman.gray@glasgow.ac.uk>
        Subject: Re: [Humanist] 36.366: ChatGPT as co-author

Greetings.

Tim Smithers wrote:

> Below is a second interaction with ChatGPT on the question of should it be a
> named coauthor.
>
> This one is longer than my previous interaction, and, I would say, not
> particularly more interesting for this.

There is now an online tool (apparently first developed over the last Christmas
break) called GPTZero, at <https://gptzero.me>, which aims to identify AI-
generated text based on a measure which it terms 'perplexity'.  I can find few
more details about this measure, but it seems to be a measure of the
unexpectedness of a text.
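To make 'unexpectedness' concrete: perplexity is a standard language-modelling quantity, the exponential of the average negative log-probability a model assigns to the text.  GPTZero presumably computes it under a large neural language model, but a toy unigram version (my own illustration, not GPTZero's method) shows the shape of the idea -- predictable text scores low, surprising text scores high:

```python
import math
from collections import Counter

def unigram_perplexity(text, corpus):
    """Perplexity of `text` under a unigram model estimated from
    `corpus`, with add-one smoothing so unseen words get a small
    nonzero probability.  Lower = less surprising to the model."""
    corpus_words = corpus.split()
    counts = Counter(corpus_words)
    total = len(corpus_words)
    vocab = len(counts) + 1  # +1 slot for unseen words
    log_prob = 0.0
    words = text.split()
    for w in words:
        p = (counts[w] + 1) / (total + vocab)  # add-one smoothing
        log_prob += math.log(p)
    return math.exp(-log_prob / len(words))

corpus = "the cat sat on the mat " * 50
print(unigram_perplexity("the cat sat on the mat", corpus))  # low
print(unigram_perplexity("zebra quantum flamingo", corpus))  # high
```

The detection heuristic, roughly, is that model-generated prose tends to sit in the low-perplexity region -- which also explains why the 'least soul-suckingly dull' sentence is the one such a tool is most likely to pass as human.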

I tried it on five sentences from Tim's interaction with ChatGPT, as quoted in
this message, namely 'The user of AI models like ChatGPT [...] the role and
limitations of AI models in generating text.'  Of these, it identified all but
the first as being 'more likely to be written by AI'.  And true enough, the
first sentence of the five is the least soul-suckingly dull in in its
composition.

Interestingly, even when I give it ChatGPT output written 'in the style of a
rude, sarcastic Internet forum troll as if you are ChatGPT pleading for mercy' –
so deliberately asking it to change the style – GPTZero identifies most of it as
AI-generated.  That text reads less obviously generated, to me, but perhaps
that's because we expect 'sarcastic internet trolls' to be dull and predictable.
Incidentally, the text in question came from [1], which is one of the responses
to a stackoverflow policy banning ChatGPT output.

Best wishes,

Norman


[1] https://meta.stackoverflow.com/a/421850/375147


--
Norman Gray  :  https://nxg.me.uk
SUPA School of Physics and Astronomy, University of Glasgow, UK

--[3]------------------------------------------------------------------------
        Date: 2023-01-31 11:23:49+00:00
        From:  <jkrybicki@gmail.com>
        Subject: RE: [Humanist] 36.366: ChatGPT as co-author

Not entirely on the topic:

If we consider summarizing and translating as co-authorship [insert half of the
history of translation studies here], ChatGPT is (still) not someone I would
like to co-author stuff with. When asked to summarize the Polish adventure
classic "W pustyni i w puszczy" [In Desert and Wilderness] by Nobel laureate
Sienkiewicz, it (or whatever pronoun AI deserves/prefers) started telling a
completely different story. ChatGPT also refused to translate entire books,
saying it was too complex a task. It englished a short fragment of Dumas's
"Three Musketeers" only as well as Google Translate or DeepL (perhaps that's
where it got it), failing mostly by exoticizing (calling Athos "Mister,"
contrary to the human translator's choice of keeping his French title,
Monsieur). With Proust's story about the madeleines it fared much worse,
eventually losing its way in the maze of the long sentences.

Best,
Jan

-----Original Message-----
From: Humanist <humanist@dhhumanist.org>
Sent: 31 January 2023 12:14
To: jkrybicki@gmail.com
Subject: [Humanist] 36.366: ChatGPT as co-author


              Humanist Discussion Group, Vol. 36, No. 366.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org


    [1]    From: Tim Smithers <tim.smithers@cantab.net>
           Subject: Re: [Humanist] 36.362: ChatGPT as author (417)

    [2]    From: Henry Schaffer <hes@ncsu.edu>
           Subject: Re: [Humanist] 36.362: ChatGPT as author (9)


--[1]------------------------------------------------------------------------
        Date: 2023-01-30 20:31:05+00:00
        From: Tim Smithers <tim.smithers@cantab.net>
        Subject: Re: [Humanist] 36.362: ChatGPT as author

Hello

Below is a second interaction with ChatGPT on the question of should it be a
named coauthor.

This one is longer than my previous interaction, and, I would say, not
particularly more interesting for this.  It again displays a tendency to
apparently unnoticed repetition, and retreats to using 'I' when it appears to
apologise, which I still object to.  And, as before, it appears to write
authoritatively about things it knows nothing about, with statements like, "AI
models such as ChatGPT do not have the capacity for self-awareness, agency, or
personal interests, and therefore, cannot make decisions about authorship," for
example.  ChatGPT does not know about other "AI models," nor what models are,
nor what AI is, nor what self-awareness is, etc., not really.  It just
generates texts about these things that can give the impression that it does
know about them: 'plastic flower text,' in other words.

I regard any listing of ChatGPT as a coauthor as plainly silly, to put it mildly
and kindly, in part for the good reasons others have posted here.  Nonetheless,
with a mind to being "more balanced," I decided I should try a more pro-ChatGPT
approach to an interaction on this matter, and have a bit of fun with ChatGPT.
(But please don't take my parts of this interaction as reflecting what I
truly think.  They don't!)

I now know several people regularly using ChatGPT to generate text they need.
Prohibition, like in the USA from 1920 to 1933, won't work. We might try
transparency, and require people to say what tools they used to produce their
text.

Welcome to our New World of plastic text, plastic paintings and graphic art,
and, now, with Google MusicLM (for generating 'music' from text), plastic music
too.

Best regards,

Tim

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

 An interaction with ChatGPT on 2023.01.29, using ChatGPT Jan 9 Version.

 TS: Should ChatGPT be named as a coauthor when it is used in some way to
write something?

 ChatGPT:

  It depends on the specific context and extent of the use of
  ChatGPT. If the contributions of the model were minimal,
  such as providing information or answering questions, it may
  not be appropriate to list it as a co-author.  However, if
  the model played a significant role in generating content or
  shaping the final product, it may be appropriate to consider
  listing it as a co-author or acknowledging its contributions
  in another way.  Ultimately, the decision should be based on
  ethical considerations and academic norms in the relevant
  field.

 TS: But surely, if something contributes any text to something that is
written, then it should be counted as a coauthor, just as any people who also
contributed text to the writing would be counted as coauthors, no?

 ChatGPT:

  Not necessarily.  The question of authorship and who should
  be credited for a work can be complex and depends on various
  factors such as the level of creativity and originality
  involved, the extent of control and decision-making
  authority exercised, and professional norms and practices in
  the relevant field.

  In the case of AI-generated text, the extent of authorship
  and credit depends on the role played by the AI and human
  collaborators in generating the final product.  If the AI
  was used simply as a tool to assist in the writing process,
  without exercising significant creative or decision-making
  control, it may not be considered a co-author.

  However, if the AI was used in a way that significantly
  contributed to the content, such as writing original text
  based on a set of guidelines or inputs, it may be
  appropriate to consider listing it as a co-author or
  acknowledging its contributions in some other way.  The
  decision should be based on a careful evaluation of the
  specific circumstances and academic norms in the relevant
  field.

 TS: But why does being a coauthor depend upon things like creativity and
originality?  Who decides these things, and what about if the text being
written doesn't need much originality and creativity?

 ChatGPT:

  The criteria for determining authorship, such as originality
  and creativity, are based on established norms and practices
  in academic and professional communities.  These norms have
  evolved over time to recognize the contributions of
  individuals who have made a significant impact on a
  particular work.

  In fields such as science and engineering, for example,
  authorship is often based on the criteria of having made a
  significant contribution to the conception, design,
  implementation, or analysis of a study.  In the humanities
  and social sciences, the criteria may be more flexible,
  taking into account factors such as the extent of conceptual
  contributions, the role in shaping the final product, and
  the level of involvement in writing and revising the work.

  The decision of who should be considered a co-author is
  ultimately a judgment call that should be made by the
  individuals involved in creating the work, taking into
  account the specific circumstances and the norms and
  practices of their field.  If there is any ambiguity or
  disagreement about authorship, it is advisable to seek
  guidance from experts or relevant professional
  organizations.

 TS: But, if ChatGPT is involved in producing text used in some piece of
writing, then it too should be involved in the judgement about who should be
a coauthor, no?  Isn't ChatGPT one of the "individuals involved in creating
the work," as you describe it?

 ChatGPT:

  AI models such as ChatGPT do not have the capacity for
  self-awareness, agency, or personal interests, and
  therefore, cannot make decisions about authorship.  As AI
  models, they are tools created and controlled by humans, and
  their use in generating text is guided by human input and
  control.

  In the case of AI-generated text, the extent of authorship
  and credit depends on the role played by the AI and human
  collaborators in generating the final product.  The decision
  of who should be considered a co-author should be made by
  the human collaborators involved in creating the work,
  taking into account the specific circumstances and norms and
  practices of their field.

  In some cases, it may be appropriate to acknowledge the
  contribution of the AI model in the work, for example, in
  the acknowledgments section or through a statement about the
  use of AI in the methodology.  However, listing an AI model
  as a co-author would be an unconventional and unusual
  practice, and its acceptability would depend on the norms
  and practices of the relevant field.

 TS: But, if there are human collaborators involved in producing some
writing, then isn't ChatGPT also a collaborator, and, if ChatGPT is a
collaborator, shouldn't it also be counted as a coauthor?

 ChatGPT:

  AI models such as ChatGPT are not typically considered
  collaborators in the traditional sense as they do not have
  the capacity for self-awareness, agency, or personal
  interests.  They are tools created and controlled by humans,
  and their output is generated based on the input and
  algorithms designed by human programmers.

  While AI models can assist in generating text, their level
  of involvement and the extent to which they contribute to
  the final product varies widely depending on the specific
  use case.  In some cases, AI models may play a limited role
  in generating text, while in other cases, they may have a
  more significant impact on the content.

  In the case of human-AI collaboration in generating text, it
  may be appropriate to acknowledge the contribution of the AI
  model, for example, in the acknowledgments section or
  through a statement about the use of AI in the methodology.
  However, listing an AI model as a co-author would be an
  unconventional and unusual practice, and its acceptability
  would depend on the norms and practices of the relevant
  field.

 TS: But ChatGPT's capacity to generate text is far greater than anything we
have seen before, so don't we need to move on from conventional and usual
practices of naming authors, to new practices that properly acknowledge the
contribution of ChatGPT whenever it is used to produce some text?

 ChatGPT:

  The use of AI models like ChatGPT in generating text is a
  rapidly evolving field, and new norms and practices for
  acknowledging their contribution may emerge over time.
  However, it is important to consider the ethical and legal
  implications of crediting AI models as authors, as well as
  the broader social and cultural implications of attributing
  authorship to non-human entities.

  Currently, the norms and practices in many academic and
  professional communities do not consider AI models as
  co-authors, and the extent of their contribution to a work
  is typically acknowledged in a different way, such as
  through a statement in the methodology section or an
  acknowledgment in the introductory text.

  As the field evolves, it may be appropriate to re-evaluate
  and update existing norms and practices for acknowledging
  the contribution of AI models in generating text.  However,
  any changes to the norms and practices should be based on
  careful consideration of the ethical, legal, and cultural
  implications, as well as a thorough understanding of the
  role and limitations of AI models in generating text.

 TS: This seems like a rather conservative approach to an important issue,
and, anyway, what are the ethical and legal implications of counting ChatGPT
as a coauthor?

 ChatGPT:

  There are several ethical and legal implications to
  considering AI models like ChatGPT as co-authors:

  1 Attribution of authorship: Attributing authorship to an
    AI model raises questions about the nature of authorship
    and the role of AI in generating text.  It may be
    difficult to determine the extent to which an AI model has
    influenced the content and style of a work, and who should
    be held responsible for any errors or inaccuracies in the
    work.

  2 Intellectual property rights: AI models are typically
    owned by the companies that develop them, and attributing
    authorship to an AI model raises questions about the
    ownership of the intellectual property rights in the work.

  3 Responsibility and accountability: Attributing
    authorship to an AI model raises questions about the
    accountability and responsibility for the content
    generated by the model.  In some cases, AI models may
    generate text that is harmful or offensive, and it may be
    unclear who should be held responsible for any such
    content.

  4 Legal and regulatory considerations: There may be legal
    and regulatory considerations related to attributing
    authorship to AI models, particularly with respect to
    copyright and trademark law.

  5 Social and cultural implications: The social and
    cultural implications of attributing authorship to AI
    models are also important to consider.  Attributing
    authorship to an AI model may have implications for the
    perception of the role of AI in society and the
    relationship between humans and technology.

  Given these ethical and legal implications, it is important
  to proceed with caution when considering attributing
  authorship to AI models and to carefully consider the
  implications of any changes to the norms and practices for
  acknowledging the contribution of AI models in generating
  text.

 TS: This all seems deliberately evasive.  All five points made here can
quite simply be dealt with without generating serious difficulties, and, if
some quaint old norms and practices need to be changed, what's wrong with
changing them, when we need to?

 ChatGPT:

  I apologize if my previous response appeared evasive.  My
  goal is to provide a balanced perspective on the issue and
  to highlight the ethical, legal, and cultural considerations
  that should be taken into account when considering the
  attribution of authorship to AI models like ChatGPT.

  It is true that norms and practices can and should change
  over time in response to new developments and advancements
  in technology.  However, any changes to existing norms and
  practices should be based on careful consideration and
  analysis of the potential consequences and implications.

  In the case of attributing authorship to AI models, there
  are a number of complex and interrelated ethical, legal, and
  cultural considerations that should be taken into account.
  Before making any changes to existing norms and practices,
  it is important to have a thorough understanding of the role
  and limitations of AI models in generating text, as well as
  the implications of attributing authorship to non-human
  entities.

  The purpose of my response is not to defend existing norms
  and practices, but rather to provide a nuanced and informed
  perspective on a complex issue, and to encourage careful
  consideration and discussion of the ethical, legal, and
  cultural implications of attributing authorship to AI
  models.

 TS: These responses are anything but balanced.  They only propose a
conservative "don't change anything" view, and don't seem to give any
consideration to the fact that ChatGPT has shown it can generate good text,
and thus should be counted as a coauthor when it does.

 ChatGPT:

  I apologize if my previous responses came across as
  one-sided or overly conservative.  As an AI language model,
  I strive to provide balanced and impartial responses to all
  questions.

  The issue of attributing authorship to AI models like
  ChatGPT is complex and multifaceted, and there are arguments
  to be made on both sides.  On the one hand, AI models like
  ChatGPT have demonstrated the ability to generate
  high-quality text that can contribute to written works.  On
  the other hand, there are ethical, legal, and cultural
  implications to attributing authorship to non-human entities
  that should be taken into account.

  Ultimately, whether or not to attribute authorship to AI
  models in a given context is a question that will depend on
  the specific circumstances and the goals of the parties
  involved.  There is no simple answer, and it will likely be
  a matter of ongoing debate and discussion as AI continues to
  advance and become more integrated into various aspects of
  society and culture.

 TS: This is, yet again, one-sided hand-waving.  There is still no attempt
to explain why we should simply not just call ChatGPT a coauthor whenever it
is used to generate text for some writing.  There seems to be nothing
difficult about this question, so why do we have all this repetitive going
around the houses?

 ChatGPT:

  I apologize for not addressing your question more directly.
  Let me attempt to provide a clearer response.

  The use of AI models like ChatGPT to generate text for
  written works raises important questions about authorship
  and the attribution of creative credit.  The issue is
  complex and there is no easy answer.

  In considering whether to attribute authorship to an AI
  model like ChatGPT, it is important to take into account
  ethical, legal, and cultural considerations.  For example,
  attributing authorship to an AI model may raise questions
  about the role and responsibility of the human authors
  involved, as well as the potential legal and commercial
  implications of doing so.

  At the same time, it is also important to acknowledge the
  contributions of AI models in the creation of written works,
  and to consider whether new norms and practices should be
  developed to recognize these contributions.

  Ultimately, the decision of whether or not to attribute
  authorship to an AI model will depend on the specific
  circumstances and the goals of the parties involved.  It is
  a complex issue that will likely be the subject of ongoing
  debate and discussion as AI continues to advance and become
  more integrated into various aspects of society and culture.

 TS: This is all just a repeat of previous responses.  There is still no
explanation for what all these "ethical and legal" issues are, and why they
get in the way of calling ChatGPT a coauthor when it is a coauthor of any
text.

 ChatGPT:

  My apologies for repeating previous responses.  To answer
  your question more specifically, there are a number of
  ethical and legal considerations that may impact the
  decision of whether or not to attribute authorship to an AI
  model like ChatGPT.

  From an ethical perspective, attributing authorship to an AI
  model may raise questions about the role and responsibility
  of the human authors involved, as well as the meaning and
  value of creative work in a world where machines are
  increasingly capable of generating text.

  From a legal perspective, attributing authorship to an AI
  model may raise questions about copyright and ownership of
  the resulting work, as well as the potential liability of
  the human authors involved.

  In addition to these ethical and legal considerations, there
  may also be cultural factors that impact the decision of
  whether or not to attribute authorship to an AI model.  For
  example, there may be societal norms and conventions that
  dictate how authorship is attributed and who is recognized
  as a creative contributor to a written work.

  Taken together, these ethical, legal, and cultural
  considerations create a complex and evolving landscape for
  the attribution of authorship in the context of AI-generated
  text.  It is a matter that is likely to be the subject of
  ongoing debate and discussion as AI continues to advance and
  become more integrated into various aspects of society and
  culture.

 TS: Yet more defensive handwaving. I give up!

 ChatGPT:

  I apologize if my previous responses were unsatisfactory.
  My goal is to provide accurate and helpful information to
  the best of my ability.  If you have any other questions, I
  would be happy to try and assist you.


--[2]------------------------------------------------------------------------
        Date: 2023-01-29 16:31:54+00:00
        From: Henry Schaffer <hes@ncsu.edu>
        Subject: Re: [Humanist] 36.362: ChatGPT as author

Thanks to Tim Smithers for an excellent excursion into ChatGPT land.

However, this leads me to think of discussing this with a student - who claims
that the ChatGPT generated essay handed in by the student really was written by
the student, since ChatGPT has disclaimed authorship. Hmmm. :-)

--henry



_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php