              Humanist Discussion Group, Vol. 37, No. 82.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2023-06-07 18:18:17+00:00
        From: Fenwick Mckelvey <mckelveyf@gmail.com>
        Subject: CFP: Special issue on (un)Stable Diffusions

Please circulate widely and consider submitting.

CFP: (un)Stable Diffusions: General-purpose artificial intelligence’s
publicities, publics, and publicizations

Edited by Fenwick McKelvey, Joanna Redden, Jonathan Roberge, and Luke Stark
(names in alphabetical order)

To be published in the open access Journal of Digital Social Research
<https://www.jdsr.io/>. Please submit your abstracts here:
https://forms.gle/hWQxvuRTWoFhVs3W6

The recent release of so-called “general-purpose artificial intelligence”
(GPAI) systems has prompted a public panic that AI-generated fiction is
now indistinguishable from fact. General-purpose AI refers to systems
“intended by the provider to perform generally applicable functions such as
image and speech recognition, audio and video generation, pattern
detection, question answering, translation and others” (Artificial
Intelligence Act, Council of the European Union, Procedure File:
2021/0106(COD), Title 1, Article 3(1b)). Contemporary technologies referred
to as GPAI include OpenAI’s ChatGPT as well as numerous text-to-image
generators such as DALL-E and Stable Diffusion. GPAI systems are
already being deployed to support and entrench existing asymmetries of
power and wealth. For instance, the online news outlet CNET recently
disclosed that it had been publishing stories written by an AI and edited
by humans for months (Main, 2023).

The current concern over ChatGPT is an important moment in AI’s publicity
and publicization (Hansen, 2021), in which, as Noortje Marres argues,
“material things” act as “crucial tools or props for the performance of
public involvement in an issue” (Marres, 2010, p. 179). Amidst countless
opinion pieces and hot takes discussing GPAI, this special issue details
how scandal, silence, and hype operate to promote and publicize AI. We seek
interventions that question AI’s publicity and promotion as well as new
strategies of engagement with AI’s powerful social and political influence.

Concern over generative AI, we argue, is limited by publicity around these
systems that has been framed by hype, silence, or scandal (Brennen, 2018;
Sun et al., 2020). Publicity refers to the relations between affected
peoples and matters of shared concern (Barney, 2014; Marres, 2015, 2018).
Historically, these relations have been mediated by the press (Schudson,
2008), but GPAI’s emergence coincides with uncertainty about journalism’s
status and a rise of direct, “one-step flow” effects, whether
citizen-to-citizen or, in the case of ChatGPT, a direct link between user
and system (Bennett & Manheim, 2006).

Scholars have observed that publicity around AI follows several distinct
patterns:

    1. Hype (Ananny & Finn, 2020; Broussard et al., 2019; Mosco, 2004). This
    discourse is prominent. Concepts like the fourth industrial revolution
    and disruption function as self-fulfilling prophecies, with the
    consequence that technology always arrives as good news. The launch of
    ChatGPT is a hallmark of this hype, fitting into a well-worn “normative
    framework of publicity [that is] drained of its critical value, and
    convert[ed] from a democratic asset to a democratic liability”
    (Barney, 2008, p. 92). Opening AI to the public, no matter the
    consequences for society, as in the case of ChatGPT, is portrayed as a
    good in itself, even as this publicization shifts focus to acceptance
    and inevitability.

    2. Silence. When not positive, AI coverage is marked by gaps and
    aporias: closures, in effect, that arise when aspects of AI remain too
    uncontroversial to report, as well as from the larger logics of AI
    imaginaries (Bareis & Katzenbach, 2021). The result is that AI-related
    issues seldom enter the political information cycle (Chadwick, 2013),
    as in the case of ChatGPT’s potential violations of privacy and
    copyright law.

    3. Scandals – or what we refer to as proofs of social transgressions –
    are a pronounced feature of contemporary technological coverage and
    governance (Bosetta, 2020; Lull & Hinerman, 1998; McKelvey et al., 2018;
    Trottier, 2017). Scandals, which we stress do not necessarily involve
    opportunities for public engagement or democratic praxis, result from a
    mutually reinforcing relationship between newsrooms looking for easy,
    high-engagement stories (Blanchett et al., 2022; Cohen, 2015; Dodds
    et al., 2023) and the affordances of social media, which largely function
    as a distraction from other tasks (Hall et al., 2018).

    4. Inevitability. AI discourses are dominated by technology firms,
    government representatives, AI investors, global management
    consultancies, and think tanks. These voices profess a faith in
    data-driven systems to address social problems while also increasing
    efficiency and productivity (Bourne, 2019; Beer, 2019). Such discourses
    reinforce the idea that the increasing use of AI applications across all
    spheres of life is inevitable, while sidelining or ignoring meaningful
    engagement with the ways these applications cause harm (Walker, 2022).

Our special issue seeks interventions focused on:

    1. Critical and comparative studies of AI’s publicities with regards to
    the launch and hype of AI. We particularly welcome papers that focus on
    cases outside the Global North;

    2. Ethnographic, discursive, or engaged research with AI’s publics, such
    as AutoGPT, HustleGPT, or other publics forming around the use of,
    misuse of, or resistance to GPAI;

    3. Interventions or reflections on critical practices, such as community
    engagement and mobilization, futures literacy, or capacity building, for
    better publicizations of AI that de-center the strategic futuring
    employed by large technology firms.

Please submit an extended abstract (1000 words) by 1 August 2023.
Accepted full papers due 1 December 2023.
Planned publication Spring 2024.

https://forms.gle/hWQxvuRTWoFhVs3W6



_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php