Humanist Discussion Group

Humanist Archives: Sept. 13, 2023, 7:21 a.m. Humanist 37.205 - pubs: Moral Codes (online & forthcoming, MIT Press)

				
              Humanist Discussion Group, Vol. 37, No. 205.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2023-09-12 11:39:19+00:00
        From: Willard McCarty <willard.mccarty@mccarty.org.uk>
        Subject: Moral Codes

Moral Codes: Designing alternatives to AI
Alan Blackwell (Cambridge)

Is it too late to design alternatives to AI? Or must commentators and
regulators simply respond to the products that Google, Facebook or
Microsoft choose to release? Alan Blackwell, Professor of Design in the
Computer Science department at Cambridge University, offers some
surprising insights into our current dilemmas.

In his new book Moral Codes: Designing alternatives to AI, he draws on
his 40 years' experience in the field to reflect on the paths we have
chosen, and those we have not. The central argument is that software
systems are representations, designed to describe ourselves and the
world around us. Many of these representational codes are known as
programming languages, used by specialists to solve engineering
problems. Blackwell explains how historical ambitions for more
human-centric programming were the starting point for user interfaces
from the iPhone screen to Wikipedia, and argues that similar radical
advances are available for the future.

AI systems, including ChatGPT and other large language models, are also
representations. But rather than addressing engineering or scientific
problems, the ambition of AI has been to create representations of
humans, following the tradition of Pygmalion, Frankenstein, Rossum’s
Universal Robots, or the clockwork automata of Jaquet-Droz and
Vaucanson. Blackwell argues that this kind of AI is a work of literature
rather than a branch of science. We will always be fascinated by
representations of ourselves, but we should not confuse imagination with
engineering.

If computers are to be practically useful, we must be able to give them
instructions, understand what they are doing, and make changes on our
own terms. Blackwell offers the acronym MORAL CODES to promote More Open
Representations, Accessible to Learning, with Control Over Digital
Expression. His practical design advice for alternatives to AI focuses
on the creative and labour-saving opportunities that new codes can
bring, rather than pursuing either the utopian or dystopian versions of
the Turing Test, where the easiest way to pass the test is by making
humans more stupid rather than making computers more intelligent.

Moral Codes will be published by MIT Press in 2024, in print and
electronic open access editions. The full text is already available as a
pre-release edition:
https://moralcodes.pubpub.org/

--
Willard McCarty,
Professor emeritus, King's College London;
Editor, Interdisciplinary Science Reviews; Humanist
www.mccarty.org.uk


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php