Humanist Discussion Group

Humanist Archives: March 23, 2022, 5:49 a.m. Humanist 35.612 - The 'No-Code' Wizard of Oz

              Humanist Discussion Group, Vol. 35, No. 612.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                Submit to:

        Date: 2022-03-22 13:18:49+00:00
        From: Henry Schaffer <>
        Subject: Re: [Humanist] 35.608: 'No-code' AI

I just read the thread on No-code AI, and it got me thinking of
over-simplification. The NYT article which Robert Amsler provided made me
wonder if there was a serious lack of understanding. E.g., "Just as
clickable icons have replaced obscure programming commands on home
computers, new no-code platforms replace programming languages with simple
and familiar web interfaces." Wow! Clicking on an icon which says "Excel"
is simple, but typing in "Excel<CR>" is obscure and requires a deep
acquaintance with one or more programming languages?

I was going to write about this, but, thankfully, Gioele Barabucci's long
treatment and Willard's short note covered the points I wanted to make -
perhaps stating them better than I would have, so I'll just end with a
comment related to Willard's.

"Pay no attention to that man behind the curtain." is a memorable line from
a wonderful story, and it is extremely relevant to this discussion. How can
a user of AI (or any computerized functionality) be a responsible user if
totally ignorant of what's going on "behind the curtain"? I'll end with a
personal story - I was giving a lecture to a group of grad students on
statistical analysis (probably using SAS) and said that one should always
start by copying the data from a textbook problem into the computer and
doing the desired analysis and checking to see that the output was what the
textbook said. One grad student raised her hand for a question, "Prof.
Schaffer, why don't you trust the computer?"
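The habit I recommended is easy to automate in any environment. A minimal
sketch in Python (the data and "textbook answers" below are a standard
illustrative example, not taken from SAS or any particular textbook):

```python
import statistics

# Run a textbook problem through the software and compare the output
# with the printed answer before trusting it on real data.
textbook_data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
expected_mean, expected_sd = 5.0, 2.0   # the answers as printed in the text

assert statistics.mean(textbook_data) == expected_mean
assert statistics.pstdev(textbook_data) == expected_sd  # population std. dev.
print("software agrees with the textbook")
```

If either assertion fails, the disagreement is between the software and the
textbook, and you investigate before analyzing your own data.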


On Sun, Mar 20, 2022 at 1:58 AM Humanist <> wrote:

>               Humanist Discussion Group, Vol. 35, No. 608.
>         Department of Digital Humanities, University of Cologne
>                       Hosted by DH-Cologne
>                 Submit to:
>     [1]    From: Gioele Barabucci <>
>            Subject: Re: [Humanist] 35.604: 'No-code' AI (151)
>     [2]    From: Willard McCarty <>
>            Subject: trustworthy AI? (14)
> --[1]------------------------------------------------------------------------
>         Date: 2022-03-17 10:36:20+00:00
>         From: Gioele Barabucci <>
>         Subject: Re: [Humanist] 35.604: 'No-code' AI
> On 17/03/22 06:42, Humanist wrote:
> > In essence, large collections of data, especially in images, audio
> > files, or structured text (e.g., spreadsheets), or even unstructured
> > text can now be used in new A.I. software that doesn't require
> > programmers to write new code to reach conclusions about what that data
> > contains.
> Dear Robert,
> allow me a slightly verbose reflection (rant?) on the words "no-code"
> and "programming".
> My take:
> a) "to program" is just a synonym for "to explain", and
> b) "no code" is, therefore, impossible.
> ## To program == to explain
> The whole point of a program is to lay out instructions on how a machine
> should perform a task to achieve a desired output/state given an input.
> We cannot use the usual human-to-human communication channels to explain
> to a machine either the task or the desired output.
> Writing machine code (numbers) is the only way we know to instruct a
> machine to perform certain tasks. Because we don't like writing machine
> code we wrote some machine code (bootstrap assemblers) that turned
> letters (assembly code) into machine code, and then we wrote many more
> letters (compilers) to turn more complex words (programs in C,
> Javascript) from an artificial language (programming language) into
> other machine code.
> We are so used to thinking that writing in a programming language is what
> defines programming, that we often forget that the whole point of
> writing a program in the first place is to _explain_ what we want.
> The encoding of our desires and instructions in a programming language
> is just an artifact that arises from the constraints that define
> human-to-computer communication.
> Similarly, the fact that I am encoding my thoughts as a written text in
> English is an artifact of the constraints of this conversation. I could
> very well encode my thoughts in German. Or as a recording in Italian.
> But then I could not "program" your brain to the state I desire, namely
> "you get what I mean".
> After this introduction, let's move to the next point...
> ## "No code" is impossible
> Just after the concept of _sequence_, the second most basic building
> block of programming is the _if_ construct. We want something to happen
> in certain situations.
> Depending on your choice of language, you can encode this in many
> different ways.
> * English: "Could you please do this when this happens"
> * BASIC: "IF condition THEN action"
> * Ternary operator: "condition ? action1 : action2"
> * Prolog: "foobar(X) :- condition, action1. foobar(_) :- action2."
> * Flowchart/Labview: a rhombus
> Everybody agrees, I hope, that all these are "instructions in a
> programming language".
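All these encodings express one underlying construct. A minimal sketch in
one language, showing the statement form and the ternary form side by side
(function names invented for illustration):

```python
# The same conditional, encoded two ways.
def sign_statement(x):
    # statement form, as in BASIC's IF ... THEN ...
    if x >= 0:
        return "non-negative"
    else:
        return "negative"

def sign_ternary(x):
    # expression form, as in the ternary operator "condition ? a : b"
    return "non-negative" if x >= 0 else "negative"

# Both encodings describe the same program:
for x in (-3, 0, 7):
    assert sign_statement(x) == sign_ternary(x)
```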
> No-code platforms promise you that you will not need to learn how to
> program. Then how will I explain to the machine what the desired
> output/state is? Do I just dream of it and the machine makes it happen?
> How is the action of marking, say, an image in a dataset with "yes" and
> another one with "no", not a form of programming?
> The marking is the programming; running the "no-code AI" is the
> compiling; the result is the compiled program.
> If the compiled program "has a bug" (it answers "no" when it should have
> answered "yes"), what do you do? You go through your markings, try to
> understand why the no-code tool may have come to a certain conclusion,
> and make some changes to your markings so that the no-code tool behaves
> better next time. How is that different from debugging a program?
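The marking-compiling-debugging cycle can be sketched in a few lines. Here
a 1-nearest-neighbour classifier stands in for the no-code tool (a
deliberately simplified assumption; real platforms train far more complex
models, but the workflow is the same):

```python
def classify(x, markings):
    """The 'compiled program' is nothing but the markings themselves:
    answer with the label of the nearest marked example."""
    nearest = min(markings, key=lambda ex: abs(ex[0] - x))
    return nearest[1]

# The markings are the program. One of them is wrong: 9 should be "yes".
markings = [(1, "no"), (2, "no"), (9, "no"), (10, "yes")]
print(classify(8.5, markings))   # "no" -- the program "has a bug"

# Debugging = reviewing the markings and correcting the offending one.
markings[2] = (9, "yes")
print(classify(8.5, markings))   # "yes" -- the bug is fixed
```

No text that looks like code was written to fix the bug, yet the activity is
exactly the edit-recompile-retest loop of conventional programming.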
> One claim is that marking a piece of data in a dataset is fundamentally
> simpler than programming in a conventional programming language.
> To me that sounds like saying that speaking is easier than explaining.
> True, but not relevant. One is a strictly mechanical act; the other
> requires an understanding of the problem at hand as well as some sort of
> intentionality.
> Marking a couple of data points in a pretty UI is indeed easy, but that
> is the equivalent of "PRINT hello world". That's also no-code, that's
> plain English.
> The issues arise when you try to make the machine behave in the way you
> want. At that point you need much more than marking data points. You
> need to know _which_ data points to mark and _how_ to mark them. You
> need experience to forecast how the machine will react to changes in the
> markings.
> What is this if not programming? What is that series of markings if not
> a program in a programming language?
> Yes, you don't see the text that we usually associate with programming
> languages. But all the concepts (and experience requirements) are there.
> If no experience is needed on a no-code platform, then every user should
> be just as expert as the creator of the no-code platform. What are the
> tutorials for, then?
> No-code cannot possibly exist. At best, users will engage in
> domain-specific or task-specific programming via specialized UIs. But
> that is and remains programming. Visual programming, perhaps. Nothing new.
> I understand that people are scared of the word "programming". In my
> opinion the right approach is not to tell people "no programming needed"
> but rather "programming is not hard" while, at the same time, improving
> the ergonomics of programming. Similar discussions have been held for
> decades in mathematics. People are scared of mathematics, but resorting
> to the no-code equivalent "learn how to split the bill without math" is
> not the solution.
> Let me close this rant with an historical perspective. SQL was born as a
> database interface for the masses. A no-code tool of its time. The boss
> will generate the sales report they need by typing a few instructions at
> the terminal. No more discussions with the IT department and the
> programmers in the basement. This is reflected in the abstract of the
> original 1974 SEQUEL publication [1] by Chamberlin and Boyce:
> > In this paper we present the data manipulation facility for a
> > structured English query language (SEQUEL) which can be used for
> > accessing data in an integrated relational data base. [...] A SEQUEL
> > user is presented with a consistent set of keyword English templates
> > which reflect how people use tables to obtain information. [...]
> > SEQUEL is intended as a data base sublanguage for both the
> > professional programmer and the more infrequent data base user.
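The scenario translates almost verbatim to today's SQL. A minimal sketch
using Python's sqlite3 (the table and figures are invented for
illustration):

```python
import sqlite3

# An in-memory database standing in for the company's sales records.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("north", 100.0), ("north", 50.0), ("south", 75.0)])

# The boss's "few instructions at the terminal": a sales report per region.
query = "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
for region, total in con.execute(query):
    print(region, total)
# north 150.0
# south 75.0
```

Keyword English templates or not, the query is still an explanation to the
machine of exactly which answer is wanted.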
> Regards,
> [1]
> --
> Gioele Barabucci <>
> --[2]------------------------------------------------------------------------
>         Date: 2022-03-17 05:47:39+00:00
>         From: Willard McCarty <>
>         Subject: trustworthy AI?
> Thanks to Robert Amsler for the notice of the NYT article on 'No-code
> AI'. My question is this: how reliable is the translation from what I
> write and the actions taken? Would it not be foolish to assume that the
> translation is perfect? And one other thing. Let's suppose the
> translation is perfect. Would not the outcomes of such a facility be
> analogous to all those three-wishes stories?
> Yours,
> WM
> --
> Willard McCarty,
> Professor emeritus, King's College London;
> Editor, Interdisciplinary Science Reviews;  Humanist

Unsubscribe at:
List posts to:
List info and archives at:
Listmember interface at:
Subscribe at: