Humanist Discussion Group, Vol. 39, No. 29.
Department of Digital Humanities, University of Cologne
Hosted by DH-Cologne
www.dhhumanist.org
Submit to: humanist@dhhumanist.org

    [1]    From: Willard McCarty <willard.mccarty@mccarty.org.uk>
           Subject: intelligences (16)

    [2]    From: David Zeitlyn <david.zeitlyn@anthro.ox.ac.uk>
           Subject: 39.4: repetition vs intelligence? (13)

    [3]    From: Gabriel Egan <mail@gabrielegan.com>
           Subject: Re: [Humanist] 39.28: repetition vs intelligence (106)


--[1]------------------------------------------------------------------------
        Date: 2025-05-27 08:30:07+00:00
        From: Willard McCarty <willard.mccarty@mccarty.org.uk>
        Subject: intelligences

What's the problem, I wonder, with plural kinds or modes of intelligence?
Why be defensive over other kinds or modes? Are we worried? I rather like
Geoffrey Lloyd's "semantic stretch", for which see The Revolutions of
Wisdom: Studies in the Claims and Practice of Ancient Greek Science, Sather
Classical Lectures, Vol. 52 (University of California Press, 1987),
pp. 175-176. The idea is applied and developed throughout his many books
and papers.

Best,
WM

--
Willard McCarty,
Professor emeritus, King's College London;
Editor, Humanist
www.mccarty.org.uk

--[2]------------------------------------------------------------------------
        Date: 2025-05-26 18:46:41+00:00
        From: David Zeitlyn <david.zeitlyn@anthro.ox.ac.uk>
        Subject: 39.4: repetition vs intelligence?

Hello from Cameroon. I've never tried to contribute to Humanist from here
before.

But Tim Smithers' talk of "The word 'intelligence' is what I call an
ice-hockey-puck word" prompts me to remind Humanist readers of Gallie's
1956 paper on "essentially contested concepts". Intelligence is a case in
point. I discuss this and give citations in a 2022 or 2023 article on
humility in JASO, the Journal of the Anthropological Society of Oxford.
(Oh, the irony of citing myself on humility.) This should be easy to find
online if you have good access to the tinterweb, which I do not from here.

David, en brousse

Sent from my iPhone

--[3]------------------------------------------------------------------------
        Date: 2025-05-26 11:44:35+00:00
        From: Gabriel Egan <mail@gabrielegan.com>
        Subject: Re: [Humanist] 39.28: repetition vs intelligence

James Rovira asks two questions about Word Embedding as performed by
algorithms such as Google's word2vec:

1) how does the "training process" identify that words are "close in
meaning"?

The closeness in meaning that we are referring to here is not the quality
of being synonymous. The closeness in meaning of the words 'king' and
'queen' is a similarity-with-a-distinction, and that distinction is gender.
That is:

    king is to queen
    as uncle is to aunt
    as rex is to regina
    as waiter is to waitress

We wouldn't say that 'uncle' is a synonym of 'aunt', but they are close in
that both mean 'a sibling of one's parent', but with a distinction of
gender.

In Word Embedding, two words are identified as being close in meaning (in
the above sense) by their both being found in the company of the same set
of other words ('the company they keep'). This is not, as Tim Smithers has
it, the same as saying that words are similar in meaning if they
"frequently occur close to each other". Rather, two words are similar in
meaning if they share a high likelihood of being seen with the same set of
other words near them. That is what I meant by "the things we say of kings
are like the things we say of queens".
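To make that distinction concrete, here is a minimal sketch of the
distributional idea in Python. It is not word2vec itself, and the toy
corpus and window size are invented purely for illustration; the point is
that 'king' and 'queen' come out as similar because they keep the same
company, even though they never occur next to each other in the corpus.

# Toy illustration of distributional similarity: words are compared by the
# company they keep (their context-word counts), not by whether they occur
# next to each other. The corpus and window size are invented for the demo.

from collections import Counter
from math import sqrt

corpus = [
    "the king ruled the realm from his throne",
    "the queen ruled the realm from her throne",
    "the king wore the crown of the monarch",
    "the queen wore the crown of the monarch",
    "the peasant worked in the field all day",
]

WINDOW = 2  # words on each side that count as 'context'

def context_counts(target):
    """Count the words found within WINDOW positions of each occurrence of target."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        for i, w in enumerate(words):
            if w == target:
                counts.update(words[max(0, i - WINDOW):i] + words[i + 1:i + 1 + WINDOW])
    return counts

def cosine(a, b):
    """Cosine similarity between two context-count vectors."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

king, queen, peasant = (context_counts(w) for w in ("king", "queen", "peasant"))
print(cosine(king, queen))    # high: same company, though the two words never co-occur
print(cosine(king, peasant))  # lower: a different set of neighbouring words

Learned embeddings such as word2vec's compress these context profiles into
a few hundred dense dimensions, but the notion of similarity being captured
is the same one.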
These words are similar because they are more than usually likely to have
near them words such as 'sovereign', 'reign', 'monarch', 'throne', 'crown',
'usurp', 'abdicate', and so on.*

James's second question is:

2) "And how would the process that you describe generate readable text
rather than, say, a list of words resembling a thesaurus?"

Answer: why would it do that? A thesaurus gathers words that are synonyms
(and sometimes antonyms). It doesn't capture the similarity of meaning
discussed above.

Regards

Gabriel Egan

* Of course, there is another set of words that is also likely to be found
in the company of 'king' and 'queen', and they are 'pawn', 'rook',
'bishop', 'knight', 'mate', 'check', and so on. One or more of the numbers
in the vector for 'bishop' will capture its shared meaning with other chess
pieces, and another one or more of the numbers in the vector for 'bishop'
will also capture its shared meaning with terms such as 'priest',
'prelate', 'deacon', 'vicar', and so on. That is, words are similar to one
another along multiple dimensions at once. Large Language Models became
impressive as Artificial Intelligence only once we gave them enough
dimensions to capture something of how we carve up reality ourselves. This
really is, pace Smithers, all about semantics.
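The footnote's point about multiple dimensions can be sketched the same
way, this time with hand-made vectors whose dimensions and values are
invented for illustration (a trained model's dimensions are far more
numerous and carry no labels): 'bishop' can sit close to the chess words
along one dimension and close to the clergy words along another, at the
same time.

# Hand-made 3-dimensional 'embeddings' (dimensions and values invented for
# illustration): dimension 0 ~ royalty, dimension 1 ~ chess, dimension 2 ~ clergy.
import numpy as np

vec = {
    "queen":  np.array([0.9, 0.7, 0.0]),   # royal word that is also a chess piece
    "rook":   np.array([0.0, 0.9, 0.0]),   # chess only
    "bishop": np.array([0.0, 0.8, 0.9]),   # chess piece AND a church office
    "priest": np.array([0.0, 0.0, 0.9]),   # clergy only
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vec["bishop"], vec["rook"]))    # fairly high: shared chess dimension
print(cosine(vec["bishop"], vec["priest"]))  # fairly high: shared clergy dimension
print(cosine(vec["rook"], vec["priest"]))    # zero: no dimension in common

In a trained model the dimensions are unlabelled and number in the
hundreds, but the effect is the same: one and the same vector for 'bishop'
records both kinds of company the word keeps.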