It’s OK to call it Artificial Intelligence: I wrote about how people really love objecting to the term "AI" to describe LLMs and suchlike because those things aren't actually "intelligent" - but the term AI has been used to describe exactly this kind of research since 1955, and arguing otherwise at this point isn't a helpful contribution to the discussion.
@simon Hearty agreement - though we now have the challenge of how to manage the boom of popular awareness, balanced with their understandably non-technical definition of intelligence. I'd argue ;) that the "arguing otherwise" is often about exactly that vocabulary mismatch.
Short version: "I’m going to embrace the term Artificial Intelligence and trust my readers to understand what I mean without assuming I’m talking about Skynet."
@simon Maybe, but just because some scientists working on it called it that doesn't mean we have to accept the word.
The more general term hides the more specific, nuanced, and informative details. Also, once introduced into the mainstream vocabulary it might clash with other mainstream meanings, and it is easier for a small group to change its wording than for a large group.
I generally think scientists should strive to simplify their language, but some actually hide behind it.
@simon "AI" isn't wrong, but I think it is most helpful to use the most specific term that applies. So if you are talking about issues with LLMs in particular, better to say LLMs.
@simon I so want to agree with you. What's making me a ReplyGuy is that people outside the field put far too much weight on what AI means. Too many don't understand how narrow LLMs are, spinning doomsday scenarios far too easily. (but they ARE powerful!) I don't like to use the term just to back these people off the ledge
@simon "Artificial intelligence has been used incorrectly since 1955" is not a convincing argument to me (and means our predecessors are as much to blame for misleading the general public as contemporary hucksters claiming ChatGPT is going to cause human extinction).
@simon I’ve been thinking about this too, but on a slightly different line. It’s not about science fiction, it’s that we so strongly tie language with intelligence. The Turing test is based on this connection. We measure children’s development in language milestones, and look for signs of language in animals to assess their intelligence. It goes back a long way—“dumb” in English has meant both “unable to speak” and “unintelligent” for 800 years. The confusion is reflexive and deep-seated.
@simon one argument that you’re not addressing here is that it dates anything you are writing, in a way that makes it hard to understand without first understanding its contemporaneous terminology. Our current view of AI as an actual *technology*—statistical machine-learning techniques, as opposed to just the chatbot UI paradigm—is quite new and quite *at odds with* previous understanding of the term (like, say, expert systems). It may be at odds with future understandings as well.
@simon I’m inclined to disagree, but I do think that it’s a bit of a lost battle. I’d rather encourage people to “yes, and” and just get more specific:
https://www.aspendigital.org/report/ai-101/#section6
I don’t take issue with the term “AI,” however, and I think that’s a handy alternative. Sisi Wei actually beat me to the punch on this in a recent #TalkBetterAboutAI conversation: https://youtu.be/KSsxuEtGgEg
@simon "so-called AI", "technology marketed as 'AI'", or even just "AI" in quotes, seem to solve the "most people [..] don’t know what it means" issue, while contributing a lot less to the other problem: while "AI is [..] already widely understood", its common understanding is something way beyond what it actually does, which is dangerous for all the reasons we seem to be agreeing on in this thread.
@simon I’m with you on this. Nothing published in the journal Artificial Intelligence in the 50 years of its existence qualifies as “artificial intelligence” in the sense of the word that people concerned about its use impute. That people misinterpret a term used in academic research isn’t something to be fixed by changing academic terminology, but by changing lay understanding of what is and isn’t implied imo. The key thing is increasing understanding of what #LLMs do and don’t do - as you are!
@simon @UlrikeHahn
But isn't there a difference between the research and "those things" (i.e. recent consumer products like bing chat etc., which are not research about intelligence but consumer products marketed as intelligent)?
@simon
"The most influential organizations building Large Language Models today are OpenAI, Mistral AI, Meta AI, Google AI and Anthropic. All but Anthropic have AI in the title; Anthropic call themselves “an AI safety and research company”. Could rejecting the term “AI” be synonymous with a disbelief in the value or integrity of this whole space?"
Rejecting those companies and their business models? Yes. For me "AI" is a marketing phrase and using it to describe #MOLE is doing unpaid PR work.
@simon Counterargument: https://daniel.haxx.se/blog/2024/01/02/the-i-in-llm-stands-for-intelligence/
"AI" as a term, like many other things, was a male ego thing. McCarthy: "I wished to avoid having either to accept Norbert (not Robert) Wiener as a guru or having to argue with him." https://en.m.wikipedia.org/wiki/History_of_artificial_intelligence
"AI" is the biggest terminology stretch in the history of computing, and using it is "OK" only because everybody else is doing it, but that's a weak excuse.
@simon I have seen claims that "smart" was used within industry to avoid claims of intelligence after the AI winters. But that term is, of course, also not very informative today.
@simon At least my motivation for challenging the term is _not_ that AI is not actually intelligent, but to spark discussion about the level of abuse by AGI proponents. Just look at OpenAI’s mission statement: they are actively abusing what the “I” implies to the general public with a pompous vision, intentionally shifting the meaning of “I”. They should call themselves ClosedAGI instead. We should focus on “Useful Computation”, whatever paradigms that requires.
@simon
I think there is a point because something has changed. People are suddenly experiencing something uncannily like all the fictional AIs they've read about and watched in movies.
Many people, including plenty I'd expect to know better, are seeing a conversational UX with a black box behind it, as opposed to a few lines of BASIC, and then making wildly overblown assumptions about what it is. This is deliberately encouraged by those using deceptive framing such as 'hallucinations' to describe errors.
Using words that have achieved common meaning through time (despite their origin) is how we are able to communicate.
This is a thoughtful justification, but it also appeals to common sense.
@simon I always thought that if it's actually intelligent then it would just be AI, Actual Intelligence.
@simon I agree! I wrote a bit about the terminological critique here: https://sanchom.github.io/atlas-of-ai.html
@simon I propose we split off the term “Eh Eye” to refer to the at best useless and at worst harmful hype driven vaporware emerging from the LLM boom, and leave the computer scientists, neuroscientists, philosophers and theologians to argue about the definition of Artificial Intelligence.
@simon I feel like LLMs are one of the first technologies where "Artificial Intelligence" sort of applies. GPT4 can do things I cannot do, do tasks which it wasn't explicitly trained on, etc. It's not very good at a lot of this and has obvious limitations. But it seems much harder to explain it away as "just" doing XYZ, as with earlier AI technologies like symbolic calculus, expert systems or statistical classifiers.
@astrojuanlu I hadn't seen that quote regarding cybernetics before, that's fascinating!
@astrojuanlu @simon some (such as me) might claim everything about AI, not just the name, is a male ego thing. Also cybernetics was about much more than artificial intelligence.
I didn't know that either. I can see why one would want to disassociate symbolic AI from cybernetics, but of course there's an irony given where AI ended up. The trend towards connectionism in AI was already well underway by the early 90s, though; considering neural networks as AI is nothing new.
@deadwisdom I think we should keep AI and push AGI for the science fiction version https://simonwillison.net/2024/Jan/7/call-it-ai/#not-agi-instead
@pieist yes, absolutely - I think the thing that's not OK here is fiercely arguing that people who call LLMs AI shouldn't do that to the point of derailing more useful conversations
@simon Also, not for nothing, but you are giving the lay public _way_ too much credit when it comes to understanding the limitations of LLMs and PIGs. Numerous people are doing additional jail time because even highly-educated, nationally-renowned *lawyers* cannot wrap their heads around this. The term very definitely obscures more than it reveals, and the "well, actually" pedantic conversation about its inappropriateness *does* drive deeper understanding of it.
@glyph @simon I feel like "AI" has a very precise layman's definition and a very vague practitioner's definition. To a layman AI means AGI, "a computer that can think like a person." To a practitioner AI means…? "Statistical ML ish?" "LLMs and PIGs?" "I get more funding if I call this AI?" The public has a very precise definition! That's so rare. We shouldn't water it down and say "oh that's actually A-G-I" for no reason.
@simon @carlana this strikes closer to the heart of my objection. A lot of insiders—not practitioners as such, but marketers & executives—use "AI" as the label not in spite of its confusion with the layperson's definition, but *because* of it. Investors who vaguely associate it with machine-god hegemony assume that it will be very profitable. Users assume it will solve their problems. It's a term whose primary purpose has become deceptive.
@simon
The less you know the more confident you are. Just ask an LLM.
I intentionally avoid the term AI and advise other technically minded folks to do the same because it is a purely Marketing term. It will never have a meaningful definition.
Everything I've ever worked on to automate tasks with computers in the past 30 years would be called AI today by a Marketing Department despite none of it involving ML.
Their definition is "this term attracts attention and money", oriented around their goal. The lay person hearing it has a definition of "hype buzzword bingo score for Product Name". It doesn't communicate anything.
Elide the term AI from any context in which it gets used to describe something and it should still be just as meaningful. If not, nothing was being said.
Be right back. I'm gonna go hit Tab in my command line so the shell's AI can do what I want for me. 😛
@simon @carlana At the same time, a lot of the deception is unintentional. When you exist in a sector of the industry that the public knows as "AI", that the media calls "AI", that industry publications refer to as "AI", that *other* products identify as "AI", going out on a limb and trying to build a brand identity around pedantic hairsplitting around "LLMs" and "machine learning" is a massive uphill battle which you are disincentivized at every possible turn to avoid.
@simon @carlana Personally I am trying to Get Into It over the terminology less often, but I will still stick to terms like "LLMs", "chatbots", and "PIGs" in my own writing, not least because the tech behind PIGs/PVGs, LLMs, and ML classifiers is actually all pretty different, despite having some similar elements.
@lgw4 I don't think they were wrong to coin a term in 1955 with a perfectly reasonable definition, then consistently apply that definition for nearly 70 years.
It's not their fault that science fiction redefined it from under them!
@simon Machines with intelligence similar to (or better than) that of humans (that is, the current popular concept of artificial intelligence) has been present in science fiction since the 19th century. Dystopian (and utopian) fantasies of humans subjugated (or assisted) by these machine intelligences have been science fiction tropes continuously since then. I would wager that John McCarthy was aware of this fact. No one "redefined it from under them."
@lgw4 that's not an argument I'd heard before! I know science fiction had AI all the way back to Erewhon https://en.m.wikipedia.org/wiki/Erewhon but I was under the impression that the term itself was first used by McCarthy
@scottjenson Yeah, that's exactly why I was resistant to the term too - the "general public" (for want of a better term) knows what AI is, and it's Skynet / The Matrix / Data from Star Trek / Jarvis / Ultron
I decided to give the audience of my writing the benefit of the doubt that they wouldn't be confused by science fiction