It’s OK to call it Artificial Intelligence: I wrote about how people really love objecting to the term "AI" to describe LLMs and suchlike because those things aren't actually "intelligent" - but the term AI has been used to describe exactly this kind of research since 1955, and arguing otherwise at this point isn't a helpful contribution to the discussion.
Short version: "I’m going to embrace the term Artificial Intelligence and trust my readers to understand what I mean without assuming I’m talking about Skynet."
I added an extra section to my post providing a better version of the argument as to why we shouldn't call it AI https://simonwillison.net/2024/Jan/7/call-it-ai/#argument-against
Calling LLMs “AI” is a bald-faced lie.
The promoters try to excuse it by saying they’re using a different definition of intelligence now. But they know nobody else is using this novel definition.
They are getting away with it because we live in the Era of Shamelessness.
@simon I do tend to agree with your argument. It doesn't matter that much what we call it at this point - it's a clear umbrella term for the majority of the population. You can get more granular as discussion gets more specific and academic. I don't think my mom is going to understand the difference between AGI and a multi-modal large language model (MMLLM?) - it's absurd to expect otherwise. Meanwhile, these systems are becoming part of everyone's life - these nuances are meaningless.
And another section trying to offer a useful way forward: Let’s tell people it’s “not AGI” instead
https://simonwillison.net/2024/Jan/7/call-it-ai/#not-agi-instead
... OK, I'm cutting myself off now - I added one last section, "Miscellaneous additional thoughts", with further thinking inspired by the conversation here: https://simonwillison.net/2024/Jan/7/call-it-ai/#misc-thoughts - plus a closing quote from @glyph
@simon it's not AI at all. Don't let the push marketing sons of bitches claim the memetic space. "Auto complete at scale" ain't intelligent
@simon Doesn’t this just re-establish the same problem? AGI isn’t a well-known term, so you’re still left defining the terms of the debate you’re hoping to avoid in order to avoid misleading the reader.
@futuraprime maybe!
My hunch is that it's easier to teach people that new term than to convince them to reject a term that everyone else in society is already using
@simon Yeah, that’s fair. Certainly everyone equates LLMs with AI.
The other part of my reluctance is that lots of people are trying to broaden the term to capitalise on it—I’ve seen “AI” applied to all sorts of unsupervised learning tasks to make them sound fancier. The gulf between someone’s random forest classifier and GPT4 is so huge it makes me want to be more specific.
@futuraprime I was tasked with delivering a recommendation system a while ago, and the product owners REALLY wanted it to use machine learning and AI... I eventually realized that what they wanted was "an algorithm", so I got something pretty decent working with a pretty dumb Elasticsearch query plus a little bit of SQL
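Simon doesn't share the actual query, but a minimal sketch of the kind of "pretty dumb" Elasticsearch approach he describes might look like this — a plain `more_like_this` search posing as a recommendation engine, no machine learning in sight. The index name, field names, and document ID here are all hypothetical, invented for illustration:

```python
# Hypothetical sketch: "recommendations" as a plain Elasticsearch
# more_like_this query -- no ML, just text similarity scoring.
# The "products" index, field names, and "sku-123" ID are made up.

def build_related_items_query(item_id: str, size: int = 5) -> dict:
    """Build an Elasticsearch query body that finds items similar to
    the given document, by comparing title and description text."""
    return {
        "size": size,
        "query": {
            "more_like_this": {
                "fields": ["title", "description"],
                # Compare against an existing indexed document
                "like": [{"_index": "products", "_id": item_id}],
                "min_term_freq": 1,
                "min_doc_freq": 2,
            }
        },
    }

query = build_related_items_query("sku-123")
# You'd POST this body to /products/_search with any HTTP client.
```

The result set would then be joined against a small SQL table (stock levels, manual exclusions, whatever the product owners care about) to produce the final list — which is often all "an algorithm" needs to be.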
@Seruko I 100% agree that autocomplete at scale isn't intelligent, but I still think "Artificial Intelligence" is an OK term for this field of research, especially since we've been using it to describe non-intelligent artificial systems since the 1950s
I like "AGI" as the term to use for what autocomplete-at-scale definitely isn't
@simon @glyph This is an interesting piece, Simon - thank you for writing it.
I wonder if you're not somewhat undermining your own argument.
There is no reason at all why the interface to an LLM needs to be a chat interface "like you're talking to a human". That is a specific choice - and we have known for decades that humans will attach undue significance to something that "talks like a person" - all the way back to Eliza. 1/
@glyph Added this just now, a thing I learned from https://social.juanlu.space/@astrojuanlu/111714012496518004 which gave me an excuse to link to https://99percentinvisible.org/episode/project-cybersyn/ (I'll never skip an excuse to link to that)
Casual thought: maybe a good term for "artificial intelligence" that's actually intelligent... is intelligence!
@simon @glyph this is well covered in the older Norvig books (I just looked because I am sitting next to them). PAIP has a very humorous chapter on “GPS" the general problem solver, and AI: A Modern Approach covers the history very well in Section 1.3 (~page 17), and mentions escape from cybernetics, but not the personal stuff.
(I have a bunch of these books, as I would buy anything I could find that would tell me “what computers can do”, and the Internet really wasn't any good yet)
@simon A problem I see is that the colloquial use of “intelligence” implies conscious agency, and brings with it a whole host of assumptions that are not warranted for artificial systems, and that can cause huge problems.
@simon we began debating this on the Safe Network forum and it quickly became obvious that it is incredibly hard to define. There are so many ways to look at phenomena that could be called intelligence, so many timescales and scopes.
Really the first step is to clearly specify your terms. Anything ambiguous is pretty useless.
@simon @glyph Therefore, this is an explicit design choice on the part of the product designers from these companies - and I struggle to see any reason for it other than to deliberately exploit the blurring of the distinction between "AI" & AGI - for the purpose of confusing non-technical investors and thus to juice valuations - regardless of the consequences. 2/
@kittylyst @glyph I'm more than happy to undermine my own argument on this one, I don't have a particularly strong opinion here other than "I don't think it's particularly useful to be pedantic about the I in AI".
100% agree that the chat interface is a big part of it, and also something which isn't necessarily the best UI for working with these tools, see also: https://simonwillison.net/2023/Oct/17/open-questions/#open-questions.005.jpeg
@kittylyst @glyph The thing I've found particularly upsetting here is the way ChatGPT etc talk in the first person - they even offer their own opinions on things some of the time! It's incredibly misleading.
Likewise the thing where people ask them questions about their own capabilities, which they then convincingly answer despite not having accurate information about "themselves" https://simonwillison.net/2023/Mar/22/dont-trust-ai-to-talk-about-itself/