It’s OK to call it Artificial Intelligence: I wrote about how people really love objecting to the term "AI" to describe LLMs and suchlike because those things aren't actually "intelligent" - but the term AI has been used to describe exactly this kind of research since 1955, and arguing otherwise at this point isn't a helpful contribution to the discussion.
Short version: "I’m going to embrace the term Artificial Intelligence and trust my readers to understand what I mean without assuming I’m talking about Skynet."
@simon What makes this hard currently is that many of the loudest advocates *are explicitly* talking about Skynet (or "digital god" or whatever). And it seems like they're using this history of the term as cover with a general audience.
@simon the term "AI" deeply misleads laypeople into thinking sentient minds are at play, leading to all kinds of misuse/harm. I don't have to list links to all the damage "AI" has done so far due to people putting it in charge of things since "it's intelligent".
going to keep using technical terms like "machine learning" so that all the non-tech people I talk to understand a tech person like me does not consider this stuff to be "intelligent" in any way we usually define that term for humans
I added an extra section to my post providing a better version of the argument as to why we shouldn't call it AI https://simonwillison.net/2024/Jan/7/call-it-ai/#argument-against
@simon The problem is people taking it literally. Yeah, AI is a field of computer science. But it's being marketed as a product. And it's being hyped as if it's now an achieved reality, instead of just software that mimics human conversation, art, etc.
@simon Your readers are probably fine, but the problem is this is the first time this has escaped into the real world. It is being put in front of muggles who have been trained on sci-fi and have wildly unrealistic expectations. We know LLMs are glorified photocopiers, but normal people who I've spoken with genuinely expect the "intelligence" bit to mean that answers come from human-like knowledge and thought. The danger is the AI label means they trust what LLMs generate without question.
@radiac I do agree with that, but I'm not sure that's the battle worth fighting right now - my concern is that if we start the conversation with "you know it shouldn't really be called AI, right?" we've already put ourselves at a disadvantage with respect to helping people understand what these things are and what they can reasonably be used to do
@simon True, it's not like we can change the narrative now anyway - it's intentional, it's billionaire marketing. Trying to rebrand as "not AGI" is not going to work, the public have never heard of AGI and won't be interested in the difference.
It's trolling vs abuse, or hacker vs cracker again - if I say in the real world "I enjoy trolling" I lose friends, or "I'm a hacker" they imagine me skating around train stations looking for landlines. Difference is, misnomers like that don't risk harm.
Calling LLMs “AI” is a bald-faced lie.
The promoters try to excuse it by saying they’re using a different definition of intelligence now. But they know nobody else is using this novel definition.
They are getting away with it because we live in the Era of Shamelessness.
@simon I do tend to agree with your argument. It doesn't matter that much what we call it at this point - it's a clear umbrella term for the majority of the population. You can get more granular as discussion gets more specific and academic. I don't think my mom is going to understand the difference between AGI and a multi-modal large language model (MMLLM?) - it's absurd to expect otherwise. Meanwhile, these systems are becoming part of everyone's life - these nuances are meaningless.
And another section trying to offer a useful way forward: Let’s tell people it’s “not AGI” instead
https://simonwillison.net/2024/Jan/7/call-it-ai/#not-agi-instead
... OK, I'm cutting myself off now - I added one last section, "Miscellaneous additional thoughts", with further thinking inspired by the conversation here: https://simonwillison.net/2024/Jan/7/call-it-ai/#misc-thoughts - plus a closing quote from @glyph
@simon it's not AI at all. Don't let the push marketing sons of bitches claim the memetic space. "Auto complete at scale" ain't intelligent
@simon Doesn’t this just re-establish the same problem? AGI isn’t a well-known term, so you’re still left defining the terms of the debate you’re hoping to avoid in order to avoid misleading the reader.
@futuraprime maybe!
My hunch is that it's easier to teach people that new term than convince them to reject a term that everyone else in society is already using
@simon Yeah, that’s fair. Certainly everyone equates LLMs with AI.
The other part of my reluctance is that lots of people are trying to broaden the term to capitalise on it—I’ve seen “AI” applied to all sorts of unsupervised learning tasks to make them sound fancier. The gulf between someone’s random forest classifier and GPT4 is so huge it makes me want to be more specific.
@futuraprime I was tasked with delivering a recommendation system a while ago, and the product owners REALLY wanted it to use machine learning and AI... I eventually realized that what they wanted was "an algorithm", so I got something pretty decent working with a pretty dumb Elasticsearch query plus a little bit of SQL
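For what it's worth, that kind of "dumb query" recommender can be surprisingly simple. Here's a minimal sketch of what the Elasticsearch side of such a system might look like, using its built-in `more_like_this` query — the index name and field names here are invented for illustration, since the original project's schema isn't described:

```python
# Hypothetical sketch: "related items" without any machine learning,
# just an Elasticsearch more_like_this query. Index and field names
# ("articles", "title", "tags") are made up for this example.

def build_related_query(doc_id: str, size: int = 5) -> dict:
    """Build a query body that finds documents textually similar to
    the given document - Elasticsearch does the term weighting."""
    return {
        "size": size,
        "query": {
            "more_like_this": {
                "fields": ["title", "tags"],
                # Reference the source document by index and id rather
                # than pasting its text into the query.
                "like": [{"_index": "articles", "_id": doc_id}],
                "min_term_freq": 1,      # consider even rare terms
                "max_query_terms": 25,   # cap query size for speed
            }
        },
    }

query = build_related_query("doc-123")
print(query["query"]["more_like_this"]["like"][0]["_id"])  # → doc-123
```

In practice you would POST this body to the index's `_search` endpoint; the point is that "an algorithm" here is one declarative query, no model training required.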
@Seruko I 100% agree that autocomplete at scale isn't intelligent, but I still think "Artificial Intelligence" is an OK term for this field of research, especially since we've been using it to describe non-intelligent artificial systems since the 1950s
I like "AGI" as the term to use for what autocomplete-at-scale definitely isn't
@simon @glyph This is an interesting piece, Simon - thank you for writing it.
I wonder if you're not somewhat undermining your own argument.
There is no reason at all why the interface to an LLM needs to be a chat interface "like you're talking to a human". That is a specific choice - and we have known for decades that humans will attach undue significance to something that "talks like a person" - all the way back to Eliza. 1/
@glyph Added this just now, a thing I learned from https://social.juanlu.space/@astrojuanlu/111714012496518004 which gave me an excuse to link to https://99percentinvisible.org/episode/project-cybersyn/ (I'll never skip an excuse to link to that)
Casual thought: maybe a good term for "artificial intelligence" that's actually intelligent... is intelligence!
@simon @glyph this is well covered in the older Norvig books (I just looked because I am sitting next to them). PAIP has a very humorous chapter on “GPS”, the general problem solver, and AI: A Modern Approach covers the history very well in Section 1.3 (~page 17), and mentions escape from cybernetics, but not the personal stuff.
(I have a bunch of these books, as I would buy anything I could find that would tell me “what computers can do”, and the Internet really wasn't any good yet)
@simon A problem I see is that the colloquial use of “intelligence” implies conscious agency, and brings with it a whole host of assumptions that are not warranted with artificial systems, and that can cause huge problems.
@simon we began debating this on the Safe Network forum and it quickly became obvious that it is incredibly hard to define. There are so many ways to look at phenomena that could be called intelligence, so many timescales and scopes.
Really the first step is to clearly specify your terms. Anything ambiguous is pretty useless.
@simon @glyph Therefore, this is an explicit design choice on the part of the product designers from these companies - and I struggle to see any reason for it other than to deliberately exploit the blurring of the distinction between "AI" & AGI - for the purpose of confusing non-technical investors and thus to juice valuations - regardless of the consequences. 2/
@kittylyst @glyph I'm more than happy to undermine my own argument on this one, I don't have a particularly strong opinion here other than "I don't think it's particularly useful to be pedantic about the I in AI".
100% agree that the chat interface is a big part of it, and also something which isn't necessarily the best UI for working with these tools, see also: https://simonwillison.net/2023/Oct/17/open-questions/#open-questions.005.jpeg
@kittylyst @glyph The thing I've found particularly upsetting here is the way ChatGPT etc. talk in the first person - they even offer their own opinions on things some of the time! It's incredibly misleading.
Likewise the thing where people ask them questions about their own capabilities, which they then convincingly answer despite not having accurate information about "themselves" https://simonwillison.net/2023/Mar/22/dont-trust-ai-to-talk-about-itself/
@simon less harm was done in 1955, 1960, 1970 etc. because we didn't have machines so singularly focused on pretending to be (confident, authoritative) humans at such massive scale; there was little chance of misunderstanding back then. Now these machines have "I hope you misunderstand what I do" at their core
@zzzeek That's a very strong argument. I'm going to add a longer section about science fiction to my post, because that's the reason I held off on the term for so long too
@zzzeek Added that section here https://simonwillison.net/2024/Jan/7/call-it-ai/#argument-against
@simon to center my assertion that "I hope you misunderstand what I do", I would use the "AI Safety" letter as the prime example: billionaires and billionaire-adjacent types declaring that this "AI" is so, so close to total sentience that governments *must* stop everyone (except us! who should be gatekeepers) from developing this *so very dangerous and powerful!* technology any further
lots of non-tech ppl signed onto that thing and it was quite alarming