... OK, I'm cutting myself off now - I added one last section, "Miscellaneous additional thoughts", with further thinking inspired by the conversation here: https://simonwillison.net/2024/Jan/7/call-it-ai/#misc-thoughts - plus a closing quote from @glyph
@simon @glyph This is an interesting piece, Simon - thank you for writing it.
I wonder if you're not somewhat undermining your own argument, though.
There is no reason at all why the interface to an LLM needs to be a chat interface "like you're talking to a human". That is a specific choice - and we have known for decades, all the way back to Eliza, that humans will attach undue significance to anything that "talks like a person". 1/
@simon @glyph Therefore, this is an explicit design choice on the part of these companies' product designers - and I struggle to see any reason for it other than to deliberately exploit the blurring of the distinction between "AI" and AGI, for the purpose of confusing non-technical investors and thus juicing valuations, regardless of the consequences. 2/
@kittylyst @glyph The thing I've found particularly upsetting here is the way ChatGPT etc. talk in the first person - they even offer their own opinions on things some of the time! It's incredibly misleading.
Likewise the way people ask them questions about their own capabilities, which they then convincingly answer despite not having accurate information about "themselves": https://simonwillison.net/2023/Mar/22/dont-trust-ai-to-talk-about-itself/