welcome to the future, now your error-prone software can call the cops
(this is an Anthropic employee talking about Claude Opus 4)
@molly0xfff As if people who do illegal/immoral stuff at this level couldn't afford a few GPUs and run LLMs themselves, without a phone line to the cops.
This is two things: 1. hype, overselling the idea that language models understand anything about their output; and 2. an attempt to appease people who are afraid these models could be misused.