I’ve never had an issue with the anthropomorphization of digital processes. When someone says “if you enter this kind of command, the tool knows that it needs to look up this thing over here”, it’s an adequate metaphor for what’s happening, and the language flows well. People get a decent mental picture of boxes and arrows that captures what the processes are doing.
It’s different when you’re anthropomorphizing an LLM. Please don’t; be very technical and precise about what it does. There’s a kind of uncanny valley of behavior that makes the metaphor collapse.
Clacke on Mastodon – link to post