Heather Bryant: I don't understand how anyone can watch how blatantly Grok is manipulated to answer the way its ownership desires, and then act as if the other LLM chatbots couldn't possibly be similarly, if less obviously, compromised to produce responses in whatever way corporate interests and priorities dictate.
Shiri Bailem, in reply to Heather Bryant: @Heather Bryant Funny thing about that is we have extremely limited ability to manipulate them, which is why they're horrible to put into customer service roles (the best we've managed is using the model to match your request to a selection of prewritten responses; everything else has ended in disaster; see the sketch after this post).
There's research into steering these things, but until it matures, anyone trying to manipulate the outputs like that is going to end up with what's happening with Grok.
It's not just Musk being sloppy about it; it's him being dumb enough to barrel ahead without adequate testing, let alone listening to others' experiences.
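For illustration, here is a minimal sketch of the request-to-prewritten-response matching Shiri Bailem describes. It uses TF-IDF cosine similarity (via scikit-learn) as a stand-in for whatever matching model a real deployment would use; the canned questions, answers, and the 0.3 threshold are all hypothetical. The point is that the system can only ever return one of the prewritten answers or hand off to a human, so there is no free-form output to manipulate.

```python
# Sketch: constrain the bot to a fixed set of prewritten answers.
# Assumes scikit-learn; CANNED, FALLBACK, and THRESHOLD are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

CANNED = {
    "How do I reset my password?":
        "Go to Settings > Security and choose 'Reset password'.",
    "Where is my order?":
        "You can track shipments from the Orders page of your account.",
    "How do I cancel my subscription?":
        "Open Billing and click 'Cancel subscription'.",
}

FALLBACK = "Sorry, I'm not sure about that one. Connecting you to a human agent."
THRESHOLD = 0.3  # below this similarity, hand off instead of guessing

# Fit only on the known questions: the system never generates text,
# it just picks the closest prewritten answer.
questions = list(CANNED)
vectorizer = TfidfVectorizer().fit(questions)
question_vecs = vectorizer.transform(questions)

def answer(user_request: str) -> str:
    """Return the canned answer closest to the request, or hand off."""
    scores = cosine_similarity(
        vectorizer.transform([user_request]), question_vecs
    )[0]
    best = scores.argmax()
    if scores[best] < THRESHOLD:
        return FALLBACK  # constrained failure mode, not a hallucination
    return CANNED[questions[best]]

if __name__ == "__main__":
    print(answer("i forgot my password"))  # matches the reset-password entry
    print(answer("write me a poem"))       # no match -> human handoff
```

Everything outside the matching step is static, which is exactly why this is one of the few customer-service uses that hasn't "ended in disaster": the worst case is an unhelpful handoff, not a fabricated answer.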