

I don't understand how anyone can watch how blatantly Grok is manipulated to answer the way ownership desires it to and then act like the other LLM chatbots couldn't possibly be similarly but less obviously compromised to produce responses in whatever way corporate interests and priorities dictate.
in reply to Heather Bryant

If one model is transparently manipulated, you should assume the others are manipulated — just more skillfully. Grok is sloppy about it. Other companies are subtle about it. The only difference is competence, not intent.
in reply to Probahee

Well, Grok-1 at least is available under Apache 2.0 licensing, so that code is auditable, but none of the subsequent versions are, afaik. IDK, I don't trust companies that call themselves names like "OpenAI" but then won't open source their code.
in reply to TurblesCelbor

There's only so much auditing one can do of gigabytes of model weights. Ultimately, the real sauce is in the training process.
in reply to Heather Bryant

@Heather Bryant Funny thing about that is we have extremely limited ability to manipulate them, which is why they're horrible to put into customer service roles (the best we've managed is using them to match a request to a selection of prewritten responses; everything else has ended in disaster).
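For what it's worth, the "match the request to a canned reply" approach they're describing can be as simple as scoring the incoming message against trigger phrases and answering only when the match clears a threshold. A minimal sketch (all intents, phrases, and the threshold here are made up for illustration):

```python
# Hypothetical sketch: pick a prewritten response by bag-of-words cosine
# similarity against per-intent trigger phrases; fall back to a human
# when nothing matches well enough.
import math
from collections import Counter

CANNED = {
    "reset_password": "You can reset your password from the account settings page.",
    "refund_status": "Refunds are processed within 5-7 business days.",
    "human_agent": "Connecting you to a human agent now.",
}

# Illustrative trigger phrases per intent (not from any real product).
TRIGGERS = {
    "reset_password": "forgot my password cannot log in reset",
    "refund_status": "where is my refund money back order return",
    "human_agent": "speak to a person human agent representative",
}

FALLBACK = "Sorry, I didn't understand. Let me get a human."

def _vec(text):
    # Crude tokenizer: lowercase, split on whitespace, count words.
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def reply(message, threshold=0.3):
    """Return the best canned response, or the fallback if nothing matches."""
    msg = _vec(message)
    best_intent, best_score = None, 0.0
    for intent, phrase in TRIGGERS.items():
        score = _cosine(msg, _vec(phrase))
        if score > best_score:
            best_intent, best_score = intent, score
    return CANNED[best_intent] if best_score >= threshold else FALLBACK
```

The point of this design is exactly what the post says: the model (or here, a trivial matcher) never generates free text, so it can't be steered into saying something off-script; the worst case is a wrong canned reply or the fallback.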

There's research into these things, but until it pans out, anyone trying to manipulate the outputs like that is going to end up with what's happening with Grok.

It's not just Musk being sloppy about it, it's him being dumb enough to barrel ahead without adequate testing... let alone listening to others' experiences.
