Disclaimer: All my written opinions are (un)educated guesses sourced from scattered online opinion pieces and news articles I speed-read out of curiosity.

In the event I’m actually right about something, it’s more likely to be coincidental with my internal consistency than grounded in solid knowledge or understanding of the relevant topic(s).

However, I’ll always be able to write mostly convincing arguments à la Large Language Model, because school trained me for eloquence rather than factual accuracy. Thanks for reading!

in reply to Hypolite Petovan

I stumbled on this thought again today and realized that I have lived long enough to become ChatGPT.
in reply to Hypolite Petovan

@Hypolite Petovan That's one of the things that gets to me about people's reactions to the "inaccuracies"... one of the stepping stones of AI is getting it to human-level intelligence, and that includes a certain degree of faults...
in reply to Shiri Bailem

@Shiri Bailem My problem with LLMs' inaccuracies is that they are made at a frightening scale that few single humans have ever been able to reach. It's like someone learned about Brandolini's Law and thought the problem was that it was still possible to refute bullshit.