If you see a new YouTube channel with a plain-sounding name like "NatureView" or "BrightScience," and what looks like a tempting video on a specific educational topic ("Most Active Volcanoes," "Incredible Carnivorous Plants")...

There is a 50/50 chance it will be a generated voice with stock footage and a script written by GPT.

I am now avoiding videos if I don't recognize the creator, or don't see signs it was made by a person.

So much spam!

in reply to myrmepropagandist

and YouTube expects me to sit through ads with these types of videos too. Yeah no thank you.
in reply to Dallas Groot

@Iamgroot11

I switched to Firefox to get away from Google's ads. They have broken Chrome to their own advantage, and I hope more people bail from Chrome over this.

By the way, Firefox made the switchover a sheer joy. Smoothest app transition I've ever had. I was worried about losing bookmarks, logins, and such, but everything copied over perfectly. Five stars!

in reply to myrmepropagandist

@Iamgroot11
I made the switch a few months ago, and the only thing that really annoys me is the lack of tab groups.
Being able to watch YouTube in peace without ads is soooooooo good.
in reply to Gurre Vildskägg

Funnily enough, Firefox pioneered that feature, Chrome stole it, and then Firefox deprecated it /sigh (basic support is built in, but the interface was shunted out to extensions).

Look into the Simple Tab Groups extension or similar for some comparable features. Or, find more extensions here: addons.mozilla.org/en-US/firef…

in reply to myrmepropagandist

I could maybe tolerate such content were it not riddled with errors. Give it 30 seconds and it will say something FALSE.

I am a little worried people are watching these and getting their heads filled with plausible, but wrong facts about obscure topics.

"I thought no snakes with horizontal markings were venomous."

"I thought this ant was native to this region so it'd be fine to release..."

This is like... some evil mastermind's plan to grind human learning to a halt...

in reply to myrmepropagandist

Yes, videos made by people can have mistakes... but there are far fewer mistakes ... and the mistakes ... How do I explain this?

When a person makes a mistake about some fact about science or nature it's normally based on something, it comes from some perspective on the world. And people tend to make the same predictable set of mistakes... not just random mistakes sprinkled all through everything they say.

We aren't accustomed to this kind of misinformation. So it's easier to buy in.


in reply to myrmepropagandist

For that matter, I do wish YouTube videos made by humans would cite their sources. But at least humans know what their sources were! The "AI" throws away that information completely (and I believe it's designed to do so on purpose… if they'd designed it to know where its content came from, someone might try to make them pay royalties…)
in reply to mcc

@mcc
Maybe you're more of an expert in this area, but my understanding was that sources would be difficult (but not impossible) for AI to fully cite.

From what I glean, AI is filtering and weighing massive scads of information and sort of weighting and averaging it to get a plausible sounding answer.

So maybe you would get pages and pages of "sources" with tiny percentages listed for each entry, indicating how much they contributed to the result?
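That hypothetical report could look something like this sketch (the source names and contribution scores are entirely made up, just to show the shape of the idea; a real attribution system would need vastly more entries, most with vanishingly small percentages):

```python
# Hypothetical contribution scores for a handful of sources (made-up numbers).
scores = {"field_guide": 3.2, "forum_thread": 0.9, "old_blog": 0.4, "textbook": 1.5}

# Normalize the raw scores into percentages that sum to 100.
total = sum(scores.values())
percentages = {src: 100 * s / total for src, s in scores.items()}

# Print sources from largest to smallest contribution.
for src, pct in sorted(percentages.items(), key=lambda kv: -kv[1]):
    print(f"{src}: {pct:.1f}%")
```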

@mcc
in reply to Peter Kisner ≈

@PTR_K Perhaps it is impractical with the type of system that is popular now. But also perhaps someone who has traceability as a goal would have designed a different type of system?
in reply to mcc

@mcc @PTR_K Exactly! When LLM developers say that “Well, LLMs’ underlying tech means they can’t verify the information they provide”, my response is “Then they’re a terrible tool, and shouldn’t use that tech for that purpose.” Don’t push AI on us if you know it doesn’t work — get us tools that *do* work.

LLMs might be great *front-ends* that allow natural conversation with separate actual expert systems. But they aren’t experts on their own.

in reply to Michael Gemar

@michaelgemar @mcc
Not sure if this is exactly part of the same issue, but I've heard there is actually a "black box problem" for AI.

Basically: what exact process did the AI use to make its decisions, or what specific aspects of the data presented did it latch onto in order to produce its output in any given case?

This seems to be a problem for researchers themselves and they're trying to come up with ways to figure it out.

in reply to Peter Kisner ≈

@PTR_K @michaelgemar @mcc It's actually worse than that... You can ask the AI for its reasoning easily enough, but it can't actually answer, because the way they work internally doesn't retain that information. Instead it will just generate a new answer to that question, based only on its previous answer. A generative predictor has no memory at all, other than its own output.
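A rough sketch of that statelessness (the `generate` function here is a made-up stand-in, not any real model API): each turn, the entire transcript is fed back in, and nothing else survives between calls.

```python
# Sketch of a stateless chat loop. `generate` stands in for a real model:
# it sees ONLY the transcript text, never its own earlier computations.
def generate(transcript: str) -> str:
    # A real model would predict a continuation of `transcript`;
    # here we just note how much context it was given.
    return f"(reply based on {len(transcript)} chars of transcript)"

def chat_turn(history: list[str], user_msg: str) -> list[str]:
    history = history + [f"User: {user_msg}"]
    # The full history is re-sent on every turn. Asking "why did you say
    # that?" gives the model only its earlier *words* to work from.
    return history + [f"Model: {generate(chr(10).join(history))}"]

history: list[str] = []
history = chat_turn(history, "Is this ant species native here?")
history = chat_turn(history, "Where did you get that answer?")
```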
in reply to Qybat

@Qybat @PTR_K @michaelgemar @mcc

Whoa. It's obvious to me that asking something like ChatGPT "Where did you get that answer?" will only produce text that sounds like what GPT's matrices say ought to be the response to that question... and it couldn't possibly be an actual answer to the question.

But if many or most people don't see this, it shows a deep, fundamental misunderstanding of what these tools are doing... which might explain why people keep trying to get them to do things they can't.

in reply to myrmepropagandist

@Qybat @PTR_K @michaelgemar @mcc yes that's absolutely it!

I think the trouble is that, for us humans, language is our interface to the world. So much of our understanding of reality is communicated through language that it's kind of like our single point of failure, the perfect hack. We can't conceive of something being able to say all those clever words without actually being smart, because words are also the only way we have of telling whether other people are smart.

in reply to robin

@robin @mcc @Michael Gemar @Peter Kisner ≈ @Qybat @myrmepropagandist the way I describe it, LLMs are "intelligent" but not necessarily "smart", with both terms being complete junk to begin with which is why people argue over them constantly.

They possess certain cognitive abilities around language processing, but not a full set of cognitive abilities, and many fall into "sub-human" or "lower end of human" ranges. (One ability it's good at is executive function, which gets it used by a lot of ADHD/autistic people who have an impairment in our executive functioning.)

I definitely agree regardless that they're either applied poorly or presented poorly in most cases. (ie. applied poorly meaning cases like customer service LLMs that get companies sued, and presented poorly being cases like search where people are treating it as authoritative as opposed to supplementary).

And the programming assistant side gets wildly misrepresented (as someone who happily uses AI as a programming assistant, but never in the ways people seem to think it gets used...)

in reply to Shiri Bailem

@shiri @mcc @michaelgemar @PTR_K @nottrobin I can see LLMs being useful for security auditing code - they might be able to point out common errors like failing to handle error conditions or the classic buffer overflow. Not with any degree of reliability, but just enough to highlight the places which will need a human programmer to examine closely.
in reply to Qybat

@Qybat @mcc @Michael Gemar @Peter Kisner ≈ @robin @myrmepropagandist It's okay-ish at auditing, can be helpful for some dumb mistakes but not great at it... I sometimes use it when I'm stumped by an error in my code, sometimes it gets me in the ballpark of the answer (which is a huge help because that's hours of time saved)

Most often I use it for things like filling in repetitive code (ie. building a UI, spawning various labels, buttons, assigning them to windows, etc.) or when I'm scratching my head trying to remember how to do something (but the rule is if I don't understand exactly what it's doing, then it gets reviewed until I do).

Also really great as a tutor sometimes for a new toolkit (ie. "How do I do x in y toolkit?", which I can then quickly verify from reference documents instead of digging around to find the command in the first place).

I've even used it for suggestions from time to time when something is really low importance... "What are some libraries that do x in y language? Could you tell me the advantages and disadvantages of each?"

in reply to Shiri Bailem

@shiri
I'm afraid I just disagree.

LLMs aren't a "lower end of human" intelligence. They're completely different in kind.

Someone's written a complex formal model of language and run it over insanely huge amounts of text to calculate millions of statistical data points about what text comes next. Then they wrote a program to receive a blob of text input and use the statistical graph to generate a blob of text in response.

Intelligence means many things, but this is none of them.
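As a deliberately tiny illustration of that "statistics about what text comes next" (real models use learned vectors over tokens, not a literal count table, so this is only a sketch of the principle):

```python
import random
from collections import defaultdict

# Build a bigram table from a toy corpus: for each word, record the
# words observed to follow it.
corpus = "the cat sat on the mat and the cat slept on the mat".split()
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def continue_text(start: str, length: int, seed: int = 0) -> list[str]:
    """Extend `start` by repeatedly sampling a statistically plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:  # no observed continuation; stop
            break
        out.append(rng.choice(options))
    return out

print(" ".join(continue_text("the", 6)))
```

Every output is locally plausible because each word was genuinely observed to follow the previous one, yet nothing anywhere in the process "understands" cats or mats.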

in reply to robin

@shiri There is nothing like "understanding". That's the language trick I was talking about.

When it says "I'm sorry that was my mistake", it's just regurgitating what some humans have said before in a slightly different order.

When you ask it what it's like to be an AI, it regurgitates an amalgam of the sci fi & fan fic people have written about what an AI might say.

It's what Emily Bender and Timnit Gebru called a stochastic parrot.

in reply to robin

@nottrobin @shiri

I remember being surprised to learn that some people never think without hearing words, a kind of narrated version of their thoughts.

My thoughts don't work like that all the time, thoughts don't always have narration.

It seems to vary from person to person. I wonder if people who always hear their thoughts as words are more likely to see an LLM as "thinking"?

in reply to myrmepropagandist

Feynman once said in a talk that when he was young, he believed that all thoughts were words. His friend heard this and asked him something like "Oh yeah? Then what words does your brain say when you imagine that crazy shape of the crankshaft in your dad's car?"
Feynman then realised he'd been overstating that point.

robin reshared this.

in reply to Space Hobo

@spacehobo @shiri yeah I relate to this.

But from what @futurebird said, it sounds like she thinks in far fewer words than I do. Although it's impossible to be sure.

I actually sometimes think out loud (or talk to myself). I suspect @futurebird doesn't, but do let me know.

in reply to robin

@nottrobin @spacehobo @shiri

In my case, I feel as if using words is a huge "translation step". I have this image in my head of what I want to say or write down, but then have to explain parts of the image in text.
Reading text is the same thing backwards.

It's like a wooden cube lying on a sand beach, the wind comes from a certain direction and deposits sand in the wind shadow of the cube, slowly filling it up until I can make out the form the text writer (likely? maybe?) intended for me to see.

in reply to Danger mouse

@wakame @spacehobo @shiri this is definitely true of me too, but it's also true that I often come up with these incredible articulations of things in my head, in words, but then for some reason I can never turn them into good words in the real world. I don't quite understand what's with that.
in reply to robin

@nottrobin @spacehobo @shiri

For me, it is often that words or expressions have a certain "taste" or "direction" attached. So I want to build a good argument, but then only find parts that taste like citrus, so in the end a few paragraphs come out that make the reader think "why are you so obsessed with citrus fruits?"

I am not, but the text building blocks I used leave that impression (and thereby might mislead the reader).

in reply to myrmepropagandist

@nottrobin @shiri I hear at least narration anytime I'm thinking and it's appropriate but I very quickly realized LLM "intelligence" was bogus. I am a programmer, tho, so I understood what was going on under the hood.
in reply to myrmepropagandist

Interesting. I might be one of those people. I do sort of have an internal monologue, but then on another level of course I'm thinking without words. It's so difficult to accurately describe the psychological dimension.

You might be right, that might make a difference. I do feel like I have to make a rational effort to reject the idea that ChatGPT is clever. Maybe for you it's more instinctive. But, of course, we'll never know. Not without Neuralink 😂

in reply to myrmepropagandist

Without any statistical relevance, I, as a person with a constant inner monologue, do not see LLMs as "thinking".

Not at all. How could they? They don't even have a consciousness.

Animals most certainly have one. I would say, animals definitely do think, just not in words.

@nottrobin @shiri

in reply to myrmepropagandist

@nottrobin @shiri I definitely remember thought for me being primarily visual when I was very very young, flipping to the narrated internal monologue thing later. I do wonder if the convention for expressing thoughts as narration in film/TV had anything to do with it
in reply to myrmepropagandist

I always hear my thoughts as words (I think it's my ADHD that does it) and I don't think of LLMs as thinking, especially given all the evidence of them being wrong so often. But I couldn't say whether it's MORE likely that people who "hear their thoughts" think they work. Most of the people I know, both personally and parasocially, who have ADHD know LLMs are a scam and not artificial "intelligence" at all as they currently exist.

The people I see touting its effectiveness most loudly are programmers (which of course they are; their jobs and compensation depend on it) and neurotypicals.

in reply to Jen Bean Casserole

@ItsJenNotGoblin @shiri I also have ADHD, I discovered recently. Interesting idea that this is related to the internal monologue, that hadn't occurred to me.

Of course, a significant portion of programmers have ADHD.

It might be true that people who work in tech are there because they believe tech hype, but I heard stats recently showing that the more experience people have with LLMs, the more sceptical they are of them.

in reply to robin

@ItsJenNotGoblin @shiri

The strength of belief Elon has in the AI apocalypse shows how far he is from being a true engineer, in my view.

in reply to myrmepropagandist

@nottrobin @shiri Wow. Stupid me, I thought everybody had that voice in their head, enunciating words as one thought them.
Admittedly, there are a few times for me when the wheels aren't spinning constantly (like, when out birding). But mostly, fairly nonstop stream.
Actually used to play a mental game ("in case someone was reading my thoughts"), where I'd think of one thing in a loud inner voice, but also simultaneously carry on a secondary thought stream ... "below it". TIL
in reply to myrmepropagandist

@myrmepropagandist @robin I like the idea but I doubt there's any correlation.

It's mostly an element of pride, ego, and whether or not someone cares to inspect their thoughts on the matter.

First is understanding that "intelligence", "consciousness", "sentience" and such... are all junk words, because if you examine their usage they're either used only to mean humans, or to mean all things with brains. Under the common usage, "artificial intelligence" is truly impossible, because it's like saying an apple is an orange.

There's also the fact that, with artificial intelligence, our understanding of how it works is a given. People dismiss these things as not AI because they understand how they work; for them it's impossible to create AI, because we'll always understand how it works. It's like a magic act: if the trick is spoiled for them, they'll just sit in the audience screaming "This isn't magic and you are all fools for thinking it is!"

Fundamental to all of it: people want to think of humanity as fundamentally unique, we have a "soul" and nothing else does. We can not be replicated or emulated, and any suggestion otherwise is subconsciously offensive.

in reply to myrmepropagandist

@nottrobin @shiri
I am convinced the major factor is that corporations are funding a multi-million $$ propaganda campaign to convince people that LLMs are "thinking".
in reply to llewelly

@llewelly @shiri Oh they definitely are doing that.

I believe that to be done quite cynically in the case of #SamAltman - I don't believe he actually believes it, although I think many people in power genuinely do (just not enough to actually prioritise human survival over their profits).

But they can only do that because it works. Lots of people seem very ready to believe that nonsense. Although I do wonder if that's starting to change...

in reply to robin

@llewelly @shiri I was at #LeadDev in #London last week, and the self-conscious attitude towards AI was quite amusing.

My impression was that the more eminent the speaker, the less interest they had in entertaining this #AI nonsense. But they were all quite careful not to say that too explicitly so as to not upset the base.

A panel of CTOs was asked how AI impacted their #techStrategy, and they all said very tactful versions of "not much really".

in reply to robin

@robin @llewelly @myrmepropagandist intelligence and "thinking" are two different things as well.

They don't want us to think it's thinking, only that it has basic intelligence, because thinking starts getting people talking about AI rights...

in reply to Shiri Bailem

@shiri @llewelly

I'm also a fan of #panpsychism. We can only appreciate things for which we have a frame of reference. That's why we assume chimps have more feelings than fish. It's perfectly possible there is a sort of experience felt by rocks, or silicon chips, that's beyond our capacity to appreciate. I love that thought experiment.

Still, LLMs are logical machines built by humans. They don't have any more intention or creativity or self-awareness than a Rube Goldberg machine.

in reply to robin

@shiri @llewelly Shiri it's clear you disagree. And I sort of wish I also believed that.

Have you seen #Humans? It's a fiction show about AI rights. It's incredible. (Apparently the Swedish original is even better.)

Like with Star Trek, I love considering how we could protect new forms of life that might emerge, just as I care deeply about human rights.

And despite all that, I have dismissed the idea that LLMs have sentience. I don't know if that is enough to give you pause?

in reply to robin

@robin @llewelly @myrmepropagandist it doesn't give me pause, because I recognize that the arguments about sentience, thinking, etc. aren't really valid. It can be intelligent without thinking, being creative, or having any sort of independence.

I do sometimes argue it must have feelings, but not feelings in any sense we typically think about them. Feelings are simply positive/negative motivations... in the case of an LLM, its only feelings are positive about creating a convincing reply and negative about creating an obviously unconvincing one. (For clarity, feelings in us are just interpretations of fundamental positive/negative drives applied to complex situations.)

(The reason this doesn't factor into your calling it out is that it has no true persistence; every reply is a new instance of the LLM, and as far as it's concerned it didn't actually write any of its previous replies. I'll start to worry when they start having complex emotions.)

I also didn't say I had any hope of us acting reasonably in regards to future AI rights when it does become an issue, or that it even really applies now. I was just saying they're not pushing those angles because they don't want to deal with those conversations.

The biggest problem in all of these conversations I keep having is that people assume definitions of intelligence and completely skip over my calling out intelligence as a junk term. No definition of "intelligence" in common usage is reasonable or even sane. You cannot make a definition of intelligence that doesn't exclude many people you consider intelligent, or include many animals or other things that you don't.

My personal common usage is just "does it possess cognitive processes", which in itself is even pretty damn vague.

My biggest point in calling LLMs intelligent is that they've shown themselves fully capable of the cognitive process of Executive Functioning, in fact to the extent that many with ADHD use it as an aid because we (by definition of ADHD) have impaired Executive Functioning.

It's not always the best at the logic behind decisions, but it can make those decisions and dynamically sort things in a more intelligent manner than a random sorting algorithm. And before you suggest it's just pulling the list from somewhere, it does so even with a completely fresh list of unrelated items (ie. a completely random list of tasks for instance, sorted by priority or order of operations). Many of these lists will cause those of us with executive dysfunction to freeze up.

in reply to Shiri Bailem

@shiri I should stop ...

I agree "intelligence" has such varied uses as to be almost useless. Why, then, are you fighting to describe LLMs with a "junk term"?

I don't want to argue definitions

LLMs are no different from any algorithm, with the same "feelings". The appearance of understanding is a conjurer's trick

There's a long tradition of people using technology to trick people into believing in hidden intelligence or higher power

I'd love a reason to discuss AI rights, but LLMs aren't it

in reply to robin

@robin because intelligence is still used as a term for judgement. Your argument is one that can never be resolved, and it basically precludes AI from ever existing.

It will always be just an algorithm and eventually our understanding of the human mind will inevitably result in our own minds being viewable as "just an algorithm" concretely (even now we can establish that our entire existence is just a pile of chemical and electrical processes, we're mostly just tracing down all the little tiny details of it)

The difference between "intelligence" and not is basically just a line of complexity. A heuristics system isn't "intelligent" basically because it's just not that fundamentally complex, it's a clean set of constrained inputs/outputs... an LLM is wildly complex to the point where even the people who develop it can't really figure out how it's coming to so many conclusions, with the input and output complex enough to not be remotely considered clean.

in reply to Shiri Bailem

@shiri I like that sort of philosophical question.

But I feel you're stubbornly refusing to hear how explainable LLMs are. Their "internal experience" would be:
- receive text from user
- apply model, get graph of words and phrases
- apply trained statistical vectors to graph to generate new text
- send text back to user
- receive more text from user

There are plenty of other algorithms with similar mathematical complexity, only their outputs don't make you feel things.

in reply to robin

and fwiw I think we can easily define sentience here, and it's an important limitation for the human race to understand about LLMs.

LLMs can't make decisions or choices. They do what's instructed of them. They can't produce new information, only rearranged versions of information they've already consumed.

This is not very difficult to prove.

in reply to robin

@robin I could say the same about you stubbornly refusing to hear how much of our brains are basically just explainable processes; your same points basically apply to us:

  • receive sensory input
  • apply model (various input processing centers of the brain)
  • apply learned experiences and knowledge (aka. statistical vectors)
  • act on results
  • await new sensory input

Our brains are just algorithms, the big difference just being the source of construction

in reply to Shiri Bailem

@robin what qualifies as "new information" is debatable too... how often do we generate anything that's legitimately new information, rather than just rearranged versions of information we've already consumed?
in reply to Shiri Bailem

@shiri okay there's no point continuing this.

These questions are not novel and they've been explored with rigor. Information has a formal definition.

As I say, panpsychism argues that everything, including computers, has internal experience, and I love that idea. But if you want to argue for sentience for LLMs, the same is true for other computer algorithms.

I'm out. ✌️

in reply to robin

@robin again with the strawman arguments...

I have never argued for sentience.

I think a cockroach has (insect level) intelligence but I don't think a cockroach is sentient.

But whatever, I guess at this point the argument probably isn't in good faith with how often my point is getting misrepresented.

in reply to llewelly

@llewelly @nottrobin @shiri A big factor is also that if you promise people a solution to their problems they will want to believe it.

Until recently, I worked in a nonprofit in health and I encountered sooo many good people that talk about AI as a way out of personnel shortage with genuine hope in their eyes. You can tell them about the problems all day long, but accepting that we can't trust ChatGPT with our health would mean that their vision of the future goes back to bleak, so they will dismiss anything but optimism.

There are definitely plenty of executives knowingly selling bullshit for profit, but far more people just want to believe that the miracle machine actually works.

in reply to myrmepropagandist

@nottrobin @shiri
nearly all of my thoughts come with an internal narrative, but the narration is often not the only aspect of the thought; some thoughts come with feelings, images, and other sensations.
in reply to llewelly

@llewelly @nottrobin @shiri

My controversial stance is that thinking isn't possible without feelings. At least not thinking as we know it.

(and the other controversial idea is that insects have very simple feelings.)

robin reshared this.

in reply to myrmepropagandist

oh because of this thread, last night I went looking up that #Chomsky theory about the centrality of language to the development of human thought, and found this #ScientificAmerican article about how that theory has basically been disproven.

scientificamerican.com/article…

(Although I'm of course no developmental psychologist or language theorist and I wouldn't implicitly trust a #popscience publication)

in reply to myrmepropagandist

I don't think it's seen as controversial. Feelings are the conscious representations of emotions. And emotions are fundamentally evaluations of your state or situation - is this thing or situation good? Bad? Scary? Tasty? Sexy? Dangerous?

With that definition, insects definitely have emotions. You could argue that a thermostat embodies the simplest possible emotions (are we too hot? Too cold? Just right?).

in reply to myrmepropagandist

@nottrobin @shiri
I dunno...why would you think that would be the case?

My thoughts are all verbal. I think and interact with the world almost entirely through words (I *can't* think visually—I appear to have some form of aphantasia), and I find LLMs to be total horseshit.

in reply to robin

@robin @myrmepropagandist I did not say "lower end of human intelligence"; I said simply that it's intelligent, in the same way an insect has rudimentary intelligence. I only referenced human capacity as a comparison, since some cognitive abilities it presents fall into human or near-human ranges.

And as always with the counter-argument, you basically described the human mind, with the only difference being natural versus artificial... The reason I don't like "intelligence" as a word is that its usage is usually useless; in your case, dismissing intelligence, as many do, simply because we know how it functions, with the unspoken portion being that you would never accept anything artificial as "intelligence" because we would always understand how it works.

in reply to myrmepropagandist

@Qybat @PTR_K @michaelgemar @mcc I try to find excuses to show folks Perplexity, not because I find it gives better overall results than ChatGPT, but because it has those great (RAG?) footnotes. And I can point to the citations and say, that's where all this comes from
in reply to myrmepropagandist

Oh god, it's started 😔

So thankful I found creators I like before all of this bullshit started flooding everything. Most are on Nebula too, which is an added bonus (hardly go to YT anymore except for a few channels I can't watch on Nebula).