A minority struggling to fit in a world not designed by them.
Conversing with the new live AIs can seem like chatting with an autistic friend.
Lately I've been using Google's Gemini "deep research" option. Give it a paragraph and it scans mostly research sources and produces a report. As it works, it describes its "thought processes".
It feels very much like I'm chatting with an info-dumping autistic friend about a shared special interest.
Shiri Bailem likes this.
Shiri Bailem
in reply to darrellpf • •@darrellpf Brace yourself talking about it, people come out of the woodwork to crap on anyone saying anything remotely positive about AI, even if it's being used for accessibility purposes (they refuse to believe it can actually do anything in the first place, and the rest of the time they're too reactively angry about it to think about the impact of their actions)
Also, in before someone goes on some rant along those lines: just be on the lookout for hallucinations when using it for search or any sort of information (double-check its answers before you rely on them or spread them).
marionline and Bernie Luckily Does It like this.
ActuallyAutistic group and The Fediverse Mule :D reshared this.
darrellpf
in reply to Shiri Bailem • • •@shiri mas.to/users/darrellpf I've been in the computer industry since its inception. The old rule of "garbage in, garbage out" still applies.
You can get any kind of output you wish, and any kind of sycophantic response and spin you like, depending on the input and the questions.
I stopped myself from moving on to talk about propaganda, controlling the media or child rearing.
@actuallyautistic
Shiri Bailem likes this.
ActuallyAutistic group reshared this.
Murdoc Addams 🧛🏻:ri: 🇨🇦
in reply to darrellpf • • •I've noticed this myself actually. I tried playing around with them, mostly asking questions I already knew the answers to, and found that I got pretty good results (not perfect, but mostly correct). Not that this makes me doubt the bad results that others have gotten. Rather, what I suspect is the case is that I just know better how to talk to them, and what to use them for. I also know to check any results. But overall I found the experience pleasant, I like the way they organize information when they report it to me. (This makes me think of the stereotype that autistic people think and communicate like computers. 😆 )
Fun fact: When I was doing this, one of the things I was asking it about was autism. Through that conversation I ended up learning about employment support services for autistic people that I didn't know existed, and it was able to point me to real organizations in my area, even my city! So I'm going through that process right now, and if I end up getting employment because of them, I'll have A.I./LLM to thank for it.
(To be clear, I am well aware of their limitations and how they can give incorrect information. Like I said, I know how to ask, and I know to check that info after. I didn't even believe that those organizations were real when it told me about them, but I checked, and they all were.)
ActuallyAutistic group reshared this.
Shiri Bailem
in reply to Murdoc Addams 🧛🏻:ri: 🇨🇦 • •@Murdoc Addams 🧛🏻:ri: 🇨🇦 @darrellpf There are a few factors behind the bad results people talk about.
The biggest is that they're usually talking about their experiences with earlier models and without web search integration (i.e. asking the original ChatGPT 3.5 questions). Newer models have more information and are a little better at catching themselves.
The other element is the big persistent one: LLMs struggle with saying "I don't know". They're built to mimic responses, and the expected response to a question is an answer, the best response being a correct answer... but they also optimize for efficient responses, and nothing is more efficient than answering everything with "I don't know"... so to avoid that, when the model doesn't know, it invents a plausible answer instead. (This is called a hallucination in the AI field.)
And because they are looking for reasons to dismiss them, they also pull edge cases:
* Poorly configured AI systems, like how Google's AI responses would often take troll Reddit posts at face value, because Google just kinda shoved the model in there without much consideration
* That it struggles with tasks that are not part of its logic and processing. (Think of it like the language center of the brain with the bare minimum of any other parts.) For example, it can struggle with doing math, especially trick word problems. (They love to pull that out and claim it can't possibly be AI because the language-processing AI gets tripped up by tricky math.)
darrellpf, Woozle Hypertwin and Murdoc Addams 🧛🏻:ri: 🇨🇦 like this.
ActuallyAutistic group reshared this.