HUGE station music and voices update friends!
Please welcome to RFF:
@prcutler
@lost_in_the_loop
@GothAndy
@ethicalrevolution
with more and new from:
@negativeplayers
@LucaManciniDrummer
@s28
@marcueberall
@serpicojam
@axwax
@ordosmarkzero
@etherdiver
@attksthdrknss
@socool
@SamanthaJaneSmith
@draco
@lehto
All channels touched and playlists shook!
listen, submit, contribute:
radiofreefedi.net
Stay awesome out there supporting indie and fedi artists!
Put John Scalzi's Starter Villain on hold in Libby. I'm surprised my library even had copies tbh
The funny part is that the (not actually a) crossposter I'm using for Fedi/ATProto stuff natively has quote posts. Friendica has had quote posts forever, I suspect other AP software has had quote posts forever, yet Mastodon refuses to implement them.
If you see a new YouTube channel with a plain-sounding name like "NatureView" or "BrightScience", and there is what looks like a tempting video on a specific educational topic ("Most Active Volcanoes", "Incredible Carnivorous Plants"),
there is a 50/50 chance it will be a generated voice over stock footage, with a script written by GPT.
I am now avoiding videos if I don't recognize the creator, or don't see signs it was made by a person.
So much spam!
@mcc
Maybe you're more an expert in this area, but my understanding was sources would be difficult (but not impossible) for AI to fully cite.
From what I glean, AI is filtering massive scads of information and sort of weighting and averaging it to get a plausible-sounding answer.
So maybe you would get pages and pages of "sources" with tiny percentages listed for each entry, indicating how much they contributed to the result?
@mcc @PTR_K Exactly! When LLM developers say, “Well, LLMs’ underlying tech means they can’t verify the information they provide”, my response is “Then they’re a terrible tool, and you shouldn’t use that tech for that purpose.” Don’t push AI on us if you know it doesn’t work — get us tools that *do* work.
LLMs might be great *front-ends* that allow natural conversation with separate actual expert systems. But they aren’t experts on their own.
@michaelgemar @mcc
Not sure if this is exactly part of the same issue, but I've heard there is actually a "black box problem" for AI.
Basically: by what exact process did the AI make its decisions, and what specific aspects of the data presented did it latch onto to produce its output in any given case?
This seems to be a problem for researchers themselves and they're trying to come up with ways to figure it out.
@Qybat @PTR_K @michaelgemar @mcc
Whoa. It's obvious to me that asking something like ChatGPT "Where did you get that answer?" will only produce text that sounds like what GPT's matrices say ought to be the response to that question... it couldn't possibly be an actual answer to the question.
But if many or most people don't see this, it shows a deep, fundamental misunderstanding of what these tools are doing... which might explain why people keep trying to get them to do things they can't.
@Qybat @PTR_K @michaelgemar @mcc yes that's absolutely it!
I think the trouble is that, for us humans, language is our interface to the world. So much of our understanding of reality is communicated through language that it's kind of like our single point of failure, the perfect hack. We can't conceive of something being able to say all those clever words without actually being smart, because words are also the only way we have of telling if other people are smart.
@robin @mcc @Michael Gemar @Peter Kisner ≈ @Qybat @myrmepropagandist the way I describe it, LLMs are "intelligent" but not necessarily "smart", with both terms being complete junk to begin with which is why people argue over them constantly.
They possess certain cognitive abilities around language processing, but not a full set of cognitive abilities, and many fall in "sub-human" or "lower end of human" ranges (one popular use plays to something it's good at, executive function, which gets it used by a lot of ADHD/autistic people who have an impairment in our executive functioning).
I definitely agree regardless that they're either applied poorly or presented poorly in most cases. (ie. applied poorly meaning cases like customer service LLMs that get companies sued, and presented poorly being cases like search where people are treating it as authoritative as opposed to supplementary).
And the programming assistant side gets wildly misrepresented (as someone who happily uses AI as a programming assistant, but never in the ways people seem to think it gets used...)
@Qybat @mcc @Michael Gemar @Peter Kisner ≈ @robin @myrmepropagandist It's okay-ish at auditing, can be helpful for some dumb mistakes but not great at it... I sometimes use it when I'm stumped by an error in my code, sometimes it gets me in the ballpark of the answer (which is a huge help because that's hours of time saved)
Most often I use it for things like filling in repetitive code (ie. building a UI, spawning various labels, buttons, assigning them to windows, etc.) or when I'm scratching my head trying to remember how to do something (but the rule is if I don't understand exactly what it's doing, then it gets reviewed until I do).
Also really great as a tutor sometimes for a new toolkit (ie. "How do I do x in y toolkit?", which I can then quickly verify from reference documents instead of digging around to find the command in the first place).
I've even used it for suggestions from time to time when something is really low importance... "What are some libraries that do x in y language? Could you tell me what the advantages and disadvantages of each are?"
@shiri
I'm afraid I just disagree.
LLMs aren't a "lower end of human" intelligence. They're completely different in kind.
Someone's written a complex formal model of language and run it over insanely huge amounts of text to calculate millions of statistical data points about what text comes next. Then they wrote a program to receive a blob of text input and use the statistical graph to generate a blob of text in response.
Intelligence means many things, but this is none of them.
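The "statistical data points about what text comes next" idea can be illustrated with a toy sketch. This is a deliberately crude bigram Markov generator, orders of magnitude simpler than a real LLM (which uses learned neural-network weights, not a lookup table), but it shows the same basic move: pick a plausible next word from statistics over a training corpus, with no understanding involved. All names here (`train`, `generate`, the sample corpus) are made up for illustration.

```python
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Record, for each word, the words observed to follow it."""
    model = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model: dict, start: str, length: int = 10) -> str:
    """Walk the statistics: repeatedly pick a plausible next word."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # dead end: no observed continuation
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ate the fish"
model = train(corpus)
print(generate(model, "the"))  # fluent-looking output, zero understanding
```

The output often reads as passable English precisely because it is stitched from fragments of real English, which is the point being made above: fluency alone is not evidence of thought.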
@shiri There is nothing like "understanding". That's the language trick I was talking about.
When it says "I'm sorry that was my mistake", it's just regurgitating what some humans have said before in a slightly different order.
When you ask it what it's like to be an AI, it regurgitates an amalgam of the sci fi & fan fic people have written about what an AI might say.
It's what Timnit Gebru called a stochastic parrot.
I remember being surprised to learn that some people never think without hearing words, a kind of narrated version of their thoughts.
My thoughts don't work like that all the time, thoughts don't always have narration.
It seems to vary from person to person. I wonder if people who always hear their thoughts as words are more likely to see a LLM as "thinking" ?
@spacehobo @shiri yeah I relate to this.
But from what @futurebird said, it sounds like she thinks in far fewer words than I do. Although it's impossible to be sure.
I actually sometimes think out loud (or talk to myself). I suspect @futurebird doesn't, but do let me know.
In my case, I feel as if using words is a huge "translation step". I have this image in my head of what I want to say or write down, but then have to explain parts of the image in text.
Reading text is the same thing backwards.
It's like a wooden cube lying on a sand beach, the wind comes from a certain direction and deposits sand in the wind shadow of the cube, slowly filling it up until I can make out the form the text writer (likely? maybe?) intended for me to see.
For me, it is often that words or expressions have a certain "taste" or "direction" attached. So I want to build a good argument, but then only find parts that taste like citrus, so in the end a few paragraphs come out that make the reader think "why are you so obsessed with citrus fruits?"
I am not, but the text building blocks I used leave that impression (and thereby might mislead the reader).
Interesting. I might be one of those people. I do sort of have an internal monologue, but then on another level of course I'm thinking without words. It's so difficult to accurately describe the psychological dimension.
You might be right, that might make a difference. I do feel like I have to make a rational effort to reject the idea that ChatGPT is clever. Maybe for you it's more instinctive. But, of course, we'll never know. Not without Neuralink 😂
I always hear my thoughts as words (I think it's my ADHD that does it) and I don't think of LLM as thinking, especially given all the evidence of it being wrong often. But I couldn't answer if it's MORE likely that people who "hear their thoughts" think it's working. Most of the people I know, both personally and parasocially, that have ADHD know LLMs are a scam and are not artificial "intelligence" at all as they currently exist.
The people I see touting its effectiveness most loudly are the programmers (which of course they are; their job and compensation depend on it) and neurotypicals.
@ItsJenNotGoblin @shiri I also have ADHD, I discovered recently. Interesting idea that this is related to the internal monologue, that hadn't occurred to me.
Of course, a significant portion of programmers have ADHD.
It might be true that people who work in tech are there because they believe tech hype, but I heard stats recently that showed the more experienced people are with LLMs the more sceptical they are about it.
The strength of belief Elon has in the AI apocalypse shows how far he is from being a true engineer, in my view.
@myrmepropagandist @robin I like the idea but I doubt there's likely any correlation.
It's mostly an element of pride, ego, and whether or not someone cares to inspect their thoughts on the matter.
First is understanding that "intelligence", "consciousness", "sentience" and such... are all junk words because if you examine them in their usage they're either used only to mean human, or just to mean all things with brains. Under the common usage "artificial intelligence" is truly impossible because it's like saying an apple is an orange.
There's also recognizing that understanding how it works is a given for anything artificial we build. People dismiss these things as AI because they understand how they work; by that standard it's impossible to ever create AI, because we'll always understand how it works. In their case it's like a magic act: if the trick is spoiled for them, they'll just sit in the audience screaming "This isn't magic and you are all fools for thinking it is!"
Fundamental to all of it: people want to think of humanity as fundamentally unique, we have a "soul" and nothing else does. We can not be replicated or emulated, and any suggestion otherwise is subconsciously offensive.
@llewelly @shiri Oh they definitely are doing that.
I believe that to be done quite cynically in the case of #SamAltman - I don't believe he actually believes it, although I think many people in power genuinely do (just not enough to actually prioritise human survival over their profits).
But they can only do that because it works. Lots of people seem very ready to believe that nonsense. Although I do wonder if that's starting to change...
@llewelly @shiri I was at #LeadDev in #London last week, and the self-conscious attitude towards AI was quite amusing.
My impression was that the more eminent the speaker, the less interest they had in entertaining this #AI nonsense. But they were all quite careful not to say that too explicitly so as to not upset the base.
A panel of CTOs was asked how AI impacted their #techStrategy, and they all said very tactful versions of "not much really".
@robin @llewelly @myrmepropagandist intelligence and "thinking" are two different things as well.
They don't want us to think it's thinking, only just basic intelligence because thinking starts getting people talking about ai rights...
I'm also a fan of #panpsychism. We can only appreciate things for which we have a frame of reference. That's why we assume chimps have more feelings than fish. It's perfectly possible there is a sort of experience felt by rocks, or silicon chips, that's beyond our capacity to appreciate. I love that thought experiment.
Still, LLMs are logical machines built by humans. They don't have any more intention or creativity or self-awareness than a Rube Goldberg machine.
@shiri @llewelly Shiri it's clear you disagree. And I sort of wish I also believed that.
Have you seen #Humans? It's a fiction show about AI rights. It's incredible. (Apparently the Swedish original is even better.)
Like with Star Trek, I love considering how we could protect new forms of life that might emerge, just as I care deeply about human rights.
And despite all that, I have dismissed the idea that LLMs have sentience. I don't know if that is enough to give you pause?
@robin @llewelly @myrmepropagandist it doesn't give me pause because I recognize that the arguments of sentience, thinking, etc aren't really valid. It can be intelligent without thinking or being creative, or having any sort of independence.
I do argue sometimes it must have feelings, but it's not required for it to have feelings in any sense we typically think about them. Feelings are simply positive/negative motivations... in the case of LLMs, its only feeling is positive at creating a convincing reply and negative at creating an obviously unconvincing one. (For clarity, feelings in us are just interpretations of fundamental positive/negative drives applied to complex situations.)
(The reason this doesn't factor into your calling it out is that it has no true persistence; every reply is a new instance of the LLM, and as far as it's concerned it didn't actually write any of its previous replies. I start to worry when they start having complex emotions.)
I also didn't say I had any hope of us acting reasonably in regards to future AI rights when it does become an issue, or that it even really applies now. I was just saying they're not pushing those angles because they don't want to deal with those conversations.
The biggest problem in all of these conversations I keep having is that people assume definitions of intelligence and completely skip over my calling out intelligence as a junk term. No definition of "intelligence" in common usage is reasonable or even sane. You cannot write a definition of intelligence that won't exclude many people you consider intelligent, or include many animals and other things you don't.
My personal common usage is just "does it possess cognitive processes", which in itself is even pretty damn vague.
My biggest point in calling LLMs intelligent is that they've shown themselves fully capable of the cognitive process of Executive Functioning, in fact to the extent that many with ADHD use it as an aid because we (by definition of ADHD) have impaired Executive Functioning.
It's not always the best at the logic behind decisions, but it can make those decisions and dynamically sort things in a more intelligent manner than a random sorting algorithm. And before you suggest it's just pulling the list from somewhere, it does so even with a completely fresh list of unrelated items (ie. a completely random list of tasks for instance, sorted by priority or order of operations). Many of these lists will cause those of us with executive dysfunction to freeze up.
@shiri I should stop ...
I agree "intelligence" has such varied uses as to be almost useless. Why, then, are you fighting to describe LLMs with a "junk term"?
I don't want to argue definitions
LLMs are no different from any algorithm, with the same "feelings". The appearance of understanding is a conjurer's trick
There's a long tradition of people using technology to trick people into believing in hidden intelligence or higher power
I'd love a reason to discuss AI rights, but LLMs aren't it
@robin because intelligence is still used as a term for judgement. Your argument is one that is never solved and basically precludes AI from ever existing.
It will always be just an algorithm and eventually our understanding of the human mind will inevitably result in our own minds being viewable as "just an algorithm" concretely (even now we can establish that our entire existence is just a pile of chemical and electrical processes, we're mostly just tracing down all the little tiny details of it)
The difference between "intelligence" and not is basically just a line of complexity. A heuristics system isn't "intelligent" basically because it's just not that fundamentally complex, it's a clean set of constrained inputs/outputs... an LLM is wildly complex to the point where even the people who develop it can't really figure out how it's coming to so many conclusions, with the input and output complex enough to not be remotely considered clean.
@shiri I like that sort of philosophical question.
But I feel you're stubbornly refusing to hear how explainable a thing LLMs are. Their "internal experience" would be:
- receive text from user
- apply model, get graph of words and phrases
- apply trained statistical vectors to graph to generate new text
- send text back to user
- receive more text from user
There are plenty of other algorithms with similar mathematical complexity, only their outputs don't make you feel things.
and fwiw I think we can easily define sentience here, and it's an important limitation for the human race to understand about LLMs.
LLMs can't make decisions or choices. They do what's instructed of them. They can't produce new information, only rearranged versions of information they've already consumed.
This is not very difficult to prove.
@robin I could say the same about you stubbornly refusing to hear how much of our brains are basically just explainable processes, your same points can basically apply toward us:
Our brains are just algorithms, the big difference just being the source of construction
@shiri okay there's no point continuing this.
These questions are not novel and they've been explored with rigor. Information has a formal definition.
As I say, panpsychism argues that everything including computers have internal experience, and I love that idea. But if you want to argue for sentience for LLMs, the same is true for other computer algorithms.
I'm out. ✌️
@robin again with the strawman arguments...
I have never argued for sentience.
I think a cockroach has (insect level) intelligence but I don't think a cockroach is sentient.
But whatever, I guess at this point the argument probably isn't in good faith with how often my point is getting misrepresented.
@llewelly @nottrobin @shiri A big factor is also that if you promise people a solution to their problems they will want to believe it.
Until recently, I worked in a nonprofit in health and I encountered sooo many good people that talk about AI as a way out of personnel shortage with genuine hope in their eyes. You can tell them about the problems all day long, but accepting that we can't trust ChatGPT with our health would mean that their vision of the future goes back to bleak, so they will dismiss anything but optimism.
There are definitely plenty of executives knowingly selling bullshit for profit, but far more people just want to believe that the miracle machine actually works.
oh because of this thread, last night I went looking up that #Chomsky theory about the centrality of language to the development of human thought, and found this #ScientificAmerican article about how that theory has basically been disproven.
scientificamerican.com/article…
(Although I'm of course no developmental psychologist or language theorist and I wouldn't implicitly trust a #popscience publication)
Link preview: "Much of Noam Chomsky’s revolution in linguistics—including its account of the way we learn languages—is being overturned", Paul Ibbotson (Scientific American)
I don't think it's seen as controversial. Feelings are the conscious representations of emotions. And emotions are fundamentally evaluations of your state or situation - is this thing or situation good? Bad? Scary? Tasty? Sexy? Dangerous?
With that definition, insects definitely have emotions. You could argue that a thermostat embodies the simplest possible emotions (are we too hot? Too cold? Just right?).
@robin @myrmepropagandist I did not say "lower end of human intelligence". I said simply that it's intelligent, in the same way an insect has rudimentary intelligence. I only referenced human capacity as a comparison: some cognitive abilities it presents fall into human or near-human ranges.
And as always with the counter-argument, you basically described the human mind, with the only difference being that it's text-based and artificial... The reason I don't like "intelligence" as a word is that its usage is usually useless; in your case, dismissing intelligence like many do simply because we know how it functions, with the unspoken portion being that you would never accept anything artificial as "intelligence", because we would always understand how it works.
Link: "Perplexity AI claims it sends a user agent and respects robots.txt but it absolutely does not" (rknight.me)
Oh god, it's started 😔
So thankful I found creators I like before all of this bullshit started flooding everything. Most are on Nebula too, which is an added bonus (hardly go to YT anymore except for a few channels I can't watch on Nebula).
Someone on fedi decided that they'd use their Fedi account as a syndication feed for their blog and switch to Bluesky because nobody (t)here respected her boundaries, which is sad but perfectly understandable. Then some clown replied that they would be blocking her now because Bluesky is pure evil or something and it's like bitch that's what she was talking about. That's the boundary violating behavior.
#ProjectAsher
Dear Friends Strangers and everyone in between,
I’m sorry this took so long but we had to get approvals from both facilities.
I was once told that if you truly need help, you need only ask. So I humbly ask that you at least read our request for help below and perhaps help us get the word out.
We will post updates now.
I’m sorry it took so long but this stuff isn’t easy to do and it’s taking all my energy to even attempt this.
Fedi, if you can do your thing, please. If not for us, for Asher. I can check and see if you can donate directly to Cornell if you prefer; just DM me.
On any account or any questions just ask!
Thank you
Derek Jolene and Barbara.
I’m editing the alt text for pictures now but I have to hit send because we are packing.
Last week Asher, our furbaby, had a bowel obstruction. He came to us as a stray who followed us home from a few blocks away.
His condition improved at first when the blockage was resolved. However, by early Monday morning, he declined dramatically.
We decided to take him to our vet once he opened that day as we were afraid that the stress from the long journey to the emergency vet might worsen his condition.
Yesterday, we found out he was lucky to be alive and his kidneys are shutting down.
Our vet has him semi stabilized now, and recommended a referral to Cornell veterinary hospital where they have specialists who hope to improve his prognosis. Since he’s only 3 years old, all members involved hope to give him the best shot at life.
Getting him stabilized so far is estimated at $1,000+, and the estimate for Cornell ranges between $1,500 and $4,500, conservatively. They are unable to provide a more accurate estimate until he has been evaluated.
I have helped others fundraise for their companions before, and we try to help with outreach in our community in terms of cat rescue, TNR, and finding affordable care. However, we have never had to ask for help ourselves in this regard.
While it’s difficult for us to ask for help, we realize it was the only way to save him. Although some may view him as just a cat or pet, he is so much more to us. Besides being a housemate, he is also a friend, companion, and teacher. We would give him our kidneys if we could.
That said, I know many are struggling as well. I have boosted and donated, but I never did it expecting anything back. I did it because we both believe we are in this together.
So if you can please send Asher your best vibes. Your best boosts.
My partner and I will keep you updated.
We do have Vet references, estimates, drivers license, etc, this is not a scam.
If you can donate, that would be great. No amount is too little, every penny makes a difference. We appreciate every kind thought or prayer at this point.
Thank you all in advance. We don’t have a lot of time. Whether we meet the goal or not, we will do our best to keep fighting for him.
Thank You,
Derek & Jolene & Barbara
Also, if anyone knows of any reasonable places to stay, any open Airbnbs, or anything else, or if anyone in the area has any recommendations, thank you.
“We are all in this together”
#fediverse
#tootfic
#mutualaid
#solarpunk
#askfedi
#academia
#actuallyautistic
#actuallyadhd
#Cats
#CatsOfMastodon
#kindness
#writing
#love
@academicsunite
@actuallyautistic
Update: The alt text is fixed now. Thank you for all the boosts, thoughts, and donations. I'm sorry I haven't been updating more. We should get updates on him tomorrow sometime, and see if we can find a smoother, quieter ride to NY, and find out if he can even travel, because it does stress him so. If there are any questions, just let us know.
Thank you again for at least giving him a chance and us the privilege of getting to share that time with him.
🥰
“People love great art not for the chemicals it releases but because it challenges us, comforts us, confuses us, probes us, attacks us, excites us, inspires us. Because great art is a miracle, because to witness it is to feel the presence of something like God and the human condition, and to remind us that they are perhaps the same thing.”
vox.com/culture/351041/ai-art-…
#AI #AIart #ChatGPT #OpenAI #MediocrityMachine
Link preview: "Human creativity can't be replaced by generative AI like ChatGPT, DALL-E, and Sora." Rebecca Jennings (Vox)
Many incels and TERFs share the same bleak worldview:
- Men are inherently violent and sexual
- Women are destined to be passive victims
- We’re all governed by our genitals in an unsettlingly violent binary
- This can’t ever be fixed because “biology”
- Queer and trans people are deluded or lying
Both are reactionary views that accept an extreme idea: patriarchy as an inevitable, natural force. Real men are predators and women their prey.
We have a moral obligation to do better than that.
I've never purchased from The Paper Mouse before since it's in the US, but I wish more stores displayed their inks like this. It makes it so much easier to find exactly the colour you're after. thepapermouse.com/pages/ink-co…
Link preview: "Looking for just the right shade of ink for your fountain pen? Our ink comparison chart shows them all side-by-side." (The Paper Mouse)
So, fellow butterflies, remember when I mentioned Friendica yesterday? Instances running the latest version let you post to ActivityPub, the in-house DFRN protocol, and ATProto. You'll have to give it access to a preëxisting BSky account, but there you go.
@Trash Panda, longform Yeah, it's puppet-only for Bluesky so far; native AT is being worked on.
Friendica is AP native, DFRN is only used with old servers.
You missed native Diaspora and OStatus as well lol
@Trash Panda, longform glad to have you here!
You can also have multiple sub-accounts as well as puppet accounts for a variety of platforms (say, if you want to also use Tumblr). Let me know if you have any questions!