
Trash Panda (friendica only does everything) reshared this.


Okay, #Linux people, I did it. I'm using the Linux Mint OS on a refurbished laptop, and so far everything has been completely seamless. You told me so.



The Chinese word for "crisis" (危机) is not composed of the characters for "danger" and "opportunity." The first character does mean "danger," but the second means "turning point" (the original meaning of the word "crisis"). The myth was perpetuated mainly by a John F. Kennedy campaign speech.



Au(dhd?) whinge


Why does my dumb ass keep bouncing around these platforms instead of just using them consistently? New posts should be novelty enough to let the dopamine flow, so why?



Put John Scalzi's Starter Villain on hold in Libby. I'm surprised my library even had copies tbh

#books

Books & Literature Feed reshared this.



The funny part is that the (not actually a) crossposter I'm using for Fedi/ATProto stuff natively has quote posts. Friendica has had quote posts forever, and I suspect other AP software has too, yet Mastodon refuses to implement them.

#meta #quoteposts #fedimeta #atprometa






If you see a new YouTube channel with a plain-sounding name like "NatureView" or "BrightScience" and what looks like a tempting video on a specific educational topic ("Most Active Volcanoes," "Incredible Carnivorous Plants")…

There is a 50/50 chance it will be a generated voice with stock footage and a script written by GPT.

I am now avoiding videos if I don't recognize the creator, or don't see signs it was made by a person.

So much spam!

in reply to myrmepropagandist

For that matter, I do wish YouTube videos made by humans would cite their sources. But at least humans know what their sources were! The "AI" throws away that information completely (and I believe it's designed to do so on purpose… if they'd designed it to know where its content came from, someone might try to make them pay royalties…)
in reply to mcc

@mcc
Maybe you're more of an expert in this area, but my understanding was that sources would be difficult (though not impossible) for AI to fully cite.

From what I glean, AI filters and weighs massive scads of information, sort of averaging it all to get a plausible-sounding answer.

So maybe you would get pages and pages of "sources" with tiny percentages listed for each entry, indicating how much they contributed to the result?

@mcc
in reply to Peter Kisner ≈

@PTR_K Perhaps it is impractical with the type of system that is popular now. But also perhaps someone who has traceability as a goal would have designed a different type of system?
in reply to mcc

@mcc @PTR_K Exactly! When LLM developers say that “Well, LLMs’ underlying tech means they can’t verify the information they provide”, my response is “Then they’re a terrible tool, and shouldn’t use that tech for that purpose.” Don’t push AI on us if you know it doesn’t work — get us tools that *do* work.

LLMs might be great *front-ends* that allow natural conversation with separate actual expert systems. But they aren’t experts on their own.

in reply to Michael Gemar

@michaelgemar @mcc
Not sure if this is exactly part of the same issue, but I've heard there is actually a "black box problem" for AI.

Basically: What exact process did the AI make its decisions or what specific aspects of the data presented did the AI latch onto in order to provide its output in any given case.

This seems to be a problem for researchers themselves and they're trying to come up with ways to figure it out.

in reply to Peter Kisner ≈

@PTR_K @michaelgemar @mcc It's actually worse than that. You can ask the AI for its reasoning easily enough, but it can't actually answer, because the way these models work internally doesn't retain that information. Instead they will just generate a new answer to that question, based only on their previous answer. A generative predictor has no memory at all, other than its own output.
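That statelessness can be made concrete with a toy sketch (purely illustrative; `toy_next_token` and `reply` are invented stand-ins, not any real model or API). The only state the loop carries between turns is the transcript itself, so a follow-up like "what was your reasoning?" can only be answered from the text of the previous reply:

```python
import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat"]

def toy_next_token(transcript):
    # Stand-in for a real model: picks the next word from the transcript
    # alone, deterministically. No hidden state survives between calls.
    rng = random.Random(" ".join(transcript))
    return rng.choice(VOCAB)

def reply(transcript, n_tokens=5):
    # Generate a reply by repeatedly extending the transcript.
    out = list(transcript)
    for _ in range(n_tokens):
        out.append(toy_next_token(out))
    return out

# Each "turn" just re-feeds the growing transcript. Asking the model to
# explain its last answer only conditions on that answer's *text* — any
# process that produced it is gone.
turn1 = reply(["user:", "what", "is", "a", "cat"])
turn2 = reply(turn1 + ["user:", "why", "did", "you", "say", "that"])
```

The same transcript always yields the same continuation here, which makes the point visible: there is nowhere for "reasoning" to live except the emitted tokens.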
in reply to Qybat

@Qybat @PTR_K @michaelgemar @mcc

Whoa. It's obvious to me that asking something like ChatGPT "Where did you get that answer?" will only produce text that sounds like what GPT's matrices say ought to be the response to that question... and it couldn't possibly be an actual answer to the question.

But if many or most people don't see this, it shows a deep, fundamental misunderstanding of what these tools are doing... which might explain why people keep trying to get them to do things they can't.

in reply to myrmepropagandist

@Qybat @PTR_K @michaelgemar @mcc yes that's absolutely it!

I think the trouble is that, for us humans, language is our interface to the world. So much of our understanding of reality is communicated through language that it's kind of our single point of failure, the perfect hack. We can't conceive of something being able to say all those clever words without actually being smart, because words are also the only way we have of telling whether other people are smart.

in reply to robin

@robin @mcc @Michael Gemar @Peter Kisner ≈ @Qybat @myrmepropagandist the way I describe it, LLMs are "intelligent" but not necessarily "smart," with both terms being complete junk to begin with, which is why people argue over them constantly.

They possess certain cognitive abilities around language processing, but not a full set of cognitive abilities, and many fall into "sub-human" or "lower end of human" ranges. (One ability it's good at is executive function, which gets it used by a lot of ADHD/autistic people who have an impairment in our executive functioning.)

I definitely agree regardless that they're either applied poorly or presented poorly in most cases. (ie. applied poorly meaning cases like customer service LLMs that get companies sued, and presented poorly being cases like search where people are treating it as authoritative as opposed to supplementary).

And the programming assistant side gets wildly misrepresented (as someone who happily uses AI as a programming assistant, but never in the ways people seem to think it gets used...)

in reply to Shiri Bailem

@shiri @mcc @michaelgemar @PTR_K @nottrobin I can see LLMs being useful for security auditing code - they might be able to point out common errors like failing to handle error conditions or the classic buffer overflow. Not with any degree of reliability, but just enough to highlight the places which will need a human programmer to examine closely.
in reply to Qybat

@Qybat @mcc @Michael Gemar @Peter Kisner ≈ @robin @myrmepropagandist It's okay-ish at auditing, can be helpful for some dumb mistakes but not great at it... I sometimes use it when I'm stumped by an error in my code, sometimes it gets me in the ballpark of the answer (which is a huge help because that's hours of time saved)

Most often I use it for things like filling in repetitive code (ie. building a UI, spawning various labels, buttons, assigning them to windows, etc.) or when I'm scratching my head trying to remember how to do something (but the rule is if I don't understand exactly what it's doing, then it gets reviewed until I do).

Also really great as a tutor sometimes for a new toolkit (ie. "How do I do x in y toolkit?", which I can then quickly verify from reference documents instead of digging around to find the command in the first place).

I've even used it for suggestions from time to time when something is really low importance... "What are some libraries that do x in y language? Could you tell me the advantages and disadvantages of each?"

in reply to Shiri Bailem

@shiri
I'm afraid I just disagree.

LLMs aren't a "lower end of human" intelligence. They're completely different in kind.

Someone's written a complex formal model of language and run it over insanely huge amounts of text to calculate millions of statistical data points about what text comes next. Then they wrote a program to receive a blob of text input and use the statistical graph to generate a blob of text in response.

Intelligence means many things, but this is none of them.

in reply to robin

@shiri There is nothing like "understanding". That's the language trick I was talking about.

When it says "I'm sorry that was my mistake", it's just regurgitating what some humans have said before in a slightly different order.

When you ask it what it's like to be an AI, it regurgitates an amalgam of the sci fi & fan fic people have written about what an AI might say.

It's what Timnit Gebru called a stochastic parrot.

in reply to robin

@nottrobin @shiri

I remember being surprised to learn that some people never think without hearing words, a kind of narrated version of their thoughts.

My thoughts don't work like that all the time; thoughts don't always have narration.

It seems to vary from person to person. I wonder if people who always hear their thoughts as words are more likely to see a LLM as "thinking" ?

in reply to myrmepropagandist

Feynman once said in a talk that when he was young, he believed that all thoughts were words. His friend heard this, and asked him something like "Oh yeah? Then what words does your brain say when you imagine that crazy shape of the camshaft in your dad's car?"
Feynman then realised he'd been overstating that point.

robin reshared this.

in reply to Space Hobo 🧋

@spacehobo @shiri yeah I relate to this.

But from what @futurebird said, it sounds like she thinks in far fewer words than I do. Although it's impossible to be sure.

I actually sometimes think out loud (or talk to myself). I suspect @futurebird doesn't, but do let me know.

in reply to robin

@nottrobin @spacehobo @shiri

In my case, I feel as if using words is a huge "translation step". I have this image in my head of what I want to say or write down, but then have to explain parts of the image in text.
Reading text is the same thing backwards.

It's like a wooden cube lying on a sand beach, the wind comes from a certain direction and deposits sand in the wind shadow of the cube, slowly filling it up until I can make out the form the text writer (likely? maybe?) intended for me to see.

in reply to Tuxedo Wa-Kamen

@wakame @spacehobo @shiri this is definitely true of me too, but it's also true that I often come up with these incredible articulations of things in my head, in words, but then for some reason I can never turn them into good words in the real world. I don't quite understand what's with that.
in reply to robin

@nottrobin @spacehobo @shiri

For me, it is often that words or expressions have a certain "taste" or "direction" attached. So I want to build a good argument, but then only find parts that taste like citrus, so in the end a few paragraphs come out that make the reader think "why are you so obsessed with citrus fruits?"

I am not, but the text building blocks I used leave that impression (and thereby might mislead the reader).

in reply to myrmepropagandist

@nottrobin @shiri I hear at least narration anytime I'm thinking and it's appropriate but I very quickly realized LLM "intelligence" was bogus. I am a programmer, tho, so I understood what was going on under the hood.
in reply to CurtAdams

@CurtAdams
Same here. I often have to think out loud to really get my brain around something I'm thinking about. I totally have a running commentary. I'm slowly thinking these words in my head as I hunt and peck them on my phone 😄 But I understand how LLMs work.

Although, I also have somewhat of a history with machine learning. But I feel like if I didn't I still would have "looked under the hood" out of scepticism.

@futurebird @nottrobin @shiri

in reply to myrmepropagandist

Interesting. I might be one of those people. I do sort of have an internal monologue, but then on another level of course I'm thinking without words. It's so difficult to accurately describe the psychological dimension.

You might be right, that might make a difference. I do feel like I have to make a rational effort to reject the idea that ChatGPT is clever. Maybe for you it's more instinctive. But, of course, we'll never know. Not without Neuralink 😂

in reply to myrmepropagandist

@nottrobin @shiri FWIW, I mostly think in words (not 100% always of course!) and still recognize that LLMs are definitely not "thinking."

Though of course I understand their underlying mechanisms better than most also, so that obviously plays a part in this.

in reply to myrmepropagandist

Without any statistical relevance, I, as a person with a constant inner monologue, do not see LLMs as "thinking".

Not at all. How could they? They don't even have a consciousness.

Animals most certainly have one. I would say, animals definitely do think, just not in words.

@nottrobin @shiri

in reply to myrmepropagandist

@nottrobin @shiri I definitely remember thought for me being primarily visual when I was very very young, flipping to the narrated internal monologue thing later. I do wonder if the convention for expressing thoughts as narration in film/TV had anything to do with it
in reply to myrmepropagandist

I always hear my thoughts as words (I think it's my ADHD that does it), and I don't think of LLMs as thinking, especially given all the evidence of them being wrong so often. But I couldn't say whether people who "hear their thoughts" are MORE likely to think it's working. Most of the people I know, both personally and parasocially, who have ADHD know LLMs are a scam and not artificial "intelligence" at all as they currently exist.

The people I see touting its effectiveness most loudly are programmers (of course, since their jobs and compensation depend on it) and neurotypicals.

in reply to urJent message

@ItsJenNotGoblin @shiri I also have ADHD, I discovered recently. Interesting idea that this is related to the internal monologue, that hadn't occurred to me.

Of course, a significant portion of programmers have ADHD.

It might be true that people who work in tech are there because they believe tech hype, but I heard stats recently showing that the more experience people have with LLMs, the more sceptical they are about them.

in reply to robin

@ItsJenNotGoblin @shiri

The strength of belief Elon has in the AI apocalypse shows how far he is from being a true engineer, in my view.

in reply to myrmepropagandist

@nottrobin @shiri Wow. Stupid me, I thought everybody had that voice in their head, enunciating words as one thought them.
Admittedly, there are a few times for me when the wheels aren't spinning constantly (like, when out birding). But mostly, fairly nonstop stream.
Actually used to play a mental game ("in case someone was reading my thoughts"), where I'd think of one thing in a loud inner voice, but also simultaneously carry on a secondary thought stream ... "below it". TIL
in reply to myrmepropagandist

@nottrobin @shiri I don’t know about the LLM thing but I wish the voices would shut up at times.
in reply to myrmepropagandist

@myrmepropagandist @robin I like the idea, but I doubt there's any correlation.

It's mostly an element of pride, ego, and whether or not someone cares to inspect their thoughts on the matter.

First is understanding that "intelligence," "consciousness," "sentience," and such are all junk words, because if you examine them in their usage, they're either used only to mean humans or to mean all things with brains. Under the common usage, "artificial intelligence" is truly impossible, because it's like saying an apple is an orange.

There's also recognizing that, for anything artificial, understanding how it works is a given. People dismiss these things as AI because they understand how they work; by that standard, creating AI is impossible, because we'll always understand how it works. It's like a magic act: once the trick is spoiled for them, they'll just sit in the audience screaming "This isn't magic and you are all fools for thinking it is!"

Fundamental to all of it: people want to think of humanity as fundamentally unique, we have a "soul" and nothing else does. We can not be replicated or emulated, and any suggestion otherwise is subconsciously offensive.

in reply to myrmepropagandist

@nottrobin @shiri
I am convinced the major factor is that corporations are funding a multi-million $$ propaganda campaign to convince people that LLMs are "thinking".
in reply to llewelly

@llewelly @shiri Oh they definitely are doing that.

I believe that to be done quite cynically in the case of #SamAltman - I don't believe he actually believes it, although I think many people in power genuinely do (just not enough to actually prioritise human survival over their profits).

But they can only do that because it works. Lots of people seem very ready to believe that nonsense. Although I do wonder if that's starting to change...

in reply to robin

@llewelly @shiri I was at #LeadDev in #London last week, and the self-conscious attitude towards AI was quite amusing.

My impression was that the more eminent the speaker, the less interest they had in entertaining this #AI nonsense. But they were all quite careful not to say that too explicitly so as to not upset the base.

A panel of CTOs was asked how AI impacted their #techStrategy, and they all said very tactful versions of "not much really".

in reply to robin

@robin @llewelly @myrmepropagandist intelligence and "thinking" are two different things as well.

They don't want us to think it's thinking, just basic intelligence, because thinking starts getting people talking about AI rights...

in reply to Shiri Bailem

@shiri @llewelly

I'm also a fan of #panpsychism. We can only appreciate things for which we have a frame of reference. That's why we assume chimps have more feelings than fish. It's perfectly possible there is a sort of experience felt by rocks, or silicon chips, that's beyond our capacity to appreciate. I love that thought experiment.

Still, LLMs are logical machines built by humans. They don't have any more intention or creativity or self-awareness than a Rube Goldberg machine.

in reply to robin

@shiri @llewelly Shiri it's clear you disagree. And I sort of wish I also believed that.

Have you seen #Humans? It's a fiction show about AI rights. It's incredible. (Apparently the Swedish original is even better.)

Like with Star Trek, I love considering how we could protect new forms of life that might emerge, just as I care deeply about human rights.

And despite all that, I have dismissed the idea that LLMs have sentience. I don't know if that is enough to give you pause?

in reply to robin

@robin @llewelly @myrmepropagandist it doesn't give me pause, because I recognize that the arguments about sentience, thinking, etc. aren't really valid. It can be intelligent without thinking, being creative, or having any sort of independence.

I do argue sometimes that it must have feelings, but it's not required for it to have feelings in any sense we typically think about them. Feelings are simply positive/negative motivations... in the case of an LLM, its only feelings are positive at creating a convincing reply and negative at creating an obviously unconvincing one. (For clarity, feelings in us are just interpretations of fundamental positive/negative drives applied to complex situations.)

(The reason this doesn't factor into your calling it out is that it has no true persistence: every reply is a new instance of the LLM, and as far as it's concerned it didn't actually write any of its previous replies. I'll start to worry when they start having complex emotions.)

I also didn't say I had any hope of us acting reasonably in regards to future AI rights when it does become an issue, or that it even really applies now. I was just saying they're not pushing those angles because they don't want to deal with those conversations.

The biggest problem in all of these conversations I keep having is that people assume definitions of intelligence and completely skip over my calling out "intelligence" as a junk term. No definition of "intelligence" in common usage is reasonable or even sane. You cannot make a definition of intelligence that won't exclude many people you consider intelligent, or include many animals or other things you don't.

My personal common usage is just "does it possess cognitive processes", which in itself is even pretty damn vague.

My biggest point in calling LLMs intelligent is that they've shown themselves fully capable of the cognitive process of Executive Functioning, in fact to the extent that many with ADHD use it as an aid because we (by definition of ADHD) have impaired Executive Functioning.

It's not always the best at the logic behind decisions, but it can make those decisions and dynamically sort things more intelligently than a random sorting algorithm. And before you suggest it's just pulling the list from somewhere: it does so even with a completely fresh list of unrelated items (e.g. a completely random list of tasks, sorted by priority or order of operations). Many such lists will cause those of us with executive dysfunction to freeze up.

in reply to Shiri Bailem

@shiri I should stop ...

I agree "intelligence" has such varied uses as to be almost useless. Why, then, are you fighting to describe LLMs with a "junk term"?

I don't want to argue definitions

LLMs are no different from any algorithm, with the same "feelings". The appearance of understanding is a conjurer's trick

There's a long tradition of people using technology to trick people into believing in hidden intelligence or higher power

I'd love a reason to discuss AI rights, but LLMs aren't it

in reply to robin

@robin because intelligence is still used as a term for judgement. Your argument is one that is never solved and basically precludes AI from ever existing.

It will always be just an algorithm and eventually our understanding of the human mind will inevitably result in our own minds being viewable as "just an algorithm" concretely (even now we can establish that our entire existence is just a pile of chemical and electrical processes, we're mostly just tracing down all the little tiny details of it)

The difference between "intelligence" and not is basically just a line of complexity. A heuristics system isn't "intelligent" basically because it's just not that fundamentally complex, it's a clean set of constrained inputs/outputs... an LLM is wildly complex to the point where even the people who develop it can't really figure out how it's coming to so many conclusions, with the input and output complex enough to not be remotely considered clean.

in reply to Shiri Bailem

@shiri I like that sort of philosophical question.

But I feel you're stubbornly refusing to hear how explainable a thing LLMs are. Their "internal experience" would be:
- receive text from user
- apply model, get graph of words and phrases
- apply trained statistical vectors to graph to generate new text
- send text back to user
- receive more text from user

There are plenty of other algorithms with similar mathematical complexity, only their outputs don't make you feel things.

in reply to robin

and fwiw I think we can easily define sentience here, and it's an important limitation for the human race to understand about LLMs.

LLMs can't make decisions or choices. They do what's instructed of them. They can't produce new information, only rearranged versions of information they've already consumed.

This is not very difficult to prove.

in reply to robin

@robin I could say the same about you stubbornly refusing to hear how much of our brains are basically just explainable processes, your same points can basically apply toward us:

  • receive sensory input
  • apply model (various input processing centers of the brain)
  • apply learned experiences and knowledge (aka. statistical vectors)
  • act on results
  • await new sensory input

Our brains are just algorithms, the big difference just being the source of construction

in reply to Shiri Bailem

@robin what qualifies as "new information" is debatable too... how often are we generating anything that's legitimately new information instead of just re-arranged versions of information we've already consumed?
in reply to Shiri Bailem

@shiri okay there's no point continuing this.

These questions are not novel and they've been explored with rigor. Information has a formal definition.

As I say, panpsychism argues that everything including computers have internal experience, and I love that idea. But if you want to argue for sentience for LLMs, the same is true for other computer algorithms.

I'm out. ✌️

in reply to robin

@robin again with the strawman arguments...

I have never argued for sentience.

I think a cockroach has (insect level) intelligence but I don't think a cockroach is sentient.

But whatever, I guess at this point the argument probably isn't in good faith with how often my point is getting misrepresented.

in reply to llewelly

@llewelly @nottrobin @shiri A big factor is also that if you promise people a solution to their problems they will want to believe it.

Until recently I worked at a nonprofit in health, and I encountered sooo many good people who talk about AI as a way out of the personnel shortage with genuine hope in their eyes. You can tell them about the problems all day long, but accepting that we can't trust ChatGPT with our health would mean their vision of the future goes back to bleak, so they will dismiss anything but optimism.

There are definitely plenty of executives knowingly selling bullshit for profit, but far more people just want to believe that the miracle machine actually works.

in reply to myrmepropagandist

@nottrobin @shiri
nearly all of my thoughts come with an internal narrative, but the narration is often not the only aspect of the thought; some thoughts come with feelings, images, and other sensations.
in reply to llewelly

@llewelly @nottrobin @shiri

My controversial stance is that thinking isn't possible without feelings. At least not thinking as we know it.

(and the other controversial idea is that insects have very simple feelings.)

robin reshared this.

in reply to myrmepropagandist

oh because of this thread, last night I went looking up that #Chomsky theory about the centrality of language to the development of human thought, and found this #ScientificAmerican article about how that theory has basically been disproven.

https://www.scientificamerican.com/article/evidence-rebuts-chomsky-s-theory-of-language-learning/

(Although I'm of course no developmental psychologist or language theorist and I wouldn't implicitly trust a #popscience publication)

in reply to myrmepropagandist

I don't think it's seen as controversial. Feelings are the conscious representations of emotions. And emotions are fundamentally evaluations of your state or situation - is this thing or situation good? Bad? Scary? Tasty? Sexy? Dangerous?

With that definition, insects definitely have emotions. You could argue that a thermostat embodies the simplest possible emotions (are we too hot? Too cold? Just right?).

in reply to myrmepropagandist

@nottrobin @shiri
I dunno...why would you think that would be the case?

My thoughts are all verbal. I think and interact with the world almost entirely through words (I *can't* think visually—I appear to have some form of aphantasia), and I find LLMs to be total horseshit.

in reply to robin

@robin @myrmepropagandist I did not say "lower end of human intelligence." I said simply that it's intelligent, in the same way an insect has rudimentary intelligence. I only referenced human capacity to note that some cognitive abilities it presents fall into human or near-human ranges.

And as always with this counterargument, you've basically described the human mind, the only differences being that it's text-based and artificial... The reason I don't like "intelligence" as a word is that its usage is usually useless; in your case, dismissing intelligence, as many do, simply because we know how it functions, with the unspoken portion being that you would never accept anything artificial as "intelligent," because we will always understand how it works.

in reply to myrmepropagandist

@Qybat @PTR_K @michaelgemar @mcc I try to find excuses to show folks Perplexity, not because I find it gives better overall results than ChatGPT, but because it has those great (RAG?) footnotes. And I can point to the citations and say, that's where all this comes from
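Those footnotes are the core of retrieval-augmented generation (RAG): passages are fetched first, and the answer is built to point back at them. A minimal keyword-overlap sketch (illustrative only; real systems use embedding similarity and an actual language model, and every name here is invented):

```python
def retrieve(query, corpus, k=2):
    # Score each document by how many query words it shares — a crude
    # stand-in for embedding similarity — and keep the top k.
    q = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def answer_with_footnotes(query, corpus):
    # A real RAG system would hand `sources` to the model and ask it to
    # cite them; here we just attach numbered footnotes to the passages,
    # which is what makes the provenance inspectable.
    sources = retrieve(query, corpus)
    body = " ".join(f"{s} [{i}]" for i, s in enumerate(sources, 1))
    footnotes = [f"[{i}] {s}" for i, s in enumerate(sources, 1)]
    return body, footnotes
```

The design point is that citation comes from the retrieval step, not from the model "remembering" its training data, which is why RAG front-ends can show sources while a bare LLM cannot.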
in reply to myrmepropagandist

Oh god, it's started 😔

So thankful I found creators I like before all of this bullshit started flooding everything. Most are on Nebula too, which is an added bonus (hardly go to YT anymore except for a few channels I can't watch on Nebula).




I don't love the platforms I'm on 100% of the time, but calling one or more of them evil seems inappropriate and disproportionate. I don't get people who think Fedi or Bluesky should fail.



It feels like the majority of the software industry has just completely given up on security, stability, and performance. Or not even given up; rather, we just decided it wasn't important. There's nothing too sacred to be sacrificed for convenience or clicks.
in reply to Drew DeVault

I work mostly in web development, and I notice these are hardly ever a topic there, especially performance. Just rent a bigger server. My own rule is that I won't upgrade the server until I've optimized my code to the point where I can't see any optimization strategies left. I don't think developers are fully to blame, though. We probably should speak up more. But also, there should be enough time to do it well.



Shiri Bailem reshared this.




#ProjectAsher
Dear Friends Strangers and everyone in between,
I’m sorry this took so long but we had to get approvals from both facilities.

https://www.gofundme.com/f/project-asher

I was once told that if you truly need help you need only ask. So humbly I ask you at least read our ask for help below and at least perhaps help us get the word out.

We will post updates now.

I’m sorry it took so long but this stuff isn’t easy to do and it’s taking all my energy to even attempt this.

Fedi, if you can do your thing, please. If not for us, then for Asher. I can check and see if you can donate directly to Cornell or …, if you prefer; just DM me.
On any account or any questions just ask!
Thank you
Derek Jolene and Barbara.

I’m editing the alt text for pictures now but I have to hit send because we are packing.

Last week Asher our Furbaby had a bowel obstruction. He came to us as a stray who followed us home from a few blocks away.

His condition improved at first when the blockage was resolved. However, by early Monday morning, he had declined dramatically.

We decided to take him to our vet once he opened that day as we were afraid that the stress from the long journey to the emergency vet might worsen his condition.

Yesterday, we found out he was lucky to be alive and his kidneys are shutting down.

Our vet has him semi stabilized now, and recommended a referral to Cornell veterinary hospital where they have specialists who hope to improve his prognosis. Since he’s only 3 years old, all members involved hope to give him the best shot at life.

Getting him stabilized so far is estimated at $1,000+, and the estimate for Cornell ranges between $1,500 and $4,500, conservatively. They are unable to provide a more accurate estimate until he has been evaluated.

I have helped others fundraise for their companions before, and we try to help with outreach in our community in terms of cat rescue, TNR, and finding affordable care. However, we have never had to ask for help ourselves in this regard.

While it’s difficult for us to ask for help, we realized it was the only way to save him. Although some may view him as just a cat or pet, he is so much more to us. Besides being a housemate, he is also a friend, companion, and teacher. We would give him our kidneys if we could.

That said, I know many are struggling as well. I have boosted and donated, but I never did it expecting anything back. I did it because we both believe we are in this together.

So, if you can, please send Asher your best vibes. Your best boosts.
My partner and I will keep you updated.

We do have vet references, estimates, a driver's license, etc.; this is not a scam.

If you can donate, that would be great. No amount is too little; every penny makes a difference. We appreciate every kind thought or prayer at this point.

Thank you all in advance. We don’t have a lot of time. Whether we meet the goal or not, we will do our best to keep fighting for him.

Thank You,
Derek & Jolene & Barbara

https://www.gofundme.com/f/project-asher

Also, if anyone knows of any reasonable places to stay, any Airbnbs open, or anything else, or if anyone in the area has any recommendations, thank you.

“We are all in this together”

#ProjectAsher

#fediverse
#tootfic
#mutualaid
#solarpunk
#askfedi
#academia
#actuallyautistic
#actuallyadhd
#Cats
#CatsOfMastodon
#kindness
#writing
#love
@academicsunite
@actuallyautistic
Update: The alt text is fixed now. Thank you for all the boosts, thoughts, and donations. I’m sorry I haven’t been updating more. We should get updates on him sometime tomorrow, and see if we can find a smoother, quieter ride to NY, and find out if he can even travel, because it does stress him so. If there are any questions, just let us know.
Thank you again for at least giving him a chance and us the privilege of getting to share that time with him.
🥰

reshared this

in reply to EveryDay Human Derek

ActuallyAutistic group reshared this.

Boost this and please send to people who will boost thank you fedi we will keep you updated.

Trash Panda (friendica only does everything) reshared this.


“People love great art not for the chemicals it releases but because it challenges us, comforts us, confuses us, probes us, attacks us, excites us, inspires us. Because great art is a miracle, because to witness it is to feel the presence of something like God and the human condition, and to remind us that they are perhaps the same thing.”

https://www.vox.com/culture/351041/ai-art-chatgpt-dall-e-sora-suno-human-creativity

#AI #AIart #ChatGPT #OpenAI #MediocrityMachine

in reply to Kydia Music

Kydia Music reshared this.

Something I keep coming back to: we not only love great art for reasons that can't easily be commodified, we love artists for even less commodifiable reasons.

People write whole books of art appreciation, spending loads of time on artists' processes. What's that podcast about how music is made? Song Exploder? Imagine an episode of Song Exploder about an AI-generated song…

reshared this

in reply to mibwright

@mibwright

Oh, a Song Exploder episode about AI-generated music would be hilarious. If it had no human input other than the prompt, it would last 30 seconds.
“How did this song come about?”

“I wanted to write a song about farts and holding them in at work. So I typed the prompt ‘90’s hard rock song about holding in farts at work; male vocals, solo guitar, fast tempo, driving rhythm’ and Suno spit this out in 5 seconds.”

in reply to Kydia Music

@mibwright
And of course if it *did* have more human input (lyrics, tweaking the song afterwards to clean up all the artifacts, having a real vocalist re-sing it) the episode would focus on that, since the human element of any creation is the most significant aspect and the part people would be interested in. Especially since these AI companies are evasive about how their technology works.
in reply to mibwright

Kydia Music reshared this.

@mibwright
It’s incredible how much more meaningful any piece of art becomes when you’ve met the artist, if you’ve heard them speak or chatted with them or even just read their backstory. Suddenly you clearly see their artwork as an expression of an individual mind, a record of thoughts, feelings, time, effort, experience, judgment. The artwork reveals itself to be a form of communication that you can ponder with your own mind and heart as a fellow human.




I was reading a Bluesky post by Roxane Gay about how Rx drug commercials are weirdly upbeat and that unearthed the memory of this haunting commercial for heart failure meds that used old folks singing "Tomorrow" from Annie and that activates my fight-or-flight response to this day.

Trash Panda (friendica only does everything) reshared this.


Many incels and TERFs share the same bleak worldview:

- Men are inherently violent and sexual
- Women are destined to be passive victims
- We’re all governed by our genitals in an unsettlingly violent binary
- This can’t ever be fixed because “biology”
- Queer and trans people are deluded or lying

Both are reactionary views that accept an extreme idea: patriarchy as an inevitable, natural force. Real men are predators and women their prey.

We have a moral obligation to do better than that.



C'mon, phone, there's no reason for you to be flaking out on me. Just let me press the button.


Trash Panda (friendica only does everything) reshared this.


I've never purchased from The Paper Mouse before since it's in the US, but I wish more stores displayed their inks like this. It makes it so much easier to find exactly the colour you're after. https://www.thepapermouse.com/pages/ink-color-chart

#FountainPenInk


Trash Panda (friendica only does everything) reshared this.


what's your favorite raspberry pi? please respond by saying "my favorite raspberry pi is..." and then suggest another single board computer that isn't a raspberry pi. im trying to genericize the trademark, you see
in reply to josef

my favorite raspberry pi is a m3 mac mini. those i think fit the definition of a raspberry pi right? :3


Content warning: big dumb long story viewable at this link



Why did we ever fall away from BBCode? As far as I can tell it can do way more than Markdown and it doesn't have 90 million different implementations.

#Markdown #BBCode

covracer reshared this.



Lemmy <-> Friendica federation is a bit jank. I wonder if there's anything I'm able to do on my end.

#lemmy #friendica #apMeta

This entry was edited (2 weeks ago)


So, fellow butterflies, remember when I mentioned Friendica yesterday? Instances running the latest version let you post to ActivityPub, the in-house DFRN protocol, and ATProto. You'll have to give it access to a preëxisting BSky account, but there you go.

#atproto #friendica #ActivityPub

This entry was edited (2 weeks ago)
in reply to Trash Panda (friendica only does everything)

@Trash Panda, longform Yeah, it's puppet-only for Bluesky so far; native AT is being worked on.

Friendica is AP native, DFRN is only used with old servers.

You missed native Diaspora and OStatus as well lol



Because I borked my account on Libranet, I've set up a new friendica here. I'm happy that I can actually do both microblogging and forum-posting here.
This entry was edited (2 weeks ago)
in reply to Trash Panda (friendica only does everything)

@Trash Panda, longform glad to have you here!

You can also have multiple sub-accounts as well as puppet accounts for a variety of platforms (say, if you want to also use Tumblr). Let me know if you have any questions!