

99% of the time "Judeo-Christian" is antisemitic. And yes, I will absolutely elaborate on this if asked.

Credit: @Rabbit Cohen

Edit: because this blew up far more than I expected and multiple people have asked me to elaborate, here's a copy of my elaboration, with follow-up questions encouraged:

It's a messy topic and it's late here (I'm a bit sleepy), so feel free to ask follow up questions.

The short version of it is that Judeo-Christian is almost always used in one of two harmful ways:

1) To try and give more credibility and weight to something that is purely Christian by claiming that it's part of Judaism as well when it's not (like the above example, because Judaism explicitly permits abortions)
2) To try and talk about broader groupings of related faiths while ignoring the many other Abrahamic faiths ("Abrahamic" is the proper term, though that one more often hurts the lesser-known groups; don't use it unless you know it also applies to groups like the Baháʼí, which I'll admit even I know next to nothing about, but it's valid here because all I'm doing is naming their religious family)

Because many Christians (cough most cough) teach a bastardized form of Judaism through the lens of Christianity, and because that's the only exposure many get to our faith... they pick up skewed, harmful, and hurtful ideas about us.

Some highlight examples:
* We don't have an established afterlife (we don't say there isn't one, we just have zero information on it if there is)
* We don't seek "eternal reward", the reward for our faith is being a better person than we were the day before
* We have forgiveness baked into our faith, and no it doesn't require animal sacrifice (it requires you to actually ask the person you wronged...)
* We thoroughly encourage arguing any topic with anyone (right time and place of course), and that includes picking a fight with God if you think they're wrong about something (you have a 99.9% chance of being wrong... but we commend the effort and every once in a while someone wins the argument)
* We have a rule, Pikuach Nefesh, roughly meaning that life is the highest commandment. Your well-being takes precedence over your faith; if it would hurt you or others to be observant, then you are exempt from that requirement. It's unacceptable to hurt others for your faith, and hurting yourself for it is frowned upon
* We actively discourage conversion, it's allowed but it's not a trivial process. We don't want people to become Jews, we just want people to be better.

Unknown parent

Shiri Bailem

@Shannon (she/her) @Pedestriansfirst I suppose you're technically correct, I guess I usually never think about it because there's always more apt descriptions (ie. Nazis are often Zionists because "Blood And Soil").

And yes on the antisemitism of it, I just chose not to say anything about that in favor of a chance at education. (Also a love for getting into arguments with aggressive militant atheists because it's so fun to see their talking points shatter and the confusion that comes from it)

And I didn't bring it up later because I felt from the conversation that it wasn't going to be a problem again from them, because they learned some things about Judaism, Jewish culture, and that religious people can in fact own and acknowledge bad behaviors in their own communities.

Unknown parent

Shiri Bailem

@Shannon (she/her) I don't think believing all Zionists are Jews is that messy of an idea, because it impacts so little, especially since the Zionist behavior of non-Jews is already easily discernible on its own as awful anyway.

And keep in mind the comparison: this person started from assuming that all Jews condoned the atrocities committed by the Israeli government and walked away knowing that it's not uniform.



This is a long article, but the theory hits *hard* with me and connects really well.

The basic gist is that autistics almost always define our identities by what we do and our personal traits, while non-autistics almost always define their identities by their relationships (in particular to social groups)

If you don't have it in you to read all of it, definitely read the section: "How does having an experientially-constructed identity impact relationships?".

neuroclastic.com/the-identity-…


Unknown parent

Shiri Bailem

@bike I suspect it isn't that much different. Collectivist societies can be awful in their own ways.

They're still better imo, but they have a tendency to focus too hard on traditions and conformity on top of the ideals of communal responsibility.

But in all cases it's a mesh of peer pressure and group identity vs our value identity.

@bike
Unknown parent

Shiri Bailem
@bike I get that, I mostly mention that so I don't come across as bashing collectivist societies incidentally. My point was more that I doubt there's that much difference for us, just swap out one set of rules that don't make sense for another set that don't make sense for a different reason.
@bike


Why You Must Keep The Monsters Human


*(Reposting because my node crashed and lost all my posts and I want to keep this one pinned)*

I've been mulling over making this post for a little bit, but I think it's really **really** important.

It's critically important that you remember and acknowledge the humanity of monsters. Not for their benefit, but for *everyone else's* benefit.

When someone commits a monstrous act or shares a monstrous belief, we want to think of them as an inherently vile and non-human thing.

But doing so shields and protects other monsters.

When you make a Nazi, or any kind of abuser, into a one-dimensional monster, when you make their whole existence *center* on this monstrous act or belief... you make it hard to see their humanity. And that's the point, you don't *want* to see their humanity.

*** You Don't Want To Believe That Someone You Know And Trust (Maybe Even Love) Is Capable Of Such Atrocity. ***

And that's the problem. Because when you reject their humanity, that humanity becomes their shield. Your friend Bob can't possibly be a Nazi or a child-abuser, he's such a loving father and he helped you move!

Because you see their humanity, you can't possibly imagine them as monsters because the monsters have no humanity in your eyes.

There's a reason that when serial killers get caught their neighbors say they couldn't imagine them doing such things.

So don't ignore their humanity, keep it in your mind... so the next one can't use it as a shield.



I'm so incredibly done with people getting on a high horse and shitting on advances in AI for no other reason than to feel better about themselves.

Like if your issue is things like copyright and training data? Sure, go off, it's a philosophical argument there about rights, economy, etc. Likewise for arguments about ecological impact (it can be made reasonable there, the companies just don't want to).

But if you're just posting bullshit like "Hahaha, the language model can't do math" or "Look at how it was baited into saying something stupid" as proof that it's worthless: go fuck yourself.

Let alone the people who try to relate AI development to "NFT Bros"... NFTs literally don't do shit, while AI actually has multiple proven and valid use cases. If you think it's the same thing, that just shows you have your head up your ass and refuse to look at the world around you.

All of that before getting to the fact that they have shown incredible usefulness for disability accommodations, but I guess it doesn't count if you prefer to be ableist and think we don't need or deserve accommodations?

So tired of people in general right now...

#AI #LLM



in reply to Shiri Bailem

> "Hahaha, the language model can't do math"

People don't say that, they say haha the **AI** can't do math.

This problem stems from the LLM owners/developers themselves, by calling their product "AI" -- Artificial *Intelligence* -- instead of just LLM, or "Language generator" or something similar, thus creating the expectation that it actually can think, reason, or compute things instead of just generating plausible-sounding text responses to prompts.

And that choice was apparently deliberate on their part, to hype up the product. IOW, don't advertise or imply with misleading buzzwords that your product can do things it can't and then get upset when people mock it for not being able to do what you wanted them to believe it can do.

in reply to Lea

@Lea or maybe people just always had a wrong idea of what early AI would look like?

People just expected it to jump out fully formed as a super-genius rather than baby steps of abilities...

@Lea
in reply to Shiri Bailem

Maybe, but only because they were taught to by its creators.

Before "generative AI" (creation of new images and text) came out, AI was associated more with things like detecting possible cancer in medical scans, exoplanets in telescope images, recognizing/describing objects in images, finding the structure of proteins, searching for possible new medicines, etc. (not to mention military applications :oh_no: ).

So as I recall it was looked upon as something to help humans solve problems, having great potential in spite of some negative early experiences with things like automatic stock trading.

So IMO these new generative applications should never have been called "AI" because they are not. Even using terms like saying they were "trained" or "learned" from existing art and texts implies intelligent reasoning which is misleading.

Rather, it was the developers who learned how to make their statistical algorithms better able to extract, recombine, and generate similar, plausible-looking (but not true or accurate) output from the mass of data they collected.

IOW, it's those pre-existing AI applications that can at least on some level be said to have been "trained" and to have "learned" to recognize specific kinds of patterns in visual/spatial/textual data and mark it for *humans* to then interpret and judge for accuracy. And certainly not to make up new stuff.

Making up new stuff should be for entertainment purposes only.

in reply to Lea

@Lea yeah, that's a fundamental misunderstanding of how these came to be... your examples of non-generative AI are all the same basic technology. In crude terms, it's an image enhancement/recognition AI run in reverse (LLMs then cut out the image part).

Their training process is nearly identical in fact.

@Lea
in reply to Shiri Bailem

My point is that it's the generative aspect that is the most fundamental and important difference, and that, along with allowing decision-making (as in the stock market example) is where the harm arises.

I haven't seen anything beneficial from generative models, other than as I said before, entertainment, or perhaps some strictly personal use that is not ever disseminated to others.

Generating anything with it for any serious purpose is harmful and with no positive benefit (unless you consider deep fake propaganda a "benefit").

Using it for research papers, legal briefs, class assignments, fake photos, etc. is harmful for so many reasons I don't have room to state them all here.

in reply to Lea

@Lea I've seen a whole bunch of practical beneficial uses:

  • I use it personally as a coding assistant (great for repetitive code entry where it'll automatically fill in variable names in repetitive blocks of code, or help with debugging)
  • I've used it from time to time as a search aid (when used properly with a good system it can drastically cut down on complex searches, turning something that would have taken me anywhere from half an hour to two hours into something that takes 5-10 minutes)
  • Once or twice I've made use of it as a sort of thesaurus to figure out terms when I didn't have enough to search off of (a normal thesaurus requires you to already have similar words... and I didn't have similar words in these cases)
  • As a disability aid for executive dysfunction (see goblin.tools: the "magic to-do list" and "compiler" aren't just gimmicks, they're for people with executive dysfunction who struggle with breaking down, separating, and sorting tasks)
  • I've used it personally as an aid for communication. Being an autistic, when communicating with allistics, especially in a professional environment, it can help edit my message/email to maintain the correct tone as well as make sure I'm getting my point across (again see goblin.tools "Formalizer")
  • I've also seen in the wild Amazon's got a great use now where it summarizes user reviews for a product (and this has been practically helpful, I saw some issues with products before buying them there and it gave me cause to look deeper into the reviews to figure out whether that issue mattered to me)

Yes, there are plenty of harmful uses, but if you don't see positive uses you're not looking at all.

There's also the upcoming uses that are still being ironed out but inevitable (as in these aren't just hypotheticals, it's just down to predictable progress):
* Dynamic dialogue in games: tech demos of this have been pretty cool. It's not there to replace writers but to allow NPCs to fluidly talk back to anything rather than giving purely scripted responses.
* Virtual assistants: you may or may not like them, but many people do like them and being able to handle fluid natural language is a huge step compared to before.

I'm not going to pretend that image generators have much use at the moment, but there are technologies they lead to that can be useful (like dynamic generation of 3D models and environments, which is good progress towards the goal of "holodecks", in the sense of being able to dynamically call up desired environments and then tweak them to be what you need).

Voice generators right now are mostly just good for nice computer voices, and maybe in the future for splicing dynamic content (such as a player character's name) into games (right now games either avoid saying your name entirely, use a stand-in name, and/or have a handful of pre-recorded names that are really cool if yours happens to be on the list).

@Lea
in reply to Shiri Bailem

I do admit there may be some beneficial uses of generative not-AI that I haven't been able to think of, or might be someday. Meanwhile the internet is being flooded with generated false or meaningless "content" that is in turn being re-ingested as fodder for even more of the same.

Other than coding assistant, most of your examples sound more like analysis (which is fine) than generation of new content (not fine):

- analyzing human-written prose and flagging possible tone or phrasing problems = analytical. Suggesting replacement text is generative but untrustworthy. You have to check that it isn't saying something you didn't mean to say, then trust that it's an improvement.

- summarizing reviews = analyzing: compiling stats in categories of what the reviews say. Not generating reviews or anything new.

- coding assistant: This *is* generating something: garbage. I've played with it and never gotten anything useful. It can only handle coding problems that have already been solved and included in the training data. Anything requiring it to actually solve a problem not already in its store of code fails. There's no logic or reasoning (as you've already pointed out), which would be required to do that. The main problem I found was it would call functions that don't even exist in the language--I assume it finds people's custom functions scraped from somewhere and just uses those function names as if they were built-in functions in the language.

in reply to Lea

@Lea I think you're quibbling over the term "generative" there, a non-generative version would be outputting just variables rather than raw text. Ie. non-generative analysis is just going to be "x% confident, x% aggressive, x% sympathetic..."

  • analyzing prose and flagging tone or phrasing problems = like so many anti-ai people you just assume the person using it is somehow unable to read and understand the text. And frankly as I've already said this is a disability accommodation I'm a little bit pissed off at this response. We can understand these things when pointed out, we can understand whether the text it gives is trash, we're not idiots. What it does is consistently give us text that fixes our tonal issues, and we can recognize those fixes after the fact but can't reasonably do them on our own. So maybe get your head out of your ass when talking about a disability accommodation that someone has first hand experience on and has said so up front.
  • summarizing reviews - again, you're just trying to throw away a point because you don't understand what "generative" or "large language model" means.
  • coding assistant - and again, clearly you made one half-assed attempt to make it do more than it's capable of, considered it trash, and then threw the whole thing away. I've used it plenty: I've used it to speed through refactoring a whole project to swap out database engines, I've used it to speed through building UIs with a bunch of buttons. Does it create good code when I just ask it to write a whole application for me? Hell no. But it sure as hell can see me writing a list of buttons and go, "Oh, I recognize the pattern of the names, I'm going to fill this in 20 more times with all variable names changed to match the pattern".
@Lea
Unknown parent

Shiri Bailem

@Lea like I said, you're attacking the definition of generative by ignoring the actual definition.

And I'm done with this conversation since all you're doing is looping back on points I've already called out. You've been an ass, just accept it and move on.

@Lea


Rant about AI:

Sadly there's no reasonable way to differentiate AI content from "real" content. And regardless of your opinions on AI there's no "stopping" it (it's a "cat's out of the bag" situation, you can run these things on your home computer with open source software... there's no way short of an apocalypse to stop development from here).

What we do have is a lot of fighting and little effort to work on solutions of living with this. And I think worse yet many taking the anti-AI stance, especially the loudest of them, are basically making things worse because real solutions are anathema to them (ie. anything short of an outright ban on the technology is unacceptable, which means they tend to push back against even efforts to rein in AI or talk over those who want to push those efforts).

On top of that you have the borderline predatory push of "AI Detection Tools" and "AI Poisoning". The detection tools raise the question "How many real lives are you okay with ruining to catch a handful of bad use cases in AI, when there is zero way to have any certainty about the accuracy of these tools?" Poisoning tools, meanwhile, are a security blanket that leads people to drop their defenses: they don't stop AI, they just slightly delay its access to your content (even the creators of those tools acknowledge that AI will quickly bypass them, at which point there's no difference in whether or not you used the tool). Worse yet, as AI gets further incorporated into search tools, poisoning can make it harder for your work to get visibility and exposure over AI-generated content.

What we really need to be focusing on to address the problems with AI:

  • Learning how copyright works (in my experience artists tend to have a woefully bad understanding of what is or isn't covered) and making sure corporations don't lobby the government into allowing copyright on AI works (under current law they are public domain, aka. no copyright, but there's already been one case of pushing that they can copyright "arrangements" of AI works). This means if they want to actually have a copyright on art, they've got to pay a human artist
  • We need to push for reporting requirements/standards. One of the most toxic elements is how much AI floods spaces and bumps out human artists, especially when the prompt contains the artist's name (meaning searching for that artist can turn up more AI work than their actual work)... there needs to be a requirement that AI art be labeled. This also works with the previous point, as it is similar to being able to search for something released under Creative Commons.
  • Push for copyright responsibility in outputs rather than training data inputs. This sounds like something that is already one of the loudest arguments, but really isn't. Most arguments I hear try to go after AI tools for copyright content in their training data... but if you actually learn copyright you realize that a victory here largely means that major companies get more of an advantage because copyright only applies when content is copied (ie. when the training data is made available for smaller companies to run their own) vs when content is transformed (despite popular opinion, the vast majority of AI output does not violate copyright and qualifies as a transformative work... see again learning copyright law, plus a dash of learning how these tools actually work). Responsibility in outputs means that an AI can violate copyright (if I ask an AI tool to give me the first chapter of a copyrighted book and it does so... that is a violation and they need to genuinely be responsible for taking measures to prevent this from happening, but there should also be leeway for "forced violations", ie. when you bend over backwards to make it break copyright vs just saying "give me the first chapter of...")
  • Work on learning and developing responsible usage. Again, despite popular artist opinion, there genuinely are a lot of responsible use cases for all these AI technologies: from using LLMs to help debug code, summarize text, and prioritize lists, to voice duplicators (used with the license of the original VA) for dynamic speech (ie. voice assistants, or actually speaking a player's name in a video game in the middle of otherwise pre-recorded output). And that's not to ignore image generators, which can be used for enhancing/repairing old photos, or for general visual effects on your own art (ie. the filters everyone uses on Instagram or the like... many of them are the exact same tech as AI image generators)
  • And as always... fighting capitalism because the real threat of AI is the same as any other technology advancement: if CEOs can replace you with a machine, they will, and we live in a society where no employment means risk of death.

#AI #ResponsibleAI #Rant



Any usage of "AI" detection is trash and accomplishes nothing but hurting innocent people.

youtu.be/7Av0w55Q6Ps?si=f6I3Yu…

#AI #College



I'm pretty sure even AI supporters can agree we really need to institute some sort of laws around mandatory labeling of AI generated content.

#ai

in reply to Calamity Caitlin

@Calamity Caitlin you mean 80% of the reason we need such a law? (the other 20% being rampant AI generated misinformation)


Random thought of a brighter aspect of the #ai future:

We're not too far off from AI video editing. Imagine writing out a list of all your triggers, loading it into an AI, and then having it edit movies and TV shows to remove all triggering content.

#ai


Just sharing a random bit of AI fun because I was bored and had a thought.

AI can incidentally translate language, but that also extends to things like converting Victorian English to modern-day English and even updating slang and euphemisms (though when asked for slang it'll lay it on a bit thick... but it's fun about it)

So... here's the opening monologue of Romeo & Juliet (and no, it didn't preserve iambic pentameter):

Yo, listen up, peeps! Two big-shot families, equally high and mighty,
Kicking it in fair Verona, where this drama's gonna go down.
An old beef turns into some fresh drama,
And it's so bad it gets everyone dirty.
From the messed-up love lives of these two rival crews,
A pair of star-crossed lovers are gonna do something crazy,
Their messed-up misadventures bring down a whole lotta pain,
And with their deaths, their parents' drama finally stops.
The crazy rollercoaster of their death-marked love,
And the never-ending feud between their folks,
That only ends with their tragic deaths,
Is gonna be the main event on our stage—
If y'all stick around and listen close,
We'll do our best to keep it real for ya!

#AI #RandomFun #Shakespeare



#RandomThoughts

With the uptick of AI image generation and soon video, and the current strike where it's one of the key issues, I had an interesting thought:

Once this is in full swing (and hopefully in a way that's equitable for actors), I'm imagining we'll eventually have an AI actor version of the Wilhelm Scream: this one background actor who's in *every* movie that cinephiles will point out. And I don't mean like Stan Lee cameos where they get dialogue and a face shot, I mean like Alfred Hitchcock cameos where it's a game to spot them.

#ai

* One of the points I've heard they're striking over is that studios want to pay a background actor for one day of work to scan them in and then use their likeness forever. I hope they come to a good arrangement for the actors in this; they deserve to get paid.