

99% of the time "Judeo-Christian" is antisemitic. And yes, I will absolutely elaborate on this if asked.

Credit: @Rabbit Cohen

Edit: because this blew up far more than I expected and multiple people asked me to elaborate, here's a copy of my elaboration. Follow-up questions encouraged:

It's a messy topic and it's late here (I'm a bit sleepy), so feel free to ask follow up questions.

The short version of it is that Judeo-Christian is almost always used in one of two harmful ways:

1) To try to give more credibility and weight to something that is purely Christian by claiming it's part of Judaism as well when it's not (like the above example, because Judaism explicitly permits abortions)
2) To try to talk about broader groupings of related faiths while ignoring the many other Abrahamic faiths ("Abrahamic" is the proper term, though that one more often hurts the lesser-known groups; don't use it unless you know it also applies to groups like the Baháʼí, which I'll admit even I know next to nothing about, but it's valid here because all I'm doing is naming their religious family)

Because many Christians (cough most cough) teach a bastardized form of Judaism through the lens of Christianity, and because that's the only exposure many get to our faith... they get skewed, harmful, and hurtful ideas about us.

Some highlight examples:
* We don't have an established afterlife (we don't say there isn't one, we just have zero information on it if there is)
* We don't seek "eternal reward", the reward for our faith is being a better person than we were the day before
* We have forgiveness baked into our faith, and no it doesn't require animal sacrifice (it requires you to actually ask the person you wronged...)
* We thoroughly encourage arguing any topic with anyone (right time and place of course), and that includes picking a fight with God if you think they're wrong about something (you have a 99.9% chance of being wrong... but we commend the effort and every once in a while someone wins the argument)
* We have a rule, Pikuach Nefesh, roughly meaning that preserving life is the highest commandment. Your well-being takes precedence over your faith: if it would hurt you or others to be observant, then you are exempt from that requirement. It's unacceptable to hurt others for your faith, and hurting yourself for it is frowned upon
* We actively discourage conversion, it's allowed but it's not a trivial process. We don't want people to become Jews, we just want people to be better.

Unknown parent

Shiri Bailem

@Shannon (she/her) @Pedestriansfirst I suppose you're technically correct; I guess I usually never think about it because there's always a more apt description (i.e. Nazis are often Zionists because of "Blood and Soil").

And yes on the antisemitism of it, I just chose not to say anything about that in favor of a chance at education. (Also a love for getting into arguments with aggressive militant atheists because it's so fun to see their talking points shatter and the confusion that comes from it)

And I didn't bring it up later because I felt from the conversation that it wasn't going to be a problem from them again, because they learned some things about Judaism, Jewish culture, and the fact that religious people can in fact own and acknowledge bad behaviors in their own communities.

Unknown parent

Shiri Bailem

@Shannon (she/her) I don't think believing all Zionists are Jews is that messy of an idea, because it impacts so little, especially since the Zionist behavior of non-Jews is already easily discernible on its own as awful anyway.

And keep in mind the comparison: this person started from assuming that all Jews condoned the atrocities committed by the Israeli government, and walked away knowing that it's not uniform.



This is a long article, but the theory hits *hard* with me and connects really well.

The basic gist is that autistics almost always define our identities by what we do and our personal traits, while non-autistics almost always define their identities by their relationships (in particular to social groups)

If you don't have it in you to read all of it, definitely read the section: "How does having an experientially-constructed identity impact relationships?".

neuroclastic.com/the-identity-…


Unknown parent

Shiri Bailem

@bike I suspect it isn't that much different. Collectivist societies can be awful in their own ways.

They're still better imo, but they have a tendency to focus too hard on traditions and conformity on top of the ideals of communal responsibility.

But in all cases it's a mesh of peer pressure and group identity vs our value identity.

Unknown parent

Shiri Bailem
@bike I get that, I mostly mention that so I don't come across as bashing collectivist societies incidentally. My point was more that I doubt there's that much difference for us, just swap out one set of rules that don't make sense for another set that don't make sense for a different reason.


Why You Must Keep The Monsters Human


*(Reposting because my node crashed and lost all my posts and I want to keep this one pinned)*

I've been mulling over making this post for a little bit, but I think it's really **really** important.

It's critically important that you remember and acknowledge the humanity of monsters. Not for their benefit, but for *everyone else's* benefit.

When someone commits a monstrous act or shares a monstrous belief, we want to think of them as an inherently vile and non-human thing.

But doing so shields and protects other monsters.

When you make a Nazi, or any kind of abuser, into a one-dimensional monster. When you make their whole existence *center* on this monstrous act or belief... you make it hard to see their humanity. And that's the point, you don't *want* to see their humanity.

*** You Don't Want To Believe That Someone You Know And Trust (Maybe Even Love) Is Capable Of Such Atrocity. ***

And that's the problem. Because when you reject their humanity, that humanity becomes their shield. Your friend Bob can't possibly be a Nazi or a child-abuser, he's such a loving father and he helped you move!

Because you see their humanity, you can't possibly imagine them as monsters because the monsters have no humanity in your eyes.

There's a reason that when serial killers get caught their neighbors say they couldn't imagine them doing such things.

So don't ignore their humanity, keep it in your mind... so the next one can't use it as a shield.



I'm so incredibly done with people on their high horse, shitting on advances in AI for no reason other than to feel better about themselves.

Like if your issue is things like copyright and training data? Sure, go off, there's a philosophical argument there about rights, economy, etc. Likewise for arguments about ecological impact (the impact can be made reasonable, the companies just don't want to).

But if you're just posting bullshit like "Hahaha, the language model can't do math" or "Look at how it was baited into saying something stupid" as proof that it's worthless: go fuck yourself.

Let alone the people who try to relate AI development to "NFT Bros"... NFTs literally don't do shit, while AI actually has multiple proven and valid use cases. If you think they're the same thing, that just shows you have your head up your ass and refuse to look at the world around you.

All of that before getting to the fact that they have shown incredible usefulness for disability accommodations, but I guess it doesn't count if you prefer to be ableist and think we don't need or deserve accommodations?

So tired of people in general right now...

#AI #LLM



in reply to Shiri Bailem

> "Hahaha, the language model can't do math"

People don't say that, they say haha the **AI** can't do math.

This problem stems from the LLM owners/developers themselves, by calling their product "AI" -- Artificial *Intelligence* -- instead of just LLM, or "Language generator" or something similar, thus creating the expectation that it actually can think, reason, or compute things instead of just generating plausible-sounding text responses to prompts.

And that choice was apparently deliberate on their part, to hype up the product. IOW, don't advertise or imply with misleading buzzwords that your product can do things it can't and then get upset when people mock it for not being able to do what you wanted them to believe it can do.

in reply to Lea

@Lea or maybe people just always had a wrong idea of what early AI would look like?

People just expected it to jump out fully formed as a super-genius rather than baby steps of abilities...

in reply to Shiri Bailem

Maybe, but only because they were taught to by its creators.

Before "generative AI" (creation of new images and text) came out, AI was associated more with things like detecting possible cancer in medical scans, exoplanets in telescope images, recognizing/describing objects in images, finding the structure of proteins, searching for possible new medicines, etc. (not to mention military applications :oh_no: ).

So as I recall it was looked upon as something to help humans solve problems, having great potential in spite of some negative early experiences with things like automatic stock trading.

So IMO these new generative applications should never have been called "AI" because they are not. Even using terms like saying they were "trained" or "learned" from existing art and texts implies intelligent reasoning which is misleading.

Rather, it was the developers who learned how to make their statistical algorithms better able to extract, recombine, and generate similar, plausible-looking (but not true or accurate) output from the mass of data they collected.

IOW, it's those pre-existing AI applications that can at least on some level be said to have been "trained" and to have "learned" to recognize specific kinds of patterns in visual/spatial/textual data and mark it for *humans* to then interpret and judge for accuracy. And certainly not to make up new stuff.

Making up new stuff should be for entertainment purposes only.

in reply to Lea

@Lea yeah, that's a fundamental misunderstanding of how these came to be... your examples of non-generative AI are all the same basic technology. In crude terms it's an image enhancement/recognition AI run in reverse (LLMs then cut out the image part).

Their training process is nearly identical in fact.

in reply to Shiri Bailem

My point is that it's the generative aspect that is the most fundamental and important difference, and that, along with allowing decision-making (as in the stock market example), is where the harm arises.

I haven't seen anything beneficial from generative models, other than as I said before, entertainment, or perhaps some strictly personal use that is not ever disseminated to others.

Generating anything with it for any serious purpose is harmful and with no positive benefit (unless you consider deep fake propaganda a "benefit").

Using it for research papers, legal briefs, class assignments, fake photos, etc. is harmful for so many reasons I don't have room to state them all here.

in reply to Lea

@Lea I've seen a whole bunch of practical beneficial uses:

  • I use it personally as a coding assistant (great for repetitive code entry, where it'll automatically fill in variable names across repeated blocks of code, or help with debugging)
  • I've used it from time to time as a search aid (when used properly with a good system it can drastically cut down on complex searches, taking something that would have taken me anywhere from half an hour to 2 hours down to something that takes 5-10 minutes)
  • Once or twice I've made use of it as a sort of thesaurus to figure out terms when I didn't have enough to search from (with a normal thesaurus I would need to supply similar words... but I didn't have similar words in these cases)
  • As a disability aid for executive dysfunction (see goblin.tools; that "magic to-do list" and "compiler" aren't just gimmicks, they're for people with executive dysfunction who struggle with breaking down, separating, and sorting tasks)
  • I've used it personally as a communication aid: as an autistic communicating with allistics, especially in a professional environment, it helps me edit my message/email to maintain the correct tone and make sure I'm getting my point across (again see goblin.tools "Formalizer")
  • I've also seen, in the wild, Amazon's use of it to summarize user reviews for a product (and this has been practically helpful: I saw issues with products before buying them, and it gave me cause to look deeper into the reviews to figure out whether those issues mattered to me)

Yes, there are plenty of harmful uses, but if you don't see positive uses you're not looking at all.

There's also the upcoming uses that are still being ironed out but inevitable (as in these aren't just hypotheticals, it's just down to predictable progress):
* Dynamic dialogue in games: tech demos of this have been pretty cool; it's not there to replace writers but to allow NPCs to fluidly talk back to anything rather than giving purely scripted responses.
* Virtual assistants: you may or may not like them, but many people do like them and being able to handle fluid natural language is a huge step compared to before.

I'm not going to pretend that image generators have much use at the moment, but they lead to later technologies that can be useful (like dynamic generation of 3D models and environments, which is good progress towards the goal of "holodecks", in the sense of being able to dynamically call up desired environments and then tweak them to be what you need).

Voice generators right now are mostly just good for nice computer voices, and maybe in the future for splicing dynamic content (such as a player character's name) into games (right now games either avoid saying your name entirely, use a stand-in name, and/or have a handful of pre-recorded names that are really cool if yours happens to be on the list).

in reply to Shiri Bailem

I do admit there may be some beneficial uses of generative not-AI that I haven't been able to think of, or might be someday. Meanwhile the internet is being flooded with generated false or meaningless "content" that is in turn being re-ingested as fodder for even more of the same.

Other than coding assistant, most of your examples sound more like analysis (which is fine) than generation of new content (not fine):

- analyzing human-written prose and flagging possible tone or phrasing problems = analytical. Suggesting replacement text is generative but untrustworthy. You have to check that it isn't saying something you didn't mean to say, then trust that it's an improvement.

- summarizing reviews = analyzing: compiling stats in categories of what the reviews say. Not generating reviews or anything new.

- coding assistant: This *is* generating something: garbage. I've played with it and never gotten anything useful. It can only handle coding problems that have already been solved and included in the training data. Anything requiring it to actually solve a problem not already in its store of code fails. There's no logic or reasoning (as you've already pointed out), which would be required to do that. The main problem I found was it would call functions that don't even exist in the language--I assume it finds people's custom functions scraped from somewhere and just uses those function names as if they were built-in functions in the language.

in reply to Lea

@Lea I think you're quibbling over the term "generative" there; a non-generative version would output just variables rather than raw text. I.e. non-generative analysis is just going to be "x% confident, x% aggressive, x% sympathetic..."

  • analyzing prose and flagging tone or phrasing problems = like so many anti-AI people, you just assume the person using it is somehow unable to read and understand the text. And frankly, as I've already said, this is a disability accommodation, so I'm a little bit pissed off at this response. We can understand these things when they're pointed out, we can tell whether the text it gives is trash, we're not idiots. What it does is consistently give us text that fixes our tonal issues; we can recognize those fixes after the fact but can't reasonably make them on our own. So maybe get your head out of your ass when talking about a disability accommodation that someone has first-hand experience with and has said so up front.
  • summarizing reviews - again, you're just trying to throw away a point because you don't understand what "generative" or "large language model" means.
  • coding assistant - and again, clearly you made one half-assed attempt to make it do more than it's capable of, considered it trash, and then threw the whole thing away. I've used it plenty: I've used it to speed through refactoring a whole project to swap out database engines, and to speed through building UIs with a bunch of buttons. Does it create good code when I just ask it to write a whole application for me? Hell no. But it sure as hell can see me writing a list of buttons and go, "Oh, I recognize the pattern of the names; I'm going to fill this in 20 more times with all the variable names changed to match the pattern."
Unknown parent

Shiri Bailem

@Lea like I said, you're attacking the definition of generative by ignoring the actual definition.

And I'm done with this conversation since all you're doing is looping back on points I've already called out. You've been an ass, just accept it and move on.
