

I'm so incredibly done with people being on a high horse, shitting on advances in AI for no other reason than to feel better about themselves.

Like if your issue is things like copyright and training data? Sure, go off, it's a philosophical argument there about rights, economy, etc. Likewise for arguments about ecological impact (it can be made reasonable there, the companies just don't want to).

But if you're just posting bullshit like "Hahaha, the language model can't do math" or "Look at how it was baited into saying something stupid" as proof that it's worthless: go fuck yourself.

Let alone the people who try to relate AI development to "NFT Bros"... NFTs literally don't do shit; AI actually has multiple proven and valid use cases, but if you think it's the same thing, that just shows you have your head up your ass and refuse to look at the world around you.

All of that before getting to the fact that these tools have shown incredible usefulness for disability accommodations, but I guess that doesn't count if you prefer to be ableist and think we don't need or deserve accommodations?

So tired of people in general right now...

#AI #LLM


Charlotte Joanne reshared this.

Unknown parent

Shiri Bailem

@anubis2814 ... did you even read the post?

> Like if your issue is things like copyright and training data? Sure, go off, it's a philosophical argument there about rights, economy, etc. Likewise for arguments about ecological impact (it can be made reasonable there, the companies just don't want to).
in reply to Shiri Bailem

@anubis2814 The core issue is that companies are trying to cram it in everywhere for fear of being left behind on the "next big thing", and they all insist on using the absolute highest-performance, latest, and most powerful AIs for everything.

I can run a reasonable model for most use cases offline on my phone, spinning it up just as needed (ironically my phone is more powerful in this regard than my desktop... so I'm stuck with online models there, but even then I typically use the lower-power models).

in reply to Shiri Bailem

I have an .mp3 of my voice saying things I never said, because I fed about 12 minutes of my voice into cloud-based software that can do that extremely well.

I can see all sorts of applications for this, purely for comedy purposes, but I doubt I'll use it for anything (a speech synthesizer will do for what I need).

I was taken aback by this episode though. It was my voice (well, one of them) and hearing it from something else was odd.

in reply to Air Quotes Comedian

@Air Quotes Comedian oh yeah, as much as it has potential for good, it also has a hell of a lot of potential for the absolutely terrifying as well.

Another problem with the bulk of "Anti-AI" crowds is that they drown out the real problems and lash out at anyone trying to fix them.

in reply to Shiri Bailem

There is an awful lot of bullshit and nonsense surrounding the subject.

My interaction with AI has been quite limited. If I feel a YouTube video is read by AI or generated by it, I click off, and I find the constant chatbots in the corners of commercial websites unwelcome (or when you get one on the telephone).

As with anything you're going to get a mountain of shit (a content avalanche that will render the internet useless if the pundits are to be believed) but I'm impressed by its capabilities.

I mean, relatively impressed.

The most impressive computer-related thing I've ever seen was in the '80s, when I saw an Acorn Archimedes playing video footage of a race car event.

Blew my mind.

in reply to Air Quotes Comedian

@Air Quotes Comedian Yeah, the bulk of the places I've seen it used have been misplaced, and for the majority of them I could have told people how it'd go before they even started...

Chatbots in the corners of commercial websites are going to fall apart as companies increasingly realize that the bot will say shit they're now liable for. Then there are the godforsaken AI voiceover videos and content farms, every one of which is truly awful, and I wonder how they're even seeing returns on them in the first place. And don't even get me started on Google's "Let's cram an AI answer into search results randomly because there's no way it will say awful shit that people will take at face value".

I've mostly used it for skimming on my behalf (I use big-agi.com for that especially, but in general for most of my AI uses), code assistance, and the occasional editing.

Unknown parent

Shiri Bailem
@anubis2814 but again, please acknowledge that I addressed that in my post and that you've brought no new information... because I'm also sick of people thinking they "one-upped" me when they didn't pay attention to what I said.
in reply to Shiri Bailem

> "Hahaha, the language model can't do math"

People don't say that; they say haha, the **AI** can't do math.

This problem stems from the LLM owners/developers themselves, by calling their product "AI" -- Artificial *Intelligence* -- instead of just LLM, or "Language generator" or something similar, thus creating the expectation that it actually can think, reason, or compute things instead of just generating plausible-sounding text responses to prompts.

And that choice was apparently deliberate on their part, to hype up the product. IOW, don't advertise or imply with misleading buzzwords that your product can do things it can't and then get upset when people mock it for not being able to do what you wanted them to believe it can do.

in reply to Lea

@Lea or maybe people just always had the wrong idea of what early AI would look like?

People just expected it to jump out fully formed as a super-genius rather than arrive through baby steps of ability...

@Lea
in reply to Shiri Bailem

Maybe, but only because they were taught to by its creators.

Before "generative AI" (creation of new images and text) came out, AI was associated more with things like detecting possible cancer in medical scans, exoplanets in telescope images, recognizing/describing objects in images, finding the structure of proteins, searching for possible new medicines, etc. (not to mention military applications :oh_no: ).

So as I recall it was looked upon as something to help humans solve problems, having great potential in spite of some negative early experiences with things like automatic stock trading.

So IMO these new generative applications should never have been called "AI" because they are not. Even using terms like saying they were "trained" or "learned" from existing art and texts implies intelligent reasoning which is misleading.

Rather, it was the developers who learned how to make their statistical algorithms better able to extract, recombine, and generate similar, plausible-looking (but not true or accurate) output from the mass of data they collected.

IOW, it's those pre-existing AI applications that can at least on some level be said to have been "trained" and to have "learned" to recognize specific kinds of patterns in visual/spatial/textual data and mark it for *humans* to then interpret and judge for accuracy. And certainly not to make up new stuff.

Making up new stuff should be for entertainment purposes only.

in reply to Lea

@Lea yeah, that's a fundamental misunderstanding of how these came to be... your examples of non-generative AI are all the same basic technology. In crude terms, it's an image enhancement/recognition AI run in reverse. (LLMs then cut out the image part.)

Their training process is nearly identical in fact.

@Lea
in reply to Shiri Bailem

My point is that it's the generative aspect that is the most fundamental and important difference, and that, along with allowing decision-making (as in the stock market example), is where the harm arises.

I haven't seen anything beneficial from generative models, other than as I said before, entertainment, or perhaps some strictly personal use that is not ever disseminated to others.

Generating anything with it for any serious purpose is harmful and with no positive benefit (unless you consider deep fake propaganda a "benefit").

Using it for research papers, legal briefs, class assignments, fake photos, etc. is harmful for so many reasons I don't have room to state them all here.

in reply to Lea

@Lea I've seen a whole bunch of practical beneficial uses:

  • I use it personally as a coding assistant (great for repetitive code entry, where it'll automatically fill in variable names in repeated blocks of code, or help with debugging; a rough sketch of that kind of fill-in follows after this list)
  • I've used it from time to time as a search aid (when used properly with a good system it can drastically cut down on complex searches, turning something that would have taken me anywhere from half an hour to 2 hours into something that takes 5-10 minutes)
  • Once or twice I've made use of it as a sort of thesaurus to figure out terms when I didn't have enough to search off of (as in, with a normal thesaurus I would need to give similar words... but I didn't have similar words in these cases)
  • As a disability aid for executive dysfunction (see goblin.tools: that "magic to-do list" and "compiler" aren't just gimmicks, they're for people with executive dysfunction who struggle with breaking down, separating, and sorting tasks)
  • I've used it personally as an aid for communication. I'm autistic, and when communicating with allistics, especially in a professional environment, it can help edit my message/email to maintain the correct tone as well as make sure I'm getting my point across (again, see goblin.tools' "Formalizer")
  • I've also seen, in the wild, that Amazon's got a great use now where it summarizes user reviews for a product (and this has been practically helpful: I saw some issues with products before buying them, and it gave me cause to look deeper into the reviews to figure out whether those issues mattered to me)
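
To make that "repetitive code entry" point concrete, here's a minimal, purely illustrative sketch (the function and field names are invented for the example, not from any real project): once the first line or two of the block are typed by hand, an assistant will usually complete the rest, swapping in each field name to match the pattern.

```python
# Hypothetical example of the kind of repetitive block an assistant is good at
# continuing: every line has the same shape, only the field name changes.

def form_to_record(form: dict) -> dict:
    """Copy known fields from a submitted form into a clean record."""
    record = {}
    record["first_name"] = form.get("first_name", "").strip()
    record["last_name"] = form.get("last_name", "").strip()
    record["email"] = form.get("email", "").strip()
    record["phone"] = form.get("phone", "").strip()
    record["city"] = form.get("city", "").strip()
    return record
```

Nothing there requires the model to reason about forms; it's just continuing an obvious pattern, which is exactly where it saves time.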

Yes, there are plenty of harmful uses, but if you don't see positive uses you're not looking at all.

There are also upcoming uses that are still being ironed out but are inevitable (as in, these aren't just hypotheticals, it's just down to predictable progress):
* Dynamic dialogue in games: tech demos of this have been pretty cool; it's not there to replace writers but to let NPCs fluidly talk back to anything rather than give purely scripted responses.
* Virtual assistants: you may or may not like them, but many people do, and being able to handle fluid natural language is a huge step compared to before.

I'm not going to pretend that image generators have much use at the moment, but they lead to later technologies that can be useful (like dynamic generation of 3D models and environments, which is good progress towards the goal of "holodecks" in the sense of being able to dynamically call up desired environments and then tweak them to be what you need).

Voice generators right now are mostly just good for nice computer voices, and maybe in the future for splicing dynamic content (such as the player character's name) into games (right now games either avoid saying your name entirely, use a stand-in name, and/or have a handful of pre-recorded names that are really cool if yours happens to be on the list).

@Lea
in reply to Shiri Bailem

I do admit there may be some beneficial uses of generative not-AI that I haven't been able to think of, or might be someday. Meanwhile the internet is being flooded with generated false or meaningless "content" that is in turn being re-ingested as fodder for even more of the same.

Other than the coding assistant, most of your examples sound more like analysis (which is fine) than generation of new content (not fine):

- analyzing human-written prose and flagging possible tone or phrasing problems = analytical. Suggesting replacement text is generative but untrustworthy. You have to check that it isn't saying something you didn't mean to say, then trust that it's an improvement.

- summarizing reviews = analyzing: compiling stats in categories of what the reviews say. Not generating reviews or anything new.

- coding assistant: This *is* generating something: garbage. I've played with it and never gotten anything useful. It can only handle coding problems that have already been solved and included in the training data. Anything requiring it to actually solve a problem not already in its store of code fails. There's no logic or reasoning (as you've already pointed out), which would be required to do that. The main problem I found was it would call functions that don't even exist in the language--I assume it finds people's custom functions scraped from somewhere and just uses those function names as if they were built-in functions in the language.

in reply to Lea

@Lea I think you're quibbling over the term "generative" there; a non-generative version would output just variables rather than raw text. I.e., non-generative analysis is just going to be "x% confident, x% aggressive, x% sympathetic..."

  • analyzing prose and flagging tone or phrasing problems = like so many anti-AI people, you just assume the person using it is somehow unable to read and understand the text. And frankly, as I've already said, this is a disability accommodation, so I'm a little bit pissed off at this response. We can understand these things when they're pointed out, we can tell whether the text it gives is trash, we're not idiots. What it does is consistently give us text that fixes our tonal issues; we can recognize those fixes after the fact but can't reasonably make them on our own. So maybe get your head out of your ass when talking about a disability accommodation that someone has first-hand experience with and has said so up front.
  • summarizing reviews - again, you're just trying to throw away a point because you don't understand what "generative" or "large language model" means.
  • coding assistant - and again, clearly you made one half-assed attempt to make it do more than it's capable of, considered it trash, and then threw the whole thing away. I've used it plenty: I've used it to speed through refactoring a whole project to swap database engines out, and I've used it to speed through building UIs with a bunch of buttons. Does it create good code when I just ask it to write a whole application for me? Hell no. But it sure as hell can see me writing a list of buttons and go, "Oh, I recognize the pattern of the names, I'm going to fill this in 20 more times with all the variable names changed to match the pattern." (A rough sketch of that button case follows below.)
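
To be clear about what I mean by that, here's a minimal, hypothetical sketch (the labels and handlers are invented for illustration, not from any real project): write the first Button line by hand, and an assistant will typically repeat the pattern for the remaining labels, changing the label text, handler name, and grid row each time.

```python
# Hypothetical sketch of the "list of buttons" case: once the first line is
# written, an assistant can repeat the pattern, swapping the label, the
# handler, and the grid row for each remaining button.
import tkinter as tk

def on_open(): print("open clicked")
def on_save(): print("save clicked")
def on_export(): print("export clicked")
def on_quit(): print("quit clicked")

root = tk.Tk()
tk.Button(root, text="Open", command=on_open).grid(row=0, column=0, sticky="ew")
tk.Button(root, text="Save", command=on_save).grid(row=1, column=0, sticky="ew")
tk.Button(root, text="Export", command=on_export).grid(row=2, column=0, sticky="ew")
tk.Button(root, text="Quit", command=on_quit).grid(row=3, column=0, sticky="ew")
root.mainloop()
```

None of that is the model "solving" anything; it's pattern completion, and that's the point.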
@Lea
Unknown parent

Shiri Bailem

@Lea like I said, you're attacking the term "generative" while ignoring its actual definition.

And I'm done with this conversation since all you're doing is looping back on points I've already called out. You've been an ass, just accept it and move on.

@Lea