Hello
Date night tonight with my girlfriend, what do y'all think? #TransEuphoria
(Sorry no image description, if someone else writes one I'll copy it, but I'm useless at describing myself)
A friend's furbaby is suffering and needs care yesterday, please donate if you can: gofundme.com/f/help-giggles-ge…
Donate to Help Giggles Get Urgent Dental Care, organized by Sarah Dearing
As you know I’ve been saving money for giggles dental, but as prices soar and her he…
Still processing something...
I'm recovering from the trauma of constantly being hyper-vigilant of how I might present as a threat, constantly focused on how to make others feel safe around me...
and now I've traded it for the trauma of being constantly hyper-vigilant of potential threats around me.
It sucks but honestly I prefer it to the intense sense of isolation and loneliness.
Solidarity with your journey, and I hope you're able to work through your hyper-vigilance and trauma.
Just reaching out for suggestions, it's not often I get people converted to the fediverse but when I do I'm a bit short on suggested instances, especially in variety.
As it stands I've got my own instance, which runs Friendica. But because I'm a nerd who craves all the options at once and will sacrifice some user-friendliness for it, it's good for me not for most users.
Aside from that, I've got lgbtqia.space, a wonderful Mastodon server where I started out in the fediverse. But it obviously leaves me fumbling when helping the rare few friends who aren't in the alphabet.
I'd like suggestions of other open instances (aside from the mega instances like mastodon.social) that I can direct people to, especially of different platforms so they have a little more choice in their experience.
Mist and fog cloud over this domain. The thoughts of raceme flowers and pinnated ferns hang over my mind's eye as I walk with covered pupils. The hard dirt feels rock-like, with an occasional trip over roots.
Pulling me down a pathway of cedar and pine, nesting in its canopy, birds orientated skywards to a plateau of limitless sky. Just out of reac- war̓n̼͚͜i̴͈ng,͈͑̌ l͓̀o̩͋ͬw̦͊ b̵̍a̶͋͂t̫tê̾̍r͉̔͠y
O̪͉p͎e̜̽n̡ ỳ̬o̲ur̘̥ e̫͌yē͌ͅs̵͗̿, O̪͉p͎e̜̽n̡ ỳ̬o̲ur̘̥ e̫͌yē͌ͅs̵͗̿, O̪͉p͎e̜̽n̡ ỳ̬o̲ur̘̥ e̫͌yē͌ͅs̵͗̿.
Getting tired of people pretending Jews can't possibly know the difference between Holocaust and "run of the mill" genocide (both are bad, one just is much more orchestrated).
But why should I be surprised? The pajamafication of the holocaust means most people don't even understand a damn thing about it anyways.
I'm so incredibly done with people being on a high-horse shitting on advances in AI for no other reason than to feel better about themselves.
Like if your issue is things like copyright and training data? Sure, go off, it's a philosophical argument there about rights, economy, etc. Likewise for arguments about ecological impact (it can be made reasonable there, the companies just don't want to).
But if you're just posting bullshit like "Hahaha, the language model can't do math" or "Look at how it was baited into saying something stupid" as proof that it's worthless: go fuck yourself.
Let alone the people who try to relate AI development to "NFT Bros"... NFTs literally don't do shit, AI actually has multiple proven and valid use cases, but if you think it's the same thing that just shows you have your head up your ass and refuse to look at the world around you.
All of that before getting to the fact that they have shown incredible usefulness for disability accommodations, but I guess it doesn't count if you prefer to be ableist and think we don't need or deserve accommodations?
So tired of people in general right now...
> "Hahaha, the language model can't do math"
People don't say that, they say haha the **AI** can't do math.
This problem stems from the LLM owners/developers themselves, by calling their product "AI" -- Artificial *Intelligence* -- instead of just LLM, or "Language generator" or something similar, thus creating the expectation that it actually can think, reason, or compute things instead of just generating plausible-sounding text responses to prompts.
And that choice was apparently deliberate on their part, to hype up the product. IOW, don't advertise or imply with misleading buzzwords that your product can do things it can't and then get upset when people mock it for not being able to do what you wanted them to believe it can do.
@Lea or maybe just people always had a wrong idea of what early AI would look like?
People just expected it to jump out fully formed as a super-genius rather than baby steps of abilities...
Maybe, but only because they were taught to by its creators.
Before "generative AI" (creation of new images and text) came out, AI was associated more with things like detecting possible cancer in medical scans, exoplanets in telescope images, recognizing/describing objects in images, finding the structure of proteins, searching for possible new medicines, etc. (not to mention military applications).
So as I recall it was looked upon as something to help humans solve problems, having great potential in spite of some negative early experiences with things like automatic stock trading.
So IMO these new generative applications should never have been called "AI" because they are not. Even using terms like saying they were "trained" or "learned" from existing art and texts implies intelligent reasoning which is misleading.
Rather, it was the developers who learned how to make their statistical algorithms better able to extract, recombine, and generate similar, plausible-looking (but not true or accurate) output from the mass of data they collected.
IOW, it's those pre-existing AI applications that can at least on some level be said to have been "trained" and to have "learned" to recognize specific kinds of patterns in visual/spatial/textual data and mark it for *humans* to then interpret and judge for accuracy. And certainly not to make up new stuff.
Making up new stuff should be for entertainment purposes only.
@Lea yeah, that's a fundamental misunderstanding of how these came to be... your examples of non-generative AI are all the same basic technology. In crude terms it's an image enhancement/recognition ai run in reverse. (Then LLMs were cutting out the image part)
Their training process is nearly identical in fact.
My point is that it's the generative aspect that is the most fundamental and important difference, and that, along with allowing decision-making (as in the stock market example) is where the harm arises.
I haven't seen anything beneficial from generative models, other than as I said before, entertainment, or perhaps some strictly personal use that is not ever disseminated to others.
Generating anything with it for any serious purpose is harmful and with no positive benefit (unless you consider deep fake propaganda a "benefit").
Using it for research papers, legal briefs, class assignments, fake photos, etc. is harmful for so many reasons I don't have room to state them all here.
@Lea I've seen a whole bunch of practical beneficial uses:
- I use it personally as a coding assistant (great for repetitive code entry where it'll automatically fill in variable names in repetitive blocks of code, or help with debugging)
- I've used it from time to time as a search aid (when used properly with a good system it can drastically cut down on complex searches, turning something that would have taken me anywhere from half an hour to 2 hours into something that takes 5-10 minutes)
- Once or twice I've made use of it as a sort of thesaurus, to figure out terms when I didn't have enough to search off of (with a normal thesaurus I'd need to supply similar words... but I didn't have similar words in these cases)
- As a disability aid for executive dysfunction (see goblin.tools: the "magic to-do list" and "compiler" aren't just gimmicks, they're for people with executive dysfunction who struggle with breaking down, separating, and sorting tasks)
- I've used it personally as an aid for communication. As an autistic person communicating with allistics, especially in a professional environment, it can help edit my message/email to maintain the correct tone as well as make sure I'm getting my point across (again see goblin.tools' "Formalizer")
- I've also seen in the wild Amazon's got a great use now where it summarizes user reviews for a product (and this has been practically helpful, I saw some issues with products before buying them there and it gave me cause to look deeper into the reviews to figure out whether that issue mattered to me)
Yes, there are plenty of harmful uses, but if you don't see positive uses you're not looking at all.
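To illustrate the coding-assistant point above: the kind of repetitive, pattern-following boilerplate these tools are good at filling in can be sketched like this (a hypothetical Python example, not output from any specific assistant):

```python
# Hypothetical sketch of repetitive code an assistant can pattern-complete:
# after the first setter is written, the rest follow the same shape with
# only the field name swapped in.

def make_setter(field):
    """Return a function that sets `field` on a record dict."""
    def setter(record, value):
        record[field] = value
        return record
    return setter

# An assistant typically completes this block for every remaining field
# once it sees the naming pattern in the first line or two.
set_name = make_setter("name")
set_email = make_setter("email")
set_address = make_setter("address")

record = {}
set_name(record, "Giggles")
set_email(record, "giggles@example.com")
```

The names here (`make_setter`, the field list) are invented for illustration; the point is only that the assistant repeats a recognized pattern rather than reasoning anything out.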
There's also the upcoming uses that are still being ironed out but inevitable (as in these aren't just hypotheticals, it's just down to predictable progress):
* Dynamic dialogue in games: tech demos of this have been pretty cool; it's not there to replace writers but to allow NPCs to fluidly talk back to anything rather than giving purely scripted responses.
* Virtual assistants: you may or may not like them, but many people do like them and being able to handle fluid natural language is a huge step compared to before.
I'm not going to pretend that image generators have much use at the moment, but they lead to later technologies that can be useful (like dynamic generation of 3D models and environments, which is good progress towards the goal of "holodecks" in the sense of being able to dynamically call up desired environments and then tweak them to be what you need).
Voice generators right now are mostly just good for nice computer voices, and maybe in the future for splicing dynamic content (such as the player character's name) into games (right now games either avoid saying your name entirely, use a stand-in name, and/or have a handful of pre-recorded names that are really cool if yours happens to be on the list).
I do admit there may be some beneficial uses of generative not-AI that I haven't been able to think of, or might be someday. Meanwhile the internet is being flooded with generated false or meaningless "content" that is in turn being re-ingested as fodder for even more of the same.
Other than coding assistant, most of your examples sound more like analysis (which is fine) than generation of new content (not fine):
- analyzing human-written prose and flagging possible tone or phrasing problems = analytical. Suggesting replacement text is generative but untrustworthy. You have to check that it isn't saying something you didn't mean to say, then trust that it's an improvement.
- summarizing reviews = analyzing: compiling stats in categories of what the reviews say. Not generating reviews or anything new.
- coding assistant: This *is* generating something: garbage. I've played with it and never gotten anything useful. It can only handle coding problems that have already been solved and included in the training data. Anything requiring it to actually solve a problem not already in its store of code fails. There's no logic or reasoning (as you've already pointed out), which would be required to do that. The main problem I found was it would call functions that don't even exist in the language--I assume it finds people's custom functions scraped from somewhere and just uses those function names as if they were built-in functions in the language.
@Lea I think you're quibbling over the term "generative" there; a non-generative version would output just variables rather than raw text. I.e. non-generative analysis is just going to be "x% confident, x% aggressive, x% sympathetic..."
- analyzing prose and flagging tone or phrasing problems = like so many anti-AI people, you just assume the person using it is somehow unable to read and understand the text. And frankly, as I've already said, this is a disability accommodation, so I'm a little bit pissed off at this response. We can understand these things when pointed out, we can understand whether the text it gives is trash; we're not idiots. What it does is consistently give us text that fixes our tonal issues, fixes we can recognize after the fact but can't reasonably produce on our own. So maybe get your head out of your ass when talking about a disability accommodation that someone has first-hand experience with and has said so up front.
- summarizing reviews - again, you're just trying to throw away a point because you don't understand what "generative" or "large language model" means.
- coding assistant - and again, clearly you made one half-assed attempt to make it do more than it's capable of, considered it trash, and then threw the whole thing away. I've used it plenty: I've used it to speed through refactoring a whole project to swap out database engines, and I've used it to speed through building UIs with a bunch of buttons. Does it create good code when I just ask it to write a whole application for me? Hell no. But it sure as hell can see me writing a list of buttons and go, "Oh, I recognize the pattern of the names; I'm going to fill this in 20 more times with all the variable names changed to fit the pattern."
@Lea like I said, you're attacking the definition of generative by ignoring the actual definition.
And I'm done with this conversation since all you're doing is looping back on points I've already called out. You've been an ass, just accept it and move on.