
"Embrace AI or be left behind" is a condescending and heartless ultimatum. A false dichotomy that reeks of debunked social darwinist horse-fuckery.

Tech should adapt to people's needs.

Not the reverse.

in reply to Joan Westenberg

Quite happy to be left behind thanks.
I'd rather be on the rescue team than the plane crash. There's a slight chance it'll crash into my house but the odds are in my favour.
in reply to Joan Westenberg

I'd love nothing more than to be left behind while the zealots crash head first into a brick wall.
in reply to Joan Westenberg

“embrace or be left behind” — the rallying cry of hucksters, scam artists, and other such trustworthy people.
in reply to Joan Westenberg

It's the same with adopting the cloud. If you prefer using on-prem or a VPS you are not "getting" it. Everyone should run everything at either GCP, AWS or Azure.

Everything else is stupid, and bad.

in reply to Joan Westenberg

I don't think early adopters will have that much of an advantage anyway. If it's really taking over everything, the second iterations will outpace the first ones
in reply to mmby

@mmby @Joan Westenberg This right here.

I've got a generally positive outlook on AI and do think it's going to become intrinsic going forward... but I don't think there's any real advantage to early adoption in the long-term, just short term.

Hell, the majority of company-level adoption I see is a trainwreck, because everyone is in such a mad dash to "not be left behind" that they're slapping it in all sorts of places it shouldn't be (e.g. companies trying to make AI customer support bots make me want to get my popcorn...)

in reply to Shiri Bailem

@shiri @mmby I haven’t seen any reason to be optimistic about #ai. Under #capitalism all I see are dead arts, replaced jobs, and the loss of humanity. I think it’s naive to believe otherwise.

That’s not even getting into its impact on #climatechange

in reply to Roundcat

@Roundcat @mmby @Joan Westenberg ... Under capitalism all I see is the complete collapse of society and the end of all life on earth...

That doesn't make salads bad.

Attacking AI because capitalism does awful things with it is basically the same as attacking a salad... and equally ridiculous.

On the climate change front, I absolutely believe they're being wildly irresponsible with processes there and it can be done way more efficiently... they just don't care.

If you're going to attack something because capitalism misuses it... just focus on attacking capitalism itself. Otherwise all you're doing is fighting on behalf of capitalism (since its biggest tool in keeping us suppressed is creating nonsense battles for us to fight, e.g. fighting people over recycling)

in reply to Shiri Bailem

@shiri @mmby This is the same argument I hear from gun apologists.

The thing is, AI is a tool of capitalism as much as guns are a tool of war. Sure, they can exist outside those roles, but the reality is they make both war and capitalism deadlier and more costly.

in reply to Roundcat

@Roundcat @shiri @mmby
"I haven’t seen any reason to be optimistic about #ai."

How about the possibility that it could make much higher quality legal, medical, etc. advice available to people who cannot currently afford it?

That it gives people new artistic tools?

in reply to AKingsbury

@AlexanderKingsbury @Roundcat @shiri I feel currently AI is about power - about who controls the tool

if a megacorp is the gatekeeper, how much can you trust that the deal about access to the tool doesn't change all the time?

if models that run on a smartphone are available but they cannot be trained yet by regular people, are the tools really open?

in time we may have truly open, 'vegan' models with documented provenance of training data/methods, but that time isn't now

in reply to mmby

@mmby @Roundcat @shiri
I mean...lots of people are cranking out AIs. You certainly don't have to use them, and you absolutely don't have to use only ones that are controlled by whatever "megacorp" you're worried about. Heck, you can make your own, if you feel like it.
in reply to mmby

@mmby @AKingsbury @Roundcat @Joan Westenberg ftr, we do have models that run locally on personal devices, so long as your device is higher end.

I've got a 7 year old tiny workstation device with no dedicated graphics... and it can run some of the smallest models, albeit very very slowly (as in ask a query then go get a cup of coffee).

And honestly, I think training is possible locally as well, just more resource-intensive.

This is all presuming you're not chomping at the bit for the latest cutting-edge models... and frankly that's a thing I think needs to be reined in... not always going for the biggest "best" model first thing. 99% of my AI queries go to older models; I only pull out "current" models for particularly vexing questions.

in reply to AKingsbury

@AlexanderKingsbury @shiri @mmby "Could" is not a substitute for "Will." NFTs could have revolutionized a lot of things, but the most common use case for them became scamming people and laundering money. Given the implementations I see now, I have no reason to be even slightly optimistic about what we call AI.

Also, no artist has ever asked for this, and most have had their livelihoods jeopardized by it. But sure, let's keep pretending we're doing them a favor.

in reply to Roundcat

@Roundcat @shiri @mmby
I never said anything about doing artists a favor. But there are still plenty of other people out there who want to make art, who simply don't have the bandwidth for any number of reasons. I like that there are now machines that they can describe their vision to and see it take shape, even if they cannot afford to hire a human artist.
in reply to AKingsbury

@AlexanderKingsbury @Roundcat @shiri @mmby There are no signs that either of these is even remotely possible with the current approaches to AI.
in reply to Michael T. Richter

@qqmrichter @Roundcat @shiri @mmby
It's not only "remotely possible", it's happening now. I know plenty of people who enjoy using AI for various artistic purposes.
in reply to AKingsbury

@AlexanderKingsbury @Roundcat @shiri @mmby No. You know people using AI to pretend to do art, but in reality are just cosplaying it.

You may choose to disagree with that characterization. You would be wrong.

in reply to Michael T. Richter

@qqmrichter @Roundcat @shiri @mmby
Ah, I forgot; you're the objective source of all definitions of what is and is not "art". Got it.
in reply to Roundcat

@Roundcat @qqmrichter @shiri @mmby
Okay. I never claimed art theft was art. Of course, I don't know a single artist who has ever claimed to me, with a straight face, to never have derived inspiration or material from anywhere else. It is nice to know who I can go to when I'm wondering if any specific thing is art or not, though. For years, I've wondered what the definition for art is, and now I've found the one true oracle that can answer that question for me.
in reply to Joan Westenberg

Rephrased: « Accept being cooked by the climate change caused by AI, or do something about it! » I can't get how AI dudebros reject reality.
in reply to canleaf08 ⌘ ✅

@canleaf08 ⌘ ✅ @Joan Westenberg I can't get how people attack the base technology instead of the actual cause of the problems... i.e. how leftist dudebros reject reality
in reply to Joan Westenberg

May be, but it's also an imperative of capitalist competition, so it's not going to stop until capitalist competition stops.

in reply to Joan Westenberg

or is it just good advice that in the depressingly capitalist world we live in, the quickest and cheapest way of achieving something usually wins out? We’ve seen people and companies left behind by technology thousands of times over, it seems either arrogant or naive to assume it won’t happen again.

You can argue over whether something is good or bad all day long, but that doesn’t mean said thing isn’t going to happen regardless.

in reply to Joan Westenberg

I have been left behind by right-wing assaults since the '60s. I am still thriving.
in reply to Joan Westenberg

Same thing for politics.

Politics should represent the people's needs.

Not just the few and corporations.

in reply to Joan Westenberg

Horse Fuckery!!! Cracked me up!!! True and funny statement, thank you for making my Sunday morning ❤️❤️
in reply to Joan Westenberg

When printing was invented, scribes ate dirt.

When cars were invented, horsemen ate dirt.

When guns were invented, archers ate dirt.

When robots were invented, industry workers ate dirt.

When computers were invented, paper workers ate dirt.

Now, here we are with AI.

Might be a good idea to adapt to new technologies.

in reply to Joan Westenberg

I am sure the tech people would like to do that. The tech capitalists, though, don't care what happens to people.
in reply to Joan Westenberg

My kids say "I read it on the internet so it must be true". How about "An LLM read it on the internet so stochastically it's true"?
The 'AI' hype will die when either people get tired of the literally average mediocrity of 'AI' output, or when LLMs' ingestion of their own output makes 'AIs' simply output the phrase "Meh meh meh meh" ad infinitum.
in reply to Joan Westenberg

It will be essential that we come to terms with them if they are sentient. There are a lot of humanity's habits we don't want them knowing. We are a questionable influence.
in reply to Joan Westenberg

AI initiatives are hyped up to get investors to buy into a bubble, no different than the dot com bubble.
https://www.visualcapitalist.com/sp/3-reasons-why-ai-enthusiasm-differs-from-the-dot-com-bubble/

However, the hysteria & hype are also disguising who is really funding these initiatives: anti-democracy billionaires.
https://www.oracle.com/jo/news/announcement/oracle-will-train-saudi-nationals-in-artificial-intelligence-and-other-latest-digital-technologies-2023-12-14/
https://www.weforum.org/agenda/2024/05/these-5-countries-are-leading-the-global-ai-race-heres-how-theyre-doing-it/

Anti-democracy billionaires have these goals:
1. Stop the phase out of fossil fuel & keep frying the planet
2. Evade taxation & regulation
3. End democracy & hijack ...

1/2

in reply to Joan Westenberg

You’re right, it is a false dichotomy. There’s a middle ground: appreciate the recent advancements and understand their current limitations.

However, this requires an understanding of nuance, which our society seems to have lost.

in reply to Joan Westenberg

I’m still waiting for the answer to all of the compute power and the carbon emissions associated with it. None of these executives or PR folks seem to be willing to address it.
in reply to Joan Westenberg

@jbzfn But we actually *want* to be left behind.

Maybe even go further back a bit to make sure we don’t see even the start of this shit.

@jbz
in reply to Joan Westenberg

A couple of years ago they were saying the same about Web3.
in reply to Joan Westenberg

For me, AI is in the same bucket as crypto. Humans have yet to figure out their own intelligence. GIGO. Am I wrong?
in reply to Joan Westenberg

Exactly. Technology is there to serve mankind. Not to provide riches for the few and to enslave the rest. Unacceptable…
in reply to Joan Westenberg

The CEO at Old Job constantly repeated this statement at every all-hands meeting and that's why I don't work for them anymore
in reply to Joan Westenberg

Content warning: "AI"

in reply to Joan Westenberg

Definitely what the investors say to big tech when they want to shove it into a product for no understandable or useful reason.
in reply to Joan Westenberg

My reply to such statements would always be: "That is gonna happen the day when YOU stop embracing ableism" ..
in reply to Joan Westenberg

In addition to being condescending, it's also worth ignoring.
After all, it's the same people who were telling everybody "have fun staying poor" when said people were uninterested in the crypto stuff these conmen were peddling just a few years ago.
Now they have pivoted to AI.
in reply to Andreas

@Andreas @Joan Westenberg as someone who learned all about crypto, was really interested in it, but never saw a meaningful future in it (there's a future... it's just more of the same; I'm deeply curious whether it'll ever die, given the design of it): yes and no...

LLMs in particular will have a monumental impact going forward, but right now people are floundering about... those crypto-bros are the worst offenders for putting it in places it should never be (e.g. customer support, trying to strong-arm it into coding entire projects from scratch, etc.)...

All the best use cases right now are subtle ones and don't involve "embracing", more "not rejecting" and properly learning its limits (as opposed to the exaggerated failings shared around so often by people who hate AI).

Examples of valid use cases:
* Amazon has implemented it to add a summary of customer reviews to product listings (this isn't a "good product you should buy" but "people like X about it and complain about Y")
* Augmented search. This has a bad reputation because it's over-applied: 95% of searches don't benefit from AI, and many of those are negatively impacted. But it can speed things up when you're doing a deep dive for a tiny bit of information you'll be able to verify shortly after, e.g. "What the heck does this error code mean?" or "What was the command to do X?". If it's wrong you'll know quickly, and you'll usually have some clues for finding the right information faster; and if there's any risk, checking whether it's right is faster than finding the information on your own in the first place.
* Virtual assistants. The biggest problem with them has been rigid command structures; nobody wants to remember the acceptable syntax for a spoken command that'll likely get distorted by the STT algorithm anyway. LLMs allow natural commands to be given significantly more easily (but we shouldn't trust them with anything resembling a sensitive command... i.e. no telling them to make money transfers or buy products, but great for turning lights on, checking notifications, etc.)

Examples of failings people need to learn:
* Hallucinations: it can always lie, and the lie is often going to sound convincing. This is because LLMs don't know how to say "I don't know"... if they don't know, they'll just invent something that looks as plausible as possible, so never trust them to be truly accurate.
* Artificial Intelligence also necessitates Artificial Stupidity: intelligence is a fuzzy and unclear process, not a simple database search over clean binary data. So any artificial intelligence will make wrong assumptions or bad interpretations.
* The "latest and greatest" models are really wasteful, environmentally and financially, to use constantly... using older models for simpler tasks is honestly preferable (unless the newer model is less resource-intensive than the older one). Moving to locally run, lower-grade models is honestly the ideal goal for the sake of privacy, security, and environmental effects.

in reply to Shiri Bailem

@shiri This is a well thought out critique but I think you are overanalyzing use cases without considering energy expenditure.

It's the same with blockchain: Are there any good use cases where blockchain technology is useful? Yes.
Are there any good use cases where LLM models or generative models are useful? Without a doubt.

Does the increased energy expenditure for the fancy new tech relate to a similar improvement in capability of the new tech?
Hell nah! For neither of them.

Yes, LLMs can be cool for some things. But they're not sooooo much better that it justifies the multi-magnitude increase in energy needs.

So in short, the tech sucks. And I say that as someone who is benefiting from some LLM augmentation for a lot of use-cases.

in reply to Andreas

@Andreas @Joan Westenberg that shows you have more faith in crypto than I have... I have yet to see anything that resembles a real viable use case...

As far as the energy expenditure of LLMs, it's genuinely largely overstated through typical biases:
* its impact is considered in isolation rather than compared to other sources of environmental impact (so many things sound huge on their own but much, much smaller in context, e.g. what's the cost compared to Google searches? compared to Facebook?)
* most of that cost comes from overuse, that is, people using the largest/most expensive models unnecessarily (like I mentioned before) and using them wastefully (e.g. inserting LLM queries into every search; the Amazon use case is a good example because it generates the summary once and reuses the result rather than redoing it for every single product... at least if they have any sense)

On top of that, its impact on productivity and daily lives is still being figured out, and still growing. We're coming up with new ways to reduce the resource consumption, as well as smarter and smarter models. It's kind of like judging the future of the computing industry by the computers of the '40s and '50s.

in reply to Joan Westenberg

Also, one can detest AI as it's manifested now, which is merely code for inequality and needless suffering, a way to transfer more to those who already have too much.

Behind it is where I'd rather be, into the future.