Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture". I think his argument is a strawman, doesn't align with his own actions, and delegitimizes important political actions we need to take in order to build a better cyberphysical world.
EDIT: Discussions under this are fine, but I do not want this to turn into an ad hominem attack on Cory. Be fucking respectful
tante.cc/2026/02/20/acting-eth…
Acting ethically in an imperfect world
Life is complicated. Regardless of what your beliefs or politics or ethics are, the way that we set up our society and economy will often force you to act against them: You might not want to fly somewhere but your employer will not accept another mod…
tante (Smashing Frames)

Cory Doctorow
in reply to tante • • •
R.L. LE
in reply to Cory Doctorow • • •
Cory Doctorow
in reply to R.L. LE • • •
@herrLorenz
> Cory shows his libertarian leanings here...
> Many people criticizing LLMs come from a somewhat leftist (in contrast to Cory’s libertarian) background.
Cory Doctorow
in reply to Cory Doctorow • • •
This falls into the "you are entitled to your own opinions, but not your own facts" territory.
R.L. LE
in reply to Cory Doctorow • • •
CJPaloma aka Aunt Tifa
in reply to Cory Doctorow • • •
@pluralistic @herrLorenz that second example goes well into overreach territory, and I can see why you'd be unhappy with it.
And/but a big part of libertarian appeal is that it does muddy how being "individually free from regulation" can be cast as liberatory, as if individual freedom is all that's needed. "I'm free when there are no regulations" is obviously shallow to lefties, but individual freedom is also a component of why people are lefties; there's real overlap.
Cory Doctorow
in reply to CJPaloma aka Aunt Tifa • • •
@CJPaloma @herrLorenz
There is no virtue in being constrained or regulated per se.
Regulation isn't a good unto itself.
Regulation that is itself good - drawn up for a good purpose, designed to be administrable, and then competently administered - is good.
CJPaloma aka Aunt Tifa
in reply to Cory Doctorow • • •
@pluralistic @herrLorenz Of course! Agreed.
The overlap ends around -when- reasons are "good" enough. Laws about how to treat other people are relatively easy.
But until enough people see rivers on fire, regulations on -doing certain things- aren't imposed, despite many people saying "hey, this isn't good" decades prior.
Not reining in/regulating until after -foreseeable- catastrophes results in all kinds of shit shows (from the MIC, to urban sprawl, to plastics, to tax laws, etc)
Joris Meys
in reply to Cory Doctorow • • •
@tante made. He had the same complaint for starters (your argument was heavily drenched in 'you ppl are purists'), but he also makes the valid argument that technology isn't neutral in itself. Open weights based on intellectual theft and forced labor are still a problem. Until we have a discussion on how the weights come to fruition, LLMs are objectively problematic from an ethical view. That has nothing to do with purism.
Simon Zerafa (Status: 😊)
in reply to tante • • •
That doesn't seem to be the best idea @pluralistic
AI and LLM output is 90% bullshit, and most people have neither the time nor the patience to work out which 10% might actually be useful.
That's completely ignoring the environmental and human impacts of the AI bubble.
Try buying DDR memory, a GPU or an SSD / HDD at the moment.
Cory Doctorow
in reply to Simon Zerafa (Status: 😊) • • •
@simonzerafa
What is the incremental environmental damage created by running an existing LLM locally on your own laptop?
As to "90% bullshit" - as I wrote, the false positive rate for punctuation errors and typos from Ollama/Llama2 is about 50%, which is substantially better than, say, Google Docs' grammar checker.
kel
in reply to Cory Doctorow • • •
@pluralistic
I am astonished that I have to explain this,
but very simply in words even a small child could understand:
using these products *creates further demand*
- surely you know this?
Well, either you know this and are being facetious, or you are a lot stupider than I ever thought possible for someone with your privilege and resources.
I am absolutely floored at this reveal, just wow, "where's Cory and what have you done with him?" 🤷
Massive loss of respect!
@simonzerafa @tante
Shiri Bailem
in reply to kel • •
@kel it sounds like your respect is rooted only in someone agreeing with you. If you respected them you'd maybe take a minute to listen to their arguments and ask yourself more about why they might disagree with you.
Namely, the fact that you don't understand how "using these products creates further demand" doesn't relate to their arguments at all.
@Cory Doctorow @Simon Zerafa (Status: 😊) @tante
Simon Zerafa (Status: 😊)
in reply to Cory Doctorow • • •
@pluralistic
Of course, I am speaking in generalities.
Encouraging the use of LLMs is counterproductive in so many ways, as I highlighted.
Pop a power meter on that LLM-adorned PC and let us all know what the power usage looks like with and without your chosen LLM running on a typical task 🙂
That's power that's generated somewhere, even if it's with renewable energy.
The main issue with LLMs is that they don't encourage critical thinking, in a world which is already suffering from a massive shortage of it.
Cory Doctorow
in reply to Simon Zerafa (Status: 😊) • • •
@simonzerafa
As I wrote (and it seems you haven't read what I wrote, which is weird, because that seems like a good first step if you're going to criticize my conduct), I'm running Ollama on a laptop that doesn't even have a GPU.
Its power consumption is comparable to, say, watching a YouTube video.
I know this because my laptop is running free software that lets me accurately monitor its activity, and because the model is also free software.
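[A minimal sketch of how such a measurement can be made, assuming a Linux laptop with an Intel CPU that exposes RAPL energy counters through sysfs; the counter path varies by machine and may require root to read:]

    import time

    # Package-level energy counter, in microjoules (Intel RAPL via sysfs).
    RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"

    def read_uj() -> int:
        with open(RAPL) as f:
            return int(f.read())

    # Sample the counter before and after a task (say, one punctuation-checking
    # run) to estimate the energy that task consumed; ignores the rare counter
    # wraparound for simplicity.
    e0, t0 = read_uj(), time.monotonic()
    input("Run the task, then press Enter... ")
    joules = (read_uj() - e0) / 1_000_000
    secs = time.monotonic() - t0
    print(f"~{joules:.1f} J over {secs:.0f} s (avg {joules / secs:.1f} W)")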
Cory Doctorow
in reply to Cory Doctorow • • •
Checking for punctuation errors does not discourage critical thinking. It's weird to laud "critical thinking" and also make this claim.
tante
in reply to Cory Doctorow • • •
David Huggins-Daines
in reply to tante • • •
@pluralistic @simonzerafa I agree in principle with Cory, but I really wish that he had clarified that:
1. Ollama is not an LLM, it's a server for various models, of varying degrees of openness.
2. Open weights is not open source; the model is still a black box. We should support projects like OLMo, which are completely open, down to the training data set and checkpoints.
3. It's quite difficult to "seize that technology" without using Someone Else's Computer to do so (a.k.a. clown/cloud)
David Huggins-Daines
in reply to David Huggins-Daines • • •
@pluralistic @simonzerafa But ALSO: using a multi-billion-parameter synthetic text extruding machine to find spelling and syntax errors is a blatant example of "doing everything the least efficient way possible" and that's why we are living on an overheating planet buried under toxic e-waste.
If I think about it harder I could probably come up with a more clever metaphor than killing a mosquito with a flamethrower, but you get the idea.
Cory Doctorow
in reply to David Huggins-Daines • • •
@dhd6 @simonzerafa
No. It's like killing a mosquito with a bug zapper whose history includes thousands of years of metallurgy, hundreds of years of electrical engineering, and decades of plastics manufacture.
There is literally no contemporary manufactured good that doesn't sit atop a vast mountain of extraneous (to that purpose) labor, energy expenditure and capital.
David Huggins-Daines
in reply to Cory Doctorow • • •
@pluralistic @simonzerafa As always, yes and no. A bug zapper is designed to zap bugs, it is a simple mechanism that does that one thing, and does it well. An LLM is designed to read text and generate more text.
That we have decided that the best way to do NLP is to use massively overparameterized word predictors that we have trained using RL to respond to prompts, rather than just, like, doing NLP, is just crazy from an engineering standpoint.
Rube Goldberg is spinning in his grave!
Cory Doctorow
in reply to David Huggins-Daines • • •
@dhd6 @simonzerafa
Remember when Usenet's backbone cabal worried about someone in Congress discovering that the giant, packet-switched research network that had been constructed at enormous public expense was being used for idle chit chat?
The nature of general purpose technologies is that they will be used for lots of purposes.
David Huggins-Daines
in reply to Cory Doctorow • • •
@pluralistic @simonzerafa indeed, I guess the question is whether the scale of the *ahem* waste, fraud and abuse *ahem* of resources that LLMs seem to imply, even in benign use cases like yours, is out of line with historical precedent or not.
Am I an old man yelling at a cloud?
No, it's the children who are wrong!
Cory Doctorow
in reply to David Huggins-Daines • • •
@dhd6 @simonzerafa
Rockets were literally perfected in Nazi slave labor camps.
elle
in reply to Cory Doctorow • • •
@pluralistic @dhd6 @simonzerafa what a shit take dude. rockets being perfected by nazis, project paperclip, and now a neonazi in charge of one of the largest space tech programs on the planet, along with a bullshit generating LLM.
so yeah, maybe this is all fash tech, and maybe taking a stand of "I'm not touching that shit with a thousand-meter pole" is not "neoliberal purity culture". and ollama of all things? the shit pumped out by fucking Meta? are you shitting me?
Cory Doctorow
in reply to elle • • •
@elle @dhd6 @simonzerafa
"You used the wrong open model because I don't like the company that made it" is the actual definition of nonsense purity culture.
elle
in reply to Cory Doctorow • • •
@pluralistic @dhd6 @simonzerafa you wrote a book on how much of a shitbag company corpos like Meta are. now you're saying "oh it's not that bad, look it's marginally better than Google Docs spell checker"?! did someone hack your fucking account?
there are legitimately open models that originate from academic institutions and train on open data with full consent. even those models take tens of thousands of euros to train, well outside the resources available to most open-source enjoyers
Jared White (ResistanceNet ✊)
in reply to Cory Doctorow • • •
@pluralistic @dhd6 @simonzerafa Good grief, these ad hoc rationalizations are absurd and you know it.
FYI, rockets are enormously environmentally destructive (fuel, pollution, noise, etc.). The planet would be better off with as few rockets launching as possible.
Saying an LLM is OK because some completely other "good" technology was invented by evil people is a *non argument*.
Cory Doctorow
in reply to Jared White (ResistanceNet ✊) • • •
@jaredwhite @dhd6 @simonzerafa You're right, that would be a silly thing to say.
Good thing I didn't say it.
Ray McCarthy
in reply to Cory Doctorow • • •
But Google Docs anything is rubbish.
Cory Doctorow
in reply to Ray McCarthy • • •
I see. And do you have moral opinions about whether people should use Google Docs? Do you seek out strangers to tell them that it's dangerous to use Google Docs?
Kid Mania
in reply to Cory Doctorow • • •
@pluralistic @simonzerafa
"What is the incremental environmental damage created by running an existing LLM locally on your own laptop?"
I dunno. But how about a couple of million people?
The person who coined the term 'enshittification' defends LLMs. Just...wow. We truly are fucked.
Let's all do what Cory does!
☠️
Meanwhile:
technologyreview.com/2025/05/2…
#doomed #ClimateChange
Cory Doctorow
in reply to Kid Mania • • •
Which "couple million people" suffer harm when I run a model on my laptop?
Kid Mania
in reply to Cory Doctorow • • •
@pluralistic @simonzerafa
Missed the point, sir.
When one person does it...no big deal.
When a couple of million people do it...well, see the MIT article above.
Kid Mania
in reply to Kid Mania • • •
Subhead quote from the article:
"The emissions from individual AI text, image, and video queries seem small—until you add up what the industry isn’t tracking and consider where it’s heading next."
Cory Doctorow
in reply to Kid Mania • • •
@clintruin @simonzerafa
You are laboring under a misapprehension.
I will reiterate my question, with all caps for emphasis.
Which "couple million people" suffer harm when I run a model ON MY LAPTOP?
Kid Mania
in reply to Cory Doctorow • • •
@pluralistic @simonzerafa
I'll reiterate my response.
When you *alone* do it...no big deal.
When a couple of million do it ON THEIR OWN LAPTOPS...problem.
Cory Doctorow
in reply to Kid Mania • • •
@clintruin @simonzerafa
OK, sorry, I was under the impression that I was having a discussion with someone who understands this issue.
You are completely, empirically, technically wrong.
Checking the punctuation on a document on your laptop uses less electricity than watching a YouTube video.
Kid Mania
in reply to Cory Doctorow • • •
@pluralistic @simonzerafa
Fair enough, Cory. You're gonna do what you want regardless of my accuracy or inaccuracy anyway. And maybe I've misunderstood this. The same way many many will.
But visualize this:
"Hey...I just read Cory Doctrow uses an LLM to check his writing."
"Really?"
"Yeah, it's true."
"Cool, maybe what I've read about ChatGPT is wrong too..."
Cory Doctorow
in reply to Kid Mania • • •
@clintruin @simonzerafa
This is an absurd argument.
"I just read about a thing that is fine, but I wasn't paying close attention, so maybe something bad is good?"
Come.
On.
Kid Mania
in reply to Cory Doctorow • • •
@pluralistic @simonzerafa
Maybe...
Maybe not.
You have a good day.
algernon, deployer of builds, builder of jank, fan of junk, and only junk (allegedly)
in reply to Cory Doctorow • • •
Anyone who's hosting a website and is getting hammered by the bots that seek content to train the models on. Those of us are the ones who continue getting hurt.
Whether you run it locally or not makes little difference. The models were trained, and training very likely involved scraping, and that continues to be a problem to this day. Not because of ethical concerns, but technical ones: a constant 100 req/sec 24/7, with waves of over 2.5k req/sec, may not sound like much in this day and age, but at around 2.5k req/sec (sustained for about a week!), my cheap VPS's two vCPUs are bogged down trying to deal with all the TLS handshakes, let alone serving anything.
That is a cost many seem to forget. It costs bandwidth, CPU, and human effort to keep things online under the crawler DDoS - which often will require cold, hard cash too, to survive.
Ask Codeberg or LWN how they fare under crawler load, and imagine someone who just wants to have their stuff online having to deal with similar abuse.
That is the suffering you enable when using any LLM model, even locally.
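[The usual self-defence against this kind of load is per-client rate limiting in front of the site. A minimal token-bucket sketch of the idea, with the names and limits chosen purely for illustration; real deployments typically do this in the reverse proxy rather than in application code:]

    import time
    from collections import defaultdict

    RATE = 5.0    # tokens refilled per second, per client
    BURST = 20.0  # bucket capacity: the largest burst one client may send

    # ip -> [tokens remaining, time of last refill]
    buckets = defaultdict(lambda: [BURST, time.monotonic()])

    def allow(ip: str) -> bool:
        """Return True if this client may make a request; False means reject (e.g. HTTP 429)."""
        tokens, last = buckets[ip]
        now = time.monotonic()
        tokens = min(BURST, tokens + (now - last) * RATE)  # refill since last call
        if tokens < 1.0:
            buckets[ip] = [tokens, now]
            return False
        buckets[ip] = [tokens - 1.0, now]
        return True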
Kid Mania
in reply to Kid Mania • • •
But hey, you do you, Cory.
I'm nobody... you're Cory Doctorow.
Let's all do what Cory does...
Cory Doctorow
in reply to Kid Mania • • •
@clintruin @simonzerafa
Well, you could "do what Cory does" by familiarizing yourself with the conduct that you are criticizing before engaging in ad hominem.
To be fair, that's not unique to me, but people who fail to rise to that standard are doing themselves and others no good.
twifkak
in reply to Cory Doctorow • • •
Cory Doctorow
in reply to twifkak • • •
@twifkak @simonzerafa
parsing a doc uses as much juice as streaming a YouTube video and less juice than performing a gnarly transform on a hi-rez in the Gimp.
I measured.
Shiri Bailem
in reply to twifkak • •
@twifkak I believe datacenter models likely use more because they're explicitly not efficient. They're running the most cutting-edge equipment constantly, and this equipment is built to be the fastest, not the most efficient.
To add to your point on power consumption vs environmental damage, though: you're absolutely right there and on the right track. The power consumption isn't the big deal with the datacenters; it's the heat generation.
While the power consumption isn't a non-issue, it's trivial in comparison to the heat management issue, which is where we get these conversations of water consumption.
Your individual device, because it's spaced out from other devices running LLMs, can just air-cool without issue.
The datacenter on the other hand, because they're all crammed in a tight space, has to use bigger and costlier and more impactful systems to move the heat. They can't use air cooling and have to use something like water cooling just to get the heat out of the building.
If the chips didn't have safeties and the cooling system shut down, those buildings would catch fire.
@Cory Doctorow @Simon Zerafa (Status: 😊) @tante
Ray McCarthy
in reply to Simon Zerafa (Status: 😊) • • •
At best 40% junk, but unless you are so expert you don't need it, you can't know which is plausible rubbish.
Would you play Russian Roulette every day for hours?
Cory Doctorow
in reply to Ray McCarthy • • •
Again, what does checking the punctuation on a single essay per day have to do with "play[ing] Russian Roulette every day for hours?"
Shiri Bailem
in reply to Cory Doctorow • •
@Cory Doctorow I'd be disappointed if I didn't see myself in the pattern of engaging with people on a post like this who are worlds away from having a fair discussion...
They literally can't see the reality of AI beyond their arguments, they've decided it's inherently evil and wrong and locked in their viewpoint.
So their "russian roulette every day for hours" is because, despite you saying what you use it for, they can't comprehend how it can be used outside of the worst possible use cases.
Same reason they're accusing you of being a libertarian, but that's already the purity culture you were originally calling out.
@Simon Zerafa (Status: 😊) @Ray McCarthy @tante
Fruits
in reply to Shiri Bailem • • •
@shiri @pluralistic
And this is one of the reasons I've struggled with staying on Mastodon/Fedi, and come and go often.
There's this super hardcore fanaticism, not just about LLMs/AI, but other topics as well, and if a person puts one toe on the line, they are eviscerated.
At some point it becomes hard to really engage with people when you have to be careful not to go against the grain. I don't have a thick enough skin to handle people berating me for not thinking exactly like them.
Shiri Bailem
in reply to Fruits • •
@Fruits @Cory Doctorow god, mood and a half right there...
Honestly the worst problem is that it's a self-defeating issue... the type of people who flock first to this platform are the type of people we're having issues with... and the solution is adding more people to dilute them, but they're driving off people in a self-reinforcing cycle...
The fact that many of them will out loud say they don't want regular people to join fedi still leaves my jaw on the floor...
Fruits
in reply to Shiri Bailem • • •
@shiri @pluralistic
Yeah. I took a break for about a year, and came back last week.
I see exactly the same active accounts as I've been seeing since around 2020. Saying exactly the same things.
I don't want troublesome people to come here, but the platform really does need new blood because this is just a bunch of people saying the same thing over and over, year after year.
And after so many years, I guess now they're down to "eating their own", starting with Doctorow.
FediThing
in reply to tante • • •
I really like and admire @pluralistic and have the utmost respect for him, and that's why I'm totally baffled about why he is claiming "fruit of the poisoned tree" arguments are the cause of LLM scepticism.
The objections to LLMs aren't about origins but about what they are doing right now: destroying the planet, stealing labour, giving power over knowledge to LLM owners etc.
The objections are nothing to do with LLMs' origins, they're entirely about LLMs' effects in the here and now.
Cory Doctorow
in reply to FediThing • • •
Which parts of running a model on your own laptop are implicated in "destroying the planet?" How is checking punctuation "stealing labor?" Or, for that matter, "giving power over knowledge to LLM owners?"
Nelson
in reply to Cory Doctorow • • •
I think you can answer these questions yourself.
Suppose you wore a coat made out of mink fur. The minks are already dead, simply wearing the coat won't kill more minks. What does wearing mink fur have to do with cruelty to minks?
Suppose you live in the time of the Luddites. Legislation prohibits trade unions and collective bargaining. Mill owners introduce machines, reducing wages. But you build your own machine. Problem solved? You helping labor or capital?
@FediThing @tante
Cory Doctorow
in reply to Nelson • • •
@skyfaller @FediThing
This is a "fruit of the poisoned tree" argument.
Suppose you use a computer to post to Mastodon, despite the fact that silicon transistors were invented by the eugenicist William Shockley, who spent his Nobel money offering bribes to women of color to be sterilized?
Suppose you sent that Mastodon post on a packet-switched network, despite the fact that this technology was invented by the war criminals at the RAND corporation?
Cory Doctorow
in reply to Cory Doctorow • • •
@skyfaller @FediThing
Also, you're wrong about the Luddites, just as a factual matter. The guilds the Luddites sprang from weren't prohibited by law, they were *protected* by law, and the Luddites' cause wasn't about gaining new protections under statute, but rather, enforcing existing statutory protections.
(Also: the Luddites didn't oppose steam looms or stocking frames; their demands were for fair deployment of these)
Nelson
in reply to Cory Doctorow • • •
@pluralistic Thank you for the fact check. I was paraphrasing that text from the popular Nib comic: thenib.com/im-a-luddite/
If this contains factual inaccuracies I will need to do more research and perhaps stop sharing that comic.
@FediThing @tante
Cory Doctorow
in reply to Nelson • • •
Nelson
in reply to Cory Doctorow • • •
@pluralistic I don't think mink fur or LLMs are comparable to criticizing the origins of the internet or transistors. It's the process that produced mink fur and LLMs that is destructive, not merely that they're made by bad people.
For example, LLM crawlers regularly take down independent websites like Codeberg, DDoSing, threatening the small web. You may say "but my LLM is frozen in time, it's not part of that scraping now", but it would not remain useful without updates.
@FediThing @tante
Cory Doctorow
in reply to Nelson • • •
No. Literally the same LLM that currently finds punctuation errors will continue to do so. I'm not inventing novel forms of punctuation error that I need an updated LLM to discover.
Nelson
in reply to Cory Doctorow • • •
@pluralistic Ok, fair enough, if spell checking is literally the only thing you use LLMs for.
I still think you wouldn't rely on a 1950s dictionary for checking modern language, and language moves faster on the internet, but I'm willing to concede that point.
I still think a deterministic spell checker could have done the job and not put you in this weird position of defending a technology with wide-reaching negative effects. But I guess your post was for just that purpose.
@FediThing @tante
Cory Doctorow
in reply to Nelson • • •
@skyfaller @FediThing
I'm not using it for spell checking.
Did you read the article that is under discussion?
Nelson
in reply to Cory Doctorow • • •
@pluralistic I apologize, I did in fact read the relevant section of your post, and I was using spell-checking as shorthand for all typo checking, because deterministic grammar checkers have also existed for some time, although not as long as spell checkers and perhaps they have not been as reliable. I understand that LLMs can catch some typos that deterministic solutions may not.
I just think we should put more effort into improving deterministic tools instead of giving up.
@FediThing @tante
Cory Doctorow
in reply to Nelson • • •
Shiri Bailem
in reply to Nelson • •
@Nelson Funny thing there... a frozen-in-time LLM doesn't really lose that much functionality. Most good uses of LLMs don't rely on timely knowledge.
For instance @Cory Doctorow 's use case is checking punctuation and grammar. So an LLM only loses functionality there at the rate grammar fundamentally changes... which is glacial.
Also, not all local LLMs are crawler-based. For instance, for training on Wikipedia data to have more recent and accurate knowledge, Wikipedia offers a BitTorrent download of the whole site's contents.
The ones creating problems with crawlers are the ones I'm certain Cory will agree are a problem: the big companies that compete for investors by constantly throwing more and more data at their models in the drive for increasingly small improvements.
@tante @FediThing
Correl Roush
in reply to Nelson • • •
This is precisely it; it's about the process, not their distance from Altman, Amodei, et al. (which the Ollama project and those like it achieve).
The LLM models themselves are, per this analogy, still almost entirely of the mink-corpse variety, and I think it's a stretch to scream "purity!" at everyone giving you the stink eye for the coat you're wearing.
It's not impossible to have and use a model, locally hosted and energy-efficient, that wasn't directly birthed by mass theft and human abuse (or training directly off of models that were). And having models that aren't, that are genuinely open, is great! That's how the wickedness gets purged and the underlying tech gets liberated.
Maybe your coat is indeed synthetic, that much is still unclear, because so far all the arguing seems to be focused on the store you got it from and the monsters that operate the worst outlets.
Cory Doctorow
in reply to Correl Roush • • •
@correl @skyfaller @FediThing
More fruit of the poisoned tree.
"This isn't bad, but it has bad things in its origin. The things I use *also* have bad things in their origin, but that's OK, because those bad things are different because [reasons]."
This is the inevitable, pointless dead-end of purity culture.
Nelson
in reply to Cory Doctorow • • •
@pluralistic This seems like whataboutism. Valid criticisms can come from people who don't behave perfectly, because otherwise no one would be able to criticize anything. Similarly, we can criticize society while participating in it.
The point I'd like to make (that doesn't seem to be landing) is that LLMs aren't just made by bad people, but are also made through harmful processes. Harm dealt mostly during creation can be better than continuing harm, but still harmful.
@correl @FediThing @tante
Nelson
in reply to Nelson • • •
@pluralistic @correl @FediThing In the climate crisis we are often concerned about "embodied emissions", things made with fossil fuels that may not use fossil fuels once they're created. If we don't change our fossil fuel using production systems, those embodied emissions could be enough to kill us.
I'd say that the literal and figurative embodied emissions of even local LLMs are sufficient to make them problematic to use. Individuals avoiding them is insufficient but necessary.
Cory Doctorow
in reply to Nelson • • •
@skyfaller @correl @FediThing
That is completely backwards.
The entire point of measuring embodied emissions is to *make use of things that embody emissions*.
We improve old, energy inefficient buildings *because they represent embodied emissions* rather than building new, more efficient buildings because the *net* emissions of building a new, better building exceed the emissions associated with a remediated, older building.
Nelson
in reply to Cory Doctorow • • •
@pluralistic You're missing my point. Old houses should be used, but if new houses are built using fossil fuels, then we can cook ourselves by building them even if new buildings are fully electrified.
It feels like you're ignoring the context where LLMs are still being created. It's ethically different to use something made by slaves if slavery is not in the past. If you golf on a golf course maintained by prison labor yesterday, it matters that prisoners will clean it again tomorrow.
@correl
Cory Doctorow
in reply to Nelson • • •
@skyfaller @correl
I'm not ignoring that context, it is *entirely irrelevant*, because I am *not* using some prospective, as-yet-to-be-trained LLM to check punctuation on my laptop. I am using an *actual, existing* LLM.
So if your argument is, "If you did something that's not the thing you've done, that would be bad," my response is, "Perhaps that's true, but I have no idea why you would seek out a stranger to discuss that subject."
Cory Doctorow
in reply to Nelson • • •
@skyfaller @correl @FediThing
Yes, that is just more fruit of the poisoned tree.
This thing harmed people in its creation, therefore the thing is bad, as are all things derived from it.
However, the things *I* use don't count, because the bad things in their history are different because [insert incoherent rationalization].
Correl Roush
in reply to Cory Doctorow • • •
While I can understand your argument and almost certain exhaustion at hollow criticism, that response feels very dismissive of the points being made against your application of that argument.
I'm not sure how fruitful an argument can be had with regard to what you may or may not be using, as you really haven't clarified that anyhow besides locally hosted software that could be used to run terrible models. So this whole mess is just an endless back and forth of "You seem to be dodging the nature of the evil you may be accepting" vs "You're over-concerned with purity", and I think that's justifiably leaving a bad taste in everyone's mouth.
Cory Doctorow
in reply to Correl Roush • • •
@correl @skyfaller @FediThing
> as you really haven't clarified that anyhow
I'm sorry, this is entirely wrong.
The fact that you didn't bother to read the source materials associated with this debate in no way obviates their existence.
I set out the specific use-case under discussion in a single paragraph in an open access document. There is no clearer way it could have been stated.
Radio Free Trumpistan
in reply to Cory Doctorow • • •
Cory Doctorow
in reply to Radio Free Trumpistan • • •
@claralistensprechen3rd @skyfaller @FediThing @correl
I don't know what this has to do with someone stating "you haven't clarified" something, when you have.
Also, I have reposted the paragraph in question TWICE this morning.
Correl Roush
in reply to Cory Doctorow • • •
Again, this feels dismissive, and dodges the argument. The clarity I was referring to wasn't the use case you laid out (automated proofreading) or the platform (Ollama), but (as has been discussed at length through this thread of conversation) which models are being employed.
This entire conversation has been centered around how currently available models are evil not due to vague notions of who incepted the technology they're based upon, but due to the active harm employed in their creation.
To return to the discussion I'm attempting to have here, I find your fruit of the poisoned tree argument weak, particularly when you're invoking William Shockley (who most assuredly had no direct hand in the transistors installed in the hardware on my desk nor their component materials) as a counterpoint to the stolen work and egregious cost that are intrinsic to even the toy models out there. It reads to me as employing hyperbole and false equivalence defensively rather than focusing on why what you're comfortable using is, well, comfortable.
Cory Doctorow
in reply to Correl Roush • • •
Scraping work is categorically not "stealing."
Shiri Bailem
in reply to Nelson • •
@Nelson I think you should be able to answer these questions yourself, but clearly are struggling...
On your mink fur argument: the one ethical way to wear something like that is to only purchase used and old. The harm is done regardless of whether you purchase, you don't increase demand because your refusal to purchase new or recent means there's no profit in it. (This argument is also flawed because it's assuming local LLMs are made for profit when no profit is made on them)
And on your Luddite argument: When someone is using a machine to further oppress workers, the issue is not the machine but the person using it. You attack the machine to deprive them of it. But when an individual is using a completely separate instance of the machine, contributing nothing to those who are using the machine to abuse people... attacking them is simply attacking the worker.
@tante @FediThing @Cory Doctorow
Nelson
in reply to Shiri Bailem • • •
Shiri Bailem
in reply to Nelson • •
@Nelson that is a better argument and I'll definitely accept that.
I think for many of us, myself included, the big thing with AI there is the investment bubble. Users aren't making that much difference to the bubble; the people propping up the bubble are the same people creating the problems.
I know I harp on people about anti-AI rage myself, but I specifically harp on people who are overbroad in that rage. So many people dismiss that there are valid use cases for AI in the first place, they demonize people who are using it to improve their lives... people who can be encouraged now to move on to more ethical platforms, and when the bubble bursts will move anyways.
We honestly don't need public pressure to end the biggest abuses of AI, because it's not public interest that's fueling them... it's investors believing AI techbros. Eventually they're going to wise up and realize there's literally zero return on their investment and we're going to have a truly terrifying economic crash.
It's a lot like the dot-com bubble... but drastically worse.
Shiri Bailem
in reply to Shiri Bailem • •
@Nelson Added detail: much of the perceived popularity of AI is propped up and manufactured.
We're all aware how we're being force fed AI tools left and right... and the presence of those tools is much of what the perceived popularity comes from.
Like Google force-feeding AI results in its search, then touting people actively using and engaging with its AI.
There's a great post I saw, that sadly I can't easily find, that highlights the cycle where business leaders tout that they'll integrate AI to make things look good to the shareholders. They then roll out AI, and when people don't use it they start forcing people to use it. They then turn around and report to the shareholders that people are using the AI and they're going to integrate even more AI!
Once the bubble pops, we stop getting force fed AI and it starts scaling back to places where people actually want to use it and it actually works.
FediThing
in reply to Cory Doctorow • • •
@pluralistic
(Hello Mr Doctorow! Just want to make clear I admire you a great deal and this isn't intended as an attack on you!)
Running a local LLM with no connection to outside providers might be a way of avoiding bad stuff, but I am not clear on how this relates to discussing origins of technologies?
It seems like there's ambiguity in your post about whether it applies just to people with homelabs wondering if they should try offline LLMs, or whether you are discussing LLMs as a general technology?
Almost everyone using LLMs will use the online kind, so objections to LLMs are (reasonably IMHO) based on that scenario.
Cory Doctorow
in reply to FediThing • • •
@FediThing
> I am not clear on how this connects to discussing origins of technologies
Because the arguments against running an LLM on your own computer boil down to, "The LLM was made by bad people, or in bad ways."
This is a purity culture standard, a "fruit of the poisoned tree" argument, and while it is often dressed up in objectivity ("I don't use the fruit of the poisoned tree"), it is just special pleading ("the fruits of the poisoned tree that I use don't count, because __").
Cory Doctorow
in reply to Cory Doctorow • • •
@FediThing
> Almost everyone using LLMs will use the online kind, so objections to LLMs are (reasonably IMHO) based on that scenario.
Except that in this specific instance, you are weighing in on an article that claims that it is wrong to run a local LLM for the purposes of checking for punctuation errors.
FediThing
in reply to Cory Doctorow • • •
@pluralistic
Thank you for the responses 🙏
"Because the arguments against running an LLM on your own computer"
...ahhh okay. So was this post aimed more at a very narrow homelab kind of audience?
It's just, as a reader, the article's emphasis on examples of tech origins implies it's trying to defend LLMs in general? This probably is my ignorance as a reader, but it's how it came across to me, and led to bafflement.
Cory Doctorow
in reply to FediThing • • •
@FediThing This is the use-case that is under discussion.
pluralistic.net/2026/02/19/now…
Pluralistic: Six Years of Pluralistic (19 Feb 2026) – Pluralistic: Daily links from Cory Doctorow
pluralistic.net
FediThing
in reply to Cory Doctorow • • •
@pluralistic
Thanks. Can totally see how that makes sense at a technical level for people who run their own offline services.
I think it's the ambiguity that is driving the discourse over this post. People are taking the "refusing to use a technology" section as a defence of LLMs in general?
If the angle was caging LLMs or something like that, it might make it clearer that you aren't endorsing the most common form of LLM?
Anyway, it's your call on this as author, just wanted to feed back on this because your writing matters and I hope feedback is helpful to it.
Cory Doctorow
in reply to FediThing • • •
Shiri Bailem
in reply to FediThing • •
@FediThing I think the problem in discourse is the overwhelming number of people experiencing anti-AI rage.
In the topic of LLMs, the two loudest groups by a wide margin are:
1. People who refuse to see any nuance or detail in the topic, who can not be appeased by anything other than the complete and total end of all machine learning technologies
2. AI tech bros who think they're only moments away from awakening their own personal machine god
I like to think I'm in the same camp as @Cory Doctorow , that there's plenty of valid use for the technology and the problems aren't intrinsic to the technology but purely in how it's abused.
But when those two groups dominate the discussions, it means that people can't even conceive that we might be talking about something slightly different than what they're thinking.
Cory in the beginning explicitly said they were using a local offline LLM to check their punctuation... and all of this hate you see right here erupted. If you read through the other comment threads, people are barely even reading his responses before lumping more hate on him.
And if someone as great with language as Cory can't put it in a way that won't get this response... I think that says a lot.
@tante
FediThing
in reply to Shiri Bailem • • •
@shiri
(Untagged Cory as I'm sure he is getting a lot of replies and I don't want to repeat myself at him.)
I don't think it's the first part that caused problems but the later parts, as they didn't explicitly mention offline LLMs and it was possible to read the later text as referring to all LLMs.
Shiri Bailem
in reply to FediThing • •
@FediThing The link in question is where he talked about it, and he did explicitly say it; though he didn't use the "offline" label specifically, he basically described it as such. (The label itself is not purely self-explanatory, so it wouldn't have helped much.)
Here's the article link: pluralistic.net/2026/02/19/now…
On Friendica the thumbnail of the page is what I've attached here, incidentally the key paragraph in question.
@tante
FediThing
in reply to Shiri Bailem • • •
Yup, that's the start, but then the text goes on to a discussion of very broad technologies and refusal to use them, which is where the ambiguity sort of creeps in. It isn't clear in the later sections if it's referring to LLMs in general, or just the very specific niche of offline LLMs.
I'm not posting this to attack Cory but to give feedback as a reader. I (incorrectly) took him to be talking about LLMs in general in the later section of the post, and it's possible other people are interpreting the later sections in the same way.
prince lucija
in reply to Shiri Bailem • • •
fully agree!
@pluralistic @tante @FediThing
prince lucija
in reply to FediThing • • •
i feel the same way: big tech has taken the notion of AI and LLMs as a cue/excuse to mount a global campaign of public manipulation and massive investment in a speculative project, pumps gazillions of $ into it, and convinces everyone it's inevitable tech to be put in a bag of potato chips. the backlash is then that anything that bears the name of AI or LLM is a poisonous plague, and people are unfollowing anyone who's touched it in any way or talks about it in any other way than "it's fascist tech, i'm putting a filter in my feed!" (while it IS fascist tech because it's in the hands of fascists).
in my view the problem seems to be not what LLMs are (what kind of tech), but how they are used and what they extract from the planet when used by big tech in this monstrous harmful way. of course there's a big blurred line and tech can't be separated from the political, but... AI is not intelligent (Big Tech wants you to believe that), and LLMs are not capable of intelligence and learning (Big Tech wants you to believe that).
so i feel like a big chunk of anger and hate should really be directed at techno oligarchs and only partially and much more critically at actual algorithms in play. it's not LLMs that are harming the planet, but rather the extraction, these companies who are absolute evil and are doing whatever the hell they want, unchecked, unregulated.
or as varoufakis said to tim nguyen: "we don't want to get rid of your tech or company (google). we want to socialize your company in order to use it more productively" and, if i may add, safely and beneficially for everyone, not just a few.
bazkie 👩🏼💻 bitplanes 🎵
in reply to prince lucija • • •
@prinlu @FediThing @pluralistic I agree with most things said in this thread, but on a very practical level, I'm curious what training data was used for the model used by @pluralistic 's typo-checking ollama?
for me, that training data is key here. was it consensually allowed for use in training?
because as I understand, LLMs need vast amounts of training data, and I'm just not sure how you would get access to such data consensually. would love to be enlightened about this :)
Cory Doctorow
in reply to bazkie 👩🏼💻 bitplanes 🎵 • • •
@bazkie @prinlu @FediThing
I do not accept the premise that scraping for training data is unethical (leaving aside questions of overloading others' servers).
This is how every search engine works. It's how computational linguistics works. It's how the Internet Archive works.
Making transient copies of other peoples' work to perform mathematical analysis on them isn't just acceptable, it's an unalloyed good and should be encouraged:
pluralistic.net/2023/09/17/how…
How To Think About Scraping – Pluralistic: Daily links from Cory Doctorow
pluralistic.net
bazkie 👩🏼💻 bitplanes 🎵
in reply to Cory Doctorow • • •
@pluralistic @prinlu @FediThing I think the difference from search engines is how an LLM reproduces the training data...
as a thought experiment: what if I scraped all your blogposts, then started a blog that makes Cory Doctorow-styled blogposts, which would end up more popular than your OG blog since I throw billions in marketing money at it.
would you find that ethical? would you find it acceptable?
further thought experiment: let's say you lose most of your income as a result and have to stop making blogs and start flipping burgers at McDonald's.
your blog would stop existing, and so, my copycat blog would, too - or at least, it would stop bringing novel blogposts.
this kind of effect is real and will very much hinder cultural development, if not grind it to a halt.
that is a problem - this is culturally unsustainable.
Cory Doctorow
in reply to bazkie 👩🏼💻 bitplanes 🎵 • • •
First: checking for punctuation errors and other typos *in my own work* in a model running on *my own laptop* has nothing - not one single, solitary thing - in common with your example.
Nothing.
Literally, nothing.
But second: I literally license my work for commercial republication and it is widely republished in commercial outlets without any payment or notice to me.
bazkie 👩🏼💻 bitplanes 🎵
in reply to Cory Doctorow • • •
but then you consented to that, right? you are in control of that.
also my example IS similar - after all, it's data scraped without consent, used to create another work. the typo-checker changes your blogpost based on my training data, in the same way my copycat blog changes 'my' works based on your training data.
sure, it's on a way different scale - deliberately, to more clearly show the principle - but it's the same thing.
Cory Doctorow
in reply to bazkie 👩🏼💻 bitplanes 🎵 • • •
@bazkie
Should we ban the OED?
There is literally no way to study language itself without acquiring vast corpora of existing language, and no one in the history of scholarship has ever obtained permission to construct such a corpus.
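[What acquiring a corpus to study language looks like in practice: a minimal frequency-analysis sketch of the kind corpus linguistics runs over collected text, with the file name standing in for any assembled corpus:]

    import re
    from collections import Counter

    # Tokenize an assembled corpus and count word frequencies: the raw
    # material behind dictionaries' usage evidence and collocation studies.
    with open("corpus.txt", encoding="utf-8") as f:
        words = re.findall(r"[a-z']+", f.read().lower())

    freq = Counter(words)
    total = len(words)
    for word, count in freq.most_common(10):
        print(f"{word:15s} {count:8d} {count / total:8.4%}")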
bazkie 👩🏼💻 bitplanes 🎵
in reply to Cory Doctorow • • •
@pluralistic I gave it a good thought, and you know what, I'm gonna argue that yes, for me there is a degree of unethical-ness to that lack of permission!
the things that make me not mind that so much are a variety of differences in method and scale;
(*btw just explaining my personal reasons here, not arguing yours)
- every word in the OED was painstakingly researched by human experts to make the most possible sense of it
- coming from a place of passion on the end of the linguists, no doubt
- the ownership of said data isn't "techno-feudal mega-corporations existing under a fascist regime"
- the OED didn't spell the end of human culture (heh) like LLMs very much might.
so yeah. I guess we do agree that, on some level, the OED and an LLM have something in common.
it's the differences in method and scale that make me draw the line somewhere in between them; in a different spot from where you may draw it.
and like @zenkat mentioned elsewhere, it's the whole thing around LLMs that makes me very wary of normalizing anything to do with it, and I concede I wouldn't mind your slightly unethical LLM spellchecker as much, if we didn't live in this horrible context. :)
I guess this has become a bit of a reconciliatory toot. agree to disagree on where we draw the line, to each their own, and all that.
FediThing
in reply to Cory Doctorow • • •
@pluralistic @bazkie @prinlu
This would be my take:
Search engines direct people to the work they index. They reward labour by directing people towards it.
Scraping without consent for training data lets people reproduce the work without crediting or rewarding the people who actually did the labour. That seems like labour theft?
If it is labour theft, then it isn't sustainable and that's part of why LLMs are so questionable as a technology.
Cory Doctorow
in reply to FediThing • • •
@FediThing @bazkie @prinlu
There are tons of private search engines, indices, and analysis projects that don't direct people to other works.
I could scrape the web for a compilation of "websites no one should visit, ever." That's not "labor theft."
FediThing
in reply to Cory Doctorow • • •
@pluralistic @bazkie @prinlu
Indexing works is a totally different thing to creating knock-offs of works, surely?
What Miyazaki said about AI knock-offs surely illustrates the difference?
Cory Doctorow
in reply to FediThing • • •
No one is defending "creating knock offs of works." Why would you raise it here? Who has suggested that this is a good way to use LLMs or a good outcome from scraping?
Cory Doctorow
in reply to Cory Doctorow • • •
The argument was literally, "It's not OK to check the punctuation in *your own work* if the punctuation checker was created by examining other peoples' work, because performing mathematical analysis on other peoples' work is *per se* unethical."
Cory Doctorow
in reply to Cory Doctorow • • •
By this standard the OED is unethical.
bazkie 👩🏼💻 bitplanes 🎵
in reply to Cory Doctorow • • •
Cory Doctorow
in reply to bazkie 👩🏼💻 bitplanes 🎵 • • •
@bazkie @FediThing @prinlu
You've literally just made the case against:
* Dictionaries
* Encyclopedias
* Bibliographies
And also the entire field of computational linguistics.
If that's your position, fine, we have nothing more to say to one another because I think that's a very, very bad position.
bazkie 👩🏼💻 bitplanes 🎵
in reply to Cory Doctorow • • •
I did not make that case, if you'd properly read my [additions] to the statement.
making dictionaries etc isn't automated on mass scales like feeding training data to LLMs is.
it's a very human job that involves a lot of expertise and takes a lot of time.
zenkat
in reply to Cory Doctorow • • •
@pluralistic @bazkie @FediThing @prinlu I think part of the issue here is that GenAI is being pushed so hard and fast *everywhere* that it's hard to be nuanced about what narrow use-cases might be acceptable or not.
We're living under a massive pro-LLM propaganda campaign. They have already set the terms of the debate with a maximalist position. It's no surprise that the backlash is similarly absolute.
Joris Meys
in reply to Cory Doctorow • • •
@pluralistic
No, because dictionaries are about language, which is a shared common; encyclopedias are about knowledge, which is a shared common; and bibliographies are a list of works, not a derivative.
Knowledge, language and a list of works cannot be copyrighted. You can use language, knowledge, words from the dictionary. You can quote an encyclopedia when referring to the source. None of that is even relevant to this discussion.
@bazkie @FediThing @prinlu @tante
Joris Meys
in reply to Cory Doctorow • • •
@pluralistic
The argument was "without the consent of the creators of said works." And you know that.
Don't be just another debate bro. Please.
@FediThing @bazkie @prinlu @tante
FediThing
in reply to Cory Doctorow • • •
@pluralistic @bazkie @prinlu
If LLMs were only used for checking grammar that is one thing.
But by far the most common use of LLMs is labour theft through creating knock-offs, and that's something else.
I think the concern is that training data useful for the first case could be useful for the second case too? Hence the questions about where the training data comes from and where it ends up.
Kind of feels like it needs to be strictly ringfenced if it's to be ethical?
Cory Doctorow
in reply to FediThing • • •
Once again, you are replying to a thread that started when someone wrote that using an LLM to check the punctuation in your own work is ethically impermissible because no one should assemble corpora of other peoples' works for analytical purposes under any circumstances, ever.
bazkie 👩🏼💻 bitplanes 🎵
in reply to Cory Doctorow • • •
zivi
in reply to Cory Doctorow • • •
@pluralistic @FediThing you're attempting to legitimize use of an unethical technology for something you don't actually need a plausible-sounding-wall-of-text generator for
it goes beyond "it's made by bad people in bad ways". it's a ""tool"" that actively causes cognitive decline and psychosis and sucks the soul out of everything it touches. and mind you, promoting and legitimizing it is an act of support for those bad people and their bad ways. your deflection is typical of someone with no regard for ethics
“I installed Ollama” instantly gives a person away as a techbro
Cory Doctorow
in reply to zivi • • •
@zaire @FediThing
I'm not a liberal, I'm a leftist, so perhaps this is why I disagree with you.
The argument that "something is unethical because someone else used it in an unethical way" is so incoherent that it doesn't even rise to the level of debatability.
Mark Saltveit
in reply to Cory Doctorow • • •
What's the difference between your argument here and "Slavery is OK because I didn't kidnap the slaves; I just inherited them from my dad."??
Cory Doctorow
in reply to Mark Saltveit • • •
@taoish @FediThing
Because there are no slaves in this instance. Because no one is being harmed or asked to do any work, or being deprived of anything, or adversely affected in *any articulable way*.
But yeah, in every other regard, this is exactly like enslaving people.
Sure.
Mark Saltveit
in reply to Cory Doctorow • • •
@pluralistic @FediThing
Unless you consider stolen intellectual property (and ongoing copyright violations) a harm, a deprivation, &c.
But your general analogy against "fruit of the poison tree" morality would seem to also apply in the case of slavery -- in my hypothetical, the person didn't enslave anyone. They just inherited a slave from someone who did. That is indeed "fruit of a poisoned tree", even if they just continued an existing enslavement.
We have a real world recent example -- the cell lines stolen from Henrietta Lacks. Do you dismiss any moral concerns about using her cell line without consent as a neo-liberal moral purity trap?
Cory Doctorow
in reply to Mark Saltveit • • •
Scraping and training are not copyright infringements: theguardian.com/us-news/ng-int…
AI companies will fail. We can salvage something from the wreckage
Cory Doctorow (The Guardian)
Lupino
in reply to Cory Doctorow • • •
Cory Doctorow
in reply to Lupino • • •
This is a purity culture argument about the "fruit of the poisoned tree." The silicon in your laptop was invented by a eugenicist. The network your packets transit was invented by war criminals. The satellite the signal travels on was launched on a rocket descended from Nazi designs that were built by death-camp slaves.
Cory Doctorow
in reply to Cory Doctorow • • •
To be clear, I completely reject this argument as a form of special pleading. Everyone has a reason why *their* fruit of the poisoned tree is OK, but other peoples' fruit of the poisoned tree is immoral.
Lupino
in reply to Cory Doctorow • • •
@pluralistic i guess this misses the point: the particular chip in my laptop wasn't made by war criminals (i hope...), but the model you do use was trained with vast amounts of energy and water consumption. I'm not sure this is completely comparable, tbh.
@FediThing @tante
Lupino
in reply to Lupino • • •
Cory Doctorow
in reply to Lupino • • •
Llama 2 was not built to check spelling and grammar. That's "not even wrong."
Cory Doctorow
in reply to Lupino • • •
No, this is just more "fruit of the poisoned tree" and your argument that your fruit of the poisoned tree doesn't count is the normal special pleading that this argument always decays into.
Lupino
in reply to Cory Doctorow • • •
Cory Doctorow
in reply to Lupino • • •
I never denied the existence of "use-cases that...one can reject in its entirety."
Colman Reilly
in reply to FediThing • • •
Cory Doctorow
in reply to Colman Reilly • • •
Shiri Bailem
in reply to Cory Doctorow • •
Ursa
in reply to Cory Doctorow • • •
@pluralistic @Colman @FediThing
This is...disappointing. To be fair, I'm disappointed in almost everyone in this thread for engaging in schoolyard shit throwing, but you're much higher in status and your shit sticks. Have a conversation. Figure out where these views can comingle. Find common understanding or you risk using your high status to fracture an already unstable alliance of people who want technology to operate safely and for the benefit of our shared humanity.
Do better.
komali_2
in reply to Cory Doctorow • • •
Ghostrunner
in reply to Cory Doctorow • • •
Martijn Vos
in reply to Ghostrunner • • •
@Ghostrunner @Cory Doctorow @Colman Reilly @FediThing @tante
You had a higher opinion of him, but not of yourself? And yet you wonder why he's popular.
I'm not a fan of all the shit throwing in this thread, but if you participate, you're going to get some on you.
Ian Betteridge
in reply to FediThing • • •
Cory Doctorow
in reply to Ian Betteridge • • •
Performing mathematical analysis on large corpora of published work is not "stealing."
Hanno Rein
in reply to Cory Doctorow • • •Shiri Bailem
in reply to Hanno Rein • •@Hanno Rein It may seem totally different to you, but legally there's little difference between you being inspired after listening to a bunch of music and the LLM training off of it.
If you want to get into a deeper ethics conversation than the legal text, you're wading into some deep mud that goes back to the advent of copyright. (People forget that there were ethical debates about copyright, and that the settled answer was that copyright is a compromise, not a fundamental right.)
@Cory Doctorow @tante @FediThing @Ian Betteridge
Ian Betteridge likes this.
Hanno Rein
in reply to Shiri Bailem • • •@shiri
Sorry, I wasn't very clear. The comment was not about the LLM training data including a song, but about its ability to reproduce the song afterwards. It becomes a Beatles impersonator.
LLMs can (mostly) reproduce what they got as training data. So I don't buy the mathematical analysis argument.
@pluralistic @tante @FediThing @ianbetteridge
Shiri Bailem
in reply to Hanno Rein • •@Hanno Rein
it varies and is a violation when they do... but here's the catch: that's the worst argument to challenge them on.
The big ecologically devastating companies behind all the worst of things? They benefit enormously if tackled from that front.
Because that argument doesn't make LLMs illegal or affect their training; it only makes it illegal to share the models required to run your own. The copyright violation is either in the output (which they can filter) or in the model files, which means they'd be the only ones you could go to for AI.
That argument could even solve the AI bubble... again, in the worst way, because their abuses would be allowed to continue (the bubble bursting is what will shut down most, if not all, of these datacenters).
@Cory Doctorow @tante @FediThing @Ian Betteridge
Ian Betteridge
in reply to Cory Doctorow • • •Cory Doctorow
in reply to Ian Betteridge • • •David
in reply to Cory Doctorow • • •@pluralistic @ianbetteridge @FediThing
It's still profit-loss damage, curable by income transfer, if the illegally acquired data was used to create that profit. Dataset prominence should determine the percentage of profits owed, where prominence is data size but also inference causality. The primary literature should not be able to be diluted with free intellectual property.
I don't know if any of this is actual case law, and I'm not a lawyer.
Cory Doctorow
in reply to David • • •You're talking about ways of using models, not the creation of models. It's possible to make a model that does illegal things. But training a model is not illegal.
James Gleick
in reply to Cory Doctorow • • •@pluralistic @ianbetteridge @FediThing “Mathematical analysis” is doing a lot of work here. It could mean gathering meaningless statistics. Or it could mean capturing the qualities (deviations from the average) that make a particular work of art (or author) special, creative, surprising—for use in simulacra.
I think that's harmful, to the culture as a whole, if not to the artworks and artists getting regurgitated.
Cory Doctorow
in reply to James Gleick • • •@gleick @ianbetteridge @FediThing
Let's stipulate to that (I don't agree, as it happens, but that's OK). It's still not a copyright infringement to enumerate and analyze the elements of a copyrighted work.
For the record, I think AI art is bad and neither consume nor make it.
James Gleick
in reply to Cory Doctorow • • •@pluralistic @ianbetteridge @FediThing I'm not claiming that's copyright infringement. Even if one respects the general framework of copyright, which I know you don’t, it seems hopeless to apply it to this AI mess.
But there is a kind of theft here. Not that it's actionable or measurable. But it’s nontrivial. It's related to questions of impersonation. It's an assault on individuality. Whatever your reasons for thinking AI art is bad (I have some sense), it's related to that, too.
James Gleick
in reply to James Gleick • • •Dave Rahardja
in reply to James Gleick • • •@gleick @pluralistic @ianbetteridge @FediThing I think the sense of “theft” that creators feel is directly caused by the fact that the AI industry (as it stands today) is a Ponzi scheme which is fundamentally built on remixing creators’ works and devaluing human labor. I have a feeling that most creators will not feel the same kind of outrage if an educational institution created the same technology for academic use, e.g. to generate insights into online culture and psychology.
In short, the GRIFT (i.e. the particular application of the technology) is the source of the feeling of theft, not the technology itself. I think the tech itself has value when used ethically.
FWIW I agree with Cory here that copyright is the *wrong* framework to use for criticizing AI, because for every case where copyright helps the individual creator, there are hundreds of cases where it helps incumbent megacorporations more.
#ai
humancode.us/2024/05/15/copyri…
Copyright will not save us from AI
humancode.us
Martijn Vos
in reply to Dave Rahardja • • •@Dave Rahardja @Cory Doctorow @tante @FediThing @James Gleick @Ian Betteridge
I think there are a couple of aspects to the "theft":
* the theft of material: the models are trained on copyrighted material
* the theft of jobs: AI is being used to replace artists/writers/coders; it's the same thing that upset the Luddites
* the theft of style: not only does AI "learn" from the works of others, it can emulate them. On demand. Some artists have very distinctive, personal styles that are suddenly not their own anymore.
Alaric Snell-Pym
in reply to Cory Doctorow • • •Bruno Nicoletti
in reply to Cory Doctorow • • •Cory Doctorow
in reply to Bruno Nicoletti • • •@bjn @ianbetteridge @FediThing
Once again, you're talking about *using* a model, not training a model.
Also "IP theft" isn't a thing. Perhaps you mean copyright infringement?
Bruno Nicoletti
in reply to Cory Doctorow • • •Cory Doctorow
in reply to Bruno Nicoletti • • •@bjn @ianbetteridge @FediThing it is a bedrock of copyright law that devices 'capable of sustaining a substantial non-infringing use' are lawful. Decided in 1984 (SCOTUS/Betamax) and repeatedly upheld.
It is categorically untrue that merely because a model's output can infringe copyright that the model is therefore illegal.
There's not much that's truly settled in American limitations and exceptions, but this is.
Cory Doctorow
in reply to Cory Doctorow • • •Cory Doctorow
in reply to Cory Doctorow • • •Bruno Nicoletti
in reply to Cory Doctorow • • •Cory Doctorow
in reply to Bruno Nicoletti • • •Bruno Nicoletti
in reply to Cory Doctorow • • •Mastodon Migration
in reply to tante • • •Hmmmm... How about this perspective?
An LLM is just a programming technique. The ethics of using LLMs depend on the type of use and on the source of the data the model was trained on.
Using LLMs to search for dark matter in telescopic survey data, or to identify drug efficacy using anonymized public health records, is simply using the latest technology for a good purpose. Cory's use seems like this.
LLMs trained on stolen data, creating derivative work? That's just theft.
Shiri Bailem likes this.
Shiri Bailem
in reply to Mastodon Migration • •@Mastodon Migration tagging @Cory Doctorow because this is a good line of discussion and he might need the breath of fresh air you're bringing.
My own two cents: you're missing one of the big complaints in the form of "how they were trained", which is the environmental impact angle. Not that it isn't addressed by Cory's use case, just a missing point in the conversation that's helpful to include.
The "stolen data" rabbit hole is sadly a neverending one that digs into deep issues that predate LLMs. Like the ethics of copyright (which is an actual discussion, just so old that it's forgotten in a time when copyright is taken for granted). Using it to create "art" and especially using it to replace artist jobs is however a much much more clear argument.
Nitpick: LLMs can't be used for checking drug efficacy or surveying telescopic data; I think in this line you're confusing LLMs with the broader technology they're built on, which is machine learning (the sketch below illustrates the distinction).
@tante
like this
Mastodon Migration, Cregg, Aleksandr Koltsoff and Hobart like this.
Matt Rife reshared this.
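For concreteness, here is a minimal sketch of the LLM-vs-machine-learning distinction being drawn above: "machine learning" covers ordinary statistical models, such as a classifier fitted to tabular features (e.g. anonymized health records), while an LLM is one specific ML architecture for predicting text tokens. The data, feature names, and labels below are synthetic and purely illustrative.

```python
# Minimal sketch: "machine learning" in the sense meant above includes
# simple statistical models like this classifier over tabular features.
# An LLM is a different, text-token-predicting ML architecture.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                   # e.g. dosage, age, biomarkers
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic "responded" label

model = LogisticRegression().fit(X, y)          # fit an efficacy classifier
print(model.predict_proba(X[:3]))               # per-patient response probabilities
```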
Mastodon Migration
in reply to Shiri Bailem • • •@shiri @pluralistic
Thanks for these corrections. Completely agree with everything, and thanks for tagging Cory.
One of the really unfortunate things the Silicon Valley scammers have achieved is to co-opt new technologies for their despicable pump-and-dump schemes and apply their disingenuous hype factory, which ends up tarring all uses with the same brush.
Shiri Bailem likes this.
David Fleetwood - RG Admin
in reply to Mastodon Migration • • •@mastodonmigration @shiri @pluralistic The only ethical use of a LLM would be one where the training dataset was ethically acquired, the power was minimized to the level of other methods of providing the same benefits, and the 'benefits' were actually measureable and accurate.
None of those are true today, and so far as I know there is little to no path to them.
zivi likes this.
Mastodon Migration
in reply to David Fleetwood - RG Admin • • •Seems like Cory's local punctuation and grammer checker is such an example, no?
Shiri Bailem
in reply to Mastodon Migration • •@Mastodon Migration
it's the "copyright" issue, the outlook that unless everyone who posted anything that was used receives a check for a hefty sum then it's unethical.
Copyright is in quotes because it's not really a violation of copyright (the LLMs are not producing whole copies of copywritten materials without basically being forced) nor is it a violation of the intent of copyright (people are confused, copyright was never intended to give artists total control, it's just to ensure new art continues to be created).
@Cory Doctorow @David Fleetwood - RG Admin @tante
like this
Mastodon Migration and David Fleetwood - RG Admin like this.
David Fleetwood - RG Admin
in reply to Shiri Bailem • • •Also, it's incredibly unclear to me how an LLM is a good use case for punctuation and grammar checking, something regular document editors have done incredibly well since the late '90s or so. Like, that's your use case? Not promoting Microsoft here, but Word has been fantastic at that since at least 2003.
Seems weird to use that as the case for an energy-sucking plagiarism machine.
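For readers wondering what the local punctuation-and-grammar checker under discussion might look like in practice, here is a minimal sketch, assuming the llama-cpp-python bindings and a locally downloaded open-weights model file; the model path, prompt, and function name are illustrative assumptions, not Cory's actual setup.

```python
# Minimal sketch: local grammar/punctuation checking with an open-weights
# model via llama-cpp-python. Nothing leaves the machine. The model path
# and prompt are illustrative assumptions.
from llama_cpp import Llama

# Load a quantized model file from local disk (path is hypothetical).
llm = Llama(model_path="./models/example-7b-chat.Q4_K_M.gguf",
            n_ctx=2048, verbose=False)

def check_grammar(text: str) -> str:
    """Ask the local model to flag punctuation/grammar issues in `text`."""
    response = llm.create_chat_completion(
        messages=[
            {"role": "system",
             "content": ("You are a copyeditor. List any punctuation or "
                         "grammar errors in the user's text, then show a "
                         "corrected version. Do not change the meaning.")},
            {"role": "user", "content": text},
        ],
        temperature=0.0,  # keep the edit as deterministic as possible
    )
    return response["choices"][0]["message"]["content"]

print(check_grammar("Its unclear to me how a LLM is a good use case for this,."))
```

Whether that beats a conventional spellchecker is exactly the dispute above; the point of the sketch is only that such a setup runs entirely offline.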
Radio Free Trumpistan
in reply to tante • • •komali_2
in reply to tante • • •questions for the leftists and liberals from a confused anarchist:
1. Do you think you can put the cat back in the bag with LLMs? How?
2. For those that believe that LLMs were trained on stolen data, what does it mean for data to be private, scarce property, that can be "stolen?"
3. What about models that just steal from the big boys, like the PRC ones? Theft from capitalists, surely ethical?
4. Will you not using any LLMs cause Sam Altman and friends to lose control of your country?
Shiri Bailem likes this.
Radio Free Trumpistan
in reply to komali_2 • • •Shiri Bailem
in reply to komali_2 • •@komali_2 answering as a leftist AI-moderate who just understands the arguments better than most:
@tante