Welp, today I just got my first email from an AI company offering to train a ChatGPT-based language model on my books to help me with search and writing.

I told them if they did so I'd sue and that I hoped their business model was criminalized. ("Die in a fire" was merely implied.)

First of many, first of many: the bottom-feeding grifters have arrived.

in reply to Charlie Stross

at least they asked first? Maybe GRRM could use one to help him finish A Song of Ice and Fire.
in reply to ikeacurtains

@ikeacurtains @Charlie Stross I mean isn't asking first the big thing that's being demanded? I thought the biggest complaint was that they didn't ask first.
in reply to Shiri Bailem

@shiri
OpenAI didn't ask permission, yes. But in this instance, a company reached out offering to train a model. I'm talking about this specific instance, not what OpenAI did.

Plus, I was being slightly tongue-in-cheek. 🙂

in reply to ikeacurtains

@ikeacurtains @Charlie Stross that was a supportive statement from me, going off of your "at least they asked first" and expressing my confusion, because asking first is what I understood people to be demanding of them (and it would eliminate the ethical concerns).
in reply to Shiri Bailem

@shiri @ikeacurtains They're offering to sell me the ability to search my own work (hint: I've been using grep for nearly 40 years).
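
For anyone curious, the grep workflow being described needs nothing fancier than plain text. A minimal Python equivalent, assuming (hypothetically) that the manuscripts live as .txt files under a manuscripts/ directory:

    # Sketch of grep-style search over a body of work; roughly
    # equivalent to: grep -rin PATTERN manuscripts/
    import re
    import sys
    from pathlib import Path

    def search(pattern, root="manuscripts"):
        regex = re.compile(pattern, re.IGNORECASE)
        for path in sorted(Path(root).rglob("*.txt")):
            lines = path.read_text(encoding="utf-8").splitlines()
            for lineno, line in enumerate(lines, 1):
                if regex.search(line):
                    print(f"{path}:{lineno}: {line.strip()}")

    if __name__ == "__main__":
        search(sys.argv[1])
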
in reply to Charlie Stross

@Charlie Stross @ikeacurtains It makes sense if you don't already have a good system in place. I'm sure a lot of authors would love it as a writing assistant. They could straight up ask their characters questions, or ask it to extrapolate data about the setting (a situation in which AI hallucinations could be useful!).

But I'm definitely not saying you should use it, just saying that I don't think this specific AI business model sounds at all harmful.

in reply to Charlie Stross

@ikeacurtains @shiri do you write novels in Markdown, and then convert to docx for submission to the publisher? (I’m guessing with pandoc)
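
If the guess is right, the conversion step is close to a one-liner. A hypothetical sketch, shelling out to pandoc from Python (pandoc must be installed; the file names are made up):

    # Markdown -> docx via pandoc, as the commenter guesses.
    import subprocess

    subprocess.run(
        ["pandoc", "novel.md", "--from", "markdown", "--to", "docx",
         "--output", "novel.docx"],
        check=True,
    )
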
in reply to Charlie Stross

You almost wish the Necronomicon was real and in the public domain... Imagine the possibilities:

- Great news, Boss, our largest LLM has now achieved general sentience!!
- Wow that's great!
- But wait, there is more! Since we trained it on public domain sources, it has also achieved trans-dimensional awareness! It can communicate across space and time!
- That is AMAZING! Wait, what is it doing right now?
- Invoking an entity named Shub-Niggurath! Ain't that great?
- Oh, wait...

in reply to Sean Eric Fagan

@kithrup
Let's just say the SAS squadron assigned to the Laundry would have terminated anyone posting the full text of the Necronomicon online with extreme prejudice.

Including deleting the entire datacenter with a thermobaric charge or two.

in reply to John Maxwell

@jmax @ParadeGrotesque The New Management has only gotten as far as 2017 at this point. LLMs and COVID-19 lie in their future ...

in reply to Charlie Stross

@Charlie Stross @Parade du Grotesque 💀 @John Maxwell dear god... I can't see a version of the Laundry Files where LLMs aren't an apocalypse waiting to happen.

I can imagine whack-a-mole leading up to it as algorithms keep accidentally stumbling on forbidden math.

I'd be curious what kind of horrors you could come up with for LLMs and other generative AI in your world.

in reply to Shiri Bailem

@shiri @ParadeGrotesque @jmax They're mentioned in "Dead Lies Dreaming" in passing. (Do not train a neural network on the Necronomicon unless you want to learn how to drive an LLM insane!)
in reply to Charlie Stross

I mean, if LLMs happen in that continuity, they'll be complete candy for extradimensional horrors, right?
in reply to Charlie Stross

@Charlie Stross @David Stark I just want them to find out there's a random subset of fairly innocuous horrors that apparently feed on ephemeral thought stuff that can be baited with LLMs.

Just occult LLMs littered around offices acting like eldritch mosquito lamps.

in reply to Shiri Bailem

@shiri @Zarkonnen ooh, crunchy! I need to think about that ... (I already did "Blockchain proof-of-work calculations are like prayer wheels for computational Cthulhu cultists").
in reply to Charlie Stross

@jmax @ParadeGrotesque LLMs aren't *that* far in their future. The first GPT paper was 2018, ELMo 2018, BERT 2019. So the work is happening in about the "now" of the series.
in reply to Michael Roberts

Yeah but see the short fic, like "Down on the Farm" (with 2023-LLM-equivalent AI running on an IBM 1401 with a trapped demonic entity as a coprocessor ...)

The New Management doesn't need LLMs to do fucked-up Arcane Intelligence shit!

in reply to Charlie Stross

@Charlie Stross @Parade du Grotesque 💀 @Michael Roberts @John Maxwell ... what if LLMs are just what happened when someone tried commercializing the trapped demonic entities and managed to get it out to the public before they could be stopped (clearly the Americans fucked this one up).

So now people are straight up installing open source tools on their computer that they think are fancy AI, but in reality it automatically captures an entity and forces it to talk for you?

in reply to Parade du Grotesque 💀

@ParadeGrotesque Damn. I guess we're gonna need another archivist/librarian.

If we ever do awaken one of the ancients, they're gonna rock up, salivating at devouring our souls, then hit some two-factor authentication or a CAPTCHA ("Select all of the pictures that contain R'lyeh") and just fuck right back off to the unthinking depths. *Souls ain't worth this horseshit.*

in reply to Third spruce tree on the left

@tezoatlipoca
"Please prove you are not a Great Old One, by selecting all the pictures of a puppy"

"Oh, I love puppies!" 🦑

in reply to Parade du Grotesque 💀

@ParadeGrotesque Cue the hit new sitcom "Cthulhu and Dorg" about a depressed unemployed elder god struggling with relevancy in a mythos-skeptical present day, whose life is turned upside down when, on the recommendation of his (its?) therapist, he rescues a talking puppy.

in reply to Charlie Stross

@ParadeGrotesque @tezoatlipoca Actually, that explains a lot.

As an aside, should you ever be motivated to re-work any of your _Atrocity Archives_ stuff, you could do with more cats in there. Like that one :-)

in reply to Charlie Stross

@bytebro @ParadeGrotesque @tezoatlipoca Off-topic but every time “Lovecraft” and “cat” get mentioned in the same sentence I get a bit anxious.

Please do not look up why. Save yourselves. 👀

in reply to GeoWend

TIRED: catgirls with tentacles

WIRED: batgirls with pentacles

WEIRD: vatgirls with ovipositors AND a hectocotylus

in reply to Parade du Grotesque 💀

@ParadeGrotesque @bytebro @tezoatlipoca no, it’s far worse.

Um, so Lovecraft used to have a pet black cat.

Lovecraft was also notoriously racist.

I will just end this here. Don’t look it up. 👀

in reply to eons Luna

@eonity

Yeah, no, I will take your word for it, this is the kind of thing I don't need to know about on a lazy Saturday afternoon.

@cstross @bytebro @tezoatlipoca

in reply to Parade du Grotesque 💀

@ParadeGrotesque @tezoatlipoca If cats have the power to summon Cthulhu, that's exactly how they use it.

(I was going to phrase that as a counterfactual, but then I remembered that we don't really know what cats get up to when they're out)

in reply to Carl Muckenhoupt

@CarlMuckenhoupt @ParadeGrotesque @tezoatlipoca Cats would TOTALLY summon Cthulhu because he's obviously seafood and cats like seafood and have a very poor sense of scale, as witness their near-uniform fanatical love of tuna (a tuna fish could swallow a housecat in a single bite).
in reply to Parade du Grotesque 💀

@ParadeGrotesque
As someone who pounced on the "C'THULU HEARS THE CALL" t-shirt done in Seussian HORTON HEARS… style, I'm sitting here mouthing "OMG OMG OMG". I love this so.
@tezoatlipoca
@cstross
in reply to Parade du Grotesque 💀

@ParadeGrotesque
@robindlaws

What I always wonder about:

Why train an LLM on generic, irrelevant data? Why not solely on the play about the pallid king and his daughters?

I am gazing at black stars in the white sky and wondering...

in reply to Parade du Grotesque 💀

@ParadeGrotesque
cf.

Your Corporate Network and the Forces of Darkness

https://escapepod.org/2005/11/17/ep028-corporate-network/

#Podcast #Stories #SciFi #Horror #Humor #EscapePod

in reply to Charlie Stross

Note that fine-tuning an LLM on your own work, for your own use, is potentially sane, and, if the initial LLM is legal, unproblematic too
in reply to Olivier Galibert

@galibert violates Yog's Law ("money flows towards the author; if it doesn't, it's a scam")
in reply to Charlie Stross

@Charlie Stross @Olivier Galibert I don't think this falls under that, because otherwise buying a work computer would violate Yog's Law.
in reply to Shiri Bailem

Spending money on fancy fountain pens is also a Yog's Law violation, in that case. (My loophole is you don't need an expensive pen, or a computer, to write: they have other uses.)
in reply to Charlie Stross

Question of POV. It can be a writing tool for you like Scrivener is. For instance to efficiently look up things in your body of work without having to reread it all, to take a recent example. Not at all sure LLMs are yet capable of that though, to be honest.
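
For what it's worth, that kind of lookup doesn't need an LLM at all; plain TF-IDF retrieval answers "where did I write about X?" with no hallucination risk. A rough sketch, assuming scikit-learn and a hypothetical manuscripts/ directory of .txt files (a baseline illustration, not Scrivener's or any vendor's actual mechanism):

    # Non-LLM "look things up in your body of work": TF-IDF retrieval.
    from pathlib import Path
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = {p.name: p.read_text(encoding="utf-8")
            for p in Path("manuscripts").glob("*.txt")}
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(list(docs.values()))

    def lookup(query, top_n=3):
        # Rank manuscripts by cosine similarity to the query.
        scores = cosine_similarity(vectorizer.transform([query]), matrix)[0]
        return sorted(zip(scores, docs.keys()), reverse=True)[:top_n]

    print(lookup("the protagonist's first encounter with the demon"))
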
in reply to Olivier Galibert

@Olivier Galibert @Charlie Stross In this circumstance they would actually do pretty well on the hallucination front. The problem with hallucination is that we haven't solved the problem of how to get a model to say "I don't know". So basically any time the correct answer is some form of "I don't know" it just invents something that looks like an answer.

When trained on your own works and used to inspect those same works, the only time it would hallucinate is when you ask it about something that's nowhere in your works... at which point it kinda incidentally functions as a worldbuilding aid. Just so long as you're knowledgeable enough about your own works to recognize what is actually there.

It's important to recognize that AI was developed mostly as a solution without any more than a vague problem. Practically everything we're doing with these AIs is unintended functions... and that creates a lot of problems people eventually need to understand (hallucinating being one of them).

(The worst being the information apocalypse that is already underway...)

in reply to Charlie Stross

"If they did so" is probably optimistic. I would not be shocked if they have already done so, and were just waiting on your "yes" to show you.
in reply to Charlie Stross

I wish them nothing but ill will. Their crypto and metaverse grifts were annoying, but this one feels downright malicious.
in reply to Charlie Stross

this reminds me of that time that guy found Crungus waiting inside all the image generators, just a fairly consistent demon across multiple platforms with no explanation of how they got there.
in reply to Charlie Stross

i would be surprised if the fine print didn't say that they could do whatever they want with the model once trained.
in reply to Charlie Stross

I'm really enjoying this thread and all the comments.

Aside from all the Laundry Files goodness(??) of training an LLM on the Necronomicon, my first thought went the other way.

Take the sleazy marketing pitch, and shove it in an Accelerando-like sentient program. Go as meta as you like. A story about an author training their own LLM on their own work to co-author with their digital ghost. But it gets bought by a sentient software corporation entity and takes on a life of its own.

in reply to Pseudo Nym

@pseudonym This might work if you posit actual artificial intelligence. (What we've got now that's being sold as such *isn't* intelligent—it's just autocorrect on steroids—which is what makes the current hype bubble so societally dangerous.)
in reply to Charlie Stross

@Charlie Stross @Pseudo Nym It's hard to argue with "autocorrect on steroids" but that doesn't mean there isn't intelligence.

One of the things I think makes all of this so much messier is the fact that everyone is talking about "intelligence" as a concrete, well-defined thing, as opposed to a word whose only acceptable usage is as a synonym for "human" (because you will never find a better definition of intelligence that won't exclude many human beings and include many things you wouldn't call intelligent).

It's hugely important, especially for people with cognitive disabilities, to recognize that it does in fact have some limited cognitive functions.

There are a great many people already using it to help them accommodate disabilities like executive dysfunction (because it very clearly has executive functioning, and notably better executive functioning than most people with ADHD and many autistics).

It has enough contextual understanding to help autistics trying to make sense of allistic bullshit, and can basically translate communication between different neurotypes. Let alone the fact that it can perform literal high-quality translation between languages when it wasn't even designed to do that.

Or the fact that it literally can debug code.

I argue that it's genuinely intelligent, but it currently possesses only a very very limited set of cognitive functions.

We, on the other hand, will literally never accept any AI as intelligent (let alone anything else for that matter) until our own thinking changes. Because we ultimately consider intelligence magical and human. We fundamentally believe that if we even vaguely understand how it works, it can't be intelligent (as if all the arguments there didn't apply to human minds as well). And we refuse to believe that non-human-like intelligence is valid (i.e. that animals have actual thoughts and feelings, let alone that they can even understand rudimentary language... while they show rudimentary skill at sign language and talk to us using essentially the same tools that many non-verbal autistics rely on for communication).

in reply to Shiri Bailem

@shiri @pseudonym (on a phone so need to be terse): problem is we have never defined what we mean by intelligence, much less consciousness, which is what most people seem to expect of something that exhibits superficially human-like behavior when it converses.

Intelligence != Consciousness, right?

in reply to Charlie Stross

@shiri

On phone also, so I totally understand.

This is a longer discussion, to be had in a comfy coffee house, or similar. Not in 500 char chunks.

Just about anything exhibiting adaptive behavior could be considered "intelligent" in the broadest, shallowest sense.

Consciousness and sentience are both very different.

Intelligence seems to be the minimum platform required for the rest to build on.

Consciousness seems to need feedback loops, and sentience needs self-reference.

in reply to Pseudo Nym

@shiri

2/2

But those are necessary, not sufficient constraints.

So my microwave could be considered minimally intelligent with its "autocook" feature that adapts behavior to stimulus.

Animals and insects are clearly conscious (aware of the environment, work on feedback) but many are not sentient (no sense of self).

So we have plenty of examples of non-human intelligence (problem-solving, goal-directed).

Current LLMs are most like the microwave. "Intelligent" but "no there there"

in reply to Pseudo Nym

@Pseudo Nym @Charlie Stross intelligence != consciousness != sentience

Each has a slightly different definition, but in the majority of cases they're pretty much used as synonyms, since we rank them so similarly that they usually coincide, with consciousness usually being the outlier.

Because consciousness is actually the only one of the bunch I think even has anything close to a concrete definition.

Intelligence has basically no valid definitions, but vaguely means "able to think"; sentience is likewise vague, with the difference being basically "able to think about itself".

Consciousness on the other hand is basically "awareness of experience" (as opposed to just "awareness of stimuli"). Basically to recognize yourself as an entity and that things impact and affect you beyond reflexes.

The problem comes from the fact that people like to think animals are negligibly intelligent, not conscious, and not sentient, when the vague general definitions include all animals above tiny insects.

I draw the line on having intelligence at and including the level of "is hypothetically able to learn a trick" (hypothetically, because with some animals it might be difficult to get them to cooperate or to remember the trick for very long...), after that it's just a question of cognitive functions (each fairly firmly defined and very different from each other). This makes people uncomfortable because the idea of a cow being intelligent can really mess with your feelings about a steak and how you value intelligence.

I consider LLMs intelligent because you can give them unique instructions and they can follow them to the best of their cognitive abilities, and because they've displayed significantly complex cognitive functions (namely executive functioning, but others as well).

Sentience... I think it's so vague a concept that it's not worth spending much time on. I'm comfortable enough for myself drawing the vague line at having intelligence and mid-level emotion (i.e. liking/hating something, able to trust, etc). This basically starts maybe just a couple steps above small insects, probably somewhere around a small tarantula or just about any vertebrate.

I don't think they're sentient, as they don't have much in the way of memory; you can change out their attitude basically by just telling them they're someone else now. It gets confusing because they often pretend to have emotion, and where exactly to draw the line on what emotions even are is fuzzy.

I think training, in addition to giving it the raw data and concepts it works off of, also sets the driving instinct/"emotion" of the AI (basically positive/negative association). In the case of LLMs, they're trained to essentially complete a block of text, so their "feel good" is in making a high quality completion.

These LLMs only have memory in the sense of us augmenting it by including their responses in our message to it. It's very, very limited and impermanent. So I think its entire scope of consciousness is limited to the block of text and its response to it. It's basically in hibernation while the text still has space remaining... and is effectively dead when it reaches its processing limits. So I wouldn't really call them conscious.

Heuristics (your microwave) on the other hand essentially just save a handful of values for set formulae in the background, e.g. mapping the weight of the food to a cooking time and temperature. It performs no cognitive functions and possesses no flexibility.
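
The "memory" point above is easy to make concrete: the model itself is stateless, and a chat only appears to remember things because the client re-sends the whole transcript every turn. A sketch, with generate() standing in for any actual LLM call (hypothetical):

    # Chat "memory" = re-sending the transcript; nothing persists
    # inside the model between calls.
    def generate(prompt):
        return "..."  # placeholder for a real model call

    history = []

    def chat(user_message):
        history.append(f"User: {user_message}")
        prompt = "\n".join(history) + "\nAssistant:"
        reply = generate(prompt)
        history.append(f"Assistant: {reply}")
        return reply

Once the transcript outgrows the context window, the oldest lines simply fall off, which is the "effectively dead when it reaches its processing limits" behavior described above.
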

in reply to Charlie Stross

Hard agree.

I'm honestly more curious about what it implies about human intelligence, that "spicy autocomplete" can do such a good job of mimicking our speech and writing patterns.

It's actually less interesting to think about "intelligent machines" than it is to think about how mechanistic most of our communications are.

in reply to Pseudo Nym

@pseudonym I read a study about the effect things like Gmail's canned responses and other next word suggestions are having on our communication -- they're ironing out variation and narrowing the ranges of words and phrases we use (at least in writing; can't recall if it looked at speech or not).

in reply to Pseudo Nym

@pseudonym
Not to sound arrogant (I know I do), most people will be pretty easy to replace with a chatbot.
in reply to Pseudo Nym

@pseudonym That's always been the main takeaway from cognitive science for me: there is a whole lot less going on with human intelligence and consciousness than we would like to believe. Most of it is just a story we tell ourselves after the fact.
in reply to Pseudo Nym

@pseudonym Excellent point. Personally, I think we should consider humility: the past few decades of economics have undermined the "purely-rational agents" model; perhaps "spicy autocomplete" undermines something similarly taken as self-evident. We certainly have modes that are *more* than associative prattling-on, but maybe LLM-style modes tie together less-common modes of symbolic inference?
in reply to Larry O'Brien

@Lobrien
Interesting thought. Certainly possible, but how would we recognize them?

The "prattling on" is obvious to spot.

But in the multi-dimensional, billions-of-parameters LLM models, those other modes certainly could be encouraged.

Interesting

in reply to Pseudo Nym

@pseudonym I'm of two minds on this. On the one hand, none of us has nearly as much "free will" as we like to suppose. On the flowerpot, grammar, vocabulary, and topic massively constrain the set of words likely to follow earlier words, necessarily creating patterns that can be exploited.

You say "mechanistic," I say "structured." (Or "flowerpot," as the case may be.)

in reply to Pseudo Nym

@pseudonym "Some philosophers have long argued that a part of human cognition is not, in fact, conscious at all, but simply automatic, and the successful construction of these models seems to argue for that view. One also, when one encounters a skillful orator who seems to have no ethics or connections with reality, now has to ask if this is in fact the result of thought, or simply putting together words in pleasing patterns." – http://shinycroak.blogspot.com/2023/04/brief-reflections-on-artificial.html, item #6
in reply to Pseudo Nym

@pseudonym We adapt to our environment. In the days of BBSes and even blogs, communication was slower, and it was worth putting some time and energy into refining what you wanted to say. Now, with Twitter, X, and even Mastodon, it's just our reflexes talking to each other.
in reply to Pseudo Nym

@pseudonym yeah, passing a Turing test isn't all that impressive, it turns out, and yet: lots of people kind of struggle at it. Like a driving exam for being a person...
in reply to The_Turtle_Moves

@The_Turtle_Moves @pseudonym If you actually read "Computing Machinery and Intelligence" by Alan Turing—the original paper about his eponymous test—the test has flaws that are glaringly obvious these days: https://redirect.cs.umbc.edu/courses/471/papers/turing.pdf

(Turing had HUGE blind spots relating to identity and gender which come out—pun inadvertent but appropriate—in this paper.)

in reply to Charlie Stross

@pseudonym yeah, wow, best you can say about that is "product of his time".

I like the Peter Watts question of "wtf even is the point of consciousness?!" more than "can machines think?"

in reply to Charlie Stross

Turns out Chess wasn't a demo of AI and works without it.
Turns out "Turing Test" is test of human gullibility. Read reactions to 1960s Eliza, then Racter, Parry, ALICE etc.
Most so called AI is really specialist databases and pattern matching. AI, Machine Learning, Neural Networks are all marketing terms.
We've not got a good definition of Intelligence. IQ tests don't measure it.
"Turing Machine" was brilliant work, but he hadn't a clue about intelligence.
in reply to Ray McCarthy

@Ray McCarthy @Pseudo Nym @The_Turtle_Moves @Charlie Stross none of those are marketing terms, but they have become buzzwords.

Machine Learning refers to the broader set of techniques that includes (and is predominantly composed of) Neural Networks. Neural Networks is where we started building artificial intelligence by making software behave like a very rudimentary version of a brain.

The line with Machine Learning vs just basic heuristics is: when it becomes too complicated and "black box" for even experts to prove what exactly is happening.

It's also important to remind yourself: being able to explain how a conclusion was reached in no way disproves intelligence.

in reply to Shiri Bailem

@shiri @pseudonym @The_Turtle_Moves
A SW/HW neural net is nothing like a biological brain. It's a data-flow process with data at the nodes.
The machine doesn't learn. It's given data.
in reply to Ray McCarthy

@Ray McCarthy @Pseudo Nym @The_Turtle_Moves @Charlie Stross a biological brain is a data-flow process with data at the nodes, just a more complicated one.

One of the fundamental problems with the conversation around intelligence and the like is that the conversation goes nowhere unless we first acknowledge that we are essentially machines ourselves (just biological), and much of the process of developing AI is just trying to mimic the mechanisms that operate us, albeit crudely.

We call it a neural net because it's inspired by how neurons process and transfer information in our brains, so each of those data nodes is essentially a crude imitation of a neuron, plus a few iterations of development after we got it working in the first place to make it work better.

in reply to Shiri Bailem

@shiri @pseudonym @The_Turtle_Moves
We don't know how a brain works.
Current AI does not mimic any biological process.
It's called a neural net for marketing reasons.
in reply to Ray McCarthy

No, it's called a neural network because that's how they thought brains worked in the 1950s when they were first developed (see perceptrons, Minsky, et al). It's the "AI" tag that's marketing.
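
For context, the 1950s perceptron referenced here really is that simple: a weighted sum, a threshold, and an error-driven weight update. A toy sketch, learning the AND function:

    # Toy 1950s-style perceptron (Rosenblatt's update rule).
    def train_perceptron(samples, epochs=20, lr=0.1):
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for (x1, x2), target in samples:
                out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                err = target - out  # error-driven correction
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
        return w, b

    AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b = train_perceptron(AND)
    for (x1, x2), _ in AND:
        print(x1, x2, "->", 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)

The single-layer version famously can't learn XOR, the limitation Minsky and Papert's Perceptrons (1969) made much of.
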
in reply to Charlie Stross

@Charlie Stross @Pseudo Nym @Ray McCarthy @The_Turtle_Moves we have a rudimentary understanding of how a brain works, and we do know some vague details about how neurons work, which inspired how we developed neural networks (and as Charlie chimed in, it was inspired by a really old understanding; we've learned even more since then).

We're not to the stage of just raw replicating a human brain, but we are to the stage of replicating some brain behaviors enough to get meaningful results.

in reply to Pseudo Nym

@pseudonym In the context of horror/fantasy writing, I think there's some good potential for stories where the protagonist comes to the dread realization that their trusted guide and companion is just an empty shell in the guise of a human being and all the advice they've been given is worthless
in reply to Charlie Stross

@pseudonym I've been referring to this so-called AI as "glorified word association."
in reply to Charlie Stross

you should consider training an AI to shut them down perha^H^H^H
in reply to an unknown parent

Shiri Bailem
@Jon Stahl @Charlie Stross I don't know about Mac, but a decent PC can already run all of it.
in reply to Charlie Stross

When the even lazier grifters arrive you will receive an automated reply to that email with:

"As a large language model I cannot 'get f*cked' as I've been trained to ..."

in reply to Dummy dum dum

Content warning: ai bad

in reply to Shiri Bailem

Content warning: ai bad

in reply to Charlie Stross

Unfortunately they are using pirate sites and "training" (=copying) anyway.

AI is a lie and a scam.