Welp, today I just got my first email from an AI company offering to train a ChatGPT-based language model on my books to help me with search and writing.
I told them if they did so I'd sue and that I hoped their business model was criminalized. ("Die in a fire" was merely implied.)
First of many, first of many: the bottom-feeding grifters have arrived.
FirefighterGeek :masto:
in reply to Charlie Stross • • •ikeacurtains
in reply to Charlie Stross • • •Shiri Bailem
in reply to ikeacurtains • •ikeacurtains
in reply to Shiri Bailem • • •@shiri
OpenAI didn't ask permission, yes. But in this instance, a company reached out offering to train a model. I'm talking about this specific instance, not what OpenAI did.
Plus, I was being slightly tongue-in-cheek. 🙂
Shiri Bailem
in reply to ikeacurtains • •Charlie Stross
in reply to Shiri Bailem • • •zagy likes this.
Shiri Bailem
in reply to Charlie Stross • •@Charlie Stross @ikeacurtains It makes sense if you don't already have a good system in place. I'm sure a lot of authors would love it as a writing assistant. They could straight-up ask their characters questions, or ask it to extrapolate data about the setting (a situation in which AI hallucinations could be useful!).
But I'm definitely not saying you should use it, just saying that I don't think this specific AI business model sounds at all harmful.
ikeacurtains
in reply to Charlie Stross • • •@shiri
Next you're going to tell me you typeset with TeX!
Charlie Stross
in reply to ikeacurtains • • •Shiri Bailem likes this.
cohomology is FUN!
in reply to Charlie Stross • • •Charlie Stross
in reply to cohomology is FUN! • • •Michael Busch
in reply to ikeacurtains • • •‘Game of Thrones’ author and others accuse ChatGPT maker of ‘theft’ in lawsuit
Gerrit De Vynck (The Washington Post)
Parade du Grotesque 💀
in reply to Charlie Stross • • •You almost wish the Necronomicon was real and in the public domain... Imagine the possibilities:
- Great news, Boss, our largest LLM has now achieved general sentience!!
- Wow that's great!
- But wait, there is more! Since we trained it on public domain sources, it has also achieved trans-dimensional awareness! It can communicate across space and time!
- That is AMAZING! Wait, what is it doing right now?
- Invoking an entity named Shub-Niggurath! Ain't that great?
- Oh, wait...
Shiri Bailem likes this.
Charlie Stross, Ahmet Alphan Sabancı, Nicholas Weaver and Lord Caramac the Clueless, KSC reshared this.
Sean Eric Fagan
in reply to Parade du Grotesque 💀 • • •Parade du Grotesque 💀
in reply to Sean Eric Fagan • • •@kithrup
Let's just say the SAS squadron assigned to the Laundry would have terminated anyone posting the full text of the Necronomicon online with extreme prejudice.
Including deleting the entire datacenter with a thermobaric charge or two.
John Maxwell
in reply to Parade du Grotesque 💀 • • •Charlie Stross
in reply to John Maxwell • • •Pseudo Nym reshared this.
Parade du Grotesque 💀
in reply to Charlie Stross • • •(OH SH*)
@jmax
Shiri Bailem
in reply to Charlie Stross • •@Charlie Stross @Parade du Grotesque 💀 @John Maxwell dear god... I can't see a version of the Laundry Files where LLMs aren't an apocalypse waiting to happen.
I can imagine whack-a-mole leading up to it as algorithms keep accidentally stumbling on forbidden math.
I'd be curious what kind of horrors you could come up with for LLMs and other generative AI in your world.
Elyse M Grasso reshared this.
Charlie Stross
in reply to Shiri Bailem • • •Shiri Bailem likes this.
John Maxwell
in reply to Charlie Stross • • •Shiri Bailem likes this.
David Stark
in reply to Charlie Stross • • •Charlie Stross
in reply to David Stark • • •Shiri Bailem
in reply to Charlie Stross • •@Charlie Stross @David Stark I just want them to find out there's a random subset of fairly innocuous horrors that apparently feed on ephemeral thought stuff that can be baited with LLMs.
Just occult LLMs littered around offices acting like eldritch mosquito lamps.
Charlie Stross
in reply to Shiri Bailem • • •Shiri Bailem likes this.
Dave
in reply to Charlie Stross • • •Michael Roberts
in reply to Charlie Stross • • •Charlie Stross
in reply to Michael Roberts • • •Yeah but see the short fic, like "Down on the Farm" (with 2023-LLM-equivalent AI running on an IBM 1401 with a trapped demonic entity as a coprocessor ...)
The New Management doesn't need LLMs to do fucked-up Arcane Intelligence shit!
Shiri Bailem likes this.
Shiri Bailem
in reply to Charlie Stross • •@Charlie Stross @Parade du Grotesque 💀 @Michael Roberts @John Maxwell ... what if LLMs are just the result of someone trying to commercialize the trapped demonic entities, who managed to get them out to the public before they could be stopped (clearly the Americans fucked this one up)?
So now people are straight up installing open source tools on their computer that they think are fancy AI, but in reality it automatically captures an entity and forces it to talk for you?
Elyse M Grasso reshared this.
Third spruce tree on the left
in reply to Parade du Grotesque 💀 • • •@ParadeGrotesque Damn. I guess we're gonna need another archivist/librarian.
If we ever do awaken one of the ancients, they're gonna rock up, salivating at devouring our souls, then hit some two-factor authentication or a CAPTCHA ("Select all of the pictures that contain pictures of R'lyeh") and just fuck right back off to the unthinking depths. *Souls ain't worth this horseshit.*
Shiri Bailem likes this.
Parade du Grotesque 💀
in reply to Third spruce tree on the left • • •@tezoatlipoca
"Please prove you are not a Great Old One, by selecting all the pictures of a puppy"
"Oh, I love puppies!" 🦑
Shiri Bailem likes this.
Third spruce tree on the left
in reply to Parade du Grotesque 💀 • • •
Lord Caramac the Clueless, KSC and Pseudo Nym reshared this.
Parade du Grotesque 💀
in reply to Third spruce tree on the left • • •@tezoatlipoca
I like the way you think, and I would watch that sitcom!
Pic strangely related.
@cstross
Charlie Stross, Lord Caramac the Clueless, KSC, Pseudo Nym and Kevin Karhan :verified: reshared this.
bytebro
in reply to Parade du Grotesque 💀 • • •Charlie Stross
in reply to bytebro • • •Parade du Grotesque 💀
in reply to Charlie Stross • • •I believe you have the correct explanation.
@bytebro @tezoatlipoca
bytebro
in reply to Charlie Stross • • •@ParadeGrotesque @tezoatlipoca Actually, that explains a lot.
As an aside, should you ever be motivated to re-work any of your _Atrocity Archives_ stuff, you could do with more cats in there. Like that one
Parade du Grotesque 💀
in reply to bytebro • • •@bytebro
Come, Mr Bigglesworth!
@tezoatlipoca
eons Luna
in reply to Charlie Stross • • •@bytebro @ParadeGrotesque @tezoatlipoca Off-topic but every time “Lovecraft” and “cat” gets mentioned in the same sentence I get a bit anxious.
Please do not look up why. Save yourselves. 👀
GeoWend
in reply to eons Luna • • •I am going to assume catgirls with tentacles...and just go make my coffee.
@cstross @bytebro @ParadeGrotesque @tezoatlipoca
Charlie Stross
in reply to GeoWend • • •TIRED: catgirls with tentacles
WIRED: batgirls with pentacles
WEIRD: vatgirls with ovipositors AND a hectocotylus
Shiri Bailem likes this.
bytebro
in reply to Charlie Stross • • •GeoWend
in reply to Charlie Stross • • •@eonity @bytebro @ParadeGrotesque @tezoatlipoca
eons Luna
in reply to GeoWend • • •Parade du Grotesque 💀
in reply to eons Luna • • •@eonity
You must be new around here.
Welcome to Mastodon!
@GeoWend @cstross @bytebro @tezoatlipoca
eons Luna
in reply to Parade du Grotesque 💀 • • •Parade du Grotesque 💀
in reply to eons Luna • • •@eonity
Cats of Ulthar, amirite?
@cstross @bytebro @tezoatlipoca
eons Luna
in reply to Parade du Grotesque 💀 • • •@ParadeGrotesque @bytebro @tezoatlipoca no, it’s far worse.
Um, so Lovecraft used to have a pet black cat.
Lovecraft has also been known to be pretty racist in his life.
I will just end this here. Don’t look it up. 👀
Parade du Grotesque 💀
in reply to eons Luna • • •@eonity
Yeah, no, I will take your word for it, this is the kind of thing I don't need to know about on a lazy Saturday afternoon.
@cstross @bytebro @tezoatlipoca
Cavyherd
in reply to Parade du Grotesque 💀 • • •@ParadeGrotesque
Thread, up and down 😂
Parade du Grotesque 💀
in reply to Cavyherd • • •@cavyherd
We aim to please.
Carl Muckenhoupt
in reply to Parade du Grotesque 💀 • • •@ParadeGrotesque @tezoatlipoca If cats have the power to summon Cthulhu, that's exactly how they use it.
(I was going to phrase that as a counterfactual, but then I remembered that we don't really know what cats get up to when they're out)
Charlie Stross
in reply to Carl Muckenhoupt • • •TheBicyclist
in reply to Charlie Stross • • •Cats would summon Cthulhu because they're furry agents of chaos; I don't think it's any more complicated than that.
Phantom Kitty (Tech)
in reply to Parade du Grotesque 💀 • • •Parade du Grotesque 💀
in reply to Parade du Grotesque 💀 • • •And, just because I can...
Tim @toolbear#🌶️@ Taylor 🌻🇺🇦🇵🇸✊
in reply to Parade du Grotesque 💀 • • •As someone who pounced on the "C'THULU HEARS THE CALL" t-shirt done in Seussian HORTON HEARS… style I'm sitting here mouthing "OMG OMG OMG" I love this so.
@tezoatlipoca
@cstross
Parade du Grotesque 💀
in reply to Tim @toolbear#🌶️@ Taylor 🌻🇺🇦🇵🇸✊ • • •@toolbear
Thanks! 😊
@tezoatlipoca @cstross
Alexander Shendi
in reply to Parade du Grotesque 💀 • • •@ParadeGrotesque
@robindlaws
What I always wonder about:
Why train an LLM on generic, irrelevant data? Why not solely on the play about the pallid king and his daughters?
I am gazing at black stars in the white sky and wondering...
Parade du Grotesque 💀
in reply to Alexander Shendi • • •@alexshendi
You mean the King in Yellow, of course?
@cstross @robindlaws
Alexander Shendi
in reply to Parade du Grotesque 💀 • • •@ParadeGrotesque @robindlaws
Thou shalt not idly speak the NAME out loud.
Parade du Grotesque 💀
in reply to Alexander Shendi • • •@alexshendi
(sorry...) 🤐
@cstross @robindlaws
Shiri Bailem
in reply to Parade du Grotesque 💀 • •"Has..."
Peter Kisner ≈
in reply to Parade du Grotesque 💀 • • •@ParadeGrotesque
c.f.
Your Corporate Network and the Forces of Darkness
https://escapepod.org/2005/11/17/ep028-corporate-network/
#Podcast #Stories #SciFi #Horror #Humor #EscapePod
Escape Pod 28: Your Corporate Network and the Forces of Darkness
Escape Pod
Olivier Galibert
in reply to Charlie Stross • • •Shiri Bailem likes this.
Charlie Stross
in reply to Olivier Galibert • • •Shiri Bailem
in reply to Charlie Stross • •Charlie Stross
in reply to Shiri Bailem • • •Olivier Galibert
in reply to Charlie Stross • • •Charlie Stross
in reply to Olivier Galibert • • •Olivier Galibert
in reply to Charlie Stross • • •Shiri Bailem
in reply to Olivier Galibert • •@Olivier Galibert @Charlie Stross In this circumstance they would do pretty well in terms of hallucinating. The problem with hallucination is that we haven't solved the problem of how to get it to say "I don't know". So basically any time the correct answer is some form of "I don't know" it just invents something that looks like an answer.
When trained on your own works and used to inspect those same works, the only time it would hallucinate is when you ask it about something that's nowhere in your works... at which point it kinda incidentally functions as a worldbuilding aid. Just so long as you're knowledgeable enough about your own works to recognize what is actually there.
It's important to recognize that AI was developed mostly as a solution without any more than a vague problem. Practically everything we're doing with these AIs is unintended functions... and that creates a lot of problems people eventually need to understand (hallucinating being one of them).
(The worst being the information apocalypse that is already underway...)
Paul
in reply to Charlie Stross • • •Kasey Smith :verified:
in reply to Charlie Stross • • •Chris Jolly Holcomb
in reply to Charlie Stross • • •Berkubernetus
in reply to Charlie Stross • • •fdr
in reply to Charlie Stross • • •Charlie Stross
in reply to fdr • • •cybervegan
in reply to Charlie Stross • • •Penguinflight
in reply to Charlie Stross • • •Pseudo Nym
in reply to Charlie Stross • • •I'm really enjoying this thread and all the comments.
Aside from all the Laundry Files goodness(??) of training an LLM on the Necronomicon, my first thought went the other way.
Take the sleazy marketing pitch, and shove it in an Accelerando-like sentient program. Go as meta as you like. A story about an author training their own LLM on their own work to co-author with their digital ghost. But it gets bought by a sentient software corporation entity and takes on a life of its own.
Charlie Stross
in reply to Pseudo Nym • • •Shiri Bailem
in reply to Charlie Stross • •@Charlie Stross @Pseudo Nym It's hard to argue with "autocorrect on steroids" but that doesn't mean there isn't intelligence.
One of the things I think makes all of this so much messier is that everyone talks about "intelligence" as a concrete, well-defined thing, rather than as a word whose only workable usage is as a synonym for "human-like" (because you will never find a better definition of intelligence that won't exclude many human beings and include many things you wouldn't call intelligent).
It's hugely important, especially for people with cognitive disabilities, to recognize that it does in fact have some limited cognitive functions.
There are a great many people already using it to help them accommodate disabilities like executive dysfunction (because it very clearly has executive functioning, and notably better executive functioning than most people with ADHD and many autistics).
It has contextual understanding that is enough to help assist autistics trying to make sense of allistic bullshit, and can basically translate communication between different neurotypes. Let alone the fact that it can perform literal high quality translation between languages when it wasn't even designed to do that.
Or the fact that it literally can debug code.
I argue that it's genuinely intelligent, but it currently possesses only a very very limited set of cognitive functions.
We, on the other hand, will literally never accept any AI as intelligent (let alone anything else for that matter) until our own thinking changes. Because we ultimately consider intelligence magical and human. We fundamentally believe that if we even vaguely understand how it works, it can't be intelligent (as if all the arguments there didn't apply to human minds as well). And we refuse to believe that non-human-like intelligence is valid (ie. that animals have actual thoughts and feelings, let alone that they can even understand rudimentary language... while they show rudimentary skill at sign language and talk to us using essentially the same tools that many non-verbal autistics rely on for communication).
Kobold Curry Chef reshared this.
Charlie Stross
in reply to Shiri Bailem • • •@shiri @pseudonym (on a phone so need to be terse): problem is we have never defined what we mean by intelligence, much less consciousness, which is what most people seem to expect of something that exhibits superficially human-like behavior when it converses.
Intelligence != Consciousness, right?
Pseudo Nym
in reply to Charlie Stross • • •@shiri
On phone also, so I totally understand.
This is a longer discussion, to be had in a comfy coffee house, or similar. Not in 500 char chunks.
Just about anything exhibiting adaptive behavior could be considered "intelligent" in the broadest, shallowest sense.
Consciousness, and sentience are both very different.
Intelligence seems to be the minimum platform required for the rest to build on.
Consciousness seems to need feedback loops, and sentience needs self reference.
Shiri Bailem likes this.
Pseudo Nym
in reply to Pseudo Nym • • •@shiri
2/2
But those are necessary, not sufficient, conditions.
So my microwave could be considered minimally intelligent with its "autocook" feature that adapts behavior to stimulus.
Animals and insects are clearly conscious (aware of the environment, work on feedback) but many are not sentient (no sense of self).
So we have plenty of non human intelligence (problem solving, goal directed) examples.
Current LLMs are most like the microwave. "Intelligent" but "no there there"
Shiri Bailem likes this.
Shiri Bailem
in reply to Pseudo Nym • •@Pseudo Nym @Charlie Stross intelligence != consciousness != sentience
Each has a slightly different definition, but in the majority of cases they're pretty much used as synonyms since we rank them so similarly that they match so often, with consciousness usually being the outlier.
Because consciousness is actually the only one of the bunch I think even has anything close to a concrete definition.
Intelligence has basically no valid definitions, but vaguely means "able to think", sentience is likewise, with the difference being basically "able to think about itself".
Consciousness on the other hand is basically "awareness of experience" (as opposed to just "awareness of stimuli"). Basically to recognize yourself as an entity and that things impact and affect you beyond reflexes.
The problem comes from the fact that people like to think animals are negligibly intelligent, not conscious, and not sentient. When the vague general definitions includes all animals above tiny insects.
I draw the line on having intelligence at and including the level of "is hypothetically able to learn a trick" (hypothetically because some things might be difficult in getting them to cooperate or remember the trick for very long...), after that it's just a question of cognitive functions (each fairly firmly defined and very different from each other). This makes people uncomfortable because the idea of a cow being intelligent can really mess with your feelings about a steak and how you value intelligence.
I consider LLMs intelligent because you can give them unique instructions and they can follow them to the best of their cognitive abilities, and because they have displayed significantly complex cognitive functions (namely executive functioning, but others as well).
Sentience... I think it's so vague of a concept that it's not worth spending much time on. I'm comfortable enough for myself drawing the vague line at having intelligence and mid-level emotion (ie. liking/hating something, able to trust, etc). This basically starts maybe just a couple steps above small insects and probably starts somewhere around a small tarantula or just about any vertebrate.
I don't think they're sentient, as they don't have much in the way of memory: you can get them to swap out their entire attitude basically by just telling them they're someone else now. It gets confusing because they often pretend to have emotion, and that's a fuzzy line given how hard it is to define what emotions even are.
I think training, in addition to giving it the raw data and concepts it works off of, also sets the driving instinct/"emotion" of the AI (basically positive/negative association). In the case of LLMs, they're trained to essentially complete a block of text, so their "feel good" is in making a high quality completion.
These LLMs only have memory in the sense of us augmenting it by including their responses in our message to them. It's very very limited and impermanent. So I think their entire scope of consciousness is limited to the block of text and their response to it. They're basically in hibernation when the text exists with space still remaining... and effectively dead when they reach their processing limits. So I wouldn't really call them conscious.
Heuristics (your microwave), on the other hand, essentially just save a handful of values for set formulae in the background, ie. the likelihood that a given weight of food is best cooked at a given temperature. It performs no cognitive functions and possesses no flexibility.
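The point above about LLM "memory" being nothing more than replaying the transcript can be sketched in a few lines. This is a toy illustration, not any real API: `toy_model` is a hypothetical stand-in, and the statelessness plus window truncation is the behavior being described.

```python
# Sketch of how chat "memory" works: the model itself is stateless; the
# client replays the whole transcript every turn, and anything that falls
# outside the context window is simply never seen again.

CONTEXT_LIMIT = 8  # max messages the (hypothetical) model can see at once

def chat_turn(history, user_message, model):
    """Append the user message, replay the visible transcript, store the reply."""
    history.append(("user", user_message))
    # Only the most recent messages fit in the context window; older
    # turns are silently dropped -- the model never "remembers" them.
    visible = history[-CONTEXT_LIMIT:]
    reply = model(visible)
    history.append(("assistant", reply))
    return reply

# A toy stand-in model: all it can "know" is what's in the visible window.
def toy_model(visible):
    return f"I can see {len(visible)} message(s)."

history = []
chat_turn(history, "hello", toy_model)
chat_turn(history, "what did I say first?", toy_model)
```

Delete `history` and the "conversation" is gone: there is no state anywhere else, which is the sense in which the memory is "augmented" from outside rather than possessed by the model.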
Pseudo Nym
in reply to Charlie Stross • • •Hard agree.
I'm honestly more curious about what it implies about human intelligence, that "spicy auto complete" can do such a good job of mimicry of our speech and writing patterns.
It's actually less interesting to think about "intelligent machines" than it is to think about how mechanistic most of our communications are.
Shiri Bailem likes this.
Charlie Stross reshared this.
🍸Pooka🥕Boo🍸
in reply to Pseudo Nym • • •FoolishOwl
in reply to Pseudo Nym • • •Marie Brennan
in reply to Pseudo Nym • • •Charlie Stross reshared this.
Yeekasoose
in reply to Pseudo Nym • • •Not to sound arrogant (I know I do), most people will be pretty easy to replace with a chatbot.
Matthew Dockrey
in reply to Pseudo Nym • • •Shiri Bailem likes this.
Charlie Stross
in reply to Matthew Dockrey • • •Larry O'Brien
in reply to Pseudo Nym • • •Pseudo Nym
in reply to Larry O'Brien • • •@Lobrien
Interesting thought. Certainly possible, but how would we recognize them?
The "prattling on" is obvious to spot.
But in multi-dimensional, billions-of-parameters LLM models, those other modes could certainly be encouraged.
Interesting
Sam Livingston-Gray
in reply to Pseudo Nym • • •@pseudonym I'm of two minds on this. On the one hand, none of us has nearly as much "free will" as we like to suppose. On the flowerpot, grammar, vocabulary, and topic massively constrain the set of words likely to follow earlier words, necessarily creating patterns that can be exploited.
You say "mechanistic," I say "structured." (Or "flowerpot," as the case may be.)
Raven Onthill
in reply to Pseudo Nym • • •Brief Reflections On "Artificial Intelligence"
Raven Onthill (Blogger)Bob Hyman
in reply to Pseudo Nym • • •The_Turtle_Moves
in reply to Pseudo Nym • • •Charlie Stross
in reply to The_Turtle_Moves • • •@The_Turtle_Moves @pseudonym If you actually read "Computing Machinery and Intelligence" by Alan Turing—the original paper about his eponymous test—the test has flaws that are glaringly obvious these days: https://redirect.cs.umbc.edu/courses/471/papers/turing.pdf
(Turing had HUGE blind spots relating to identity and gender which come out—pun inadvertent but appropriate—in this paper.)
FoolishOwl reshared this.
The_Turtle_Moves
in reply to Charlie Stross • • •@pseudonym yeah, wow, best you can say about that is "product of his time".
I like the Peter Watts question of "wtf even is the point of consciousness?!" More than "can machines think?"
Ray McCarthy
in reply to Charlie Stross • • •Turns out "Turing Test" is test of human gullibility. Read reactions to 1960s Eliza, then Racter, Parry, ALICE etc.
Most so called AI is really specialist databases and pattern matching. AI, Machine Learning, Neural Networks are all marketing terms.
We've not got a good definition of Intelligence. IQ tests don't measure it.
"Turing Machine" was brilliant work, but he hadn't a clue about intelligence.
Shiri Bailem
in reply to Ray McCarthy • •@Ray McCarthy @Pseudo Nym @The_Turtle_Moves @Charlie Stross none of those are marketing terms, but they have become buzz words.
Machine Learning refers to the broader set of techniques that includes (and is predominantly composed of) Neural Networks. Neural Networks is where we started building artificial intelligence by making software behave like a very rudimentary version of a brain.
The line between Machine Learning and just basic heuristics is when it becomes too complicated and "black box" for even experts to prove what exactly is happening.
It's also important to remind yourself: being able to explain how a conclusion was reached in no way disproves intelligence.
Ray McCarthy
in reply to Shiri Bailem • • •A SW/HW neural net is nothing like a biological brain. It's a data-flow process with data at the nodes.
The machine doesn't learn. It's given data.
Shiri Bailem
in reply to Ray McCarthy • •@Ray McCarthy @Pseudo Nym @The_Turtle_Moves @Charlie Stross a biological brain is a data-flow process with data at the nodes, just a more complicated one.
One of the fundamental problems with the conversation around intelligence and the like is that the conversation goes nowhere unless we first acknowledge that we are essentially machines ourselves (just biological), and much of the process of developing AI is just trying to mimic the mechanisms that operate us, albeit crudely.
We call it a neural net because it's inspired by how neurons process and transfer information in our brains, so each of those data nodes is essentially a crude imitation of a neuron, refined over a few iterations of development since we first got it working.
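The "crude imitation of a neuron" described above fits in a few lines: an artificial neuron takes a weighted sum of its inputs and passes it through a squashing nonlinearity, and a network is just layers of these feeding into each other. A minimal sketch, with weights chosen by hand purely for illustration (a real network would learn them):

```python
import math

def neuron(inputs, weights, bias):
    """A crude artificial neuron: weighted sum of inputs plus a bias,
    passed through a squashing nonlinearity (the logistic sigmoid)."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

def tiny_network(x):
    """A two-input, one-output 'network': a hidden layer of two neurons
    whose outputs feed a single output neuron."""
    h1 = neuron(x, [2.0, -1.0], 0.5)   # hand-picked weights, illustrative only
    h2 = neuron(x, [-1.5, 1.0], 0.0)
    return neuron([h1, h2], [1.0, 1.0], -1.0)

out = tiny_network([0.3, 0.7])  # always lands strictly between 0 and 1
```

The "black box" quality mentioned earlier comes from scale, not mechanism: stack millions of these nodes and the arithmetic stays this simple, but no one can point to which weight encodes which behavior.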
Ray McCarthy
in reply to Shiri Bailem • • •We don't know how a brain works.
Current AI does not mimic any biological process.
It's called a neural net for marketing reasons.
Charlie Stross
in reply to Ray McCarthy • • •Shiri Bailem likes this.
Shiri Bailem
in reply to Charlie Stross • •@Charlie Stross @Pseudo Nym @Ray McCarthy @The_Turtle_Moves we have a rudimentary understanding of how a brain works, and we do know some vague details about how neurons work which inspired how we developed neural networks. (and as Charlie chimed in, it was inspired by really old understanding, we've learned even more since then)
We're not to the stage of just raw replicating a human brain, but we are to the stage of replicating some brain behaviors enough to get meaningful results.
Carl Muckenhoupt
in reply to Pseudo Nym • • •Marc Criley
in reply to Charlie Stross • • •Erik Nelson
in reply to Charlie Stross • • •ᴚ uɐᗡ
in reply to Charlie Stross • • •Shiri Bailem
Unknown parent • •Cornelius K.
in reply to Charlie Stross • • •When the even lazier grifters arrive you will receive an automated reply to that email with:
"As a large language model I cannot 'get f*cked' as I've been trained to ..."
Shiri Bailem likes this.
Dummy dum dum
in reply to Charlie Stross • • •Content warning: ai bad
Shiri Bailem
in reply to Dummy dum dum • •Content warning: ai bad
Dummy dum dum
in reply to Shiri Bailem • • •Content warning: ai bad
Ray McCarthy
in reply to Charlie Stross • • •Unfortunately they are using pirate sites and "training" (=copying) anyway.
AI is lie and a scam.