Big Yud: You try to explain how airplane fuel can melt a skyscraper, but your calculation doesn't include relativistic effects, and then the 9/11 conspiracy theorists spend the next 10 years talking about how you deny relativity.
Similarly: A paperclip maximizer is not "monomaniacally" "focused" on paperclips. We talked about a superintelligence that wanted 1 thing, because you get exactly the same results as from a superintelligence that wants paperclips and staples (2 things), or from a superintelligence that wants 100 things. The number of things It wants bears zero relevance to anything. It's just easier to explain the mechanics if you start with a superintelligence that wants 1 thing, because you can talk about how It evaluates "number of expected paperclips resulting from an action" instead of "expected paperclips * 2 + staples * 3 + giant mechanical clocks * 1000" and onward for a hundred other terms of Its utility function that all asymptote at different rates.
The only load-bearing idea is that none of the things It wants are galaxies full of fun-having sentient beings who care about each other. And the probability of 100 uncontrolled utility function components including one term for Fun is ~0, just like it would be for 10 components, 1 component, or 1000 components. 100 tries at having monkeys generate Shakespeare has ~0 probability of succeeding, just the same for all practical purposes as 1 try.
(If a googol monkeys are all generating using English letter-triplet probabilities in a Markov chain, their probability of generating Shakespeare is vastly higher but still effectively zero. Remember this Markov Monkey Fallacy anytime somebody talks about how LLMs are being trained on human text and therefore are much more likely to end up with human values; an improbable outcome can be rendered "much more likely" while still being not likely enough.)
An unaligned superintelligence is "monomaniacal" in only and exactly the same way that you monomaniacally focus on all that stuff you care about instead of organizing piles of dust specks into prime-numbered heaps. From the perspective of something that cares purely about prime dust heaps, you're monomaniacally focused on all that human stuff, and it can't talk you into caring about prime dust heaps instead. But that's not because you're so incredibly focused on your own thing to the exclusion of its thing, it's just, prime dust heaps are not inside the list of things you'd even consider. It doesn't matter, from their perspective, that you want a lot of stuff instead of just one thing. You want the human stuff, and the human stuff, simple or complicated, doesn't include making sure that dust heaps contain a prime number of dust specks.
Any time you hear somebody talking about the "monomaniacal" paperclip maximizer scenario, they have failed to understand what the problem was supposed to be; failed at imagining alien minds as entities in their own right rather than mutated humans; and failed at understanding how to work with simplified models that give the same results as complicated models.
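As a sense-of-scale aside on that "vastly higher but still effectively zero" line: the back-of-envelope arithmetic looks roughly like this (a minimal sketch; the text length and per-character entropy figures are assumed for illustration, not taken from the post).

```python
import math

# Back-of-envelope only; every number here is an assumption for illustration.
TEXT_LEN = 5_000_000           # rough character count of Shakespeare's complete works
UNIFORM_BITS = math.log2(27)   # uniform typing over 26 letters + space: ~4.75 bits/char
TRIGRAM_BITS = 3.3             # assumed per-char entropy of an English letter-trigram model
MONKEYS = 10**100              # a googol monkeys, one attempt each

def log10_p_success(bits_per_char, n_chars, tries):
    """log10 of P(some attempt reproduces the exact text), approximated as
    tries * per-attempt probability (fine when the result is astronomically small)."""
    log10_per_attempt = -bits_per_char * n_chars * math.log10(2)
    return math.log10(tries) + log10_per_attempt

uniform = log10_p_success(UNIFORM_BITS, TEXT_LEN, MONKEYS)
markov = log10_p_success(TRIGRAM_BITS, TEXT_LEN, MONKEYS)

print(f"uniform monkeys: ~10^{uniform:,.0f}")   # about 10^-7,000,000
print(f"Markov monkeys:  ~10^{markov:,.0f}")    # about 10^-5,000,000
print(f"Markov model is ~10^{markov - uniform:,.0f} times more likely")  # ~10^2,200,000
```

Under those assumptions, a googol attempts only buys about 100 orders of magnitude, which is a rounding error against the millions of orders of magnitude still missing.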
The paperclip maximizer is a funny concept because we are already living inside of one. The paperclips are the monetary value in wealthy people's stock portfolios.
I'm one of the lucky 10k who found out what a paperclip maximizer is and it's dumb as SHIT! Actually, maybe it's time for me to start grifting too. How's my first tweet look?
A year and two and a half months since his Time magazine doomer article.
No shutdowns of large AI training - in fact, it has only expanded. No ceiling on compute power. No multinational agreements to regulate GPU clusters or first-strike rogue datacenters.
Just another note in a panic that accomplished nothing.
Might have got him some large cash donations.
It’s also a bunch of brainfarting drivel that could be summarized:
Before we accidentally make an AI capable of posing existential risk to human safety, perhaps we should find out how to build effective safety measures first.
Or:
Read Asimov’s I, Robot. Then note that in our reality, we’ve not yet invented the Three Laws of Robotics.
If Yud just got to the point, people would realise he didn't have anything worth saying. It's all about trying to look smart without having any actual insights to convey. No wonder he's terrified of being replaced by LLMs.
Before we accidentally make an AI capable of posing existential risk to human safety, perhaps we should find out how to build effective safety measures first.
You make his position sound way more measured and responsible than it is.
His 'effective safety measures' are something like a) solve ethics, b) hardcode the result into every AI, i.e. garbage philosophy meets garbage sci-fi.
Before we accidentally make an AI capable of posing existential risk to human safety
It's cool to know that this isn't a real concern, and therefore to have a clear vantage on how all the downstream anxiety is really a piranha pool of grifts for venture bucks and ad clicks.
At least the lack of Rationalist suicide bombers running at data centers and shouting 'Dust specks!' is encouraging.
Considering that the more extremist faction is probably homeschooled, I don't expect that any of them has ochem skills good enough to not die in a mysterious fire when cooking up a device like that.
There’s a giant overlap between Christian fundamentalism and the whole singularity shtick, and Yud’s whole show is really the technological version of Christian futurist eschatology (i.e. the belief that the Book of Revelation etc. are literal depictions of the future). Cory Doctorow and Charlie Stross call it Rapture of the Nerds.
As a mild tangent off this, just how many fucking things these dipshits have infected infuriates me. One of the most prominent examples I can think of is Star Trek Discovery, which in three different phases/places tried to be oh so edgy by riding the horses of Musk praise, AI panic, and a variant of the Christian futurist eschatology.
Even in the very moment of seeing them, they were jarring experiences. It’s going to age so, so badly.
Oh shit, I remember the Musk namedrop in Discovery. Didn’t they name him alongside historical scientists and inventors? I seldom feel actual cringe but that was actually embarrassing.
Yeah, and people tried to rationalize this as not being bad (after Musk was revealed to be a dipshit to people who were paying less attention) by saying 'it was the evil mirror universe captain who said it', but that didn't seem that convincing to me (especially as the rest of the crew didn't react to it, which you can rationalize away with 'well, he is the captain, you don't go argue with him about stuff like this').
(Linked clip: in episode four of season one of Star Trek: Discovery, Elon Musk is mentioned by Captain Gabriel Lorca between the Wright Brothers and Zefram Cochrane.)
Granted, this was back when Musk's public perception was at its most positive - it would take until the Thai cave diver incident in July 2018 before we saw the first hole being blown in Musk's "IRL Tony Stark" image.
Somewhat fittingly, that incident played out on Twitter, whose acquisition by Musk has done plenty to showcase his true colours.
Trying to imagine what a mirror universe Musk would get up to. I think any level of notoriety for him inevitably ends in execution after some failed palace intrigue.
We get it, we just don't agree with the assumptions made. Also love that he is now broadening the paperclips thing into more things, missing the point that the paperclips thing abstracts away from the specific wording of the utility function (because, like with disaster preppers preparing for zombie invasions, the specific incident doesn't matter that much for the important things you want to test). It is quite dumb. Did somebody troll him by saying 'we will just make the LLM not make paperclips, bro' and he got so broken by this that he is replying up his own ass with this talk about alien minds?
e: depressing seeing people congratulate him for a good take. Also "could you please start a podcast". (A Schrödinger's sneer)
He's talking like it's 2010. He really must feel like he deserves attention, and it's likely not fun for him to learn that the actual practitioners have advanced past the need for his philosophical musings. He wanted to be the foundation, but he was scaffolding, and now he's lining the floors of hamster cages.
That's a good quote, did you come up with that? I for one would be ecstatic to be the scaffolding of a research field.
Making an analogy to something more familiar, or to anything that actually happens in real life, is too pedestrian for a true visionary.
(It's just a guess on my part, but given the extent to which conspiracy theorists are all marinating in a common miasma these days, I'd expect that a 9/11 twoofer would be more likely to deny relativity for being "Jewish physics".)
Quoth Yud:
There is a way of seeing the world where you look at a blade of grass and see "a solar-powered self-replicating factory". I've never figured out how to explain how hard a superintelligence can hit us, to someone who does not see from that angle. It's not just the one fact.
It's almost as if basing an entire worldview upon a literal reading of metaphors in grade-school science books and whatever Carl Sagan said just after "these edibles ain't shit" is, I dunno, bad?
Only, it isn't a factory, as the only thing it produces is copies of itself, not products like factories do. Von Neumann machines would have been a better comparison.
I think Roko's Basilisk really sums up the Less Wrong community. They had a full panic but ultimately it takes a massive fucking ego to imagine an all-powerful AI would waste its resources simulating them in hell. The AI would rather spend its time making paperclips, it doesn't give a shit about making perfect holodeck copies of nobodies when the post hoc torture won't change any outcomes.