Dears, you are my tribe. Lately I have been following Émile's work everywhere! I am sure I will listen to this debate more than once. Diving deep into subjects like this, pursuing different deep interests, and deconstructing ideologies eventually brings me to very lonely places... but this is how it is...
@CH-fp6gj
A year ago
The greatest risk is sociopaths and psychopaths.
@Mhm5213
A year ago
I gotta find the timestamp for the bit about multiplicity and singularity...
@wyleong4326
7 months ago
1:10:20
@Mhm5213
7 months ago
@wyleong4326 thank you so much!
@jamesfehlinger9731
A year ago
From my archive [2009]:

> What kind of damage do you think a transhumanist would do,
> in the very unlikely case that such a person entered a position
> of power?

Well, 5 years ago [2004] (when I took these things a bit more seriously than I take them now) I wrote:

The "Singularitarian" circus may just be getting started! But seriously -- if you extrapolate this sort of hysteria to the worst imaginable cases (something the Singularitarians seem fond of doing), then we might expect that:

1. The Singularitarian Party actually turns into a bastion of anti-technology. The approaches to AI that -- in my humble, non-expert opinion -- are likeliest to succeed (evolutionary, selectionist, emergent) are frantically demonized as too dangerous to pursue. The most **plausible** approaches to AI are to be regulated the way plutonium and anthrax are regulated today, or at least shouted down among politically-correct Singularitarians. IOW, the Singularitarian Party arrogates to itself a role as a sort of proto-Turing Police out of William Gibson. Move over, Bill Joy! It's very Vingean too, for that matter -- sounds like the first book in the "Realtime" trilogy (_The Peace War_).

2. The **approved** approach to AI -- an SIAI-sanctioned "guaranteed Friendly", "socially responsible" framework (one that seems to be based, in so far as it's coherent at all, on a Good-Old-Fashioned mechanistic AI faith in "goals" -- as if we were programming an expert system in OPS5), which some (more sophisticated?) folks have already given up on as a dead end and a waste of time -- is to suck up all of the money and brainpower that the SL4 [i.e., the old "Shock Level 4" mailing list] "attractor" can pull in, for the sake of the human race's safe survival of the Singularity.

3. Inevitably, there will be heretics and schisms in the Church of the Singularity. The Pope of Friendliness will not yield his throne willingly, and the emergence of someone... bright enough and crazy enough to become a plausible successor will **undoubtedly** result in quarrels over the technical fine points of Friendliness that escalate into religious wars.

4. In the **absolute worst case** scenario I can imagine, a genuine lunatic FAI-ite will take up the Una****er's tactics, sending packages like the one David Gelernter got in the mail.

---------------------------------

I was reacting to the following remark [on the old "SL4" mailing list] by one "starglider", who was, once upon a time, an insider at the "Singularity Institute for Artificial Intelligence":

"To my knowledge Eliezer Yudkowsky is the only person that has tackled these issues [of "Friendliness"] head on and actually made progress in producing engineering solutions (I've done some very limited original work on low-level Friendliness structure). Note that Friendliness is a class of advanced cognitive engineering; not science, not philosophy. We still don't know that these problems are actually solvable, but recent progress has been encouraging, and we literally have nothing to lose by trying. I sincerely hope that we can solve these problems, stop Ben Goertzel and his army of evil clones (I mean emergence-advocating AI researchers :) and engineer the apotheosis. The universe doesn't care about hope, though, so I will spend the rest of my life doing everything I can to make Friendly AI a reality.

Once you /see/, once you have even an inkling of understanding the issues involved, you realise that one way or another these are the Final Days of the human era, and if you want yourself or anything else you care about to survive, you'd better get off your ass and start helping. The only escapes from the inexorable logic of the Singularity are death, insanity and transcendence."
@markjennings2605
A year ago
Émile Torres is so confused 'they' don't know their pronoun. Looking at 'them', neither do I. However, 'they' are now a philosopher, or is that 'philosophers'? The immediate question is: given that they are a pro-extinctionist, what are they doing continuing to exist?
@miroslavblagojevic2402
6 months ago
It's a curse of highly intelligent people; you'll never understand.
@jamesfehlinger9731
A year ago
From the comment thread on the fresh "SneerClub" subreddit post entitled "Yudkowsky dropped a new essay, and is shoehorning himself into cult-leader-corner again":

"I remember writing him an email in the late 90s asking his thoughts on what kind of curriculum it would take to educate specialists who could tackle the 'bad singularity' problem, because that was the thing then... and I remember I was a bit perplexed that he didn't care for it, because HE was the one who was gonna solve it. The level of confidence he had in himself was impressive, but I thought it was a bit foolhardy... playing with something a bit more dangerous than fire if he ended up failing... A couple of years later people started to talk about cultists, and yeah, the similarities were there from the get-go, with the 'everything MUST revolve around ME' 'charismatic leader' being central to it all, all the time..."

I call this "impressive confidence" the "guru-whammy". And it's very seductive -- lots of people fall for it. Unfortunately for the guru, some people eventually go on to fall **out** of it (including, it would seem, both Nikola Danaylov and Émile P. Torres). ;->
Comments: 12