4:35 The Large Mammal Brain Preservation Prize was awarded in 2018 for Aldehyde-Stabilized Cryopreservation that preserves structure down to the synapse level.
@philip_hofmaenner47
1 month ago
Let me get this straight, this guy would risk the end of all humanity for the slight possibility he would maybe become immortal? 😭😂
@victor_rybin
23 days ago
it makes total sense: the future generations are worthless without us (people who live today). only currently living people have value (and they give value to the next generations by thinking about them)
@joelroback2563
3 months ago
Discussion should have been focused on alternate (non-AI) doom-scenario p, since this seems, to me, to be one of the only remaining factors that could logically persuade anyone away from Pause AI if well argued and described. Entertaining discussion though. Great fan of your initiative, Shapira, and well presented by your guest as well. Good stuff still exists on YouTube, thanks for being a part of that. /J
@DoomDebates
3 months ago
Thanks Joel, appreciate it. FYI I’ll probably be recording a video version of this tweet I wrote which I think is on the topic you’d like me to address: x.com/liron/status/1766723002870513908
@joelroback2563
3 months ago
@@DoomDebates Cool. If you find any interest in it, please include your thoughts on an AI-supported/distorted future in comparison to one without. (Could there be non-s.AGI S-risks? Are we already set to collapse without a new, higher tier of management?) Even though I find myself agreeing with your and Yudkowsky's line of logic, there seems to remain some room for subjective opinion regarding which wheel to bet on, regardless of the direness of the AI route. This is of course not at all the general discussion, but I do find the non-AI path typically less analyzed in AI-doom discourse. Maybe it will not provide any chance at all in comparison to AI's .1-50% chance... can't get much more pessimistic than this, but I guess it could work as an argument to force AI dev. Thanks for the link, good read.
@DoomDebates
3 months ago
@@joelroback2563 I think "The Precipice" by Toby Ord provides a good survey of the top x-risks facing humanity. Nuclear is about a 1%/yr catastrophic risk, though not fully extinction-level. AI is the only one that's >10% total x-risk within our own lifetimes.
@vaevictis3612
3 months ago
I think the question of the moral weight of all future people vs. ours goes deeper than just basic AI alignment. It's kind of off topic, but here are a couple of points, because the other guy didn't argue enough to my liking. This is not really a doom debate, as I otherwise fully agree with the subject (my base p(doom) is ~99%, zero-sum game theory between us and ASI, unless we invent credible alignment techniques that can shift our expected p value down; I would personally be comfortable throwing the dice on ~10-20%).
- The universe as we know it is finite. So are the resources in it. We can convert all resources to one single currency: time. Time we can spend on the available energy. As our efficiency improves, we can use energy to squeeze more time out of one joule. We can easily imagine a world of all matter converted to computronium, where we live virtual lives similar to ours, but at better efficiency than the merely biological existence we have now. Following the Landauer limit and maximizing/optimizing the efficiency of all available matter, we could sustain a population similar to ours for truly *staggeringly humongous* aeons, trillions of years passing as mere seconds, subjectively. And yet everything has its end. Entropy (as we know it now) is a ruthless b***h; our energy will run out sooner or later. The chaos of life will end, and the absolute order of equal energy distribution will arrive.
- We can, however, change the equation a bit more by playing with consumption. If we reduce the number of "people", the expected lifespan of those remaining increases. This is still subject to diminishing returns on efficiency. We might find some value which is the most optimal (N(people)/time), but that is another topic. We can also increase the number of "people", and the expected lifespan will drop.
- The more you increase the population, the more time you lose for yourself. But the more time you create for others.
You probably already know where this is going. Enter utilitarian heaven/hell, aka the repugnant conclusion. So the golden question is: *where do you draw the line*? You can have trillions more people (with equal rights to the energy share; we kind of assume energy "communism" here, because if we add competition, things will get real ugly and real complicated, real fast). Those trillions of people will probably not decrease the timespan much. But then the real chad utilitarian will say: why stop there? Why not go *all the way*? Why have humans at all? We can turn all matter into the most optimal life-happiness configuration, maybe into single neurons that just fire once and shut down. Why would their combined experience be less valuable than that of human-type life? This also goes the other way: human-type life is a very arbitrary value to center on. What if we humans are closer to those mono-neuron life forms from the point of view of an ASI?
- So which is better? More humans for less time? Fewer humans for more time? I think I can answer that more easily than it seems: there is no one answer. It depends on your own personal preferences. It's the same as the orthogonality thesis. The issue is only where *you personally draw the line*. You can draw it at a mono-neural computronium sea. You can draw it at 100 trillion. You can draw it at 8 billion. You can even draw it at a much smaller number. Even one (1) life is perfectly valid, by itself.
- I draw the line according to my personal valuation of the things around me. I value myself first (the basic instinct of me as a human animal), my family next, my friends and associates after. Otherwise, I value all currently existing people equally. Following the lines of Redrick from "Roadside Picnic" by the Strugatsky brothers, I would extend the same choice to the Wish Granter: "Happiness to all, and let no one go away empty-handed".
So to start, to those people (we could also include selected animals & plants from our Earth) I would agree to share the resources of the future equally. I am not against slow, planned, careful, and limited expansion of the population; say, a logarithmic increase, with doubling starting at every 1,000 years and then slowing down exponentially. These people would live as literal gods and have the power to do anything within their virtual borders and their equal share of energy, until the end of times.
- So the real question comes: when it is time to draw the line, how do we determine who's going to draw it? Do we poll the consensus of all humans? Do we find some ivory-tower philosophers and take their personal valuation and enforce it on others? Maybe ask the (already somehow aligned) ASI to find the value that would be accepted by all people? Or maybe it would be up to the value system of the person pressing the proverbial button? And now we come back to alignment, at a higher level: alignment of all human values and preferences to each other.
- Things would of course change radically if we (or the ASI) beat or cheat entropy in some way.
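The Landauer-limit arithmetic in the comment above can be made concrete. A minimal sketch in Python, with the caveat that the 300 K operating temperature and the 2.7 K cold-background comparison are illustrative assumptions on my part, not figures from the comment:

```python
import math

# Landauer limit: the minimum energy to erase one bit of information
# is E = k_B * T * ln(2), proportional to absolute temperature T.
K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_joules_per_bit(temp_kelvin: float) -> float:
    """Minimum energy in joules to erase one bit at temperature T."""
    return K_B * temp_kelvin * math.log(2)

# At an assumed room temperature of 300 K:
e_bit = landauer_joules_per_bit(300.0)   # ~2.87e-21 J per bit erasure
bits_per_joule = 1.0 / e_bit             # ~3.5e20 bit erasures per joule

# "Squeezing more time out of one joule": the colder the computer runs,
# the cheaper each irreversible operation. Near the ~2.7 K cosmic
# microwave background, a joule buys roughly 100x more operations.
gain = landauer_joules_per_bit(300.0) / landauer_joules_per_bit(2.7)
```

The gain scales linearly with temperature (300 / 2.7 ≈ 111), which is one reason far-future efficiency arguments like the one above tend to assume cold, late-universe computation.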
@philip_hofmaenner47
1 month ago
I'm 44 and my estimate that AI will make me immortal before I die is less than 0.01%. I could be wrong, but my intuition says it's not impossible but extremely hard to stop ageing (especially DNA damage over time), even with AGI. I feel we're already a very optimised organism for longevity under the environmental conditions of this place. I'm even more sceptical that we'll be able to upload ourselves into a "Matrix" anytime soon. Even if it becomes possible and people are already doing it, and even if you could interact with them in the Matrix and couldn't tell the difference, I'd still be doubtful. I'd question whether those are truly the same people or anything close to conscious beings, rather than just simulations mimicking those who have died. I'd question whether they're experiencing real qualia or just acting like they did. Also, you would need exabytes of data to digitally copy a brain (at an atomic level), which in my opinion would be the only safe way to be sure it would be the same person.
@victor_rybin
23 days ago
what is your estimate that something else apart from AI will make you immortal? what might it be?
@philip_hofmaenner47
22 days ago
@@victor_rybin I don't think we will become immortal anytime soon. And yes, AI is the most likely technology that could achieve it, very far in the future. There are just so many unsolved problems, especially DNA damage over time...
@markupton1417
27 days ago
He thinks ASI is going to make him immortal 😂
@victor_rybin
23 days ago
what would be a better bet?🤔
@philip_hofmaenner47
1 month ago
I even feel some sympathy for that guy, but the way he's doing brain gymnastics and distorting the philosophy of ethics is a bit much. People dying of natural causes is not genocide. For me it's very clear that the survival of humanity, possibly for millions of years to come, should outweigh almost anything else for us as a species. I feel he just doesn't want to die. lol...
@victor_rybin
23 days ago
and for me it's very clear that the survival of humanity is absolutely useless without the survival of presently living people, who give value to humanity in the first place
@philip_hofmaenner47
22 days ago
@@victor_rybin Every generation gives new value to humanity in their time... It was like that 2000 years ago and it will be like that 2000 years from now (unless we annihilate ourselves)
@lukesanthony
3 months ago
definitely bet on blue
@ironknightgaming5706
1 month ago
Take a shot every time they say the word “cryonics” 😴😴😴🛏️🥱
Comments: 18