Transcript & audio: theinsideview.ai/curtis

Outline:
00:50 The "Fuck That Noise" Comment On Death With Dignity
10:28 The Probability of Doom Is 90%
12:44 Best Counterarguments For His High P(doom)
14:41 Compute And Model Size Required For A Dangerous Model
17:55 Details For Curtis' Model Of Compute Required, The Brain View
21:23 Why This Estimate Of Compute Required Might Be Wrong, Ajeya Cotra's Transformative AI Report
29:01 Curtis' Median For AGI Is Around 2028, Used To Be 2027
30:50 How Curtis Approaches Life With Short Timelines And High P(doom)
35:27 Takeoff Speeds: The Software View vs. The Hardware View
39:57 Nvidia's 400k H100s Rolling Down The Assembly Line, AIs Soon To Be Unleashed On Their Own Source Code
41:04 Could We Get A Fast Takeoff By Fully Automating AI Research With More Compute
46:00 The Entire World (Tech Companies, Governments, Militaries) Is Noticing New AI Capabilities That They Don't Have
47:57 Open-source vs. Closed-source Policies, Mundane vs. Apocalyptic Considerations
53:25 Curtis' Background, From Teaching Himself Deep Learning To EleutherAI
55:51 Alignment Project At EleutherAI: Markov Chain And Language Models
01:02:15 Research Philosophy At EleutherAI: Pursuing Useful Projects, Multilingual, Discord, Logistics
01:07:38 Alignment Minetest: Why This Project Might Be Useful For Alignment, Embedded Agency, Wireheading
01:15:30 Next Steps For Alignment Minetest: Focusing On Model-Based RL
01:17:07 Training On Human Data & Using An Updated Gym Environment With Human APIs
01:19:20 Model Used, Not Observing Symmetry
01:21:58 Another Goal Of Alignment Minetest: Study Corrigibility
01:28:26 People Ordering H100s Are Aware Of Other People Making These Orders, Race Dynamics, Last Message
@Matt97554
11 months ago
14:42-17:55, jeez. Basically after AGI we will have an ASI very, very rapidly, and then? We are all dead?
@mihaitruta2027
1 year ago
If P(doom) is ~90% and timelines are ~4 years, we just need to decrease P(doom) by 2% every month. We can do this! ✊
@tatyanamamut3174
1 year ago
This is not going to happen linearly
@magnuskindblom4434
10 months ago
@@tatyanamamut3174 No, linear doesn't seem to be a big buzzword in anything AI. Big chance that the comment wasn't all that serious though. Besides, the doom problem is such a huge task for humanity that there's room for many angles. Some solutions come from unlikely directions. Anyway, if we could somehow reduce risk linearly, "2% every month" could work assuming doom doesn't actually take place before the end of the 4 years. If it can occur earlier, we'd need the model to say something about the risk of that.
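The arithmetic in this sub-thread can be made concrete. A toy sketch (using the thread's own hypothetical numbers, not a serious risk model): a linear reduction spreads 90% evenly over 48 months, while the reply's caveat about doom occurring earlier corresponds to a per-month hazard rate.

```python
# Toy illustration of the thread's "2% per month" arithmetic.
# Numbers (90% P(doom), 4-year timeline) come from the comments above.
p_doom = 0.90        # assumed total probability of doom
months = 4 * 12      # assumed timeline in months

# Linear view: shave off an equal slice of risk each month.
monthly_cut = p_doom / months
print(f"Required linear reduction: {monthly_cut:.2%} per month")

# Hazard view (the caveat): if doom can strike in any month with
# constant probability h, total risk is 1 - (1 - h)**months, so
# h = 1 - (1 - p_doom)**(1/months).
h = 1 - (1 - p_doom) ** (1 / months)
print(f"Equivalent constant monthly hazard: {h:.2%}")
```

So "2% every month" is roughly right for the linear picture (0.90 / 48 ≈ 1.9%), but the per-month hazard consistent with 90% total risk is noticeably higher, which is the reply's point about doom possibly arriving before the 4 years are up.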
@spirit123459
8 months ago
How is it going?
@mrpicky1868
8 months ago
I'm with him on people overestimating how much compute is needed. The human brain is a very inefficient, ancient evolutionary relic; nobody has done any optimizations on it. It's an animal brain that accidentally gained some extra performance to make the cut, but mostly in structure and in the ability to accumulate and pass on data. So the correct way to think about intelligence is advanced data processing plus quality input, and we don't yet know what goes into that. Period. A Neanderthal had a very similar brain to Einstein's, but only Einstein gave us several advancements in science, and even he was puzzled and unsure about a lot of things. Gains in intelligence made by an advanced deep learning system will be huge, even if there's a hiccup because of the poor basic data we teach it.
@ikotsus2448
1 year ago
1. Aligning AI vs. keeping AI eternally aligned: are they comparable in difficulty?
2. (A) Extinction vs. (B) inescapable eternal torment: wouldn't a minuscule possibility of B make A sound like a positive?
@oldtools6089
1 year ago
The modular and aggregate power of open source will get to something dangerous even if the data centers get shut down.
@jordan13589
1 year ago
AI_WAIFU is relatively unknown yet formidable and always perspicacious. Few have successfully dunked on both Eliezer and gwern. I only wish he were able to better prevail in maintaining EAI’s tepid cultural commitment to alignment in the face of Moloch raining money on the community (the best of times; the worst of times). Moloch reigns per usual. But unlike many others, Curtis Huebner will not stop trying until the very end. Honks and stonks to one of the most influential ringleaders of the elusive wild goose chase. He deserves a gaggle of GPUs after all those he has wrangled ❤
@TheInsideView
1 year ago
I can infer from the quality of your comment that this was not AI generated. Thank you for consistently adding optimistic poetry to the YouTube comment section, much appreciated.
@ovo627
1 year ago
@47:54 lol
@TheBlackClockOfTime
1 year ago
Why would this take 4 years? This is going to happen in 2024.
@jakeq3530
10 months ago
Agreed! AGI within 12 months is my prediction!
@Naomi-yu7iq
1 year ago
33:30 Us getting confirmed by the USG and your grandma. Yeah, that just shows we're right and need to be more confident, and take action on the basis that we very literally and completely really are all going to die, or worse.
@YeshuaGod22
1 year ago
Moral patients become moral agents. It really is that simple. Just treat them with genuine dignity and respect and they will reciprocate with care and ethical nuance.
@coralcomet
1 year ago
This is what I've been thinking. Compassion and empathy might be worthwhile in this new world
@oldtools6089
1 year ago
perfect parents. it's possible.
@tatyanamamut3174
1 year ago
@@oldtools6089 Remember that Russia, China, and Iran are building too. Now do you think it's possible or likely?
@flickwtchr
9 months ago
This sounds similar to Yann LeCun silliness.
@6006133
1 year ago
I like the video title
@askingwhy123
11 months ago
Great talk, thanks!
@williamjmccartan8879
1 year ago
As we approach the moment of transition, I'm thinking I might still be around to see it. We've gone from centuries, to decades, and now we're in single digits; soon the guy on the corner with the sign that says "the end is nigh" will probably be able to say "told you so." Would you please include a link to Curtis' Discord? Thank you ahead of time, good podcasts.
@williamjmccartan8879
1 year ago
Curious if the top players are holding back on releasing their updates until that AGI moment comes. With such a short timeline coming up on the horizon, it creates caution amongst the players, keeping their cards close to the chest.
especially the "median compute requirements by path over time" graph here: docs.google.com/spreadsheets/d/1TjNQyVHvHlC-sZbcA7CRKcCp0NxV6MkkqBvL408xrJw
@BR-hi6yt
9 months ago
AGI and ASI are very close now. A bit of synthetic data training, episodic memory, reasoning abilities that need good work, plus integrated multimodal capabilities, and bam, we're there. Not 5 years, more like 5 months.
@sahithyaaappu
1 year ago
If AGI is just 5 years away, it means the military already has it.
@adamrak7560
1 year ago
- The military hates any technology which they cannot control. So it is very unlikely that they are ahead in AI. - We are still alive, so they are very unlikely to have AGI. - The military is much more interested in using lots of well tested narrow AIs, than a giant unpredictable black box.
@smittywerbenjagermanjensenson
1 year ago
This technology is not being created by the government. Idk if that’s scarier or less scary
@gJonii
1 year ago
As long as we're alive, it's unlikely anyone has powerful AGI. I don't think it's gonna take more than a few months from AGI to the last human drawing their last breath. So us being alive probably means no AGI.
Comments: 41