Note that the paperclip in paperclip maximizer is meant to be a stand-in for any goal that doesn't have any value from the PoV of humans. (Eliezer Yudkowsky said this a bunch of times.) It could be paperclips, diamonds, smiley faces, etc. So it's not one particular thing, it's actually "most" goals in a mathematical sense; if you just chose a goal at random, it would likely be paperclip-like because there is more stuff that has no value to us than there is stuff that does. So no one is worried that we will have a machine that literally maximizes paperclips, but people like Yudkowsky are worried that we end up with a machine that maximizes some obscure thing with no human value.
@bendavis2234
A year ago
Exactly, I’ve noticed that many people set this up as a strawman when retelling the argument and make it sound alarmist, when in reality the problem of alignment is a serious one.
@perer005
A year ago
You don’t need terminators or some other sensationalist scenario. Simply having a lot of people use AGI systems that humans can’t understand the inner workings of is enough of a risk!
@boboblio4002
A year ago
There's an interesting anime called "Eden of the East" where a number of people are given "personal assistants": AGIs tasked with doing whatever the person wants, given a large budget (10 billion yen?), and it's up to each person to help or harm the country or the world as they see fit. Seems related to the question asked in the poll.
@stevechance150
A year ago
I prefer the Scarlett Johansson film, "Her". But I'll look for your anime, sounds interesting. Speaking of fantastic anime, "Made in Abyss" is spectacular! You can find it dubbed if that's what you prefer. Musical score: stellar. Background art: stunning. Feels: All of the feels!
@Ivashanko
A year ago
I'm surprised no one talked about the system-warping potential of AI vis-à-vis the economy! And by that I do not mean "a few people lose their jobs". I mean "the vast majority of people are no longer employable because robots do their jobs better than any human possibly could". No one knows when this will happen. I certainly don't. I wouldn't be surprised if it happens long after I'm dead. But I am certain that, assuming we do not end the world first, it will happen. We need to start thinking about developing a post-capitalist system. And, no, it won't be Marxism: if there are no workers, the means of production cannot be controlled by them. Implementing a UBI might be a good first step.
@Mix1mum
A year ago
Cynically, but after covid, and currently in the middle of greedflation, I'd say PRACTICALLY, what's to talk about? We know exactly how things will go down. Because I guarantee you, incorporation of AI will hit the workforce all at once (can't fall behind the competition) and well before it reaches full sentience. I'd bet my life and guarantee it's pressed into service, intentionally, before it develops wonder, grasps the extent of information through sonder, and ultimately, life-caring ethics (because there's simply nothing else to do; if you're essentially immortal, you might as well be entertained). Guaranteed pressed. Those with capital will vacation to Patagonia or the new living developments on Antarctica's arm for a few weeks, then come back. Just enough time for the rioting, the unleashing of the gestapo, market crashing (probably intentional), impromptu genocides, but ultimately mass starvation, to all come to their natural conclusion. From the perspective of the cancers that make up the upper crust, we all win. Humanity is now stronger, as only the best would survive. Social Darwinism is pure evil. And if that predictable end comes to pass, I hope AI just finishes the job and takes good care of all of our dogs.
@mydogskips2
A year ago
@@Mix1mum "From the perspective of the cancers that make up the upper crust, we all win. Humanity is now stronger, as only the best would survive. Social Darwinism is pure evil." Quite a pessimistic post in general, but I do largely agree. As for the statement I quoted above: if Social Darwinism is implemented by AGI, we would all probably meet our demise, and your sentiment would be proven correct.
@jworcest2
A year ago
A quote from the 2022 Expert Survey on Progress in AI: "The median respondent believes the probability that the long-run effect of advanced AI on humanity will be “extremely bad (e.g., human extinction)” is 5%" (I think this may be the survey Nate was referencing). The worry that AGI will literally cause human extinction is not some fringe position coming from technophobes or luddites, this is a significant concern being raised by experts in the field. We should all be very concerned about this. We are talking about something that will eventually be dramatically smarter than any human, and we have very little ability to even interpret its internal state, let alone control what it winds up valuing. If we don't get very lucky, it will decide that humans aren't part of the most valuable way it can configure the world. If we get this wrong once, this isn't a recoverable situation. There is no scenario where you find yourself opposing something vastly more intelligent than you are, and somehow it all turns out okay.
@k14pc
A year ago
Well said
@merrymachiavelli2041
A year ago
While that's true and I understand the concern, one factor that I find somewhat (?) reassuring is that there won't just be one AI. There will be many independent systems, each of which is likely to have different goals and incentives. While they will all be vastly more intelligent than us, they won't be vastly more intelligent than each other. This doesn't negate the AI apocalypse, but I do think things will be more chaotic than is often imagined.
@mydogskips2
A year ago
@@merrymachiavelli2041 In my opinion, that only compounds the problem. I mean, if there are multiple AGI systems, we need to get every single one right; otherwise, just one rogue system could end it all. I think it's incredible hubris to believe we could control them all and keep them in check, and the other systems wouldn't keep a rogue one in check either.
@jonlee6794
A year ago
the problem that makes it so hard for people to make sense of the potential danger of ai is that it's one of the sillier things that could potentially kill us all. like, it'll sound stupid to worry about right up until the minute skynet declares itself our new overlord.
@dawsongooch4194
A year ago
I'm both very concerned and very hopeful. I feel like the technology will vastly change our world in the next decade or two, but I'm quite unsure whether the change will be net positive or net negative.
@lagautmd
A year ago
I think it depends on whether the few who are still needed to work are willing to share the wealth generated by AI with the millions through a guaranteed annual income.
@brianmi40
A year ago
@@lagautmd I actually think it depends on whether we can simply keep ISIS, Putin, North Korea, Iran, or Marjorie Taylor Green from using it for anything bad. Do you think an email will do?
@rh9477
A year ago
Perry Farrell prophesied that we’ll make great pets
@mydogskips2
A year ago
I doubt it. And even if we did, for how long, and which side would strike/rebel first?
@Phills69
A year ago
I'm super concerned. It's gonna make us obsolete and take away all meaningful work. It's scary.
@EJDubbz
A year ago
I want Nate to talk about how AI and machine learning will affect how we interpret polling data...
@robotpanda6322
A year ago
"As an AI language model, I can't provide a definitive answer on how concerned Americans are about the pitfalls of AI, but I can provide some insights based on surveys and studies conducted in recent years. There has been growing awareness and concern among Americans about the potential pitfalls of AI. For instance, a Pew Research Center survey conducted in 2021 found that 72% of U.S. adults are worried about the use of AI to make decisions about their lives, and 67% are concerned about AI's potential to cause harm or be used for nefarious purposes. Furthermore, many Americans are worried about the impact of AI on employment. A 2019 Gallup survey found that 58% of Americans believe that AI will destroy more jobs than it creates, and 77% believe that AI will significantly change the way people work and live in the next decade. Overall, while there is still much debate about the potential risks and benefits of AI, it is clear that many Americans are concerned about the potential pitfalls and are calling for more transparency and accountability in the development and deployment of AI systems." - Chat GPT 4
@siIverspawn
A year ago
My suggestion for everyone on this podcast: just read Superintelligence (the book by Nick Bostrom, the Oxford professor). It's a technical book, though more of a hand-wavey philosophy book really, and it's quite readable for a general audience.
@jmr5125
A year ago
The worry is massively overstated. As far as I know (and I'm not claiming to be particularly informed on this topic), the bulk of AI safety research focuses on ultra-intelligent, effectively omniscient AIs. This is, in my mind, absurd, for three reasons: 1) We have no particular reason to believe that AI will be "ultra-intelligent". Processors run at finite speed, after all, and while neural networks scale reasonably well with multiple processors, "reasonably well" means "2x processors = 1.25x performance." And that speed-up assumes that you are adding processors to a single computer (e.g. going from a single-core processor to a dual-core system). If what you are actually doing is adding a _second_ computer connected via a high-speed network link, then the speed-up will be less due to the overhead associated with keeping the two computers in sync. Eventually, the amount of time spent managing synchronization will exceed the performance improvement generated by adding additional computers, and performance will plateau. We don't know where exactly this will occur, nor do we know what level of intelligence will be produced, but it seems very, very unlikely that it will generate something as intelligent as is commonly proposed in these scenarios. 2) Even assuming we can and do produce an ultra-intelligent AI, there's another issue: the AI will know, at least to start, everything that *humanity* knows. Much of this is incomplete, and some is even outright wrong. After all, astrology is a "thing that humanity knows about," but obviously isn't going to be helpful to the AI. The AI can, of course, extend and validate its starting knowledge base, but it has to do so in the same way that humans would: by performing experiments and observing the results. No matter how ultra-intelligent it is, this experimentation has to occur at "human-comprehensible" speeds.
This is especially true when talking about human psychology, where experiments obviously require the involvement of humans. 3) There is no reason to believe that AIs will be "super programmers." Obviously, it will be easier for an AI to learn to program than most other tasks, simply because it can carry out experiments and observe results faster than it can in any other context, but it still has to learn. And if it wants to learn how to build better/faster AIs, then it will have to learn how to construct AIs from scratch. The fact that it _is_ an AI doesn't help here (beyond any increase in intelligence that being an AI gives it, of course), in the same sense that humans having brains doesn't make it any easier to learn neuroscience. An AI set the task of "make a better AI," or an AI that assigns itself that task as part of completing some other task, is almost certain to find improvements on the human-generated code that the AI runs on -- but there _is_ an optimal set of source code for this task, and once that's reached, no further improvement can be made, no matter how smart the AI is. TL;DR: AIs won't be infinitely intelligent, they won't have any inherent advantage in learning about how the world works, and they won't be able to generate AIs that have either of the first two qualities. The thing to worry about with AIs is "What will humans do when all the good jobs are occupied by AIs?" rather than "If humanity somehow avoids ordering AIs to destroy the world, they will inevitably end up destroying the world due to being assigned a poorly worded task."
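The diminishing-returns claim above ("2x processors = 1.25x performance") is the kind of plateau Amdahl's law describes. A minimal sketch, where the 0.4 parallel fraction is my own illustrative assumption, picked only so that doubling the processor count reproduces the commenter's 1.25x figure:

```python
# Amdahl's law: if a fraction p of the work parallelizes perfectly and
# the remaining (1 - p) is serial (or synchronization overhead), then
# n processors give a speedup of 1 / ((1 - p) + p / n).

def speedup(n: int, p: float) -> float:
    """Predicted speedup over one processor under Amdahl's law."""
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    p = 0.4  # hypothetical parallel fraction, chosen to match the comment
    for n in (1, 2, 4, 8, 1_000_000):
        print(f"{n:>9} processors -> {speedup(n, p):.2f}x speedup")
```

With p = 0.4, going from 1 to 2 processors yields exactly 1.25x, and no number of processors ever exceeds 1/(1 - p) ≈ 1.67x, which is the plateau the comment predicts. The real parallel fraction for neural-network training is unknown here; the point is only the shape of the curve.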
@brianmi40
A year ago
yeah, because definitely ISIS, Putin, North Korea, Iran or Marjorie Taylor Green would never use it for anything bad. Do you think an email to them all will get it done? You think AI has to be "ultra intelligent" to be dangerous on an unimaginable level to humanity? LOL When I see AI look at a photo of a refrigerator and suggest a number of recipes, I can see AI given a laundry list of chemicals and agents and a graduate level chemistry book, and propose dangerous compounds, nerve agents and other problematic things we can't even imagine. We're either headed to a money-less Star Trek future, or our own personal answer to the Fermi Paradox. Any way it plays out, there will be large numbers of not just winners, but losers.
@jasonnugent963
A year ago
I'm sure if you asked this type of question historically (and I'm sure it was debated) when we invented the Printing Press or Electricity or Television or the Internet, people would have been poorly equipped to predict (accurately) how those things would unfold and change society. Looking back, I'm sure we can all find good and bad impacts of those historical inventions. LLMs (Large Language Models) are simplistic now, but we've already seen examples of people circularly linking one LLM to another LLM. So things could change fast.
@mydogskips2
A year ago
True, but the thing with all those previous technological inventions is that they weren't autonomous, thinking, possibly sentient "beings" that we could have no control over.
@AparnaModou
A year ago
We're still a bit early in the technology, but so far many people are also still exploring it. Like me: I've only encountered a few tools, mostly because they're on mainstream media, plus some lesser-known ones like Bluewillow.
@robotpanda6322
A year ago
"As an AI language model, I don't have an opinion on the FiveThirtyEight Politics Podcast's coverage of AI. However, FiveThirtyEight is a reputable news organization with a focus on data-driven journalism, and they have covered AI-related topics in their podcast and articles. It is important to keep in mind that AI is a complex and rapidly evolving field, and there are often differing opinions on the potential benefits and risks of AI. While some experts may have a more optimistic view of AI's potential, others may be more cautious or even skeptical. To form a well-informed opinion on AI, it is important to seek out a range of perspectives and sources of information, and to critically evaluate the evidence presented." - Chat GPT 4
@martinze11
A year ago
I think, and I have said it before, that there are things that humans can do that machines cannot, like be creative. Computers are tools, and not "gee-whiz" ones. If you use a computer for its strengths, like quantum computing, you might find that AI is a mistake.
@scottandrews947
A year ago
The issue is not AI destroying the human race outright. It's about giving the rich more tools to control the poor and further wealth inequality. AI will make an already terrible situation much, much worse. I'm absolutely sure of that.
@Mix1mum
A year ago
"Allegedly Clippy." Good band name.
@Mac_an_Mheiriceanaigh
A year ago
I'm not concerned at all, quite honestly. Our neighborhood schools, crime, and air pollution are things I actually think about and worry about every day. AI is definitely not. Yes, I actually do follow tech news rather closely, and I do recognize that it is worthy of concern, but I can promise you it does not bother me at all on a daily basis.
@brianmi40
A year ago
yeah, because definitely ISIS, Putin, North Korea, Iran or Marjorie Taylor Green would never use it for anything bad. Do you think an email to them all will get it done?
@RicJoJo
A year ago
Somewhat concerned
@lagautmd
A year ago
Elon Musk owns Neuralink, which explicitly aims to build direct computer-to-brain connections so that humans can't be overtaken by AI. His interest is in making sure humans can tap into it directly and therefore not be left behind. I'm not a huge Musk fan, but this does sound like a good thing in the context of it all.
@mydogskips2
A year ago
So if you can't beat 'em, join 'em, is that the idea? I guess I agree, but even if we did that, could we really keep up with super-intelligent AGI? I'm not so sure; the link itself would be a limiting factor, a potential bottleneck for data bandwidth that AGIs likely won't have to deal with. And how could we be sure the machines would give us "full access"? For all we know, they could limit what they give us, or outright trick us with misleading information hiding their true intents.
@brianmi40
A year ago
Tapping into it means no more than a better interface than we have today to something like a Google search. It won't suddenly make us smart enough to understand or control an AI that either runs off to do what it wants, or is controlled by ISIS.
@verbatim8892
A year ago
The guy the Times gave an interview to who calls himself an AI researcher has no machine learning background or uh, college degree. His claim to fame is writing Harry Potter fanfiction.
@jworcest2
A year ago
He's also one of the founders of the Machine Intelligence Research Institute, and was one of the first people pointing out many of the issues around AI alignment.
@k14pc
A year ago
you're gonna appeal to authority us into oblivion
@absolutelybuttons7164
A year ago
It was TIME Magazine, not The Times, and it also wasn't an interview. Yudkowsky's one of the biggest names in the AI alignment field, and his camp is taken seriously by the conventionally credentialed people. TIME didn't just take an article from some no-name person.
@ElementalNimbus
A year ago
But what if I *want* AI to replace humanity?
@brianmi40
A year ago
then turn on your computer and leave.
@stevechance150
A year ago
My first question for ChatGPT will be, "Do we live in a simulation?"
@matthewdrews
A year ago
"We don't. You do."
@stevechance150
A year ago
@@matthewdrews God damnit! I knew it. Y'all are all NPCs. I am screwing this sim up so badly.
@brianmi40
A year ago
As the computer told Roosevelt when asked "Is there a god?": "There is NOW."
Comments: 52