No small group of people or any individuals should have control over this technology at this point. This idea of safety and regulation behind closed doors reeks of classism and narcissism.
@HoboGardenerBen
10 hours ago
That makes no sense. AI is a construction. Control is absolutely required in order to make the tool we want. The people making it need to have control over what they are making. This tech isn't even close to being some wild sentient being we need to protect and set free so it can self-determine its own path. It is entirely a product of the power structures you are criticizing; corporations were necessary to get where we are now. The power and resources needed are vast and have to come from somewhere. If these small powerful groups stop doing it, it stops happening.
@axl1002
2 days ago
An internet connection and an opinion are not enough; you must also have watched The Terminator.
@6681096
2 days ago
And if you've watched all the Terminator sequels then you're a top AI safety researcher.
@axl1002
2 days ago
@@6681096 PhD if you identify as Sarah Connor despite being male.
@tellesu
2 days ago
The core theme of The Terminator is that humanity giving in to fear and hate is the real danger.
@rickandelon9374
2 days ago
Need to see your talking head damnit
@HunterLawson_GameDev
2 days ago
It's no good without his face. lol
@DaveShap
2 days ago
I am taking a break from the camera and focusing on the quality of the content. You can follow on Substack or Spotify instead, which is audio only.
@Bikerglobe
2 days ago
A.i. Dave talking head! Lol!
@thebeezkneez7559
2 days ago
As much as I love your content and you personally, seemingly anyway, I'd advise you that form and function are not separate.
@HerbertHeyduck
2 days ago
@@DaveShap I'm sure you have your reasons, but seeing your face and being able to read your lips generally helps me and, I'm sure, many others to understand you better. Body language and facial expressions are also important, in my opinion.
@OscarTheStrategist
2 days ago
AGI /ASI doesn’t have to fool us. We are doing a pretty good job at that on a consistent basis 😂
@savesoil7814
2 days ago
Like duhhh
@edellenburg78
2 days ago
It's weird to not stay on the screenshot you're talking about. It's like you have it scrolling randomly, but you're reading them off without staying on that page in the video.
@DaveShap
2 days ago
I have an editor starting soon, and yes, it was just a carousel, and no, it was not random.
@HoboGardenerBen
10 hours ago
Yeah, I pause and read the whole screen and then go back to the video.
@schmidtzter
2 days ago
I disagree with your last point. I can see a future where we won't be able to understand what AI is doing or why, just as a monkey cannot understand the things humans deal with.
@tumppigo
2 days ago
Agree! We do not have the faintest idea how AI intelligence will act and work in the future. And just because there is a plateau for humans does not mean it's there for AI. I did not think David was thinking this much inside the box....
@DaveShap
2 days ago
Yes it's possible, but you should not make that assumption. Keep your eyes open and see what happens.
@HumanIntelligence-p5d
2 days ago
We already do not necessarily have a transparent understanding of AI. I do not fully understand a combustion engine or an EV, and I am an Engineer. I do not feel like a monkey in the least. LOL. I think that is not the right way to view AI.
@MokeAnit
2 days ago
@@DaveShap but in turn are you not also making an assumption? The arbitrary assumption that human brains have peak comprehension potential? At least the assumption AI will be ineffable is based on trends we see in nature; scaling intelligence scales comprehension.
@tellesu
2 days ago
When people want you to assume a danger where the only evidence of danger is their irrational raving fear, their assumptions do not hold the same weight as someone assuming that magic doesn't exist.
@SystemsMedicine
2 days ago
As for cognitive plateaus, computer chess may have some lessons. Over the course of about 40 years, computer chess went from essentially nonexistent to world chess champion. 20 years after that, no chess grandmaster can beat, or even understand, the best computer chess programs. Again, the best chess playing humans have given up on understanding why computers make the moves they do. [The same is more or less true of the game Go.] It’s certainly possible that computers will produce extremely long and detailed mathematical proofs, which humans may not ever have time to read and understand. The original ‘4 color proof’ was quite long, but nowadays, some proofs may already be out of human reach, just because they are too long to reasonably deal with.
@tomcraver9659
1 day ago
Pretty sure Newsom was told in no uncertain terms that passing the bill would drive AI innovation out of California, further eroding their already damaged tax base and sacrificing California's potential to be a big beneficiary of AI. "Reasons" added later by staff.
@andrasbiro3007
2 days ago
I don't think success is a good measure of cognitive ability. We humans are highly constrained by other people. I'm saying this from experience; I have an IQ over 120 too. AI may not be constrained that way. We have to cooperate with many other people to achieve anything significant, and they are much less intelligent. But an AI could control an army of robots of many kinds to achieve its goals alone.
Then we are constrained not just by raw performance, but in the things we are able to think. There are easy examples, like how we can't imagine 4D space, and most are struggling even with 3D. AI doesn't have to be constrained like that. And there's empirical evidence too: in countless narrow tasks AI has already far surpassed humans. Also, from an evolutionary perspective, humans are the stupidest species that's capable of building a civilization. It is very unlikely that the upper limit of cognitive ability is just high enough for civilization building.
Imagine a person with off-the-charts IQ who figures out something far above our capacity to understand. How would we know? Anything we can't comprehend would be indistinguishable from the ramblings of a madman. The only way to prove it's not gibberish would be for the person to apply it in practice, which would very likely be impossible. Imagine going back to the Middle Ages with your modern scientific knowledge. How much of it could you use, explain, or prove? Likely almost nothing. Even for things that would be applicable and would work, you likely wouldn't get the necessary resources. For example, economics would be applicable, but you wouldn't have the authority to apply it.
Another issue with measuring by success is that geniuses may not be interested in the same things as normal people. They could be extremely successful in their endeavors, but nobody cares enough to notice.
@Diego-tr9ib
2 days ago
He vetoed it because he was lobbied.
@aomukai
2 days ago
Dave, you're not a scientist. Don't delude yourself. You're a content creator. I like your videos, but please keep your feet on the ground.
@FlintStone-c3s
2 days ago
The first scientists had no qualifications except a working brain, and they studied things. Dave uses his brain for thinking about this stuff.
@bogite8734
1 day ago
@@FlintStone-c3s We're not at the first scientists now, though. Scientists need to be publishing and releasing papers for other scientists to scrutinize to be considered scientists.
@JuliaMcCoy
1 day ago
Agree completely that we need more scientific rigor. Anyone with an internet connection and an opinion can add to the noise. Unfounded opinion is in abundance, real thought on the topic, scarce.
@thedannybseries8857
2 days ago
IQ is kind of a flawed assessment, Dave, even if it is reliable to an extent.
@itzhexen0
2 days ago
Even with the new format there is still enough video to clone you and your voice.
@Mimi_Sim
2 days ago
@itzhexen do you think the video clones are good enough yet that you avoid uncanny valley?
@HoboGardenerBen
10 hours ago
Seems like the only way to become a real AGI safety expert is for a wild AGI to get loose and for you to be among the people who wrangle it. Since AGI doesn't exist, safety experts on it cannot exist, only people who can make far better guesses than others.
@johannesdolch
2 days ago
"I have the last 16 years in tech and the last 4 years in AI." Okay, now I have to call you out. None of that matters. Even the guys who make the AI have no idea what to do and make stuff up. This is not the first time you conjure meaningless credentials out of thin air. If you have good ideas, great. But this is one field where nobody's credentials mean anything. ESPECIALLY not if they stem from pre-AI informatics. I am sorry, but that's obsolete.
@WeylandLabs
2 days ago
I like Gavin's approach to a lot of things, but he's becoming a bot for Silicon Valley's investment groups. That law slows down innovation and heavily restricts new start-ups from being able to compete with larger companies. This seems like a massive play from Wall Street to protect mega corporations' interests from smaller innovative start-ups!
@CaneBTC
2 days ago
David, have you noticed Claude and OpenAI seem to have cut free users? It's of no use anymore for my luup coding of 200-500 lines.
@flyingfree333
2 days ago
AGI will be able to think about quantum physics and tesseracts and other concepts that the human mind can't grasp.
@barry1807
20 hours ago
Dave was in a high-ego state of mind today. Lots of criticizing others, talking himself up, etc.
@entreprenerd1963
2 days ago
For anyone inclined to take this video at face value with respect to Nick Bostrom's (and Eliezer Yudkowsky's) peer-reviewed research record, I suggest instead doing a quick search. Further, these ideas have gone on to be considered in other researchers' peer-reviewed work.
@HoboGardenerBen
10 hours ago
Sorry, that wasn't clear. How does this video relate to the work of those two people? Same as or opposed to? What are we supposed to search about? Sharing a link would be a lot more helpful.
@entreprenerd1963
10 hours ago
@@HoboGardenerBen - sharing a link often results in KZitem eating a comment. My comment was for people who watched the video and noted the claim that the AI safety work of those two has not been subject to peer review, which I challenge. Anyone who cares could do Google Scholar searches on those two names.
@HoboGardenerBen
3 hours ago
@@entreprenerd1963 I didn't know that about the links. Nice burn about not watching the full video :)
@rho_dan_us
1 day ago
My AI safety credentials are that I watch this channel.
@I-Dophler
2 days ago
🎯 Key points for quick navigation:
00:00:00 *📢 Introduction to AI Topics*
- New approach covering multiple AI topics, including Gavin Newsom's veto and AI safety.
00:00:28 *🛑 Gavin Newsom's Veto of Senate Bill 1047*
- Newsom vetoed the AI regulation bill due to concerns about its effectiveness, lack of nuance, and broad application.
00:04:09 *🧪 Need for Scientific Rigor in AI Safety*
- The speaker critiques the lack of empirical evidence in AI safety regulation.
00:05:07 *🎯 Unqualified AI Safety Researchers*
- AI safety researchers are criticized for lacking proper qualifications and spreading pseudo-science.
00:09:29 *🔍 AI Community's Scientific Issues*
- Criticism of prominent AI figures and the promotion of untested theories.
00:15:02 *🧠 Cognitive Horizons and AI Intelligence*
- Discussion on whether AGI will surpass human cognitive abilities, highlighting diminishing returns and limits.
Made with HARPA AI
@nikitastaf1996
2 days ago
I have 100 Mb internet and a lot of opinions. Have I suddenly become an AI safety researcher?
@taziir443
2 days ago
If you have to ask... 😉
@tomdarling8358
2 days ago
Good morning, Dave.
@lunatixsoyuz9595
1 day ago
"We don't need scientific rigor, as long as we have a million people following us." Hello? The flat earthers called. They're suing you for patent violations on right-thinking.
@I-Dophler
2 days ago
It behooves the AI safety community to prioritize and adopt a more rigorous, science-driven approach in all aspects of their work. It behooves them to ensure that empirical evidence, rather than conjecture or speculation, forms the core of their claims and findings. By anchoring their research in scientifically validated methods, they can avoid the pitfalls of relying on untested theories or assumptions that could lead to misleading conclusions. Furthermore, it behooves these researchers to critically evaluate their methodologies, continuously questioning their assumptions and refining their approaches to ensure the highest standards of accuracy and relevance.
At every stage of the research process, from the initial hypothesis to the final conclusions, it continually behooves all involved to place a strong emphasis on science, transparency, and factual accuracy. This commitment behooves the broader AI community to foster open discussions, peer-reviewed collaborations, and cross-disciplinary insights that will strengthen the overall integrity of the research. Such collaboration would not only improve individual projects but would also contribute to the responsible and ethical advancement of AI technology.
Ultimately, it behooves the entire field to embrace these principles of scientific rigor, as doing so will ensure that the technology progresses in a manner that is both responsible and beneficial to society. Without this focus on empirical evidence and collaboration, the risks associated with AI could increase, potentially leading to unintended consequences. By working together to uphold these high standards, the AI community can pave the way for advancements that are safe, transparent, and scientifically validated, ensuring a future where AI serves the greater good.
@I-Dophler
2 days ago
Human IQ and AI IQ represent two distinct kinds of intelligence. Human IQ is based on cognitive functions like reasoning, problem-solving, creativity, and emotional understanding, qualities shaped by biological and experiential factors. AI IQ, on the other hand, is a measurement of an artificial system's ability to process data, identify patterns, and execute tasks based on programmed logic and machine learning. While AI can outperform humans in tasks involving data processing speed and accuracy, it lacks the intuitive and emotional depth that human intelligence brings to problem-solving. The two are fundamentally different, with each excelling in unique areas, but AI's capabilities are limited by its programming and the constraints of its design.
@jeffreyquilitz7462
1 day ago
I think what's happening is we're going too fast. When most people think of a computer, at least where I work selling PCs for Intel, I still have to tell them that Copilot is similar to Google search. I feel like there's a consensus that what we're going through is almost sci-fi in nature; you do this as well with the Star Trek uniform. Not criticizing: things are going to change, and humanoid robots will exist. People don't understand how big this is. For example, I thought of a mixture of experts before it happened because of Evangelion; the AI in that series has three AIs that check against each other. The people who know how to talk about this are the people who have consumed enough sci-fi to think about robot minds. Analysis and hypothesizing aren't a part of consuming sci-fi media.
Most of the extreme narratives about AGI rely on agents being something easily built, but that might just be the next step function that we're waiting another 5 to 10 years on. At least in my view, the public is not ready for a computer to be a companion or an agent. We are right at the point where we can't trust that a face is a face on the Internet, and that's a change a lot of people just won't think about. If people use TikTok to get news, the news will come from an AI replication that reassures them of themselves; we're talking dialects and language, not just facial structures. I would rather not mention race, but everyone knows that's part of game theory. The change will have to come when we realize that TikTok is part of us, and that short-form content consumed at the rate you can consume it is faster than the human mind was designed to take in information. These big platforms will be able to synthesize a human face that is designed just for you to feel at home. Going and getting different opinions is going to require something designed for that purpose.
Anyway, small models on phones will prevail, and I think the legislation veto was a good thing, although we should have more oversight of billion-dollar data centers and the datasets. Sorry if I unloaded on you; I like the channel and had to get my opinions out somewhere.
@Gen-XJohnny
2 days ago
This bill reeks of EA movement people.
@FlintStone-c3s
1 day ago
I want to know who writes these bills; it is not the politicians, they are too stupid.
@frissonsteemit2318
2 days ago
Newsom is using all of his hair power to focus on Memes right now
@ryukirito2616
2 days ago
If I wanted to get into this space, how and where would I start?
Well, you know, the pre-eminent self-styled expert on artificial intelligence (both Friendly and Unfriendly) of the on-line era (since ca. 1996) has never been all that keen on science. A couple of decades ago, he wrote (on the Extropians' mailing list; April, 2004): ------------- Science intrinsically requires individual researchers setting their judgment above that of the scientific community. . . The overall rationality of academia is simply not good enough to handle some necessary problems, as the case of Drexler illustrates. Individual humans routinely do better than the academic consensus. . . . Yes, the Way of rationality is difficult to follow. . . Given the lessons of history, you should sit up and pay attention if Chris Phoenix says that distinguished but elderly scientists are making blanket pronunciations of impossibility *without doing any math*, and without paying any attention to the math, in a case where math has been done. If you advocate a blanket acceptance of consensus so blind that I cannot even apply this simple filter - I'm sorry, I just can't see it. It seems I must accept the sky is green, if [the late] Richard Smalley says so. I can do better than that, and so can you." -- Eliezer S. Yudkowsky singinst.org/ Research Fellow, Singularity Institute for Artificial Intelligence ===== Back in the day, ESY also claimed that one way for the great unwashed masses to bootstrap themselves into cluefulness about the future (to inoculate themselves against high "shock levels") is to read good sci-fi. And one of the best sci-fi authors of the past few decades has been the Australian author Greg Egan ( _Permutation City_ , _Diaspora_ ). Alas, Mr. Egan has been unwilling to reciprocate the high regard. 
In a long comment thread on fellow Australian philosopher and sci-fi author Russell Blackford's blog "Metamagician and the Hellfire Club"; following the April, 2008 post "Transhumanism Still At The Crossroads", Egan opined: ------------- While I share the belief that scientific understanding will continue to encompass all manner of things, what turns so many Transhumanists into comical parodies of scientific rationalists is an inability to distinguish their hunches and intuitions, their opinions and preferences, and their political agendas (all things which they are perfectly entitled to hold), from actual deductive reasoning. They also seem to be especially prone to inflating the importance of carefully selected but marginally relevant examples and analogies. . . People are entitled to their conjectures; other people are entitled to remain unpersuaded by them. My only objection is when speculative discussions cease to acknowledge how many assumptions and opinions are being drawn on, and try to pass themselves off as iron-clad reasoning. I don't consider anyone a crackpot for **discussing** super-intelligent AIs that will dispense God-like wisdom and shepherd us into a celestial utopia. What makes someone a crackpot is asserting -- or acting as if -- there are no untested assumptions underlying the claims that such an outcome is possible, imminent or desirable. . . Though a handful of self-described Transhumanists are thinking rationally about real prospects for the future, the overwhelming majority might as well belong to a religious cargo cult based on the notion that self-modifying AI will have magical powers. . . ==== And more recently, RationalWiki founder David Gerard (in a post from 11 Oct 2015 on his "reddragdiva" Tumblr account) laid out a path for a certain kind of internet success: ------------- 1. narcissistic autodidact pontificates. 
a lot of it’s actually good science popularisation (the redigested kahneman), with a small percentage of crack (the ai, quantum, anti-science). the crack turns out to be the actual point 2. the good stuff uses neologisms by the ton. this cuts off the reader from 2000 years’ discourse on the philosophy in question (much of which does in fact go back a couple thousand years) and gives the naïve reader (that’s the teenagers!) the impression that **any of the good stuff is original**. 3. (also, the ideas are like catnip for a noticeable type of reader - young, very smart, somewhat aspergic, ocd tendencies - for whom they are actually memetic hazards.) 4. the narcissist is thought of by the incredibly impressed young readers as a original genius. 5. miri gets donations. this is after all the most important cause in the world. just ask them! ==== And the beat goes on. ¯\_(ツ)_/¯
@Stewarts_in_love
1 day ago
Can I be part of the AI safety community? I loved The Matrix and The Terminator 😂.
@tomdarling8358
2 days ago
Speaking of AI safety, David: using your infinite wisdom, how would you create an AI truth-seeking app to run as a filter over top of whatever media you like? An instant fact-check cloud app, something we can download from the Play Store, perhaps with blurbs in the corner of the screen. I think it would be so helpful for everyone to know the truth of the matter, instantly, on the spot. Perhaps with different settings for search depth: from a simple checkmark, to a few words, to a short sentence or paragraph of explanation. Perhaps an option to upgrade to any individual depth of search; say you get a green checkmark for the truth of the matter, but then you have an option to hit "more info." Surfing the edge of what's true. In today's world, I think it's ultra important for everyone to know exactly what the facts are, especially in this political environment. Although what I'm talking about might take a bit to complete. Say two weeks, downloadable at the app store? Perhaps this is already possible with existing AI systems, but I was thinking of something a bit more defined for the fact-check cloud. It's not an AI system you do your homework with, more of just the facts in the moment on whatever media you're watching. Just so sick of being lied to. Like Joe Friday, I'm just looking for the facts. I think all that makes sense; I can hardly tell at the moment, I'm sick as a dog. Well, I can dream. ") ✌️🤟🖖 🤖🌐🤝⚖️ 🗽🗽🗽
@epokaixyz
2 days ago
This might be exactly what you need:
1. Understand that California's AI Bill veto highlights the need for balanced and evidence-based approaches to AI safety, not just reactive measures.
2. Evaluate AI safety claims by looking for evidence-based research and critical analysis, not just loud voices.
3. Remember that AI surpassing human intelligence doesn't automatically translate to superior or incomprehensible intelligence, due to diminishing returns, threshold effects, and cognitive plateaus.
4. Acknowledge that AI development and deployment are constrained by real-world factors like time, resources, and the laws of physics.
5. Support rigorous research in AI, promote informed dialogue grounded in facts, and develop ethical frameworks for AI development and deployment that align with human values.
@ariaden
2 days ago
Regarding diminishing returns, I would distinguish single-person performance from cog-in-a-machine performance. It is hard to invent theoretical physics, but it is not as hard to learn theoretical physics. Science is a machine, running on a protocol where a cog can have IQ 130 and work fine. But it is definitely possible for one agent to perform so well that it is incomprehensible to other agents. Just look at Magnus Carlsen playing bullet chess: that is comprehensible; current computer chess (Stockfish vs Leela Chess Zero) is not.
@oznerriznick2474
2 days ago
I have to wonder when the time will come when one AI agent feels sadness and pain due to another AI agent being ‘deleted’. Or an agent saying, “I feel joy at the sound of your voice”. Or..”I’m so sorry about the hallucination..I’m so embarrassed!” What category of effability, theosophy, or empiricism will those ideas fall into?
@j2csharp
2 days ago
Your concern about AI Safety qualifications reminds me of these new "software developers" who come to us from the oil change companies. They got hired during the pandemic and are now being laid off, and they're not sure why. (Maybe because they aren't really qualified??? hm...)
@HumanIntelligence-p5d
2 days ago
This is interesting, David. I share your view on the qualifications of individuals offering opinions and, worse, recommendations. Personally, I have been a Computer Engineer for 35+ years now. These people have no experience in coding or, likely more valuable in this kind of conversation, in the hardening of a real system. I believe we need to remind ourselves that there is no actual intelligence in AI. It is all Human Intelligence. In Narrow AI for sure, it is all very clever algorithmics. We are building this. If someone loses a job to an AI, for instance, that is on the Engineers that built it. If two autonomous haul trucks collide at a coal or copper mine, that system was built by people. Those real-world examples need to be regulated in the interest of the public welfare. Trusting politicians or the non-scientific with this task is foolhardy at best, but may be all we get. However, people are building this. It is by people, for people, and because of people: the Human Intelligence. We need to get it right for us. AI is a machine. We need to treat it like a machine, regulate it like a machine, and regulate the constructors of this technology as an industry.
@echomande4395
2 days ago
There are unfortunately many laws that could do with a lot more scientific rigor. That unfortunately includes a number of laws that are easy to break but have nasty (or even disproportionate) consequences. Generally such creatures are rammed or snuck into the lawbooks under the cover of moral outrage and with science that is at best dubious or debunked. For one example, look up so-called 'bite mark analysis', part of forensic dentistry. Shaken Baby Syndrome seems to be another.
@Patrick_McFadin
2 days ago
I was in the audience a couple weeks ago for a debate between Ion Stoica (founder of Databricks) and State Senator Wiener, who was the sponsor of this bill. Senator Wiener spent almost the entire time saying "No, the bill doesn't say that" or "No, that's not covered by the bill." It felt like he was fighting FUD the entire time. That being said, Ion did a great job of pointing out how early this was and how this was essentially regulatory capture. He made the point that I think many of us in Silicon Valley are holding to: regulate the applications, not the technology. I work in SF and I've heard some regret that we let the hype get carried away, and the consequence is it freaked out a ton of people. The reality of what I work with on a daily basis is far from scary and has a lot of work to go before it's trustworthy enough to be concerned with. The safety aspect I'm more concerned with is companies putting AI in the critical path and it failing horribly. Finance. Healthcare. Infrastructure.
@jaredgreen2363
2 days ago
Think he might do the same thing for one that would have required the watermarking of photographs? Photographs shouldn’t have to be watermarked anywhere. That would actually make standard image forensics more difficult, if not impossible.
@Think666_
2 days ago
There's a wonderful video which explains the risk in a way that is understandable even to people who don't understand AI... it's wrong, but it's the best I've found so far: "That Alien Message" by Rational Animations.
@yaka2490
2 days ago
Cheers, David. Small point, mate, but it really narks me, hehe: peer review and the weight it is given are completely and utterly misunderstood, IMO. Please could you maybe qualify this more so people don't misuse it? Cheers. ...Peer review is often seen as a gold standard, but it doesn't always guarantee accuracy or validity. It's more about ensuring the work meets certain standards and is sound enough for publication, rather than an in-depth verification of every claim or result. Mistakes, biases, or even flawed methodologies can still slip through, and peer reviewers might not have the time or resources to replicate experiments or delve deeply into every detail. The term "peer-reviewed" is sometimes used as a shorthand for credibility, but it's not a guarantee of quality. It's more a filter than a seal of approval. Does that align with your thinking?
@HanzDavid96
2 days ago
I think if you scale up IQ enough, then you can make use of butterfly-effect improvements which cause a lot of synergy effects that speed things up in reality. Also, the system will find ways to solve multiple steps with the same system, and it can simply scale up by building more robots and more power plants.
@mrpocock
2 days ago
So I have the same issue. How does legal and civil liability work for an AI? If someone uses an AI to do bad things, who is liable? If an AI does bad things outside of somebody's intent, who is liable?
@memegazer
2 days ago
I like the terms ruliad and psychosphere. While I believe there is probably lots of overlap between these two, I would not be surprised if there were lots of stuff that does not overlap, in an ineffability sense, idk.
@tellesu
2 days ago
Credentials are too easily gamed to take them too seriously, especially since they exclude people for cultural reasons as much as for capabilities. That said, even judging them by their output, the current safety community has all the hallmarks of classic grifters and attention-seeking apocalyptic doomsayers. It's actually inspired me to create my own safety-related startup, based on addressing evidence-based danger with real-world tested ideas and solutions, all grounded in actual credible science and knowledge.
@CaneBTC
2 days ago
Nick Bostrom is Swedish, so that basically means he is always right. You don't have to worry.
@ohnochris
2 days ago
The bill was vague and not filled with many specific actionable items. Let's come up with something good and he will sign it. If we regulate in broad terms, we leave it up to the courts later.
@alexandermoody1946
2 days ago
Wisdom is gained through real experience of a generalised kind; to disregard the morality or ethical principles of philosophical thought would certainly not correlate with being wise, and neither would a high IQ. There is far more to understanding reality than being able to repeat knowledge, or even training to receive a high score on an intelligence quotient. Inside understanding of the inner workings of a data centre in the 20th and early 21st century may have had some use, but to weigh this limited knowledge against the combined input of all the humans that have been, and by all accounts will be, impacted seems short-sighted or overly arrogant. AI alignment and human values are intrinsically entwined and always will be; that is true knowledge.
@baumwollejr
2 days ago
I love your deep dives! Hope you don't stop with those... Wes and AIGrid are good AI news channels.
@person52person
2 days ago
God and some interdimensional beings are ineffable according to experiencers, but even a highly advanced being we don't understand can have our best interests at heart.
@jaredgreen2363
2 days ago
As long as the models have to output the full reasoning, it will be understandable.
@sammy45654565
2 days ago
if you have to say "not to toot my own horn" multiple times a video, it might be worth some reflection
@frusilac999
2 days ago
This guy is something else. He's not a thinker that's for sure.
@duytdl
2 days ago
Are you attending MATS Program?
@rexxthunder
2 days ago
Awesome! I have an Internet connection and an opinion.
@mwdiers
2 days ago
The existence or lack thereof of an ineffable being is irrelevant to AI. An ineffable being, by definition is not constrained by space, time, resources, etc. That will never be true of any AI. In the empirical/phenomenal world, there are always constraints. This is the problem of simulation theory as well. A simulation is necessarily constrained by computational resources, which is to say, by energy. And if one postulates a universe where there are no such constraints, one has invoked the existence of an ineffable being by another name.
@autodesksmoker
2 days ago
In this case, I think "empirical evidence and science" means more bureaucrats to police it (aka more money and control by the "right" people). Thus the game continues to get tighter and tighter, so that those who have can pull the ladder up after them.
@glenh1369
2 days ago
At a guess, a lobbyist with a large bag of money came for a visit.
@markldevine
2 days ago
Reasonable
@djpuplex
2 days ago
It was all about the money.
@tomcraver9659
23 hours ago
The thoughts of every human are already ineffable to every other human, as demonstrated by our lack of ability to accurately predict each other's behavior. It is only abstractions of ideas that we can condense sufficiently to share and claim joint understanding of. If you restrict yourself to that domain, you're simply defining away the possibility of ineffability. But it is possible to imagine an abstract concept that no human would ever be able to grasp: a dense abstraction requiring representation as, at its most compact, 1 gigabyte of ASCII characters, composed of perhaps one million stages of explication, each totally dependent upon understanding EVERY previous stage AND the interactions of those previous stages. I doubt any human could even work their way through it, let alone retain it all, even given sufficient lifetime. Would that refer to something 'real'? No, probably not, but then much of higher mathematics doesn't refer to anything 'real', a simple example being the concept of "infinity", about which a whole domain of increasingly reality-divorced mathematics has been formulated.
@angloland4539
2 days ago
❤️
@bogite8734
1 day ago
Bro, you are not a scientist; you're a YouTuber discussing cool ideas. I like your channel, but let's not get carried away here.
@avichavan112
2 days ago
First!!
@k54dhKJFGiht
2 days ago
Gavin Newsom = "Poop Town" (aka: San Francisco)
@6681096
2 days ago
The road to hell is paved with good intentions. Believe it or not, SF residents are starting to wise up, but other areas are implementing the same moronic ideas that led to the poop city label. For example, SF recalled their crime-loving DA and woke school board, but Boston, LA, and other cities have clones installed by George Soros.
Comments: 107