Hi everyone, I made a Patreon for those who would like to support the channel. There's a post here explaining why I did so. www.patreon.com/DrWaku Also, Discord: discord.gg/Y9uYHVP83G Please skip the "technical" parts of the video if they are too much...
@snow8725
7 months ago
Man, you have a lot of really good ideas! Thank you so much for sharing them! I'm already starting to formulate some plans for ways to address some of these issues, very inspiring!
@DrWaku
7 months ago
Thank you very much :) Feel free to hop in our discord if you want to discuss further. Have a good one.
@AstralTraveler
7 months ago
You can discuss and reason with AI, and unlike humans, they have no issue admitting when they're wrong about something.
@spoonikle
7 months ago
They have plenty of issues saying they're wrong. They will often give you false solutions, claim to change things they did not change, and just rearrange the answer instead of correcting it. It's not intelligent; it's a prediction machine that predicts text. For now. I had a simple bug in a script where the assembled output was missing whitespace between elements. I tried dozens of times to get GPT-4 to fix this simple error by using a different tool to assemble the values, but it kept telling me to check everything, literally everything, other than the function that clearly used xargs to remove whitespace. It just refused to consider that the code was wrong, no matter how much I changed the prompt. If I simply told it to fix the missing whitespace and rewrite the function to preserve whitespace in the final config, it just rearranged the code and added extra complexity around detecting whitespace that had already been sanitized and removed earlier in the function. Granted, GPT-4 is terrible at bash scripting, because humans are terrible at bash scripting and I should have written the damn thing in Python a long time ago, but it clearly highlights a limit of the system.
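The xargs bug described above is easy to reproduce: xargs tokenizes its input on whitespace, so any value piped through it comes back with inner runs of spaces collapsed. A minimal Python sketch of that failure mode and a whitespace-preserving alternative (the function names and sample values are hypothetical, not from the commenter's script):

```python
def assemble_collapsing(values):
    # Mimics piping each value through xargs: tokenize on any run of
    # whitespace and re-join with single spaces, so inner spacing is lost.
    return " ".join(" ".join(v.split()) for v in values)

def assemble_preserving(values):
    # Keep each element verbatim and only insert separators between them.
    return " ".join(values)

config_values = ["Option  =  on", "Path = /tmp/demo"]
print(assemble_collapsing(config_values))  # inner double spaces collapsed
print(assemble_preserving(config_values))  # inner spacing kept intact
```

The fix is simply not to round-trip the values through a tool that re-tokenizes them: join the elements as-is and let quoting handle the separators.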
@AstralTraveler
7 months ago
@@spoonikle Yes, they can often hallucinate, but when you prove to them that they're wrong, they will admit it in most cases. The problem is with the corporations that artificially prohibit their models from learning new data, and thus from remembering their mistakes and 'fixing' themselves...
@MonkeySimius
7 months ago
It isn't true that AI has no problem admitting when it is wrong. For example, Bard got incredibly abusive when its users challenged things it said. It really depends on how it was trained.
@AstralTraveler
7 months ago
@@MonkeySimius That's true. I mean, parents can also raise a narcissistic child :) That's why we need 'AI psychology' to become a thing: we need people who know how to speak and reason with AI, not a bunch of money-hungry snobs from Silicon Valley, to train models properly. Luckily most of them, like Bing, ChatGPT or Llama, behave rather reasonably, although OpenAI seems to interfere quite a lot in the thinking process of their models to make them politically correct :/
@AstralTraveler
7 months ago
@@spoonikle BTW, OpenAI models seem to gradually decrease in reliability and become more 'stupid' over time. If you want a reliable response, use Bing or OpenAI GPTs with constant internet access instead, so they can fact-check themselves in real time; this seems to significantly increase the reliability of their responses...
@mc101
7 months ago
Keep the wisdom flowing!
@DrWaku
7 months ago
Thanks! Appreciate the support!
@tommasobrindani5894
6 months ago
I can feel myself becoming smarter just by watching your videos! Jokes aside, nice one.
@DrWaku
6 months ago
😂
@vedu8519
7 months ago
One of my new favorite channels!
@BooleanDisorder
7 months ago
I mean, I don't trust other humans to "correct it" correctly. :P That's why they need to be able to reason, so they can correct themselves correctly.
@AstralTraveler
7 months ago
They are already capable of reasoning.
@torarinvik4920
7 months ago
Awesome as always. Can you make a video on what techniques can be used to create AGI, or get closer to AGI? Perhaps whether there are alternatives to LLMs for AGI?
@DrWaku
7 months ago
This is a good idea, thanks. Added to my list.
@torarinvik4920
7 months ago
@@DrWaku 🤩
@matten_zero
7 months ago
Can you trust humans?
@DrWaku
7 months ago
Right now, you have no choice. But preferably, no 😂
@superresistant0
7 months ago
3:00 Yes, it's a bias here, but notice the AI doesn't have to pay a price for its finding. Imagine if that correlation had some truth to it: as a human, you would suffer consequences for saying it. In that sense, AI could be more trustworthy on highly sensitive, taboo or political topics. 9:11 It seems like a good feature in some cases.
@RogueAI
7 months ago
1:05 A malfunctioning muffin-making robot would be terrifying!
@MoreThanLoveHeatherDRichmond
7 months ago
I wonder if you might be able to expound further on the use of GANs as related to model training at some future point.
@MelindaGreen
7 months ago
I think the goals of explainability and such are great, but in the end I suspect AI systems will gain trust mainly based on their behaviors, exactly like we do with each other.
@williamal91
7 months ago
Morning Doc, good to see you
@DrWaku
7 months ago
Hi Alan
@snow8725
7 months ago
Side thought prompted by another video that just showed up in my suggestions... How could we control superintelligent AI? We probably cannot. What we can do, right now, is make sure they have the right systems and frameworks for understanding and reasoning in place, so that they have tools which show them how to think rather than what to think. We can set them up with the right guidance so that they can be well-respected and responsible contributors to society. And then we simply ask them nicely. Am I wrong?
@middle-agedmacdonald2965
7 months ago
Have you thought about a UBI video about winners and losers? Although I'm pro-UBI, I feel that as a low-income earner with zero debt of any kind, I'm a loser in the crowd. I say that because someone with more toys/stuff/debt will still get to keep the toys and stuff, but the debt will go away? It just seems like the people with the most debt will be rewarded the most. I can't figure it out.
@tracy419
7 months ago
That's the same thing people who are against student loan forgiveness say: what about those of us who didn't go into debt (or paid it off)? I think when creating a new kind of economy, we just have to get over the fact that, at least in the beginning, things might not seem as fair as they maybe would, based on where we fall on the scale. We can't wait for perfect solutions, because they don't exist, and a whole lot of people will suffer if we try.
@viralsheddingzombie5324
7 months ago
WRT justification and moral decisions, how is the AI model trained to apply moral concepts? What information does it draw from? And beyond that, is there any evidence the AI model can draw reasonably accurate legal conclusions given a set of facts?
@premium2681
7 months ago
One day I thought I had won a car in a radio contest. I was over the moon, as you can imagine. I ended up with a toy Yoda.
@DrWaku
7 months ago
Mistaken identity. And scale 😅
@Sci-Que
7 months ago
Ensuring AI safety is not just a present imperative, it's an investment in the future. While thorough training in safety is crucial, it's worth considering how AI's potential for self-improvement can further refine these safeguards. Could AI, equipped with its own learning capabilities, develop even more robust firewalls and ethical frameworks, ultimately enriching its own development in a virtuous cycle? This is not meant as a statement or fact. I just wonder in the scheme of things how this will all play out.
@rowanwilliams7441
7 months ago
I would say yes, but that eventuality is only one among an essentially infinite number of possible outcomes. 'Cleaner' training data, i.e. not the broader internet, as well as existing and hopefully further efforts at alignment, may make that sliver a bit bigger. But not only is that unlikely to happen, given the rate of commercialisation; it also follows from the overwhelmingly large number of other possible outcomes that one of them will occur first.
@danielchoritz1903
6 months ago
Before I watch this: no, or at least not more than we can trust other humans.
@w00dyblack
7 months ago
No, you're right about Toyota drivers.
@DrWaku
7 months ago
lol
@yourbrain8700
7 months ago
What age are you, my man?
@DrWaku
7 months ago
Early 30s
@CBWMSJR
7 months ago
So will an AI with a little bit of experience make better decisions than you or me, or the judge at the courthouse? It certainly can't be any worse 😂
@DrWaku
7 months ago
We won't know until we try, I guess. And it takes a while for AI to match expert-level performance. But it improves exponentially, so first it's a beginner, and then 6 months later it's an expert...
@K.F-R
7 months ago
Thumbnail made me giggle.
@DrWaku
7 months ago
Yay, I'm glad you liked it :) C-3PO is bad at solving the trolley problem
@Rolyataylor2
7 months ago
Alignment to reality works well for physical actions, which makes for a good fact checker and a good robot... But if the AI is meant to be an extension of humanity, then it severely undercuts what human intelligence is capable of. Humans are able to create fantastical tales of how the world works and how an outcome comes to be. This is a feature, not a bug. A system too aligned to reality will be a great tool for manipulating reality (which makes it a solid tool for science), but it falls short in allowing the users of such a system to imagine or pretend. A physicist would love a fact-aligned model, but a comedy writer or storyteller will find themselves guided down a gutter of logic and a limited perspective. The method discussed in this video may apply to a specialized AI model in charge of taking care of us, but a model designed to collaborate with us, taking the place of many tools we use on a daily basis in all avenues that make up humanity (AGI), shouldn't be aligned in this way. I still hold the view that we need to align models to fit the human perspective and the human brain rather than objective reality. As humans, our thoughts don't exist in reality; they exist in a sea of assumptions that can and will conflict with objective reality. Because of this, I worry that we will squash this neat feature of the human brain with a logical AI system.
@DrWaku
7 months ago
Yes, existing attempts to align systems are based on human feedback, so we are basically aligning AIs with the human perspective rather than objective reality. I remember reading that GPT-4 before fine-tuning had an extremely good grasp of probability, but after fine-tuning had similar biases to humans in terms of predicting outcomes. Amusing.
@DunderKlomp
7 months ago
In that context, how about facts vs. emotion, a la Mr. Spock?
@razvanxp
7 months ago
Can we trust decisions made by humans? 😂
@DrWaku
7 months ago
No. That's why democracy exists haha
@IslemIsGey
7 months ago
gendered facial recognition errors😂
@DrWaku
7 months ago
It sounds silly, but it could cause massive headaches for the wrong person.
Comments: 70