The tight weave of comedy and education is stunning. I laughed so hard during the Tesla portion and at seeing AI turn Obama into a white man.
@TimTom
2 years ago
Here before your channel blows up faster than a Tesla battery.
@SuperHorsecow
2 years ago
People have been saying this for a long time
@raquetdude
2 years ago
Might be several years sadly but do wonder if he likes the small viewership
@yakman94
2 years ago
having been subscribed for about 2 years now, i'm consistently shocked at how high quality these vids are, man. and not only that, they have only gotten more involved, more and more thoughtful, and even more incredibly interesting. cheers to caleb gamman. from new vegas mods when i arrived to the future of artificial intelligence. best channel on this site bar none. nice job bud cant wait for more! caleb gamman
@Array_of_objects
2 years ago
just found this channel today, bet it blows up quick
@Extrashreks
2 years ago
@@Array_of_objects nah im convinced that those mcu formula and moon knight videos have way more views than the few thousand we're shown, like 10,000+ views, but yt just hides it cuz it's calling out disney for being cheap asses and exploiting georgia laws.
@paranoiacdigest
2 years ago
A seriously great watch. Eager to watch the rest of this series.
@calebgamman
2 years ago
next week: Automation and check out the website for so much fucking bonus content: calebgamman.com/algorithms/
@a_d_z_y__
A year ago
I'd love to do subtitles for this video and a French translation of them, to show my non-English comrades this important point of view on the subject
@calebgamman
A year ago
absolutely, if you want to make that i'd love to use it! dm me somewhere or caleb@calebgamman.com
@DMO-DMO-DMO
A year ago
Damn this is so good, why don't you have 1 billion subs
@Sheriff_Ochs
2 years ago
"Oooh I'm Caleb Gamman look at me and my clever video titles, aren't I ever so clever and precocious?" Yes, yes you are.
@dankswank9088
A year ago
I've seen this video dozens of times and only just now noticed that the explosion at the end was caused by a burning tesla cybertruck lmfaooooo
@metaphorbrown6350
A year ago
You mentioning it had me taking BuzzFeed quizzes while watching. Happy to report I'm a snowman.
@MrApalis12
2 years ago
First time viewer here, this is the first video of yours the algorithm has presented me. I appreciate the critical analysis of AI development as it currently stands. I agree, finding patterns without context or a larger set of rules is highly limiting if AI is to be successful.
@saf_saffy
2 years ago
your "pattern recognition isn't intelligence" comments are going to really rile up the "I define myself by my high IQ" crew (if this ever goes viral and if they can get past the Musk "slander"). I remember the hype around various chess computers as if the rules of chess were remotely applicable to the complexities of life.
@kingderderder
A year ago
Believe it or not, there was a time when it was thought that computers could never beat humans at chess. But now we just take it for granted that any smartphone can beat the pants off of any human chess player that ever lived. Basically what I'm saying is we are constantly moving the goalposts on what "intelligence" means as computers get more capable.
@saf_saffy
A year ago
We learned that chess masters are good at pattern recognition and some evaluation of their opponents' weaknesses. Transitioning chess masters to jobs they can cope with usually means building teams where the pat rec skills can be contextualised and tested against different types of intelligence. I'm afraid if we don't start working on computer models not built around yes/no pat rec, we are not going to get anywhere near intelligence, just good mimicry of conversation or chess.
@Deadener
A year ago
@@kingderderder And now people are overcorrecting and severely overestimating what current AI is capable of. The underlying principle of AI limitation hasn't changed: when a computer is put in a situation with a heavily limited ruleset, like a game, it can dominate a human in that one task. But if you put a computer in a situation where the rules are virtually infinite, it falls flat on its face.
@RygarothRE
2 years ago
Stoked for this! Stumbled on your channel a year ago by chance. I hope this series will help you get the recognition you deserve
@gregorycomey
2 years ago
Babe, wake up! New Caleb Gamman upload!
@revengerwizard
2 years ago
I was kind of waiting for some kind of video demystifying AI research, finally.
@LPTV84
A year ago
I recognized the music you used in this video and I love it. Thank you.
@Fidel_Cashflow
2 years ago
i knew it was going to blow up and i still got jumpscared
@SomebodyBumbleBee
2 years ago
Great start to the series
@wacker8290
2 years ago
caleb gamman it’s been a pleasure to watch you mastering a medium that so many of your would-be peers are still mystified by. caleb gamman
@KilgoreTroutAsf
2 years ago
The current paradigm for tons of dCNN computer vision is flawed from the start. It doesn't understand spatial organisation, or image composition, or much of anything other than local patterns. It will recognize a dog all the same whether it is really a dog or an eight-legged, three-eyed monstrosity with dog-like fur and a snout. It will see a picture of a human with a dog and have trouble deciding whether it is a human or a dog. Of course there are ways around this, but every more advanced feature is implemented ad hoc to deal with the fundamental problem. Humans' and other advanced animals' brains don't work like that in the slightest. After only a few layers of CNN-like processing, there are lots of more sophisticated things like attention mechanisms, circuits that track object movement, 3D visual processing and so on. A human will be perfectly ok wondering if something is a STOP sign for a while and will keep an eye on it until near enough to confirm it. And they definitely won't think it stopped existing just because it got occluded by a tree for a split second.
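The "local patterns only" failure mode described above can be sketched as a toy bag-of-features classifier. This is a hypothetical illustration, not any real vision model: a classifier that keys only on which local patches occur, ignoring where they are, cannot tell a dog from a scrambled dog-parts monstrosity.

```python
from collections import Counter

def bag_of_features(patch_grid):
    # Flatten the grid and count patch labels -- all spatial layout is lost,
    # just like a model that only responds to local patterns.
    return Counter(patch for row in patch_grid for patch in row)

dog = [["ear", "ear"],
       ["eye", "eye"],
       ["snout", "fur"]]

monstrosity = [["snout", "eye"],   # same parts, nonsense arrangement
               ["fur", "ear"],
               ["ear", "eye"]]

print(bag_of_features(dog) == bag_of_features(monstrosity))  # True
```

To such a classifier the two "images" are literally identical, which is the point of the comment: anything spatial has to be bolted on afterwards.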
@caiodallecio
2 years ago
That is how it was done like 10 years ago or more; newer models have attention mechanisms and can track objects even if occluded. This video is just a stream of nonsense.
@Phoenix_
2 years ago
Fascinating stuff, Caleb. You’re a gift to us all!
@deshrektives
2 years ago
the implication that caleb gamman follows dril is pleasantly unsurprising
@loopy4laughs
A year ago
excellent video, thank you so much
@louishillegassiv
2 years ago
all my homies love patterns
@GuyOnAChair
2 years ago
Can already tell this will be depressing.
@senju2024
2 years ago
Ben Goertzel also believes that to obtain true AGI, we need a completely different breakthrough paradigm. I also think this way. What we are doing now is making great improvements to NAI - Narrow AI - based on existing, slightly better algorithms. We may reach something AGI-"like" in some cases, but again the foundation is built on Narrow AI. We will continue to see this in the next few years. That being said, GATO is maybe going in a better direction. But many AI researchers think we just need to scale out. I disagree!! Anyway, the AI community is split on how to reach AGI.
@saf_saffy
2 years ago
hope this gets traction in light of the google ai sentience debacle. that dude ex machina-ed onto a text generator so hard LOL
@XxXnonameAsDXxX
2 years ago
Holy shit this is gonna blow up. Welcome to fame my son.
@factsheet4930
2 years ago
I'm super inclined to make a "machine learning" calculator 😂
@a_d_z_y__
A year ago
That would actually be the best tool to demonstrate that deception
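A toy version of such a "machine learning" calculator (hypothetical, purely for illustration) makes the deception concrete: it never learns the rule of addition, it just memorizes examples and answers with the nearest one it has seen.

```python
def train(pairs):
    # "Training" is nothing more than storing (a, b) -> a + b examples.
    return {(a, b): a + b for (a, b) in pairs}

def predict(model, a, b):
    # Answer with the stored sum of the closest example ever seen.
    nearest = min(model, key=lambda k: abs(k[0] - a) + abs(k[1] - b))
    return model[nearest]

model = train([(a, b) for a in range(10) for b in range(10)])

print(predict(model, 3, 4))      # 7 -- looks like it "knows" addition
print(predict(model, 200, 300))  # 18 -- nearest memorized pair is (9, 9)
```

Inside the training range the pattern-matcher is indistinguishable from a rule-follower; one step outside it, the difference is obvious.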
@tehbeernerd
2 years ago
Thanks to the Marvelous podcast for namedropping this channel
@travosk8668
2 years ago
I'm really digging this
@SmoothSubscribe
2 years ago
It's a shame that google dev took so long to misunderstand what computer sentience is, otherwise it would have been a banger punchline. Soundtrack is perfect, can't wait for part 2
@A1OFFENDER
2 years ago
Great channel and video brother.
@CHIIIEEEEEEEEFFFFSSS
A year ago
Just found your channel. Subscribed immediately. Keep up the great work
@alexandercowlishaw
A year ago
Just found you from Twitter from some trending Taylor Swift vid. Your vids are great, keep going
@Infohazard321
2 years ago
Aw man I love this The Midnight song
@zbynekkozmik157
2 years ago
Man this is amazing
@LucasDimoveo
2 years ago
I'm shocked that this channel isn't bigger
@rogergalindo7318
2 years ago
the algorithm has blessed my day. how is it possible that you only have 2.3k subs!??!
@jovi_al
A year ago
when i heard that one The Midnight song i shit my pants
@name_lyrics
A year ago
this was an extremely interesting watch!
@RedmotionGames
2 years ago
Thanks for this video, very good. The conclusions I'm drawing are pretty dire: a Dot Com Bust 2.0, far worse and more damaging than the first one.
@alabseries3926
2 years ago
caleb gamman
@SgtHolton
2 years ago
Is commenting "caleb gamman" a natural in-joke? Or is it a clever way to foster increased engagement? I think the answer is: caleb gamman.
@majlada
A year ago
I'm really glad that I found this channel! I didn't notice that this vid is a couple months old, and I'm excited to catch up. This is the kind of video I'd expect from a channel with at least several hundred thousand subscribers, in terms of editing and entertainment, so good job! I was genuinely surprised to see that your channel is (for now) relatively niche.
Unfortunately, there's so much misinformation in this video that it falls flat on the educational/informational aspect. I'll always appreciate a sober and level-headed analysis of the current trends in data science, since I'm starved for anything that cuts through the mindless hype, but this is an exceedingly pessimistic and inaccurate critique. Please don't take this the wrong way, but it's kind of like if you took all the naive optimism and enthusiasm of Elon Musk fanboys, who seem to genuinely think that every single problem on Earth is one clever invention away from being completely solved, inverted it, and ended up with something that's overly critical and cynical, yet still just as naive (though with much, much better aesthetics).
Artificial "Intelligence" is a really complex topic, and when you try to provide an informed critique of it, you really need to nail the "informed" part. On the other hand, the culture that surrounds AI, the insane levels of hype, the megalomaniac morons like Elon, and the people who somehow deluded themselves into thinking that neural nets can be sentient, are all things that deserve to be mocked out of existence. It's kind of similar to blockchain, where the fanbase is so fucking insufferable that people who (rightfully) hate it start critiquing its technological aspects and make themselves look like complete idiots to anyone who knows how the tech actually works (that is to say, NOT the crypto enthusiasts who got scammed into buying monkey pictures for insane amounts of money).
I know that I'm late to the party, and maybe that's exactly the direction you ended up taking, but I'm yet to watch more of your stuff. Good luck!
@pierstaylor5521
2 years ago
Sick
@ShaulGoral
2 years ago
How does one know what research is done at tesla? Or not done?
@z3dar
2 years ago
Great video and a fair take, I suppose. I am left wondering about your thoughts on a few curious AI studies I've seen:
- OpenAI Hide & Seek, and Starcraft 2 and DOTA 2 playing. These tasks seem to require a decent level of abstraction, reacting and predicting from imperfect information. Maybe it's still just pattern recognition, but as a complete noob it seems like there's more to it than that.
- DeepMind's Multimodal Interactive Agents study, where the AI seems to learn to generalize information from a small amount of input data, and even learns things that it was not tasked to learn.
- AlphaZero & MuZero: doesn't this suggest that these kinds of AIs are better when left to learn by themselves without human reinforcement? Defining goals in more complex tasks might be problematic, but without a deeper understanding of the AlphaGo vs AlphaZero example, it seems like AI rawdogging the rules of the game is the way to go, instead of reinforcement learning through imperfect data. MuZero is much more generalized than AlphaZero while still performing well, and also shows understanding of long-term planning.
btw. I know almost nothing about AI, I just watch videos about it.
@SgtHolton
A year ago
I know this is old, but the problem is that Starcraft, DOTA, and Chess are all games with clear win and loss conditions that thus create positive reinforcement for positive outcomes and negative reinforcement for negative outcomes. The AIs then learn patterns of play and because they aren't a human they can chase ideas in chess and video games toward their logical endpoint, and either through calculation that is beyond human ability or the ability to perfectly remember past failures, become better than humans. The problem becomes that most problems that the AIs would encounter in any real life situation would not have clear win and loss conditions, no defined rules, and involve irrational actors.
@z3dar
A year ago
@@SgtHolton Good points, although those games could have irrational actors (within the actions allowed by the game mechanics) as well. I wonder if the clear win and loss conditions are more of a problem for developers than for AI? Most tasks, even complex ones, can be divided into a clear set of goals.
@SemiIocon
A year ago
Someone on Twitter recommended the Cybergunk series to me, so now I'mma evangelize everyone I meet. I know some people who are wildly overestimating AI potential, and as someone with an interest in history and a cursory knowledge of programming, I can't fathom how they got to that point, except that they just buy all the advertisement.
@Vode1234
A year ago
Dear AI, is a hotdog a sandwich?
@laurentiuvladutmanea3622
A year ago
This was a great video. It is a crime it is not more popular.
@Ciretako
A year ago
I'm gonna show this series to my partner and watch it with him >:3
@patrickhaynes3090
A year ago
For the Al Gore Rhythm!
@joratto2833
2 years ago
inb4 this blows up
@JeremySmith-wc4lh
2 years ago
Video was recommended to me on the KZitem homepage and I loved it! I thought you had good nuance, with undertones that should be obvious but aren't recognized by a vast number of people. For example, Elon being the world's richest grifter, who has somehow managed to lie about so many promises and come up with even more excuses. Great work!
@navigator1819
2 years ago
Subbed nice video.
@breakablec
2 years ago
Good criticism
@priceofiron6900
2 years ago
🤓
@adriansantiago2967
A year ago
You look like Sally Lepage
@adriansantiago2967
A year ago
Also great video!
@multiHappyHacker
2 years ago
BG music too loud, obscuring your cheeky jokes somewhat
@pie6088
5 months ago
you look like cole sprouse
@deinemudda7169
2 years ago
laverly! Chris Knowles (secret sun) explores similar themes of diminishing/arrested tech progress, check him out :)
@Extrashreks
2 years ago
CALEB IS FUCKING DEAD GUYS
@Extrashreks
2 years ago
:(
@steaminghottake6221
2 years ago
@@Extrashreks Maybe he's been an AI all along?
@Extrashreks
2 years ago
@@steaminghottake6221 true true, he does speak like an ai and we've ever only seen his face, arms, and stomach and nothing else. So the banner on his second channel (caleb gamman 2) where he's supposedly outside can easily be photoshopped by a person so an AI would have a really easy time making all that stuff too. And no one's ever taken a photo or video of caleb irl outside so hes clearly just a robot AI made by a scientist to talk about the media n shit, case solved.
@ministryofnonlabor1333
A year ago
might as well use GPT-3 to read all the published papers on Alzheimer's disease and have it conclude a solution
@dallascoggins1534
A year ago
In this video you said AI sucks at doing actual intelligent stuff. Have you heard of or seen the recent paper about GPT-4? The paper is called "Sparks of Artificial General Intelligence: Early experiments with GPT-4" and it seems like it's able to do pretty complex tasks.
@rebeccastubbing1240
2 years ago
Dang I feel so smart after watching this. That room of hundreds of people working on tesla self driving cars is so fucked up.... add that to my list of reasons never to buy a tesla even if I could afford one....
@markdodwell1226
A year ago
This did not age well 🙂
@raquetdude
A year ago
How? The AI systems are still exactly the same; they cannot process logic, they are pattern-based AI systems…
@ZimoNitrome
A year ago
"NNs only copy/reproduce and never invent/interpolate" This is just false. Most training samples will be found on the manifold of a trained model, but interpolations between samples will be never-before-seen images. Models are biased towards seen samples: if you only ever show a kid images of red firetrucks, there's a pretty high chance that kid will color a firetruck red.
@Deadener
A year ago
It's 100% true. AI could never conceive of an automobile if you only feed it data about horse-drawn carriages. Yet humans did. And a kid is perfectly capable of drawing a purple firetruck. Have you ever even been around kids? They draw stuff all the time that neither they nor anyone else has ever seen. This is because the human brain can generate abstractions of a concept, and rebuild them while maintaining an abstraction as an ulterior goal. AI is not even remotely close to being able to do this. It has no real concept of what it's making, and is merely comparing the terms given to it by a human who conceived of that concept to begin with.
@caiodallecio
2 years ago
The notion that neural networks cannot capture the underlying rules of a system is patently false. Neural networks can recreate a basic understanding of physics by looking at a bunch of pixels... There is no such thing as a fact when you deal with real data in the real world; it's all probability and pattern recognition all the way down. One-shot learning is a thing that exists: the system learns a function that compares different inputs instead. That's basically how facial recognition operates, and it works pretty well. You ask a machine to do basic math only by looking at an array of pixel values and are surprised that it answers correctly 99.9% of the time when half the numbers were drawn by a toddler on crack. You shouldn't talk about technical subjects when you have next to no understanding of the words you are using. The fact that you say that pattern recognition is not intelligence, when one of the first conclusions of an AI course is that there is no way of knowing whether any system of intelligence is just a table with pre-programmed answers for patterns or actual intelligence... Just go do an AI course online or something...
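The one-shot comparison idea mentioned above can be sketched roughly. All the embeddings below are made up for illustration; real systems learn them with a network. Instead of training a classifier per person, store one reference embedding per identity and match new inputs by similarity:

```python
import math

def cosine(u, v):
    # Standard cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# One stored example per identity is enough, because we compare rather
# than retrain a classifier for every new person (hypothetical values).
references = {
    "alice": [0.9, 0.1, 0.2],
    "bob":   [0.1, 0.8, 0.5],
}

def identify(embedding):
    # Return the identity whose reference embedding is most similar.
    return max(references, key=lambda name: cosine(references[name], embedding))

print(identify([0.85, 0.15, 0.25]))  # alice
```

The design choice is the one the comment describes: the learned part is the comparison function, so adding a new identity needs only a single example, not a retraining run.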
@sachoslks
A year ago
This comes off as way too dismissive, bordering on misinformation. We have taught machines how to paint, how to listen/transcribe like a human or even better (Whisper by OpenAI), how to write, and how to code; DeepMind made huge advances in protein folding and found a new algorithm for matrix multiplication using AI. Who cares if it is "just pattern recognition"? It seems like the goalpost moving for AI is never-ending.
@0nc0l40
A year ago
you keep saying that we're horrendously wrong, that we are spreading misinformation, but you never explain why, and as your reason you're pointing at blind output, not the actual mechanism and research papers behind it...
@ZackScriven
2 years ago
12:00 it’s only literally infinite if you want it perfect. And it doesn’t need to be. Just better than humans.
@sama847
A year ago
How much better is your goal, though? Slightly better? Way better? The better you want it, the closer the data requirement approaches infinity. At some point it all becomes unfeasible
@theresalwaysanotherway3996
A year ago
how can you talk about there being no progress in the architecture of AI models, and then talk about thispersondoesnotexist (a generative adversarial network from 2018) and DALL-E 2 (a diffusion model from 2022)? They are fundamentally different architectures for a neural network, resulting in a significant boost to the ability to be creative. To portray all image-generating AIs as similar to the GAN of thispersondoesnotexist is either bad faith, or shows a complete lack of understanding of the field. Especially with the new breakthrough of distillation of guided diffusion models.
@Deadener
A year ago
Being creative requires intent or desire to create something. AI software is never creative, because it isn't capable of exerting any kind of will. It has no concept of the thing it's making, other than what humans and their own creativity have fed into it.
@0nc0l40
A year ago
You're telling me that interpolation of reconstruction algorithms is what "being creative" means to you?
@polecat3
A year ago
I don't like this video and it contains a few factual errors
@Deadener
A year ago
Care to point out those errors?
@0nc0l40
A year ago
@@Deadener no he's a bot
@krebul
2 years ago
Wow, either this guy has no idea what he's talking about, or he's outright lying. So many bad arguments here, it's impossible to address them all. I read the comments and was shocked at how many people assume this is credible because it's nicely produced.
@steaminghottake6221
2 years ago
Wow - either this poster has no idea what he's talking about, or he's outright lying. One bad argument here, and it's possible to address it: @krebul, you made some claims, now back them up. Post your counterarguments. Also, we all know Caleb Gamman of the Caleb Gamman KZitem account isn't an expert in AI, so I'm not here to be educated. I'm here to be entertained and have my thoughts expanded. This video does both of those things.
@nimashoghi
2 years ago
@@steaminghottake6221 I'm not OP, but here are just a few issues I found w/ the video (with the relevant timestamps), and sorry for the long text:
1. (6:40, 7:20) It seems like a central argument of this video is that some of the most hyped displays of deep learning (e.g., DALLE-2 for text to image, StyleGAN for ThisPersonDoesNotExist) essentially copy the training data with minor changes. I think that's a stretch. Even in the examples shown in this video, 6:49 shows images of Koalas on bikes that you will not be able to find on the internet (i.e., it's not copying the training data); instead, it has "learned" the visual features of Koalas and bikes and merged them together. The example at 7:21 is from a paper whose own analysis somewhat contradicts the video's point: "With enough [identity] diversity (here, towards 880 identities), [identity membership] attacks are reduced to near guessing", where an "identity membership attack" refers to the scenario where a newly generated face is so similar to one from the original dataset that a neural network can detect it.
2. (10:18) The removal of Tesla's LIDAR sensors was not an arbitrary decision. It was done because sensor fusion (the process of combining the LIDAR sensor readings with the camera stream to create a model of the vehicle's surroundings) is extremely difficult, and the AutoPilot engineering team found that only using the cameras and avoiding having to do sensor fusion produces better results. See Andrej Karpathy's talk about Tesla's Autopilot for more information on this.
3. (4:36) Pattern recognition is being brushed off as a simple task, but keep in mind that some of the world's most challenging tasks involve understanding the patterns behind some unknown phenomenon. Science is pattern recognition for the processes that govern the physical world around us. Building models that are very efficient at finding patterns can be extremely useful. AlphaFold 2, for example, is a model by DeepMind that predicts proteins' shapes with 90% accuracy, and it is a technology that will likely aid the creation of future drugs for cancer or other diseases.
4. (4:19) Discussing what is considered "intelligence" is a philosophical question and is, for all intents and purposes, a waste of time. The history of AI is filled with stupid debates on what is and isn't considered intelligent. Before Deep Blue, playing high-level chess was considered a form of intelligence. Since then, some have argued that true intelligence implies some creative/artistic ability, and after things like style transfer and DALLE, critics can slightly update this definition again to disqualify the new state of AI. In reality, the machine learning community doesn't engage with this or care about this question at all.
5. (5:25) In general, there are many critiques of the current AI research community not looking at "fundamentally new approaches", but saying this is meaningless without painting some sort of context for what a better alternative path is. Caleb critiques both heavily data-hungry pattern-matching methods and rule-based methods, but -- having watched the video a few times now -- I still do not understand what alternative strategy he proposes. It's easy to criticize without having any kind of solution.
I get the feeling that Caleb is coming from a place of good faith and has put quite a lot of effort into the video, and I do not want to denigrate him. This should all be taken as constructive criticism. With that said, there are some very key misconceptions in this video that help build a misguided picture of the current landscape of AI research.
@laz2452
2 years ago
@@nimashoghi Thank you for the detailed response. It is nice to see a well-explained criticism. If you don't mind, I have some counterarguments of my own, from the perspective that there's not much wrong with Caleb's video.
It is true that the video has "faults": it's not as detailed/nuanced as it could be, and it focuses on what AI can't do a lot more than what it can. However, it works well in the context and purpose it's meant for:
-everyone knows what AI can do, we all kept hearing it for a decade now
-the video is meant to be short, not a lecture
-it emphasizes entertainment value over education
-the target audience is not the AI researcher, but the public, who have heard about AI but have little idea of how it actually works
1) I would say the central argument is this: AI is described to the public / decision makers / investors in a way that is deceptive, by calling the algorithms "intelligent" and talking about "learning", "working like the brain", etc.
4) As reproducing "intelligence" is the idea being sold to investors/society, this is an important question of potential false hype/advertisement. People also have a good understanding of what intelligence is. You can differentiate animals by their being less intelligent, and people know what types of problems can be solved through intelligence. Moving the goalposts with chess was fair. Intelligence results in finding a quick and efficient solution; however, the same problems that can be solved through intelligent reasoning may also be solved through brute force, i.e. trying all possible solutions. Before the chess algorithms, the only way people knew how to play chess was by intelligent reasoning, but the chess algorithms showed that brute-forcing was also a viable option, as computers can repeat simple calculations very fast (which no one ever doubted, with or without AI).
1) Koalas and bikes perhaps could have been explained better. The point was not that the AI literally chooses one of the input images as the output, as clearly it produces some new combination of inputs. The point was that AI models work broadly on specific coloured pixels on specific parts of a screen, with no abstract reasoning. Once an intelligent human knows what marriage is, the human knows the abstract concept and will have no difficulty recognising marriage for different ethnic groups. An AI generally would need an example for each ethnic group, as it is only counting pixels.
5) The point was not to criticise and show how to do better. The point was that current AI research is not leading to Skynet, as it's completely different from strong AI. (However, public discussion is constantly around strong AI and Skynet.) This point can be made without providing the recipe for Skynet.
2) Your explanation makes sense from an engineering perspective; however, from an outside perspective, depth perception is clearly useful for driving. The AI not being advanced enough to handle the data is not a good reason for ignoring depth.
3) I don't think the point was that pattern recognition itself is simple or not useful. I believe the point was that pattern recognition alone (without any abstract reasoning) is not enough to create strong AI.
@nimashoghi
2 years ago
@@laz2452 I appreciate the well-thought-out response. First of all, I completely understand that the video's purpose is to educate an audience who is not well-informed on the subject matter. With that said, however, I believe that this increases the burden on the author to make sure that everything is correct. This is because while a well-informed audience would quickly call out inaccuracies, a not-so-well-informed audience would not do so and would take the video's points/arguments as facts. I would be willing to overlook small factual inaccuracies in the video if the underlying narrative showed a good overall understanding of the current state of ML and ML research, but I do not think that it does. Additionally, I'm not here to argue for the Skynet/artificial general intelligence narrative that some people have been pushing. However, I must note that these kinds of narratives are almost always pushed by people who do not truly understand the current state of ML (e.g., business folks). All serious ML courses in universities immediately throw away these grand claims, including claims about AGI or about modeling the human brain.
1) I don't think the claim that people have a good understanding of what constitutes intelligence is true. For what it's worth, recent NLP models outperform humans on a series of language-understanding benchmark tests. Moreover, people, on average, cannot tell the difference between machine-generated (e.g., by GPT-3) and human-generated text. The same applies to images. Also, new ML-based chess engines (Leela, AlphaZero/MuZero, etc.) are able to completely outplay both humans and brute-force engines using some very deep positional understanding of chess positions.
1) Caleb's original point was, indeed, that models are outputting extremely similar images to the ones in the source dataset. I implore you to go over his sources list and read the relevant paper that he cites for this claim. The paper studies this question and shows that models are vulnerable to this kind of problem. However, as I mentioned in my original comment, it also concludes that with enough diversity in the dataset, this problem can be mitigated. I really don't like to entertain the claim that AI models lack "abstract reasoning", for the same reason that I don't like the debate over what is "intelligence". Recent deep-learning-based models have beaten the best Chess, Go, DOTA, and Starcraft players, just to name a few (and this is without just brute-forcing the entire game like Deep Blue). All these games take an immense amount of higher-level thinking that we would previously attribute to humans. Bias (e.g., racial and/or gender bias) in AI is a very important and major concern, and there are many research labs across the world doing vigorous research in this domain. Finally, just because you brought up marriage: do us humans really know what marriage is? In the 19th century in the US, women were essentially their husbands' property, with no right to property, no right to enter contracts, and no right to keep their own wages. In some countries right now, marriage is very similar to 19th-century US marriage. Shia Muslims have a concept called Sigheh (aka pleasure marriage or temporary marriage), where a man and a woman are declared husband and wife for a limited time. This has historically been used by traveling men to have wives (and thus sex) during long-distance travels. Is that marriage? Would someone who was completely unaware of these kinds of practices see this act and immediately classify it as marriage?
5) I agree with your point that we are nowhere near AGI or Skynet. However, I don't agree that this was the video's point. The video's point was to do a deeper dive into existing AI and claim that it is not living up to the hype. In doing so, Caleb made a myriad of false or badly supported arguments.
2) Caleb specifically said that Elon decided to do this "for some reason", insinuating that this was a dumb decision from higher-ups. This is just outright false. With that said, using a camera-based system doesn't get rid of depth data, especially when you have multiple cameras that provide perspectives from different positions (which is the case in Teslas). Finally, on nearly all metrics, the new system outperformed the old system. As mentioned in my original comment, I recommend you watch Andrej's talk.
3) I understand this claim, but, with all due respect to everyone involved, I don't think either you, I, or Caleb are qualified to make this assertion. Putting aside all the AGI stuff, there's serious debate and research in the ML theory community on whether deep learning is interpolating (i.e., producing outputs within the domain of the original data) or extrapolating (i.e., producing new outputs outside of the original data domain). An interesting paper on this is "Learning in High Dimension Always Amounts to Extrapolation" by Balestriero and LeCun. If this is an undecided question in the ML research community, I don't think it's as certain as you claim.
As a final point, I don't want to come off as protecting the ML community from all criticism. As an ML researcher who has worked in academic and industry research labs, there are hundreds of issues (systemic or otherwise) with the current state of ML research that I could easily point out to you. The ones argued in this video (with the exception of a few) are just not concerns that I share.
@Warsoly
A year ago
@@nimashoghi can you iterate through some of the issues you mentioned in the last paragraph, out of sheer curiosity?
@ZeKnife
A year ago
It's clear you put a lot of effort into this video. But I have to say, you don't seem knowledgeable enough about the history or current state of machine learning to be speaking authoritatively on the subject.
@Deadener
A year ago
Show examples. Otherwise, your comment is nothing more than a baseless handwave that isn't useful to anyone.
@StevePelcz
2 years ago
[You need a calebgamman.com account to view this comment.]
Comments: 111