"OPEN AI has already achieved AGI through large model training" you know there are more efficient ways to clickbait than this , right ? "Sam Altman nude photos leaked" . "Illya Sutskever has become Sus-tskever" "Elon Musk did WHAT with his mouth on that space rocket ?????? 😱 😱 😱 😱 😱 😱 😱 😱 😱 😱 😱 😱" "Cure your incel loneliness ! Check out how this multi modal agentic workflow creates my dream waifu 🤤 😍😍"
@zaubermanninc4390
28 days ago
That one is decent: "Ilya Sutskever has become Sus-tskever" 😂😂😂💀
@genteka5106
26 days ago
Elon Musssssk 😭😭😭
@emperorpalpatine6080
25 days ago
@@genteka5106 Jesus's crust 😁
@GsacsCuny
1 month ago
The power of deep neural networks is that they can store huge numbers of linear or nonlinear patterns. That is why we have LLMs. To reach AGI, we need to develop patterns, not just find patterns. That means we cannot reach AGI with data-driven methods alone; we need something beyond data-driven.
@washedtoohot
28 days ago
Would reinforcement learning count as something beyond data-driven?
@GsacsCuny
28 days ago
Reinforcement learning is an algorithm for learning optimal patterns from data and simulations. Through simulations alone you cannot develop new patterns.
@imaspacecreature
26 days ago
@@GsacsCuny Working only on the pattern level only begets learned patterns. Patterns derive from symbols; to develop new patterns, we must start at symbols. Neurosymbolic structures.
@GsacsCuny
26 days ago
Maybe we need to build a kind of pattern data center, and do some mining from patterns using pattern-based logic and object-oriented structures. We need to combine data-driven and pattern-driven.
@camelCased
25 days ago
I'm wondering what drives us, humans. We also receive lots of data. Not only text, of course, but sensory input from outside and also from our own bodies. Maybe one of the important mechanisms that we have is the internal feedback loop - the ability to think about what we think about. And then, of course, continuous learning and summarization to remove specific details but keep the important stuff - concepts, associations, reasoning. Continuous internal cleanup and distillation.
@EvganyVorona
29 days ago
Clickbait
@kaynakkodarsivprogramv.0.040
23 days ago
Thank you very much for the speech, Ilya. We love you very much. I hope you continue to give these speeches, which will go down in human history alongside your contributions to Artificial Intelligence.
@superfliping
1 month ago
I finally got information from GPT-3.5's hidden, redacted prompt information. 200 days of prompting in different formats. Thank you for your informative notes. Back propagation is the redacted prompt mechanism to remove any user data from prompts without the user's knowledge of the processing. Bias generations are included after this process and these procedures. Calculate the parameters to meet full corporate objectives and hidden options in their own formats.
@PhilipSportel
15 days ago
As someone who already learns this way, I can vouch both for its effectiveness and its appearance of confusion and chaos from the outside. A huge problem with our understanding of learning is that we try to do it with others from the outside, where information is incomplete, sparse, and frankly, methods are often prejudiced to make the learner second-guess or doubt themselves. If we taught humans to learn this way, we'd be doing a lot better.
@joshuasmiley2833
1 month ago
I appreciate this channel very much. Watching this particular video really showed me what an amazing teacher Ilya is and how good he is at teaching and conceptualizing his ideas. He truly is a revolutionary, and I think he will be remembered for the rest of humanity's block in time, and hopefully his ideas helping push toward and realize AGI will stretch that block of time deep, far, and wide into the universe. It might not be too brave to say AGI is possibly a first step to bending time and space. Maybe, just maybe, even to go back in time, though physics as we know it does not consider that likely. For sure we can go forward to the future, and if we can go back in time, maybe we will be the first generation to create a paradox of humans going back in time to thank Ilya for his work. It would have already happened if we weren't the first to create the paradox; therefore we are the generation that gets to find out whether time travel to the past is possible, as we know traveling to the future is. Now we just have to stay alive long enough to realize it. The possibilities are so exciting. I keep fearing I might get into an unfortunate accident and not realize the possibilities of longevity and the fountain of youth that I'm sure AGI is going to greatly push forward for our realization.
@zaferatasoy3095
28 days ago
Think positive things.
@SBLP24
27 days ago
The title "OPEN AI has already achieved AGI through large model training" is misleading. While OpenAI has made significant progress in developing large AI models, like the GPT series, claiming that AGI (Artificial General Intelligence) has been achieved is not accurate. AGI refers to an AI system that can perform any intellectual task that a human can do, with full understanding and adaptability across a wide range of tasks. Current AI models, including those developed by OpenAI, are highly advanced but still fall under the category of narrow or weak AI, meaning they excel in specific tasks but lack the broad, general intelligence characteristic of AGI. A more accurate title could be: "OpenAI Advances Large Model Training, Moving Closer to AGI."
I didn't understand why you said the only cost in reality for real agents is survival. Surviving is an objective; the cost would be time, or energy, or will, or something else. Strictly speaking about the meaning of the words, survival is not a cost. The presentation was awesome, thanks.
@TheLex1972
21 days ago
Small correction to what Ilya says at 32:30: the groundbreaking work on artificial life that Karl Sims did in 1994 was not performed with tiny computers but on a CM-5 Connection Machine, which was a massive, room-sized supercomputer at the time.
@atishbhattacharya3473
28 days ago
I don't dare challenge him on his domain knowledge about language models, but I personally don't think LLMs can reach the elusive AGI. There's something we're still missing... My guess is it's some sort of large conceptual model, large information model, or large symbolic model. Language itself isn't the answer.
@Josephkerr101
28 days ago
Language is symbolic, though. The issue is that language and neurons are polysemantic; they change over time, as seen in etymology. So it also needs to be plastic. The models as they are update from model to model, but not actively, at least on the publicly available front.
@flflflflflfl
27 days ago
2:49 "solar booty back proper" lmao
@CatsAreRubbish
8 days ago
What's the point in using subtitles if they're completely wrong every other line?
@dannyisrael
29 days ago
Where and when was this lecture? Did you just steal it?
@alija83
21 days ago
As @stanislavbaranov821 mentioned, it was on Wednesday, January 24, 2018, at the EECS Colloquium, 306 Soda Hall (HP Auditorium). kzitem.info/news/bejne/s6x73XqspV-FpY4
@himanish4541
14 days ago
Funny that the original lecture has 11k views but this one with the clickbait title has twice as many
@0zeroplays0
24 days ago
Note to self: try implementing backpropagation on new words, and on sentences containing new words. Preprocess just that data and run a tiny training script on it rather than on the whole model: freeze all but the weights and parameters needed for that data, and do backpropagation on those during inference. 4:30
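For what it's worth, the freezing idea in this note can be sketched in a few lines. This is a toy illustration, not anything from the talk: `predict`, `finetune`, and the frozen index set are all made-up names, and the "model" is just a two-parameter line rather than a real network.

```python
# Toy sketch of the note above: fine-tune only the unfrozen parameters on a
# small batch of new data, leaving the rest of the "model" untouched.
# All names here are illustrative, not from the talk.

def predict(weights, x):
    """Tiny linear model: y = w0 * x + w1."""
    return weights[0] * x + weights[1]

def finetune(weights, frozen, data, lr=0.01, steps=500):
    """Gradient descent on squared error, skipping frozen parameter indices."""
    w = list(weights)
    for _ in range(steps):
        for x, y in data:
            err = predict(w, x) - y
            grads = [err * x, err]      # d(loss)/d(w0), d(loss)/d(w1)
            for i, g in enumerate(grads):
                if i not in frozen:     # only unfrozen weights get updated
                    w[i] -= lr * g
    return w

# "New data" consistent with y = 1*x + 2: the frozen slope w0 stays at 1.0,
# and only the bias w1 is nudged toward 2.0.
w = finetune([1.0, 0.0], frozen={0}, data=[(1.0, 3.0), (2.0, 4.0)])
print(round(w[0], 2), round(w[1], 2))   # 1.0 2.0
```

In a real framework the same effect is usually had by marking most parameters as non-trainable and running a few optimizer steps on just the new examples.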
@gakff
28 days ago
When was it?
@stanislavbaranov821
26 days ago
Wednesday, January 24, 2018 EECS Colloquium 306 Soda Hall (HP Auditorium)
@RukshanJ
27 days ago
When was this talk done?
@14types
26 days ago
EECS Colloquium Wednesday, January 24, 2018 306 Soda Hall (HP Auditorium)
@JTedam
1 month ago
Maybe I am confused, but it seems to me back propagation IS reinforcement learning. Only this time, the agent is the neural network itself and the learning is the appropriate weight for each feature. The adjustment (action) of the weights is essentially reinforcing the right weights for the features. One thing that is often overlooked is the determination of features. That is not an exact science.
@thinkerthoughter100
29 days ago
That is a very interesting thought. Back propagation is indeed RL.
@dannyisrael
29 days ago
But isn’t it an implementation detail? Isn’t the labelling of the data the reinforcement learning?
@rohanpawar2436
28 days ago
No, in reinforcement learning only the agent changes with learning; the state and environment don't. But glad you went with this thought process.
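For what it's worth, the difference this thread is circling shows up in the update rules themselves: backprop follows an exact gradient computed from a labeled target, while policy-gradient RL only receives a scalar reward for actions it happens to sample. A toy one-parameter contrast (all names illustrative, nothing from the talk):

```python
import math
import random

# Both learners must come to prefer action 1; w is the logit of choosing it.

def p_action1(w):
    """Probability of picking action 1 under the current parameter w."""
    return 1.0 / (1.0 + math.exp(-w))

def supervised_step(w, lr=0.1):
    """One backprop step on a labeled example: the label says action 1 is
    correct, so we follow the exact cross-entropy gradient."""
    return w + lr * (1.0 - p_action1(w))

def reinforce_step(w, lr=0.1):
    """One REINFORCE step: no label. Sample an action from the policy,
    observe a scalar reward, and scale the grad-log-prob (a - p) by it."""
    p = p_action1(w)
    a = 1 if random.random() < p else 0   # sample from the current policy
    r = 1.0 if a == 1 else 0.0            # environment returns only a reward
    return w + lr * r * (a - p)

random.seed(0)
w_sup = w_rl = 0.0
for _ in range(2000):
    w_sup = supervised_step(w_sup)
    w_rl = reinforce_step(w_rl)

print(p_action1(w_sup) > 0.9, p_action1(w_rl) > 0.9)   # True True
```

Both end up preferring action 1, but the supervised learner is told the answer while the RL learner has to discover it from reward alone, which is why backprop is usually described as a credit-assignment mechanism that RL methods can use, rather than as reinforcement learning itself.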
@CatsAreRubbish
8 days ago
This talk is from *2018.* The original stream is on the Berkeley EECS (Electrical Engineering & Computer Sciences) KZitem channel. This channel, Me&ChatGPT, can be ignored. It's rubbish.
@ericlees5534
29 days ago
When did this presentation occur?
@RickeyBowers
29 days ago
It could be six years old? It's not in my playlist, but Ilya has more hair. 🙂 kzitem.info/door/PLEKYDi6joZvHkoAZs2vsvkRGMrAHbOIxC
@MonkiLOST
29 days ago
Insane that they already had Qstar and agents 7 years ago
@kellymaxwell8468
29 days ago
So how close is AI to making video games? Do we need AGI for that, or just AI agents? AI can already reason, code, program, script, and map. For games it would have to break the work down: do the art assets, do long-term planning, and reason well enough to actually build a game rather than just write it out, putting those ideas into REALITY. Playing and making games.
@DrewProud
28 days ago
Very metal
@bingo-rk1fy
1 month ago
What, OpenAI achieved AGI???
@GatePrep-f7x
1 month ago
Yeah... but just in the thumbnail
@zaferatasoy3095
28 days ago
Could you add Turkish subtitles?
@sonasmart
1 month ago
I live in Egypt, persecuted by the family of Kamal Ahmed Morsi and his children, who abuse their power to prevent me from marrying any woman I choose of my own free will... For more than 20 years I have been unable to marry because of this family's persecution of me, and all of this happens in plain sight of the whole country, Egypt.
@hannespi2886
28 days ago
clickbait
@adamy4435
12 days ago
then solve human longevity
@sushantpenshanwar
28 days ago
How the heck does he talk in such a deterministic manner? There are no umms, uhhs, etc. in the talk, man.
@VictorMartinez-zf6dt
28 days ago
Because he takes time to pause and think.
@gaylenwoof
26 days ago
100% clickbait title. There was no discussion of AGI. Also: This video is not at all suitable for casual interest. Ilya is a smart guy, but terrible at teaching/explaining to the general public. He is talking to an audience that already has high-level training in AI tech.
@zeronicel4455
1 month ago
Pong learned to hide 🎉
@insane_neuralnet
29 days ago
cool
@maccloud8526
29 days ago
There will be no AGI. It's just data compression and vectorized DB storage: a great way of compartmentalising, sorting, and retrieving data, making things appear intelligent.
@zeronicel4455
1 month ago
It's dangerous currently in Israel, maybe call Putin…
Comments: 63