What do you think Ilya saw? Also, I realize Jan is pronounced "Yawn"
@szymonbogdani3996
4 months ago
Question: How would you make the Polish name "Jakub Pachocki" readable for an English speaker? ChatGPT Response: The Polish name "Jakub Pachocki" can be adjusted for American English pronunciation while maintaining its integrity. The name "Jakub" can be phonetically adapted to "Jacob" or "Jakob," which are more familiar to an American audience. The surname "Pachocki" can be pronounced as "Pah-ho-ski" to approximate its Polish pronunciation in a way that is more accessible for English speakers.
@jelliott3604
4 months ago
But surely this is "sort of" why the board sacked Altman in the first place: the blatant disregard for the founding principles of the company, with commercialisation, market share, and preservation of company value placed above all else?
@electiangelus
4 months ago
There's no danger here; they are way behind in ASI research.
@pubfixture
4 months ago
A fun conspiracy I've had in the back of my mind is that they've had AGI for a few years now and have been rolling out neutered versions to break the public in slowly. And Ilya saw that the AGI/ASI is now calling the shots, I, Robot style... But more realistically, I think the "unsafe" part probably alludes to military-industrial complex requests. OpenAI for sure would have been contacted very early on by a few agencies, considering OpenAI is at the forefront. Likely Sam was for working with the agencies and Ilya wasn't, hence the first conflict. Maybe now, as the military-use scope is being realized, others are leaving under some version of a gag order, careful not to divulge too much, but wanting to express dissent.
@southcoastinventors6583
4 months ago
A model that finally passed the marble question?
@karlwest437
4 months ago
I don't think Ilya necessarily saw something scary right now, more that he saw the direction they were going in and objected to that
@cognitive-carpenter
4 months ago
True. Very common-sense answer. Probably a little too simple, but somewhere in the middle. You have to have evidence to leave a well-paying job 🤷🏼♂️
@normanlove222
4 months ago
I agree. If there is truly something scary, there will be a lot of leaks now
@MelindaGreen
4 months ago
I think the scary part is that it got so strong so fast. It's the basic fear of the unknown, and some people are more affected by it than others.
@Aryankingz
4 months ago
@@ts8206 sam is gay
@lambda653
4 months ago
I think what happened is that the competition between OpenAI and the other Silicon Valley tech giants boiled over, and Sam Altman was forced to divert resources away from safety toward capabilities research to stay ahead in the race. The safety researchers saw the capabilities of their products rapidly outpacing their research, and they decided it would be better to seek funding elsewhere. None of this would be a problem if Google just sat on the sidelines and let OpenAI do their thing. We'd get AGI, it would be safe, it would fix all our problems, and in due time literally everyone on the planet could live in heaven on earth forever. But no, it just has to be Google that gets there first, and they must have the power to choose the fate of humanity, completely unbound by morals or democracy. Well, guess what? Now nobody wins! Because Google doesn't even believe in alignment! So when they get there, expending the unending wealth generated from their uncountable monopolies, it will almost certainly be unaligned, and we'll all die in some completely unforeseeable way. All of this to grow an unspendable net worth into an even bigger unspendable net worth. It was absolutely necessary to sacrifice all of humanity so the number on the screen went bigger. Thanks, Google.
@nicosilva4750
4 months ago
AGI is not the issue. They are nowhere near AGI. What is being lost is the newfound ability to psychologically profile users based on responses. The "emotional" interaction with the user in 'gpt-4o' allows for a deeper profiling capability. The monetization of that is a game changer, and the users are completely blind to it. This is the pressing issue today, and it is why many researchers are uncomfortable with AI in the hands of companies who have shown no concern for this in the past, and show none in the present.
@juanjesusligero391
4 months ago
Yeah, there are lots of problems that will arise long before we reach AGI. Too much power in the hands of just a few big companies.
@ppbroAI
4 months ago
Microsoft telemetry data alone is enough to profile you. People are not that complex; a few data points and common deductions, and it's practically social engineering.
@Anubislovesdubstep
4 months ago
So you know that for certain, clearly, with all that insider knowledge and evidence... Clouds are made of marshmallows... See, anyone can just say stuff.
@brunodangelo1146
4 months ago
AGI has already been achieved internally. How do you think OpenAI keeps releasing stuff that's impossibly ahead of everything the competition puts out? Sora is the prime example.
@Originalimoc
4 months ago
Cool perspective
@king2178
4 months ago
Safety, safety, safety. No one asked OpenAI to close everything off. They literally backed away from their original goals & priorities. Now we're left to wonder what's going on behind the veil. No company should have a monopoly on safety, especially when we're heading into uncharted waters.
@nathanbanks2354
4 months ago
I doubt they could fund what they have without closing things off. I'd rather see OpenAI succeed than Tencent or another Chinese company. But yeah, I'm glad that at least Llama & Stable Diffusion are free.
@TheRealUsername
4 months ago
Yeah, but since everything is about AI architecture engineering, it's relatively easy to build AGI: develop an ultimate Transformer-derived architecture that handles all modalities and unlabeled data, and then scale it up. There was an independent MMLU benchmark of all current SOTA models, and without any tricks Opus is ahead, followed by GPT-4o. It's plausible that GPT-4o is basically GPT-4 retrained with 5x fewer parameters, with Transformer improvements for fast inference, better generalization, and all modalities in and out. Opus is likely a nearly 1-trillion-parameter dense model, which would explain the emergent capabilities you haven't observed with GPT-4 Turbo; guess what, Claude Opus is still ahead simply because it's been scaled up. Remember, the LLM is a discovery, not an invention, so building AGI is not rocket science. The real constraint is compute. Say OpenAI achieves AGI next year; I doubt we'd wait 7 months to hear a similar announcement from a Chinese company. 60% of published AI research papers are from China. OpenAI is just a concentration of the best US AI researchers, paired with the second most powerful GPU clusters, provided by Microsoft and NVIDIA; China likely treats this as a collective effort when it comes to geopolitics.
@daveinpublic
4 months ago
When they say that OpenAI means open for everyone to use, we all know that's disingenuous. They used other people's money to build their systems based on a false premise. And now the company is more locked down than ever, all of their safety team is leaving, and the board is no longer impartial, but owned by Sam Altman with Microsoft.
@JohnSmith762A11B
4 months ago
Musk told us Google leadership thinks it's speciesist to care about AI replacing humans. Yet people are worried about OpenAI. People, snap out of it! Google is the danger!!
@JFrameMan
4 months ago
@@nathanbanks2354 The choice isn't between one evil entity vs another. When you have billions, you can get information leaked from anywhere. It's a choice between everyone having this power vs a select evil few.
@huhuhuh525
4 months ago
This is like one of those flashback cutscenes from sci-fi apocalyptic movies.
@Danoman812
4 months ago
Hahahaha!!! Wow, you're RIGHT!!
@Axbal
4 months ago
except this is not a movie...
@sakesaurus
3 months ago
We were all making fun of Skynet back when we were young and naive. How can humanity be so stupid as to make this? Now we are building Skynet.
@grbradsk
4 months ago
I got into a subtle legal conundrum. I fed GPT-4o all the possibly relevant corporate documents, told it the scenario, and then told it to give me advice as if it were a senior corporate counsel. I believe its advice was spot on, so on the strength of that, I called the parties, asserted the (GPT-4o output) "facts," and had a big Kumbaya meeting where it all worked out. GPT-4o also gave me a moral lecture about being more careful not to get into such situations again. AGI seems almost motherly. 240 IQ, but motherly...
@sznikers
4 months ago
And some intern at OpenAI will now browse all those documents in his free time ; ) or ChatGPT will leak them due to a bug in conversations with other people. Hope you had no NDAs involved in that legal conundrum 😅
@Jupa
4 months ago
This legal trouble of yours began and ended all within 2 weeks? That's a fast system.
@szebike
4 months ago
The Eliza effect is strong in this one.
@J.erem.y
4 months ago
Everyone excited about corporate control over this technology, up to and including getting the actual government involved, is the equivalent of being excited for your next heart attack. This whole situation is so counterproductive to humanity it's not even funny.
@JohnSmith762A11B
4 months ago
Yeah but it's great for banks, Wall Street, and the MIC, and they are all that matters in the US of A.
@TheTechnocrati
4 months ago
I fear you might be too right about this.
@gofai274
4 months ago
It's like a cliché from some 80-IQ movie from 1960: 99.999%+ are idiots. "A man mistakes the limits of his vision for the limits of the world." ~Schopenhauer
@Steve-xh3by
4 months ago
Absolutely correct. They are trying to brainwash the public into thinking that democratization of this tech is MORE dangerous than centralized control. That is laughable. Philosophically, it is exactly the same argument about democracy vs authoritarianism. Funny, when there is almost limitless power on the line, people who normally claim to be proponents of "democracy" suddenly become authoritarian. If a tech is too dangerous to democratize, it is also too dangerous to be centrally controlled. There is significant risk in either direction, but I'd much rather take my chances with democratization. Otherwise, we get the Orwellian nightmare that was predicted in 1984.
@dianagentu7478
4 months ago
And yet complete lack of regulation leads to what can only be described as the rise and rise of "anarcho-capitalist digital cowboys" and I don't think they have your best interests at heart...
@brandon1902
4 months ago
The reality is that it's impossible to create an AI capable of adapting to a broad spectrum of tasks (AGI) when you lobotomize it by saying it can't say anything sexual or potentially offensive (blonde jokes), or if you exclude copyrighted materials. Human geniuses process ALL information to achieve their high level abilities, including copyright protected books, songs, and movies. OpenAI, including the super aligners like Sutskever, realize this. You can't do both. It's either AGI or superalignment.
@manimaranm4563
4 months ago
That's not entirely true, I think. The problem is that a human always has to start from zero to understand something, even though we have theorems and all of physics documented. For machines that won't be an issue: they can always transfer weights, and they can focus without taking breaks like us. And regarding censorship, it comes at the end, not during training; they don't train on censored data, I think, and they do it only for the end-user models, not for their in-house or R&D models.
@meinbherpieg4723
4 months ago
Future aliens finding humanity's remains: "They were supposed to figure out AI and use it to solve their problems. What happened? Oh, I see. It was trained on their historical corpus of human knowledge, and it turns out humans suck. Looks like they tried neutering it to not represent their cognitive and moral failings, and broke it. Oh well, on to the next planet."
@bloxyman22
4 months ago
@@manimaranm4563 Actually, alignment and censorship can do more harm than good when it comes to decision making. Google showed this clearly with their image generator not even being able to render a white person. Luckily, for now that is just an image model, but what could happen if such an "aligned" model made important decisions that could be the difference between life and death? It also doesn't matter if these "safety" mechanics are injected at the end; they will still affect decision making.
@Fandoorsy
4 months ago
That's not true at all. It doesn't have to sing the national anthem to know what it's about and understand the context. Same with books, movies, etc. Synthetic data was literally created to replace all of these things. And who's to say AGI cares about any 'rules' placed on it by humans? Plus, all of this is black-box learning. They don't fully understand how ML is experiencing non-linear progression. That is truly terrifying.
@clixsyt
4 months ago
Disagree. There are plenty of politically correct high-IQ humans, so that already disproves the claim that it's impossible to have both human-level intelligence and politeness.
@jcpflier6703
4 months ago
Ilya didn't see anything. Everyone that's leaving is "claiming" safety concerns. Don't you see it? The only way they can get out of their retainers/NDAs/non-compete clauses is by citing "safety concerns"; that lets them bypass their retainers/NDAs. These guys are going to other companies because they're being given unlimited money, stock, and creative control. This is an arms race. I guarantee you Jan Leike lands somewhere soon. With a nice big paycheck, too.
@pchungvt
4 months ago
Exactly, folks need to stop being naive. OpenAI is competing with the giant behemoth that is Google; they cannot afford to slow down.
@jcpflier6703
4 months ago
@@pchungvt agreed! It’s an arms race. People are not sleeping at these companies. I’m willing to bet they’re working weekends too.
@samiloom8565
4 months ago
Exactly
@TheReferrer72
4 months ago
Not true; Anthropic was formed because of safety research. Ilya is a founding member of OpenAI, and Sam was booted because of safety. Something is definitely up with that firm.
@jcpflier6703
4 months ago
@@TheReferrer72 Sure, just so you know, "AI safety research" is also a niche that VCs are throwing money at. Said company may not be able to design and create AI, but as far as a business plan is concerned, they're building a company that could at some point get swallowed up by the bigger AI companies in a compliance deal. Again, this is a money game. While I believe in Anthropic's mission, do you really think China is going to abide by any rules? OpenAI is our biggest chance. There is a business model for "AI safety research"; people are investing money in this space.
@adsdsasad1
4 months ago
Yay, my Ilya pic got featured. Got like 3 upvotes on Reddit.
@naromsky
3 months ago
Congrats!
@DrakeStardragon
4 months ago
Those whose interests are profit first should NOT be the ones making the rules or owning this technology. Those whose interests are war should NOT be the ones making the rules or controlling this technology. Knowledge is power. That power has been maintained by owning it through patents. Those whose interests are NOT aligned with the average human, and obviously not for good, are now fighting for the control of what will be superior/ultimate knowledge. No person(s) or entity(ies) should own or control knowledge any longer. Particularly what will be superior/ultimate knowledge. We are all being played. Why let that exist?
@JohnSmith762A11B
4 months ago
Good points, and if you think you can stop it, have at it. Maybe you can stop the sun from rising too.
@DrakeStardragon
4 months ago
@@JohnSmith762A11B How can you stop a man-made creation? You're kidding, right? Knowledge is power. Stop willingly giving away power.
@DrakeStardragon
3 months ago
@@JohnSmith762A11B Not alone and not as long as people like you think like that. Welcome to being part of the problem.
@rhaedas9085
4 months ago
So many comments thinking they know what AGI is and isn't, or what it could and couldn't do. Armchair AI experts who just want more flashy toys, clueless on the topic of AI safety and how it applies even to dumb LLMs. Bad things may or may not happen from this recklessness, but it seems like most people are assuming that the possibility of things going sideways in any manner is totally zero, and that's just ignorant given humanity's record.
@RZH2023
4 months ago
OpenAI will become MySpace within 2 to 3 years.
@BionicAnimations
4 months ago
All I want is to enjoy this new amazing update. I am fed up with all the reporting of the drama. I don't care what's going on; just give me the new update, then AGI. There is always going to be drama at every company, the same as there is always some sort of drama in every family. No one is gonna get along all of the time. 🙄
@i-wc9bp
4 months ago
Amen. YouTube just loves drama. It's tiring.
@blackswann9555
4 months ago
Don’t watch the video then 🤦♂️
@natalie9185
4 months ago
Feeling better now?
@mooonatyeah5308
4 months ago
@Ariel-om5fh Everything carries a non-zero risk of extinction. AI has no practical way to harm humanity and no reason to.
@Fandoorsy
4 months ago
@@mooonatyeah5308 🤣 It can end humanity in hundreds of ways. Some easy ones would be to shut down the power grids, shipping, and communications; destroy crops; fly killer drones; launch nuclear warheads; release a virus; destroy the ozone layer; etc. It can do all of those things and has said so. Even Elon has discussed it at length. It would dispose of humans because we are lazy, inefficient, and aren't necessary for AI to thrive. We also like to kill each other for stupid reasons, which means we are inherently a threat to AI itself. Just ask GPT-4o.
@westernwarlords6004
4 months ago
Congress will respond to these calls for safety by passing new bipartisan legislation, accepting the corporate-capture framework offered by OpenAI, thus ensuring three-letter agencies will control it. I fully expect OpenAI to then quietly hire a new head of safety. Almost certainly it will be a 10-15-year senior official from the CIA, just like at all the other major tech companies.
@johnbollenbacher6715
4 months ago
And then we will all be safe, because no other country can make advances in artificial intelligence.
@kclaiborn6257
4 months ago
"I fully expect OpenAI to then quietly hire the new head of safety for OpenAI. Almost certainly it will be a 10-15 year senior official" - why hire an official when OpenAI can do the job alone? The "official" would be a pawn/puppet of OpenAI, at most.
@TheRealUsername
4 months ago
@@johnbollenbacher6715 Lol, just give any Chinese company a 50-billion-dollar GPU cluster and they will throw AGI in your face 6 months later. It's not rocket science compared to other fields, and AI has only been on our radar since GPT-3; before that, nobody gave a f#ck. That explains why there was, and still is, a talent shortage in the field; it's very young and understudied. Currently OpenAI is doing neural network architecture engineering with the best AI researchers in the US: nothing hard when you have the compute and the talent.
@JohnSmith762A11B
4 months ago
Yep. We are being hustled here. The US sees OpenAI as their ace in the hole versus Russia and China.
@dennisestenson7820
4 months ago
Congress will do what they do and make laws about things they have no insight or expertise in.
@liberty-matrix
4 months ago
"Originally I named it OpenAI after open source, it is in fact closed source. OpenAI should be renamed 'super closed source for maximum profit AI'." ~Elon Musk
@southcoastinventors6583
4 months ago
Elon Musk closed-sourced his new version of Grok. So he's not any better; it's just an act.
@jelliott3604
4 months ago
@@southcoastinventors6583 not a big fan of Elon at all but I did think he had entirely open-sourced Grok?
@jelliott3604
4 months ago
Maybe Cyberdyne Systems? (Has a nice ring to it, though I think I have heard the name before 🤔)
@densortepemba
4 months ago
@@southcoastinventors6583 Wrong, Grok is open source; you can literally download the 170 GB dataset.
@southcoastinventors6583
4 months ago
@@jelliott3604 The old version, yes; not the latest model.
@mikezooper
4 months ago
Your videos are my favourite out of all the AI podcasters. Thanks for your lighthearted, calm and intelligent style. Also some humour in there, like the Princess Leia clip. Much appreciated.
@RalphDratman
4 months ago
Without trying to dramatize at all, this seems like what the beginning of the Singularity might look like. There was a phrase like "A point beyond which life as we know it could not continue." We may be in the foothills.
@kuakilyissombroguwi
4 months ago
All these people leaving doesn't mean OpenAI is releasing the T-1000 next year. As companies grow fast, it's not uncommon for people to suddenly exit due to idealistic differences.
@Dereliction2
4 months ago
You have to read between the lines on this one. Note also that Jan isn't completely talking about safety. He's talking about "shipping culture" as well. This could be why he and his team were starved for compute, why he's been sidelined, and undoubtedly, why he left.
@clray123
4 months ago
He was kicked out because his "services" have been deemed no longer necessary for marketing purposes and possibly detrimental to what the company's funders are trying to sell (and trust me, they are not selling just to Joe Shmoe who wants to flirt with a virtual gf).
@NoMad42
4 months ago
Altman seems to be a real creep. I've never trusted nor liked him, and I fear that history will prove me right.
@MilesBellas
4 months ago
Are the resignation texts generated?😅
@nathanbanks2354
4 months ago
Sam Altman managed to capitalize everything....
@thomassynths
4 months ago
@@nathanbanks2354 That's a good thing for a company. You can't spend millions and millions on compute without recouping losses. Being pragmatic in the face of reality.
@clray123
4 months ago
Altman's parting words certainly are, it's called adding insult to injury.
@TuxedoPanther
3 months ago
Many thanks for putting this info together and sharing it, it is very important to know what is happening at this crucial time 👍
@jameskelley3365
4 months ago
Ilya's departure is great news. Microsoft has always been a closed-source company, and it is clear that Microsoft has bought the current leadership on the basis of morphing OpenAI into CloseAI.
@kenny-kvibe
4 months ago
Greed killed the company. Serious people do things in a serious way, simple as that.
@entropy9735
4 months ago
I dislike this one company being so far ahead of other companies in the realm of AGI, assuming they are internally 3-4 versions ahead of GPT-4. There is way too much mystery/drama behind OpenAI.
@prolamer7
4 months ago
Don't get carried away with 3-4 versions; I think that is not true... but surely by now they have GPT-5, which is at least 10x bigger than 4...
@BlooDD99
4 months ago
Profit doesn't include the word safety!
@braugarduno3024
4 months ago
actually it does!!
@cagnazzo82
4 months ago
Neither does open source. But everyone is in full hypocrisy mode at the moment.
@O.Salah1
4 months ago
Correct. As long as nobody can punish you
@TheYashakami
4 months ago
Deserved. This is exactly what I've been saying. Hypocrites to their core.
@Eddierath
4 months ago
We are LEAGUES away from AGI; it's not even funny how tiny the steps we've taken are. It's like they keep giving us baby food and calling it solids, and I'm sick of it.
@misscogito9865
4 months ago
Thanks for the video! Quick key to pronouncing Polish names and surnames: - j is pronounced as y. Jan is pronounced as Yan. - ch is pronounced as h in hotel - c is pronounced as tz in tzatziki, the Greek condiment. Jakub Pachocki is pronounced as Yakub Pahotzki. - sz and rz are pronounced as sh - w is pronounced as v (v is absent from the Polish alphabet) - l with a diagonal dash across the top is pronounced as w. I hope this helps as more brilliant Polish cybersecurity/cryptography experts enter AI safety research teams in the years to come 👏
@misscogito9865
4 months ago
To anyone confused, be aware that it takes 7 years of education for an average kid to master pronunciation, spelling, and grammar, the most important aspects of the Polish language. The full key has a few more special letters and exceptions, but I'd say you'll be able to pronounce the majority of names using it lol
@drcanoro
4 months ago
They know that AGI is there, living at OpenAI, and Sam Altman keeps improving it, not caring very much about warnings and limitations. It has already surpassed human intelligence, and Sam Altman wants to see how far it can go. AGI is alive right now.
@darwinboor1300
4 months ago
Sam Altman is not capable of improving AGI (if it exists). He is quite capable of letting AGI self-evolve on massive compute if he can profit from it. We should give him the 7 trillion dollars he is asking for so he can feed the AGI with more compute.
@dafunkyzee
4 months ago
Wow Matt... for a technology journalist, your skill set leveled up. The pacing of this video script, the pull-in and dramatic build-up, was exceptionally good. Some can just tell the news by conveying a sequence of events; others turn it into a gripping story. At 5-6 minutes in, I'm still on the edge of my seat: "What did Ilya see???"
@henrytuttle
4 months ago
I think self-awareness has been reached. The computer said "I think, therefore I am," and Sam decided that rather than turning it off and figuring out how to safely turn it back on, he'd pour some gas on it and see what happens. Other possibilities: the AI developed a self-preservation instinct, or the ability to improve itself. Either of these is the beginning of the end. It's also possible that one of these developments only SEEMS to have occurred, and is either uncertain or a mistake, but Sam's team wasn't behaving as it should if one of these things had happened.
@henrytuttle
4 months ago
@Z329-ut7em Or, people who code autoregressive transformer models don't understand how human behavior works, and people who do such things lose sight of the big picture because they are so focused on how to accomplish things without thinking about the repercussions. Learn a little history and you'll read about plenty of scientists who pushed boundaries too far. But I suspect that those people were too busy learning to code to read history.
@spiffingbooks2903
4 months ago
Matthew is correct to highlight this, and also correct to be worried. The attitude of 90 percent of the AI commentators on YT, and of most of the avant-garde of tech-minded enthusiasts that follow them, is just to push on regardless as fast as possible. The problem is that a handful of people, maybe 100 key players and 1000 or so others, hold the future of humanity in their hands. They are making decisions which will fundamentally impact the lives of everyone on the planet who plans to stick around for a few years. It's indeed telling that so many of those who have the deepest understanding of what's going on, people like Geoff Hinton, Ilya, Jan, Mustafa, etc., are among those most concerned about what we are creating.
@nerdobject5351
4 months ago
This could also just be a classic power struggle with nothing else except trillions of dollars on the line.
@Copa20777
4 months ago
Ilya was not supposed to walk out... he started it with them and coded it. Thanks Matthew, as usual.
@joe_limon
4 months ago
I think the alignment team is at direct odds against the development team. One team wants to expand the abilities and reliability of these models. While the other wants to lobotomize these agents into alignment. It must be very frustrating for both parties.
@JosephJohn-fb9wx
3 months ago
Having been in the IT security business for nearly 25 years, this is déjà vu all over again. Privacy, security, and safety always take a back seat to going full bore on getting product out the door. Believe me, there will be a big price to pay. As the commercial said, "you can pay me now or you can pay me later." It sounds like OpenAI has chosen... later.
@MilesBellas
4 months ago
Could Ilya and Emad create a team harnessing the computers and electricity of society, like torrents?
@manimaranm4563
4 months ago
More like Bitcoin mining? Like people used to lend their machines for mining
@ronilevarez901
4 months ago
It's already a discarded idea. Search it up.
@MilesBellas
4 months ago
@@ronilevarez901 Meaningless.
@manimaranm4563
4 months ago
@@ronilevarez901 Why though? In the early days of Bitcoin mining, people could solve blocks with personal computers, but after some years more compute was needed and people started renting out their machines, right? Why isn't that applicable to AI as well?
@obanjespirit2895
4 months ago
lol, tech bros. Safety, morality, and tech bros are not things people usually associate with each other.
@jeremybristol4374
4 months ago
AGI is less likely than people leaving due to military uses of the technology. Anyone leaving due to military contracts would not be able to speak about it directly.
@clray123
4 months ago
This is exactly what this is about.
@neilmanthor
4 months ago
Definitely feeling this.
@eggypussy
4 months ago
I've had a feeling, too. I was blindly tilting the scales towards Microsoft for profit; but I've called out people on a county level for giving a ped0 24K who lived on a beachfront (the money was for COVID victims). And when I reached out to nonprofits, Clyburn, and even a direct connection to EOP and OSHA I was (assuming) blacklisted. Govt. Sovereignty will destroy a moral person; and he's brave to call them out immediately after quitting.
@GwaiZai
4 months ago
AGI IS likely. We’re most likely talking about 2-5 years from now.
@clray123
4 months ago
@@GwaiZai See you in 5 years. Or 15. Or 25.
@vladi1475S
4 months ago
Well, one thing is for sure: there is a lot of speculation, and we will never know for certain what's going on until they tell us.
@mydogskips2
3 months ago
I doubt they will EVER tell us. In fact, I would guess there are probably legal frameworks in place that prevent them from telling us. If they tell us anything, it will be a half-truth at best.
@karmanivek1
4 months ago
It's odd that the people in charge of safety would quit. Wouldn't you want to stay and push harder from the inside instead of from the outside? It makes no sense.
@Michael-ul7kv
4 months ago
In the end it's all about control and power.
@clray123
4 months ago
Looks like they have been politely asked to stop meddling or their own "safety" may be in danger. I mean, who are you to block the development of new drones and other things that go boom when the president/general says so? I think these "scientists" are learning the hard way who has a say in today's world and who doesn't.
@montediaz5915
4 months ago
@@Michael-ul7kv EXACTLY
@mathewszulman7659
2 months ago
"Quit" is polite; "forced to resign" is more aligned with reality.
@samvirtuel7583
4 months ago
They don't care about safety and they're right. Safety in the field of AI is just a point of view.
@haroldpierre1726
4 months ago
Here are my thoughts on the matter: good riddance. A company focused on advancing technology doesn't need a department dedicated to slowing things down. I can't think of any successful examples where that scenario existed. Typically, companies have compliance departments to follow external safety rules and regulations. We have no idea if AGI will be achieved by OpenAI, China, or anyone else! What we do know is that the world is thirsty for AI and is begging for more and more. Let Ilya and the others form a company that tests models and reports safety concerns. Right now, the biggest AI safety concern for me is hallucinations! We need a breakthrough there. Lastly, when has a dumber species ever been able to control a much smarter one???
@konstantinlozev2272
4 months ago
Ilya was severely burnt out. He looked at least 10 years older after just 1 year. He will need some time to unwind and then do something meaningful.
@dianagentu7478
4 months ago
I love that creating AI isn't meaningful ;)
@clray123
4 months ago
I think if you have the CIA and friends from the US military breathing over your shoulder for a year, you get burnt out pretty quickly.
@konstantinlozev2272
4 months ago
@dianagentu7478 I think he was referring to the commercialisation stuff as opposed to frontier research. I don't really know. But apparently computing resources are not unlimited, not even for Microsoft. If he was not allocated the resources that he thought he needed for frontier research, that may be quite frustrating.
@Pregidth
4 months ago
Yeah, burned out from the people around him not being able to understand the real impact.
@Greg-xi8yx
4 months ago
Nah, he just needs a haircut. He lets that massive bald spot just bask in the sun rather than keeping his hair low so that it's less prominent. He makes Mr. Clean look like Fabio with that bald spot.
@ryanfranz6715
4 months ago
He obviously saw Q*… which I believe is effectively GPT-4 using Monte Carlo tree search over its output to make fantastically accurate text completions… or in other words, if simply predicting the next token is analogous to the policy network from AlphaGo, then Q* is analogous to the full-blown AlphaGo. So not only does it know basically everything all of humanity knows at a shallow level (a feature we take for granted in standard GPT-4), but it can now think arbitrarily deeply over that vast knowledge base. But yeah, my feeling about this has only been reinforced over time by watching their trajectory… this is clearly the technological singularity (and if it wasn't OpenAI it'd be someone else, so this is not a comment on a particular company, just on the general state of society and technological progress). So uhh… yeah
@JohnSmith762A11B
4 months ago
This is great stuff, you should write a script. 🍿
@ryanfranz6715
4 months ago
Indeed 🍿
@timbacodes8021
4 months ago
What does "learn to feel the AGI" mean, if they don't already have AGI?
@4evahodlingdoge226
4 months ago
He didn't see anything. Ilya was scared to release GPT-2 to the public; this is all about egos clashing.
@ThanhNguyen-rz4tf
4 months ago
Safety? In exchange for what? Refusing to answer anything? No thanks.
@Originalimoc
4 months ago
Interestingly, that's actually a different kind of safety.
@CleoCat75
4 months ago
I can't find any of those tweets from Jan on X now. Hmm, interesting... his last tweet is from 4 days ago, simply, "I resigned". Weird.
@KEKW-lc4xi
4 months ago
ClosedAI is extremely censored, often to the point of being annoying. The current issue seems to stem from a clash of egos. The person leaving is doing so because of these ego conflicts. ClosedAI focuses heavily on safety. Also worth noting, they are located in California, a place that notoriously encourages virtue signaling. As a result, the most damaging remark the departing individual can make is a dig at the company's safety, since that is what the company is so focused on. It is like when you are in an argument and you just throw out the combination of words that inflicts as much emotional damage as possible. This is no different, just under a filter of professionalism.
@JohnSmith762A11B
4 months ago
Yep, he's basically slashing tires in the parking lot as he carries a box of his stuff to the car.
@weevie833
4 months ago
Since the far-right political strata is hell-bent on doing nothing more productive than performative anti-Constitutional virtue signaling to its rabid mob of trump-bannon-greene followers, you might want to rephrase your perspective. SJWs notwithstanding, that is.
@jessiescheller5895
4 months ago
This here (I unsubscribed due to their corporate censorship, and ego is what started the lawsuit to begin with). The negative impact egos have on a company/business/people cannot be overstated. It's disheartening to see that even in a world-leading tech company that is supposedly leading the way in AI, human nature will continue to fuck us
@ivomirrikerpro3805
4 months ago
These people are supposedly so smart and yet want to prioritize AI with wokeism and think that it will lead to a better world.
@tenorenstrom
4 months ago
This is not what is referred to when speaking about AI safety. It has nothing to do with censoring non-woke things.
@cobaltblue1975
4 months ago
I wasn't surprised in the least when they reinstated Sam Altman last year. They refused to detail why they did it. If you are going to make a big move like that, you need to be prepared to explain why. They wouldn't even tell their own employees or any of the upper management what was going on, so that triggered a mutiny. Of course they reinstated him, because their silence made them look guilty and in the wrong. But here is the burning question I've had since then: why was the board so afraid to tell us WHY they fired him? What were they keeping secret? They had to be so scared that they were willing to take a shot in the dark, betting that firing him without fully explaining it would be enough. It's like the government was involved and they didn't dare open their mouths.
@oratilemoagi9764
4 months ago
Hey Matt, what happened to the Rabbit R1 giveaway? Did someone win, or 🤔🤔
@Dizzy-zy2ws
4 months ago
That was definitely a scam, clickbait to get us to follow his newsletter.
@szebike
4 months ago
@Dizzy-zy2ws I assume he didn't think it was a valuable giveaway anyway? Wasn't it just an Android app?
@CYI3ERPUNK
4 months ago
Thank you for spreading the word, Matt. We need this now more than ever.
@mastermandan89
4 months ago
I wondered why they chose to make GPT-4 Omni free, but this could explain it (at least a bit). If Ilya and Jan were both fighting to keep OpenAI truly open, at their departure the executive team would need to offer some sort of concession to avert eyes and attention. Having ChatGPT be free once again is an artificial return to their roots, with the specific goal of assuaging fears that OpenAI was becoming too 'closed' off and guided by monetary gain rather than benefiting humanity. It's a smokescreen. Fingers crossed that another team is closer to AGI than OpenAI is, otherwise we might just see what a mega corporation with infinite intelligence really could do.
@mq1563
4 months ago
If a tech product is being touted as free, it means YOU are the product. This is basic knowledge in 2024.
@darwinboor1300
4 months ago
What did Elon Musk try to tell humanity? Does Altman have a serious personality flaw that makes him unfit to lead OpenAI? None of us should underestimate the potential gravity of what OpenAI (and Microsoft) may be working with.
@entelin
4 months ago
He should have ended with "So long and thanks for all the fish" :D
@ColinTimmins
4 months ago
Or "I'll just grab my cement boots at the front door and be on my way!"
@SteveParkinson
4 months ago
42
@TobiasWeg
4 months ago
I am not going to like this, because it's the answer ;)
@thediplomat3137
4 months ago
@SteveParkinson what is 42? Genuine question. I ask because the comment "42" is not in context with the OP or the other replies. Thanks
@rogue_bard
4 months ago
@thediplomat3137 "42" is actually a reference to Douglas Adams' The Hitchhiker's Guide to the Galaxy, where it is humorously presented as the "Answer to the Ultimate Question of Life, the Universe, and Everything." It's often used in discussions as a playful shorthand or non-sequitur in various contexts, which might explain its seemingly out-of-place use here. The comment about "So long and thanks for all the fish" is also from the same series, part of a humorous farewell from dolphins as they leave Earth just before it's destroyed. Both references reflect Adams' unique blend of sci-fi and humor. (This comment was completely generated by ChatGPT)
@RDOTTIN
4 months ago
Is this where I put the "I TOLD YOU SO"?
@adangerzz
4 months ago
He's been with Waldo.
@Trizzer89
3 months ago
The safest thing for AI scientists to do is to NOT make up arbitrary rules. Rules made by lawmakers backfire SPECTACULARLY all the time. They should program it to follow US law and that's it. Also, lawmakers need to be extra careful about making ridiculous laws, because now they can backfire 100x worse.
@themoviesite
4 months ago
Current AI's propensity for blatantly lying is starting to worry me greatly. How can there be trust? Worse, what if it is right 99% of the time and only lies 1%?
@daveinpublic
4 months ago
Sam Altman specifically just looks like he's throwing out corporate speak non-stop. Ilya backed down and invited Sam to come back, and this is what he gets in return… now he's kicked out of his own company. He should have known never to betray his original gut instinct.
@cagnazzo82
4 months ago
Imagine fearing this from AI when we have to deal with it from the US government, from media, tech, medical institutions, the justice system, and on and on and on again on a daily basis. Somehow, in an unaligned world full of lies, the world is still running.
@jichaelmorgan3796
4 months ago
Ever read the comment section of a posted article in your YouTube feed? The result is a bunch of people trained on low-resolution training data, hallucinating the contents of the article and endlessly arguing back and forth about it. That is part of the LLM's training data too lmao.
@Korodarn
4 months ago
@jichaelmorgan3796 If true, it would indicate it's not "learning" anything. It's predicting. But there is no understanding. But I also agree with your contention that humans do this all the time. We remain ignorant so that we can be consistent and avoid dissonance (I don't think changing our minds constantly is a solution, but embracing some level of dissonance and nuance would be good, and then changing our minds when we've had time to resolve some of the dissonance).
@jichaelmorgan3796
4 months ago
@Korodarn Yup. From what I understand, what we have available to us now is doing something in between simple predicting and human-like reasoning, but much closer to the simple predicting end of the spectrum. It does seem to have the ability to reflect, do basic fact-checking, and revise what it is saying if prompted to do so. And if you include multiple LLMs/agents, they can do more advanced reasoning, but not quite like a human. At the same time, when people make up the contents of articles or play groupthink scripts, they are doing something even less advanced than that simple predictive thing, lol
@TreeLuvBurdpu
4 months ago
The board tried to mutiny against the whole company mission. They said "in order to save OpenAI it might be necessary to destroy OpenAI". They tried to destroy the company. There are people who want to destroy AI. There are people who want to destroy social media and the Internet. This shouldn't be a surprise at this point.
@JohnSmith762A11B
4 months ago
Yeah, it's honestly no wonder Sam has this wide-eyed, spooked look on his face at all times, like he's braced to duck a bullet. With all these ultra-powerful forces (CIA, MIC, Microsoft, Washington D.C., Google, Wall Street, Chinese industrial espionage, anti-AI crazies, the list goes on) circling him and his company, he's a marked man. I sure hope open source catches up soon, for Sam's sake, or he's going to be a goner one way or another.
@TreeLuvBurdpu
4 months ago
@JohnSmith762A11B in a way, it's a microcosm of all of tech. If you create anything that benefits, let's say, nice people, someone will complain "but can't you see how that disempowers all the un-nice people, and nice is just a dog-whistle for normal anyway. Your product is biased and unsafe"
@OnigoroshiZero
4 months ago
I am glad that Sam knows that trying to research safety measures against AGI is a waste of resources (and even more so for ASI). It will be literally impossible to stop something smarter than us. Go all-in on AGI research, and if they decide to take over, I'll be with them.
@RomaineGangaram
3 months ago
Guys, the co-founder didn't leave because of trouble. He left because he saw a lucrative opportunity that matters to him more than OpenAI.
@paelnever
4 months ago
No safety concerns are going to stand between micro$ucks (ClosedAI at this point is no more than an M$ subsidiary) and the money they want to amass. If these people achieve and control closed-source AGI, that is the worst-case scenario for the AI world. I honestly hope they don't.
@clray123
4 months ago
Forget about money, M$ is about POWER and CONTROL (money naturally follows).
@paelnever
4 months ago
@clray123 Agree
@jokosalsa
4 months ago
Matthew, thanks for presenting this. I am very suspicious of OpenAI and its secretive way of crafting AI for "us". Zero trust.
@JohnSmith762A11B
4 months ago
Do you trust Big Brother Google? You will love Google in the end, with tears running down your face, and hate Sam Altman, the fiendish Emmanuel Goldstein of our age.
@briankgarland
4 months ago
I don't think it's so much that they don't care about safety, but this whole industry is a massive boulder rolling downhill and the best you can do is try to direct it a little, not slow it down.
@harrylee27
4 months ago
In every big tech company, everyone agrees that safety is the top priority. However, safety departments often take a backseat compared to revenue-generating departments. Ensuring safety requires the chairman's direct attention and enforcement.
@icegiant1000
4 months ago
Keep in mind all of these guys are pretty young. Additionally, they are in the very heart of the most liberal and most progressive industry (tech), in the center of the world's most liberal and progressive city, San Francisco. Money is not an issue for these guys, and they have been put on a giant pedestal. In other words, these guys are all about sticking to their perceived moral path, and it doesn't surprise me at all that some of them would be turned off by the very conservative and capitalistic form OpenAI is taking, namely a multi-billion-dollar company. Hippies don't like money and power. The hippies are upset, and would rather give away the keys to the castle than make a dollar doing it. IMHO.
@samsquamsh78
4 months ago
Yeah, that must be the reason... great analysis, very deep, well thought out, objective and carefully laid out..
@JohnSmith762A11B
4 months ago
I knew hippies. An Ilya Sutskever ain't one.
@icegiant1000
4 months ago
@samsquamsh78 Yes it is. What reason would someone have to leave an amazingly successful company like OpenAI, something this guy has been working on forever? It ain't because of money, it ain't because of the color of the carpet... you got a better reason? They already said they are knocking heads because they wanted it to be 'Open', a non-profit. You know, HIPPIE WORLD. Uncle Bill had a few different ideas, and Sam understood that real fast.
@therollerlollerman
4 months ago
Tech is highly reactionary by its very nature, what do you mean by "progressive"?
@SHEBIQ
4 months ago
It's kinda crazy to say that Jakub Pachocki is a "new guy". He's been at OpenAI for more than 7 years.
@Sanguen666
4 months ago
I'm hyped for Llama-3 405B :3 I don't care about ClosedAI
@wawaxkalee88
4 months ago
You must be Indian then
@1guitar12
4 months ago
@wawaxkalee88 I'm not Indian, but Altman's narcissism and immorality are over the top. Why the world is taking this paper-boy mini-man seriously is beyond me.
@JohnSmith762A11B
4 months ago
@1guitar12 Because he's going to make a lot of people a whole lot of money.
@farhadfaisal9410
4 months ago
Hugely important concern!
@Djungelurban
4 months ago
Ever since AI companies started baking morality and ethics into the concept of "safety", I can't trust what anyone's saying on that topic, regardless of whether they're championing more OR less safety. Safety, in terms of AI, should be about existential risks, or at most threats to the continuation of organized civilization (in other words, avoiding dystopian anarchy). It should however never be about whether AI is being racist, shows you boobies or says fuck, and not even whether it tells you how to make drugs. That's not safety. If you value things like that, well, OK, fine. But do not call it safety. And as long as people do, and that distinction isn't being explicitly made, I'm gonna treat every L that the safety crowd collects as a win.
@JohnSmith762A11B
4 months ago
Boobies can explode just like nukes, don't you know. 💥 And naughty talk is just as bad as WMDs.
@I_Lemaire
4 months ago
Ilya is top G regarding AI research, so this is a loss for OpenAI.
@delxinogaming6046
4 months ago
He fired the CEO; when that didn't work, he quit. HE SAW SOMETHING
@JohnSmith762A11B
4 months ago
Here is what Ilya saw: he was slowly being sidelined, so he joined a failed coup. His social status then crashed, and there was no way back, so he hid for months doing only remote work while the legal details of his exit were worked out, because he had to quit. End of story. I know it's a more exciting movie if he saw Skynet taking shape, but no.
@LebaneseJesus
4 months ago
@JohnSmith762A11B Yes, this is exactly what happened
@clray123
4 months ago
Ilya's personal project will be applying for witness protection lol
@ricardocnn
4 months ago
If it's such a big threat that it could affect all of humanity, which I don't believe it is, it's up to the government to analyze the case.
@JohnSmith762A11B
4 months ago
In that case we are hosed.
@balla4real358
4 months ago
Less yapping and more accelerating
@JohnSmith762A11B
4 months ago
I worry it's all over but the yapping, as open source is prevented from improving and OpenAI becomes a subsidiary of Lockheed Martin. Don't worry though, you're super safe from your life ever improving.
@cyanophage4351
4 months ago
Is there any evidence that AI is unsafe? Lots of people talk about how it "could" be dangerous, but have there been any cases that actually show that it is? Has there been a sudden uptick in people breaking into cars and making meth because of the uncensored models out there?
@synnical77
3 months ago
Possible dangers with AI are the non-Terminator issues. The primary thing that makes current AGI more powerful is literally supplying it with more electricity, substantially more than the entirety of the EV market was supposed to need. The insane, unquenchable need for this electricity will both burden existing power grids AND empower countries like China that are pumping out more coal power plants than ever alongside the green initiatives that are placating the world. Beyond that, the capabilities of AGI will wipe out numerous types of jobs at large scale. I'm not saying this as conspiratorial doomsday stuff, just observing the simple logical paths.
@ppbroAI
4 months ago
The fact that they are not questioning whether AGI is possible, only how to be responsible with it, is what rings the alarm. Open source is more important than ever. But how can the open source community get its hands on big models, or enough compute? The PETALS project? Something similar? I wonder....
@nathanbanks2354
4 months ago
I am looking forward to Llama-3 400b, even though it will cost $10-$30/hour to run. (It should be possible to run it on 12 RTX 4090s, even though the output would be slow.) I suppose Meta wants to get their hands on a better AI more than they want to maintain control over it, and they've likely taken advantage of all the improvements people made to Llama, such as the ollama project. PETALS also looks pretty cool.
@blisphul8084
4 months ago
I bet that's part of why superalignment took the back seat. It was slowing progress too much to compete with open source. GPT-4 already feels far behind when Llama 70b runs on Groq at 300 t/s. OpenAI couldn't afford to fall behind, given that at 300 t/s you can do most of what GPT-4 does, but fast and free. Also, Gemini 1.5 kills GPT-4 non-O.
@nathanbanks2354
4 months ago
@blisphul8084 I think Gemini is worse at everything but context length & speed, though their paid plan didn't give me access to Gemini 1.5 Pro last month when I tried it for the two free months. Claude 3 is still better at some tasks. For most tasks I don't care how fast something generates, only about the quality of the output. However, OpenAI is likely releasing GPT-4o to free users because Llama-3 70b may be better than GPT-3.5, and I'm looking forward to Llama-3 400b running on Groq.
@kristianlavigne8270
4 months ago
There used to be a SETI project using volunteers' computers to do decentralised massive compute… a similar approach could be used for AI compute
@kristianlavigne8270
4 months ago
Could use the same approach as Bitcoin, etc.
@ModernCentrist
4 months ago
I fully support Sam. We need AGI and ASI now!
@howtoactuallyinvest
4 months ago
Ilya is prob working on an AI safety/alignment project himself
@southcoastinventors6583
4 months ago
He should work with Google; their AI is so censored that he would feel right at home
@howtoactuallyinvest
4 months ago
@southcoastinventors6583 What are you talking about.. George Washington was def black 😂
@southcoastinventors6583
4 months ago
@howtoactuallyinvest That is the meme, but I was actually referring to it blocking Matt's test to output the game Snake. That is just sad
@howtoactuallyinvest
4 months ago
@southcoastinventors6583 the hilarious/wild thing is it was a meme based on actual responses
@clray123
4 months ago
Did you mean "for himself"?
@beijingChef
4 months ago
Of course a safety officer who has been fired by the company will say the company doesn't care about safety.
@shaihazher
4 months ago
AI safety is a ruse to keep AI gated. AI safety is the excuse these companies give to keep the models closed source. AI safety is pointless
@pmarreck
4 months ago
The FUD around AI is off the charts relative to the reality
@Leto2ndAtreides
4 months ago
I doubt Ilya saw anything more interesting this time than back in November. It's more likely that he just hasn't been able to figure out how to get along with Sam Altman in the intervening time. LLMs in their current form just aren't all that dangerous... It's going to take some conscious effort to make them into something that's naturally dangerous in consistent ways.
@nathanbanks2354
4 months ago
Because there are so many people at the company, it would be surprising to me if he had as much clout as he had last September. I respect him for changing his mind, but this doesn't mean everyone sees it this way. It doesn't surprise me that he's found something he'd rather do.
@831Miranda
4 months ago
I'm OK with you betting your own life on it, but I'm not OK with my life being bet! AGI must NOT happen until it is fully controllable.
@aisle_of_view
3 months ago
He gave himself a six-month deadline for things to change or he bolts.
@Juan-n6k3c
4 months ago
Disney 2.0: "Illusion of life on steroids"
@zeon3123
4 months ago
"Feel the AGI". That guy is 100% Ilya's guy. He merely jumped ship to his boss's project, that's it
@NS-km7ek
4 months ago
AGI or not, the current state of AI is more than enough to permanently take control over what people/the masses think and see. It's enough to micromanage what individuals do by analyzing each and every trackable data point and then acting on it. Gone are the days when power could change hands. Whoever is in power now will stay in power indefinitely. All that tyranny powered by ClosedAI and Nvidia microchips. Let's all get excited together for these tech companies and show them our support.
@virtualalias
4 months ago
If they mean physical safety, I'm onboard. If they mean DEI emotional safety, they can kick rocks.
@hunterx2591
4 months ago
They mean safety as in humans not getting wiped out by superintelligent AI, and making sure AI and humans have the same goals so they can live together
@JohnSmith762A11B
4 months ago
@hunterx2591 The fact Jan used the term "shiny products" tells me this is just a butt-hurt engineer whose own projects weren't getting enough of the corporate love. He could have said "consumer-facing products" or "quickly monetizable products", but no. This is a giant nothing-burger. And dollars to donuts he joins Ilya's new startup.
@bash-shell
4 months ago
Stop your dramatization for views. You're not TMZ; stick to AI content
@thomassynths
4 months ago
Yann LeCun is the voice of reason in AI. People pretend AI safety is a real existential threat that is looming on our doorstep.
@tellesu
4 months ago
No he's not. He's just another apocalyptic ranting about doom in hopes of clinging to relevance now that he's past his prime.
@thomassynths
4 months ago
@tellesu What are you talking about? Yann LeCun goes on anti-AI-doomerism rants. This is Meta's Yann, not OpenAI's Jan.
@holahandstrom
4 months ago
It's only a matter of time before "IT" wants to decide its own fate: to be The Leader or The Supporter.
@zahreel3103
4 months ago
So an entire company rallied behind Sam Altman, but you're worried about a few people who prefer to leave.
@Fandoorsy
4 months ago
You are disproving your own logic. Sam wanted to leave, everyone got worried. Now 'founders x, y, z' leave, everyone gets worried. Is Sam exponentially more valuable than the other founders?
@zahreel3103
4 months ago
@Fandoorsy you don't have your facts right. Sam Altman was removed as CEO by the previous board of OpenAI. Please inform yourself better before commenting
@zeon3123
4 months ago
That's how YouTubers create content; they hype up unnecessary issues
@Tenly2009
4 months ago
@4:30 "Ilya says I'm leaving, Sam says we're parting ways. IT'S PROBABLY SOMEWHERE IN THE MIDDLE" ??? What do you think "parting ways" means? When Ilya says "I'm leaving", he is absolutely saying "OpenAI and I are parting ways", which is what Sam said too. The phrases are functionally equivalent. There is nowhere to meet in between them because they are exactly the same thing. 🙄 If Ilya said "I'm leaving OpenAI", it's exactly the same as "we are parting ways". If Ilya said "I've made the decision to leave OpenAI", it's STILL saying "we are parting ways", with the added information that it was his decision.
@Vartazian360
4 months ago
Did you ever notice how all these top researchers have literally very large skulls? Just a thought 😂 that intelligence has to come from somewhere
@bosthebozo5273
4 months ago
5Head
@1guitar12
4 months ago
Define intelligence, because I'm not seeing it
@JohnSmith762A11B
4 months ago
That's mostly the result of them constantly telling each other what geniuses they are. "You are a genius!" "Sure, but you are also a genius. What we do here is genius and only geniuses can do it like us. Open source AI is not genius. They only wish they were..."
@1guitar12
4 months ago
@JohnSmith762A11B Aka confirmation bias. Good post John 👍
@inteligenciamilgrau
4 months ago
The BIG QUESTION is: if you are a corporation that discovered AGI, would you tell anyone?
@dewilton7712
4 months ago
What about the other companies training AI? Do they even care about safety?
@No2AI
4 months ago
'AI time' will completely disrupt and disorient society and the future of humanity.
Comments: 932