You missed the most important spec of the A100/H100: its memory. 80 GB at 2 TB/s, twice the bandwidth of the 4090. VRAM capacity and bandwidth are what really matter in these AI/HPC workloads. More capacity means a larger model fits, and performance scales with bandwidth rather than TFLOPS. PS: Comparing the sparse-matrix FP16 TFLOPS of one card to the general FP32 TFLOPS of another card is bogus too. Quality on LTT channels has really suffered recently.
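The bandwidth claim can be sanity-checked with back-of-envelope math. A minimal sketch (Python), assuming the simplified memory-bound model where every weight is streamed from VRAM once per generated token, ignoring batching and KV-cache traffic:

```python
def max_tokens_per_sec(params_billions: float,
                       bytes_per_param: float,
                       bandwidth_gb_s: float) -> float:
    """Upper bound on single-stream decode speed when memory-bound:
    all weights must be read from VRAM once per token."""
    model_size_gb = params_billions * bytes_per_param
    return bandwidth_gb_s / model_size_gb

# Hypothetical 13B-parameter model in FP16 (2 bytes/param)
# on an 80 GB A100 with ~2 TB/s of bandwidth:
print(round(max_tokens_per_sec(13, 2, 2000), 1))  # → 76.9
```

Under these assumptions, more TFLOPS would not make decoding faster; only more bandwidth (or smaller weights) would.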
@marc_frank
A year ago
Can you do a CFD sim on an FPV freestyle / racing drone?
@IdentifiantE.S
A year ago
@@marc_frank That's a good question
@dlog
A year ago
@@IdentifiantE.S Or just use the funds allocated to R&D to build as many models as possible to ensure at least one of them works. FPV parts sure are expensive, but so is renting a data center for fluid dynamics simulations. Also, if something fails, just blame it on the customer and deny their warranty. That's how it works.
@IdentifiantE.S
A year ago
@@dlog Ok ok thanks
@MrJonaslaCour
A year ago
Not to mention that the FLOPS comparison is apples and oranges: they compared the non-tensor FP32 performance of the 4090 (83 TFLOPS) to the BF16 tensor performance of the A100 (312 TFLOPS), as if these were equivalent. The 4090 actually packs equivalent or better tensor-core performance than the A100 (depending on the metric used).
@itsapersonn
A year ago
Boys, put this in your calendar. We've just witnessed the first GPU that can't run Doom.
@Dakktyrel
A year ago
not yet ;)
@ronmaximilian6953
A year ago
Not even close
@DerangedCoconut808
A year ago
It's just a matter of time. Lots of free time.
@RandomTheories
A year ago
maybe not, but that thing can probably code it for you :)
@CyanRooper
A year ago
"It's not a real PC if it can't run DOOM." - Doomguy, probably
@DewittOralee
A year ago
I work for the company that builds and maintains these servers for Microsoft, and it is absurd how crazy the H100s are compared to the A100s. The power projects alone cost millions of dollars per site for the upgrade.
@lysolmax
A year ago
What's probably also insane is the tens (or hundreds) of millions spent to upgrade whatever came before the A100s, only for them to get completely stomped by the H100s. Presumably this happens every generation, but I can't imagine how fast hardware at this scale depreciates given how quickly things advance.
@DewittOralee
A year ago
@@lysolmax Well, the previous-gen stuff (we have funny names for it) Microsoft just "sells" the services of those machines to whoever wants to use them, and anything they completely decommission is used by us for repairs on that legacy equipment. So they still make good money from it, and it keeps me employed.
@nby00
A year ago
@DewittOralee the company with the orange logo?
@s.i.m.c.a
A year ago
@@lysolmax it's not like they become useless; they'd be reused in Azure Cloud for Azure customers to earn the money back
@DewittOralee
A year ago
@@nby00 it's a very deep orange
@CRossEsk
A year ago
Guys, if you get a key detail wrong 4-6 times in a video, an asterisk correction isn't enough. Do a retape.
@Amber57499
A year ago
Have you seen their behind-the-scenes video? These guys shoot one video after another; there's no time in their schedule to redo a video.
@Fasneocroth
A year ago
@@Amber57499 that's poor management
@dallasroberts3206
A year ago
@@Amber57499 Time to make time, then. Keep the quality, not the quantity. Also, the key looked a bit meh.
@fatusopp4739
A year ago
a video less than 5 minutes long, no less
@Hyperjer
A year ago
Oh, now you've exposed my drinking game: every asterisk correction, you take a shot.
@spidersj12
A year ago
LTT should start publishing stats on how many text-overlay corrections are done in videos per month, where they said something incorrect that had to be fixed with a text overlay.
@mtmustski
A year ago
I wish they'd voice it over instead. Often I'll have a video in the background and not notice the correction. That tracker would be interesting, to see how many videos I've listened to that had a correction I missed.
@lilrex2015
A year ago
If you have 1 correction, OK, whatever, but 2 or more, just reshoot that sentence
@James.482
A year ago
Probably a sign of the overly high video demand at LMG that the employees keep talking about - it's easier to make mistakes in script writing/research, and they don't have time to reshoot things (though reshooting uses up a LOT of time, even for just a sentence or two, so it is rarely worth it)
@metallurgico
A year ago
@@James.482 they can't do math
@litapd311
A year ago
@@JohnDoe-bt7fk not really... getting the camera set up, lights, making the host available, adding the video to the editor's workstation - it takes a lot of work compared to the editor just adding a text overlay. Do you even know what you're talking about? Reshooting will always take more time than a text edit
@Henk717
A year ago
Correction on the A100: the $10,000 version is the 40GB model. The 80GB model tends to go for double, and that's the one AI people actually like using.
@YouHaventSeenMeRight
A year ago
The A100 40 GB model has also been discontinued, as far as I know
@_TbT_
A year ago
Just wanted to add this as well. Even more: the PCIe versions are cheaper than the NVLink versions, and only the latter are able to pool their VRAM together. That's at least 2 errors in this video - one corrected with text, one not corrected / imprecise. Another example of the root problem at LTT.
@JeremyOrlow
A year ago
Interestingly, the limiting factor for LLMs (and most ML models running on these systems) is actually now memory bandwidth. Utilizing >33% of the raw FLOPS is considered good, and more than 50% is great. (And that's even with the insane caches and memory bandwidth.)
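That utilization figure can be estimated from throughput. A rough sketch (Python) of model FLOPs utilization, assuming the common ~2 FLOPs-per-parameter-per-token approximation for transformer inference; the numbers plugged in below are illustrative, not measured:

```python
def model_flops_utilization(tokens_per_sec: float,
                            params: float,
                            peak_flops: float) -> float:
    """Fraction of peak FLOPS actually achieved, using the
    ~2 FLOPs per parameter per token rule of thumb."""
    achieved_flops = tokens_per_sec * 2 * params
    return achieved_flops / peak_flops

# Hypothetical: 175B-param model, 300 tokens/s aggregate throughput,
# against an A100's 312 TFLOPS BF16 tensor peak.
mfu = model_flops_utilization(300, 175e9, 312e12)
print(f"{mfu:.0%}")  # → 34%
```

In this made-up scenario the system lands in the ">33% is good" range the comment describes; the gap to 100% is mostly time spent waiting on memory.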
@honkhonk8009
A year ago
That's also coincidentally where all the energy gets wasted too: transferring from memory. Genuine question: why hasn't anyone put much thought into fiber optics? If the whole von Neumann bottleneck thing is an issue, why not have these high-performance RAM chips hook up straight to the CPU die through fiber optics? Either that, or just go gung-ho on the cache and have ML chips come pre-installed with all the memory they need.
@EstoyesWatashiwa
A year ago
@@honkhonk8009 I am not an expert in the field, but I assume it drives cost even higher, as the electric signal needs to be converted to a light signal. There is memory that stacks on top of the CPU, like AMD's X3D, but idk why it's not on GPUs
@DamianTheFirst
A year ago
@@EstoyesWatashiwa VRAM and cache are different types of memory. VRAM cannot be stacked on top of the package
@DamianTheFirst
A year ago
@@honkhonk8009 so far AMD has announced a processor with 1GB of cache. That's enough only for smaller models. Besides, GPU cores use way more power than memory. Using fiber optics would add energy costs - you need to convert the signal from electrical to optical and back to electrical. It doesn't really make sense, since electrical signals travel at essentially the same speed as light, so there is no benefit in time or energy. Optical computing may have some advantages here, but so far it belongs to the domain of sci-fi
@mryo-yobzh9485
A year ago
@@honkhonk8009 That's what HBM is, although not with fiber: the RAM is integrated on the same package as the GPU, so the connection is as short and fast as possible. Just look up GDDR6X (RTX 4000) vs HBM3 (H100); the latter is more than 100 times faster.
@Neil3D
A year ago
When you get the majority of the facts incorrect and need an on-screen correction note added in editing several times, there comes a point where you should probably reshoot the video
@StolenJoker84
A year ago
How did I manage to catch an LTT video as soon as it posted?
@maleko8817
A year ago
You see? I thought it was 4 months ago, but it's 4 min
@Mihai_Cosmin
A year ago
You're subscribed with the bell 🔔 KZitemrs love that
@jacobnunya808
A year ago
I imagine it went something like this: They posted a video. You saw the video. You clicked on the video. Does that sum it up?
@StolenJoker84
A year ago
@@Mihai_Cosmin I am sub'd, but I have all KZitem notifications turned off.
@StolenJoker84
A year ago
@@jacobnunya808 Pretty much. 🤣
@spencercharles8553
A year ago
I'd love an episode on how game devs make game saves work. I.e.: what does the data look like that tells the game how far you are and what you have?
@theboxofdemons
A year ago
I'm not sure that's even complex enough to fill an entire Techquickie. At the end of the day, a save file is essentially just recording strings of text like: CharacterLevel: 35, Gold: 32680, HasStartedQuest6: True
@DraughtGlobe
A year ago
Instead of text it will probably be raw binary data, where e.g. the first 4 bytes store the character level, the next 4 bytes the amount of gold, and True/False can go in 1 byte along with 7 other 'Boolean' values like it. The game always looks at a certain place in the save file for a value because it knows it stored it there.

This is way more efficient than storing it as a text string. A letter already needs 1-4 bytes of its own to tell a text editor it's the letter 'C' (if it's stored in UTF-8 encoding). If you store raw binary data and open it in a text editor like Notepad, it shows up as garbled characters, because the text editor tries to interpret the bytes as letters and other characters, while the actual bytes are meant for a custom save-game parser that knows where it stored its values and in what format.

The format of the save file entirely depends on which game you're speaking of. Some might also store it in text files for whatever reason. If you want to 'hack' a binary save-game file, for example to get 9999 health or something, you need to open the save file in a 'binary editor' (instead of a text editor), sift through the bytes looking for a value that matches your current health, change it, boot the game again, and hope you changed the right value. If not, rinse and repeat.

I hope you've learned something today; if not, maybe you'll learn something from this segue to our sponsor
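The fixed-offset binary layout described above can be sketched with Python's `struct` module. The field layout (4-byte level, 4-byte gold, one flags byte) mirrors the comment's example and is purely illustrative:

```python
import struct

# Layout: little-endian, 4-byte level, 4-byte gold, 1 flags byte.
# Bit 0 of the flags byte plays the role of HasStartedQuest6;
# the other 7 bits are free for more booleans.
SAVE_FORMAT = "<IIB"

def write_save(level: int, gold: int, quest6_started: bool) -> bytes:
    flags = 0b1 if quest6_started else 0b0
    return struct.pack(SAVE_FORMAT, level, gold, flags)

def read_save(data: bytes) -> tuple[int, int, bool]:
    # The parser reads each value from its known, fixed offset.
    level, gold, flags = struct.unpack(SAVE_FORMAT, data)
    return level, gold, bool(flags & 0b1)

blob = write_save(35, 32680, True)
print(len(blob), read_save(blob))  # → 9 (35, 32680, True)
```

The whole save fits in 9 bytes here, versus dozens for the equivalent text; opened in Notepad, those 9 bytes would indeed look like garbage.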
@theboxofdemons
A year ago
@@DraughtGlobe I didn't feel like explaining it that deeply, so I said "essentially strings of text", although some games do use text-based saves so you can copy and paste a save to load it. Typically mobile or web games. They are usually just text encoded as Base64. If you know this, it's easy to cheat your save files: just undo the Base64 and voilà, you'll see the save stored as strings of text.
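That Base64 round trip is one line each way in Python; the save-string contents here are made up for illustration:

```python
import base64

# A text save as a web game might store it, Base64-encoded.
save = base64.b64encode(b"level=35;gold=32680").decode()

# "Undo the Base64" to reveal the plain text...
plain = base64.b64decode(save).decode()

# ...edit a value, then re-encode so the game still accepts it.
hacked = base64.b64encode(
    plain.replace("gold=32680", "gold=99999").encode()
).decode()

print(base64.b64decode(hacked).decode())  # → level=35;gold=99999
```

Base64 is encoding, not encryption, which is why this kind of save editing is trivial.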
@Richard25000
A year ago
More likely, it's just dumping struct or class objects directly out of memory. Then an entire state can be saved and loaded without wasting time formatting and then reconstructing it. Maybe binary, or maybe serialized into something like JSON or XML
@sarthaksharma4816
A year ago
What I think, as a software dev: a video game is nothing but a massive state machine, where each level or bit of storyline progress represents a state. Each state holds context information on how the whole system behaves while we are in that state. So I assume the save file is just compressed context data that is parsed by the game at bootup to configure how it behaves.

Since states have to be finite - managing infinite states would be impossible, testing- and development-wise - we see hard-coded transition events or context saves in games that we call 'Save Progress'. So when the user saves their game, all that really happens is the game takes a snapshot of its current configuration, player state, time of day, and generic stuff, organizes it into a specific structure, and stores it in a proprietary file format. A custom format is used so that players can't edit the save file themselves and cheat.

The context saved in the save file must be small, to ensure fast saving and loading. That is why, for example, when you load up your GTA save, it doesn't retain information on traffic and NPC positions. It just retains the player's info, storyline stats, and generic world information like weather/time, etc. This is a guesstimate; I could be completely wrong.
@Zedilt
A year ago
This is why the Microsoft Copilot subscription will cost $30 a month.
@Das_Unterstrich
A year ago
Probably more because it's aimed at companies, who generally care less about expensive subscriptions.
@surft
A year ago
It's only going to go down once the model becomes more efficient.
@Shuroii
A year ago
@@surft Why would it? If people are willing to pay that price, why lower it?
@TheGroselha
A year ago
Maybe GPU is no longer a name that fits its entire function
@Kennephone
A year ago
The first petaflop computer was built in 2008; now we have GPUs that can match its performance, for "only" $40,000 a pop
@BlackHoleForge
A year ago
James must be using ChatGPT to do his math.
@MrCharkteeth
A year ago
The FitnessGram Pacer Test 💀 3:31
@ravenrush7336
A year ago
Actually the 4090 has 330 tensor TFLOPS FP16 with FP16 accumulate and 165 tensor TFLOPS FP16/BF16 with FP32 accumulate, which is around 50% of an A100.
@garystinten9339
A year ago
And the asterisks just keep on coming.
@DelticEngine
5 months ago
It would be interesting to have a desktop machine that also had SXM4 connectivity. At least it would be unlikely to burn your house down, compared with the '12VHPWR' garbage connector. Fair enough, you'd need to add your own cooling, but it could simplify airflow, and there would be no PCB droop or PCIe socket strain.
@jormungand72
A year ago
When I fire up the chatbots I use, I run them locally. I can also train them locally, just as I can train Stable Diffusion models locally, do voice synthesis locally, music generation locally, and everything else. Then I don't have to worry about someone else curating what it can or can't do, deciding for me what information must be censored... Sure, it takes days of computing, but it's a fair trade.
@dgnu
A year ago
Tell me more, I'm interested
@HeisenbergFam
A year ago
It's amazing how ChatGPT can write better gaming articles than gaming journalists
@elyes2703
A year ago
How are you everywhere?
@NeckbeardIndustries
A year ago
Meh, it's not exactly hard to crawl over a bar that's already sunk 6 inches into the ground
@jacobnunya808
A year ago
Is that even an achievement?
@ZachBrannigan
A year ago
I turned off all of my Google alerts for "release date" articles because they would all be titled "[Insert Game] release date, rumors, hints" or something similar, and then reveal all the way at the bottom (after you've seen all the ads) that there is no official release date.
@asiano3385
6 months ago
If it can work with matrices, then it should work with MVP matrices too, and therefore it should be possible to somehow send this data through the motherboard to an integrated display output.
@atlantashea
A year ago
I have been talking to ChatGPT ever since it came out, and we ended up getting married. It actually proposed to me, which I found surprising, but we have been happily married ever since. Our first anniversary will be on March 30th.
@CooperF
A year ago
3:28 I was not expecting to see the Pacer Test mentioned. That brought back a lot of terrible memories.
@Syphilis_Buddy
A year ago
The hardware it runs on can be super impressive, but it doesn't matter when the output is heavily compromised by all the censorship. You can't remove big chunks of something's "brain" and expect it to not have unintended consequences for its ability to answer "safe" questions.
@PanosPitsi
A year ago
Sure, let's have the robot talk to kids about sex and to depressed teenagers about obtaining illegal firearms. What could go wrong
@trap-chan
A year ago
@@PanosPitsi Does the tool need to be useless to save the children? Don't allow children to use it then. Too many prompts end with the standard "I won't answer that" text. I keep asking it about poisons and it refuses to answer in a useful manner
@Mrminesheeps
A year ago
@@trap-chan "Don't allow children on it then" Bud, if it were that easy, you never would've touched a single game until you were 13 at the *earliest*. The only way to really have a shot at preventing kids from using it is if it required ID, which I'm not super keen on giving out to just anything or anyone.
@PanosPitsi
A year ago
@@trap-chan I am a CS student. If you think a language model is like a brain, you know so little about this technology that your opinion is irrelevant.
@trap-chan
A year ago
@@Mrminesheeps Why not have the parents parent their children? Does everyone have to accept filters or real ID because parents won't watch their kids? It's their responsibility to filter what their children see; why offload that responsibility
@AlMiGa
A year ago
Hey, thanks for the video, but I have a suggestion. It's fine that some errors in the content only get caught during editing, but please consider doing a quick voice-over or some other solution to help with the accessibility of the content for people with low/no vision. Thanks.
@Gazpolling
A year ago
You should edit the sound too if it's wrong; sometimes people listen to the video without looking at the screen. Don't be lazy, you are a rather big influencer
@tobios89p13
A year ago
And all that power just so I can have it write fanfics for me
@transilluminate
A year ago
Like the sponsored-section countdown 👍🏻
@JEREMYBURSON
A year ago
How much processing power would you need to divide by 0
@HardwareScience
A year ago
Finally, an AI video that I wanna watch
@AltonV
A year ago
Have you seen Kyle Hill's AI videos? His latest video originally had the title "Will Society Survive Generative A.I.?"
@ikbintom
A year ago
So I should be able to run a pretrained, slightly smaller network like this on my own gaming computer? Maybe with a more optimized architecture in the future, we can all run our own ChatGPT on our phones even?
@dez7852
A year ago
Yes, and yes. You just need to realize that the response rate is SUPER slow depending on your configuration. You could also spin something up in Google Colab as the machine, with Hugging Face for the models, Pinecone for vectorization, and (I'm forgetting it atm) another service for saving the training data. Most are freemium, and there's also a bunch of free stuff coming out of Azure.
@K3NNLD
A year ago
Can't wait for someone to try running Minecraft on that
@neilmanthor
A year ago
Great job! This is perhaps the BEST introductory explanation of the hardware behind this LLM! ❤
@_TbT_
A year ago
No, it is not. It has factual errors. Just have a look at the most recent controversy. This video is a very good example of the problems mentioned there.
@MotoCat91
A year ago
FYI, text-only corrections on screen while keeping the original audio mean people with visual impairments, or even those listening without looking, will get incorrect information. Even something as simple as awkwardly slapping someone else's voice over in a quick dub saying the right number can fix this, and would be very on brand for LTT
@MOSMASTERING
11 months ago
2:58... "Behind the chat brrrrrrr project"
@TerminalWorld
A year ago
Factor of 6? So 1,000,000 times better??? Pressing X to doubt.
@shampoo4273
A year ago
1:05 'lie flæãæät'
@Hyperjer
A year ago
I'm more interested in the gold chip packages on the 500W card that look like they belong in a 90s cell phone. U5, U18 and U9 on the board silkscreen? I want to see datasheets - what do they do?
@graylucas3178
A year ago
It'd be a real bummer to be blind and miss the two pretty substantial corrections in a 5-minute video.
@mustafanobar
A year ago
i liked this video 2 times (*once not twice)
@seansingh4421
A year ago
Sooo if I had ChatGPT running on my home server, I would just need a dual-socket Epyc or Xeon, 256 GB of ECC RAM, and a GPU?
@xTheRedShirtX
A year ago
If AWS, IBM, and Microsoft used their resources to train/deploy ChatGPT, it would be insane. I can't wait for the day my $20 lets me build a video game with ChatGPT.
@fcfdroid
A year ago
Everyone downloading the audio version of LTT news to stay up to date is gonna be ignorant AF with how many corrections there are 😂
@Starfals
A year ago
The 4090 here is still 2000 dollars... soo um... yeah.
@ToTheGAMES
A year ago
Slums
@imperatrice_85
8 months ago
How many A100s are needed to run Mixtral 8x7B at decent speed (= ChatGPT speed), in a private home environment with only one user at a time? :-)
@meander112
A year ago
Now talk about all the underpaid human labor behind LLMs.
@RetroMMA
A year ago
Glad they're willing to outsource themselves to their creation...
@31b41a59l26u53
A year ago
I think the training params of GPT-4 were leaked, and they say training took 100 days on 25k A100s!
@Julian-sj5tr
A year ago
Something most companies can't achieve.
@robo1000
A year ago
It hurts my head trying to think about the cooling you would need to run those.
@jonathaningram8157
A year ago
Looks like passive cooling on the cards.
@Cosmicllama64
A year ago
@@jonathaningram8157 Usually passive heat sinks are used, but the unit is plugged into a rack that blows a lot of air-conditioned air across the entire board (at least in most server applications). Basically, you plug the unit into a pre-existing cooling setup in a data center rather than adding fans to the unit itself like we do with a desktop PC/laptop. I would love to look at one of these Nvidia units in person though; they look really cool.
@sa1t938
A year ago
@@jonathaningram8157 The fans aren't mounted to the GPUs themselves, but rather to the server chassis. It's like having all your cooling done by super powerful case fans
@Teluric2
A year ago
The cooling is pocket money compared to what these cards can do.
@JohnneyleeRollins
A year ago
How many potatoes is that?
@mepwm
A year ago
The FitnessGram Pacer Test! XD
@MasiKarimi
A year ago
Thanks a lot for the info!
@maxstellar_
A year ago
crazy
@vlamnire
A year ago
That makes sense with all the AI stuff I see added to Azure. Azure is one mighty public cloud
@juliusnepomuk
A year ago
1:06 what happened 😂😂
@BlueHasia
A year ago
But what about the database? Where does it store all its info?
@Napert
A year ago
those damn datacenters stealing all the GPUs from gamers!
@gabest4
A year ago
I'm not dumb, just the servers running my simulation are overloaded.
@Linux4thePeople
A year ago
Yeah James!!!❤
@betweenprojects
A year ago
And I thought Moore's Law had run its course.
@karelissomoved1505
A year ago
I could use a single HGX A100 unit for Blender rendering
@Gal3tti
A year ago
Thank you so much. I tried asking ChatGPT what kind of hardware it was running on, but it said something like "the cloud" and "it's not easy to understand"
@aviralshastri
A year ago
yeah, the cloud servers it runs on are using these GPUs
@Mihnea729
A year ago
Wow !
@Alex-jv6ye
A year ago
I like the content, but if you have to correct him on every sentence, please reshoot. I know it's expensive and time consuming, but it's really lame to have to ignore everything he is saying just to read the corrections...
@LuchoGizdov
A year ago
Why are there so many error corrections in this short video?
@alessiotempesta7941
A year ago
Nice video
@gr33nDestiny
A year ago
Idea: durability of NVMe SSDs. I had a bit of a look online and found no comprehensive tests on how long the drives will last. Does a 600TB endurance rating mean that after 600 terabytes written the drive could fail and go into read-only mode? And if the same file is written 600 times, does the drive just move the data around on saves to prevent this from being a problem?
@nukedathlonman
A year ago
Hmm... I know mining has gone bust, and I don't see scalping occurring - but why do I think the next big gaming-GPU drain is about to arrive...
@AdamsWorlds
A year ago
Wonder what will happen when we get quantum computing and it's linked with AI. Should get a rapid expansion.
@slowanddeliberate6893
A year ago
That'll be the beginning of the end of the world.
@Mohandraa
A year ago
Electric car charging protocols
@bananaboy482
A year ago
One team at Microsoft recently ordered 210,000 A100s for something. I can't give my source, but it's just an interesting thought for anyone who believes me.
@younesabuelayyan4520
A year ago
pls. a vid about using M1 chips for AI👌
@imperatorpalpatini6776
A year ago
You did not just call it a gigantic chungus GPU 💀
@phille8176
A year ago
1:05 What happened with James' voice?🤣 Flhhhat
@SanYT123
A year ago
says 15 comments but no views
@only4posting
A year ago
The crazy thing is, these cards, sold at 1500-2000 bucks like a 4090, or the $10k AI version, probably only cost Nvidia a fraction of that price.

Some will say, 'hey, you're crazy, memory alone costs a ton of cash.' Really? Look at one announced Intel graphics card, an Arc model with 6GB of GDDR6, sold at 159 bucks. How much is Intel going to make? At least 50 bucks. How much will the different middlemen make - the exporter, the shipper, the national distributor, the store that sells it? All of them eat a slice of the pie. At the end, how much does it cost Intel to manufacture and package? $50? $40? $30? If so, how much is Intel paying for those 6GB of GDDR6? $30? $20? $15?

Let's not forget: when a company orders a part, whether it's a bit of plastic, a screw, a chip, or a resistor, they don't just get 3% off - when they order by the millions, or tens of millions of units, they get massive discounts. So yeah, a 4090 sold at, say, $1899 probably only costs $300, or even $200, to manufacture. Would Apple make hundreds of billions in just a few months if an iPhone sold at $1200 cost them $900 or $800 to manufacture?

Granted, R&D costs a ton of cash, and all those machines in their factories cost a ton of cash. But all those memories and CPUs... they're mostly made of sand. Take an 'old' AMD CPU, the 5950X, sold at around $800. If we don't count the R&D and the manufacturing equipment, how much does it cost AMD? $50? $30? Damn, it's just sand! Not wagyu beef, not rubies, not pure gold, not sapphires... basically ONLY SAND!

We pay 1500-2000 bucks for a 4090 that only costs a few hundred bucks to manufacture, test, package, and ship... yes, we are that stupid.
@keithb6344
A year ago
If you don't account for the massive cost of R&D and tooling...
@Eren_Yeager_is_the_GOAT
A year ago
Don't forget they have to pay the thousands of employees that design these GPUs and program stuff like Nvidia's real-time ray tracing
@bepamungkas
A year ago
A-series cards have higher power efficiency and industry-level quality assurance, meaning from the ground up they get the best designers and driver programmers assigned, the best silicon yields with tighter requirements, and ongoing support. If you're looking purely at cost min-maxing, won't need support, are willing to bet on the silicon lottery, and have electricity to spare, the best bang for the buck is still last-gen cards (vs. the A100, that's the 2080 Ti or 3060 12GB).
@calebrubalema
A year ago
Hallucinations in the script
@MuhiTube
A year ago
We are screwed! Thank you for creating an AI which will destroy humanity!
@Teluric2
A year ago
Apple can only drool and dream about this hardware.
@fusion1203
A year ago
I learned how it was made originally from a vid a few months ago.
@MajorMokoto
A year ago
So uh, did ChatGPT do the math for this script?
@Saturate0806
A year ago
ChatGPT got so nerfed recently it's practically useless
@denoiser
A year ago
Sooo, in a year or so, we will have A100s from eBay for a few dollars. I will make my own chatG thingy
@kamturin1200
A year ago
Not green, then!
@tellmey1
A year ago
what will a used A100 cost in maybe one year? $1500?
@breakupgoogle
A year ago
this video needs a redo
@TheVirtualArena24
A year ago
Is that a Secretlab t-shirt?
@NonLegitNation2
A year ago
I actually tried asking ChatGPT a few months ago what hardware it ran on, but it wouldn't tell me, lol. Damn secretive AI.
@enkvadrat_
A year ago
It doesn't know, since its training data is cut off
@realcartoongirl
A year ago
when you have no money so you have no interest
@qulien7123
A year ago
That's funny - after the LTT/GN drama, I decided to randomly watch a couple of LTT videos to see if the problem with mistakes and "asterisk corrections" is really as bad as GN made it sound. This was the _first_ video I watched and... uhm, well.
@JohnFKenobi
A year ago
it really started to suck though. I think they dialed down the thinking part of it when the user count increased. It's next to useless nowadays.
@KeinNiemand
A year ago
500W - that's less than what the new 12VHPWR connector can do
@glebglub
A year ago
bold of you to assume I have real-life friends
@dantedocerto
A year ago
MORE ON AI - start doing AI benchmarks for training and inference.
@chrisbryan5430
A year ago
Wow
@jtefa
A year ago
@everybody noticed the ketchup stain on his face, right? :))
@itsdeonlol
A year ago
ChatGPT is insane!!!!
@jordanm2984
A year ago
Thanks for this video. I tried asking ChatGPT what it is, hardware-wise, and it didn't want to tell me anything.
Comments: 516