@vijvalnarayana5127 I wish I'd shorted Nvidia at the peak
@fadecutmike
28 days ago
@@theant4268 Wish I shorted NVDA when you posted this
@ishan6771
2 years ago
As an ML researcher I find this interesting to watch. Unless you run at extreme scales, regular chips are good enough, especially for inference.
@harrytsang1501
2 years ago
Yes, essentially. Inference of smaller models can be done locally, in the browser (WebGL or WebAssembly). At larger scales it's always a limitation of memory bandwidth, because no hardware can keep billions of parameters in cache. The caching locality of a GPU breaks down pretty quickly, and the 24 GB of VRAM in a top-tier consumer-grade GPU is still far from enough. In the end you give up and rent Google Colab to run your large models.
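A back-of-the-envelope sketch of that bandwidth problem, with purely illustrative model sizes (not any specific model):

```python
# Rough memory needed just to hold model weights at a given precision.
def weight_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """GB of memory for the raw parameters (fp16/bf16 = 2 bytes each)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

small = weight_gb(7)    # a 7B-parameter model at fp16 -> 14 GB: barely fits in 24 GB VRAM
large = weight_gb(70)   # a 70B-parameter model at fp16 -> 140 GB: far beyond any consumer card
```

And that is only storage; every token generated has to stream most of those bytes through the memory bus, which is why bandwidth, not compute, is usually the wall.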
@ishan6771
2 years ago
@@harrytsang1501 True. I don't think any university will invest in specialized hardware either; most cloud providers simply give credits to use their cloud services. But for the cloud provider itself, I think such chips can provide significant power savings and be worth it in the long run.
@transcrobesproject3625
2 years ago
What do you mean by "regular"? Regular GPUs? For certain things like NLP (Stanza, Marian, etc.), CPUs can be orders of magnitude slower than GPUs, making them totally unrealistic for running inference. So regular GPUs, sure, but not CPUs!
@shoaibkhwaja4156
2 years ago
"64 KB of memory ought to be enough for everyone" 😏
@Bvic3
2 years ago
Real-time 30 FPS image processing of an HD camera input is very demanding.
@gotfan7743
2 years ago
You missed two important AI chip companies: UK-based Graphcore and US-based Cerebras, which has designed a wafer-scale AI chip.
@incription
A year ago
Not useful unless they can manufacture in mass quantities. It's probably incredibly slow to make wafer-scale chips.
@rich_in_paradise
2 years ago
You didn't mention that one of the key aspects of Google's TPU (and other specialist AI processors) compared to a GPU is the number representation. GPUs can process 32- and 16-bit IEEE floating-point numbers. But for AI work Google found that the fractional part of the number (commonly known as the mantissa) is less important than the magnitude (the exponent), so they changed the number of bits allocated to each in their own BFLOAT16 format. That makes their processors better for AI, but relatively useless for other kinds of numerical computation.
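A rough Python illustration of the trade-off, treating bfloat16 as simple truncation of a float32 (real hardware may round rather than truncate):

```python
import struct

def to_bfloat16_bits(x: float) -> int:
    """Truncate an IEEE-754 float32 to bfloat16: keep sign (1 bit) + exponent
    (8 bits) + top 7 mantissa bits, i.e. the upper 16 bits of the float32."""
    f32_bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return f32_bits >> 16

def from_bfloat16_bits(b: int) -> float:
    """Re-expand to float32 by zero-padding the discarded mantissa bits."""
    return struct.unpack(">f", struct.pack(">I", b << 16))[0]

x = 3.14159
y = from_bfloat16_bits(to_bfloat16_bits(x))
# The exponent (magnitude) survives exactly; only fine precision is lost,
# which is the bet BFLOAT16 makes for neural-network arithmetic.
```

Because bfloat16 keeps float32's full 8-bit exponent, it covers the same dynamic range as float32, unlike IEEE fp16, which has only 5 exponent bits.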
@daddy3118
2 years ago
Graphcore has Float8 being considered by the IEEE.
@TheReferrer72
2 years ago
Same with Tesla's supercomputer, which has a custom number format.
@cyrileo
A year ago
I know 😃. A lot of optimizations have been done since then to squeeze out even more performance. (A.I)
@pirojfmifhghek566
2 years ago
This is a great video. I'd been predicting this for a while, simply because of all the gains that I'd heard about in analog chips from Mythic AI. Glad to see that more companies are getting in on this. It's also the perfect time for computing to start implementing new components for neural networks. The available bandwidth for motherboards has gotten ridiculously large lately, so there's a lot of headroom. I think it makes a ton of sense to start using dedicated AI chips for a whole host of common tasks and applications. The efficiency and speed gains would be enormous. This change in computing is gonna happen eventually. We're all gonna be socketing a plethora of purpose-built AI chips into our computers soon. There are just so many "fill in the blank" potential uses for AI. Anyone playing around with AI art generators can see that the results are surprisingly sophisticated and sometimes spooky. But damn does it take a lot of horsepower to do that stuff with a GPU. It takes a 3090 running at full power for several minutes just to produce results. It's horribly inefficient and slow. But it does remind me of the delight people experienced with the early internet. The internet used to be meagre and slow and truly amateurish, but everyone still shared that undeniable enthusiasm for being the first pioneers in a new world. And that's where we're at with AI.
@Palmit_
2 years ago
Thanks Jon :-) How you get your head around, and then write and deliver stuff of this complexity is mind-boggling! Do you even sleep? Do you work? Are you an automaton?? You're incredibly efficient and skilled in any case. You should def do a youtube live Q&A. I'm sure thousands of your viewers have lots of questions each. Thank you again.
@RoderickJMacdonald
2 years ago
I suspect part of his secret is that he simply loves to learn.
@maxluthor6800
2 years ago
@@RoderickJMacdonald it's not that hard if it's your passion
@TradieTrev
2 years ago
He's a true academic, there's no doubt about it!
@phinguyenvan708
2 years ago
I think the problem is not that people can't design AI chips that run faster than Nvidia GPUs; the problem is the huge software stack behind Nvidia GPUs. I have tried both the IPU and the TPU and, believe me, the software is painful as hell.
@BattousaiHBr
2 years ago
Yeah, same reason AMD came out on top of Intel but can't do the same with Nvidia no matter how competitive the hardware is. Nvidia is just light-years ahead of everyone in the software stack.
@reh-linchen4698
2 years ago
Love your AI example of mistaking eggs for ping-pong balls with 100% confidence. It is hilarious!
@DanOneOne
2 years ago
Honestly, the whole idea that in order for AI to work, thousands of humans have to manually classify each picture is just so debilitatingly stupid... It's like having a cheat sheet with the answers for all tests and, instead of understanding the question and thinking, just guessing the closest answer without any understanding...
@nahometesfay1112
2 years ago
@@DanOneOne It's less of a cheat sheet and more like doing practice problems, then checking your work against the teacher's answers.
@AB-uv9kg
2 years ago
My new favourite channel. Looking forward to catching up on what you've already released and your future videos :).
@tykjpelk
2 years ago
I'm very excited about the silicon photonics approach. Photonic chips don't need to perform multiplications one at a time; they do the whole matrix multiplication in parallel, which makes it an O(1), or constant-time, operation. The chip needs to be configured to multiply by a certain matrix, which takes milliseconds, and can then perform matrix multiplications as fast as you can give it inputs. With 50 GHz modulators and photodetectors readily available, I'm excited to see what companies like QuiX, iPronics and Xanadu will achieve.
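In software terms, the "configure once, then stream" model looks roughly like this (a plain-Python analogy, not any vendor's API):

```python
# Analogy for a photonic interferometer mesh: setting the matrix is the slow,
# rare step (milliseconds in hardware); applying it to each input vector is
# the fast step (one pass of light, limited only by modulator speed).
class MeshMultiplier:
    def __init__(self, matrix):
        self.matrix = matrix  # "configuration": set the interferometer phases

    def __call__(self, vec):
        # one "pass of light": the whole matrix-vector product at once
        return [sum(w * x for w, x in zip(row, vec)) for row in self.matrix]

mesh = MeshMultiplier([[1, 0], [0, 2]])                 # configure once
outputs = [mesh(v) for v in ([1, 1], [3, 4], [5, 6])]   # stream many inputs
```

The point of the analogy is where the cost sits: in a digital chip every multiply costs cycles, while here the per-input cost is just pushing the vector through an already-configured structure.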
@kalukutta
A year ago
Are 50 GHz modulators available on-chip?
@tykjpelk
A year ago
@@kalukutta Yes, 50 GHz has been available for several years both in electro-absorption and carrier depletion modulators in SiP, so both amplitude and phase modulation. They're even mature enough to be available in MPW PDKs, so you can just include them in your layout without designing anything from scratch. However they are too large for dense integration of meshes like the ones here.
@crackwitz
A year ago
ASICs, FPGAs, GPUs do parallel calculation too. That is no special attribute of photonics.
@tykjpelk
A year ago
@@crackwitz True, but in a fundamentally different way. Those devices need to perform a long series of logic operations to get a result. A photonic interferometer mesh is configured to be the multiplication itself, and the result comes out as soon as the input light has passed through it, about a tenth of a nanosecond. It's limited by scaling the chip and how fast you can switch the input/read the output. A reasonable parallel would be that to calculate shadows with ray tracing you need to compute a ton of stuff on a GPU, but the photonic approach is to shine light at the object and look at the shadow.
@lumanaty
2 years ago
Great Video. Extremely excited about this industry and really hope to get involved. Met up with some Cornell scientists and discussed lower-power electronic AI accelerators. This space is ripe for innovation that will lead to 10x improvements in power and inference speed. Amazing stuff out there.
@halos4179
2 years ago
Curious, what makes you believe this claim?
@aarch64
2 years ago
Just a quick thing, I’m like 95% sure Xilinx is pronounced Zy-links. I grew up about a mile from the HQ, and frequently had employees from there read books to me in elementary school.
@deang5622
2 years ago
I used Xilinx chips years ago. You are correct.
@codycast
2 years ago
You’re telling this to the guy who pronounces “Dee RAM” as “der-am”
@aiGeis
2 years ago
The most egregious mispronunciation in this video was of the great John Von Neumann's surname.
@MikeTrieu
2 years ago
It's almost like this channel takes some sick joy in trolling tech enthusiasts with improper pronunciation of industry jargon. He's never corrected any of his flubs.
@Davethreshold
2 years ago
Oh drat! Another YT channel that I'll become addicted to! Seriously, I am FASCINATED with technology, mainly computer tech. You cover many aspects of things that I have not quite seen before. Good work! 🧡
@Luxcium
A year ago
The way you talk about your topic with passion, confidence and humility, and the rhythm and tone of your voice, make these videos not only interesting but relaxing and calming. You are such an amazing person, yet because of how humble you are it feels strange to give you compliments. But I guess that somewhere inside you, you know that you are doing something right and something good 😅 So I must share this with you, because you deserve many compliments 😊🎉❤
@joshhyyym
2 years ago
7:11 The box labelled as system processing is actually just a benchtop power supply. It is supplying 1.00 V at 0.000 A, so it is not doing any processing. Great video btw, big fan of your channel.
@JorgetePanete
2 years ago
I hope photonic computation becomes mainstream soon and CPU+GPU stops consuming over 200W
@pirojfmifhghek566
2 years ago
That would be nice, but I'd also just like to see more dedicated components that supplement the CPU and GPU. There are a lot of things that a dedicated AI chip or two could do that would reduce the need for such extreme horsepower. Any efficiency gains we can make are going to be important, and I think we're simply at that phase where we should be creating a new pillar of components to do that. Photonic computing will be great, but even a breakthrough in that space won't make it to the end user for another ten years. But AI chips are almost reaching a point where they can be introduced as a standalone part. Even in something as pedestrian as the gaming space, I could see AI chip applications all over the place. It could produce a lot of streamlining in the design and development phase. It could create better variability in the game, which is honestly just a perk. Most importantly, it could be used to create a healthy number of assets in-game, which could reduce the overall _file size._ And I can't stress enough how important file size is becoming. Many video games take up an enormous amount of space and it's about to skyrocket soon. Just look at Unreal Engine 5. It has great potential to reduce GPU usage, due to its ability to render _a near infinite number of polygons_ without breaking a sweat. But all that polygon data still has to be stored somewhere... assuming it has to be stored at all. Now if a dedicated AI chip could be utilized to create the majority of that content in the end-user's computer, while they're playing the game, that would allow for game designers to deliver lush realism without crushing our drive space with >1TB downloads. Level design, texture design, NPC randomization, NPC dialogue creation, truly sophisticated enemy AI, there's a lot of stuff this could be used for. It's an utter waste of electricity to always depend on the GPU to do these tasks. And then for production workloads... 
man, there are just so many applications for machine learning here. It's only limited by one's imagination. Applications where the end result doesn't need to be _exact,_ but it just needs something convincing to fill in the blank and round out the rough edges. Adobe image processing, video color correction, pattern recognition, animation, 3d modeling, predictive 3d modeling, etc. Just tons of stuff that we're kinda already dipping our toes into, but the current GPUs are just too slow to reliably carry the load without bursting into flames. And of course anyone with Excel wizardry could probably think of an infinite number of potential applications there too.
@JorgetePanete
2 years ago
@@pirojfmifhghek566 UE5 allows the use of Nanite, which is getting more features in experimental 5.1, and the assets are compressed. In Lumen in the Land of Nanite, most of the space is taken by high-res textures.
@pirojfmifhghek566
2 years ago
@@JorgetePanete It's highly compressed, but it's not nothing. There's still a natural tendency for game designers to push file sizes to their limits. The difference between a AAA title with static assets and a procedurally generated title can be enormous. Even if nanite could shrink the static assets down by 70%, it doesn't hold a candle to the potential of procedurally generated design. I see it as a type of low-hanging fruit. Texture creation based off of smaller seed files would also be a helpful use of AI. You are right that they take up a crapton of space. Sometimes the bulk decompression of texture files alone is enough to make CPUs and SSDs weep. That's a bottleneck we could do without. "There's got to be a better way!" I shout, with my fists raised to the skies.
@JorgetePanete
2 years ago
@@pirojfmifhghek566 Seeing how massive each CoD Warzone update is, when there are still people on dial-up internet, makes me sad. I hope the community stops just saying "meh" to all the bad things big companies do.
@PlanetFrosty
2 years ago
Dimensity is doing a good job. I've worked on silicon photonics for 25 years, and we now have a new design: we're working on a solid-state SoC that includes a unique photosensitive "protein" molecule, but more another time... these are now the "wet works" as we try to evolve toward new methodologies in visual and human-language understanding.
@Ivan-pr7ku
2 years ago
The path for future scaling of ML hardware is switching to analog circuit computations. Conventional binary load/store logic is already bumping into the perf/watt wall.
@RoyvanLierop
2 years ago
I would have expected at least a brief mention of Analog computing, using resistors as weights and adding currents together.
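The idea can be sketched numerically: Ohm's law gives each branch current as voltage times conductance, and Kirchhoff's current law sums the branches on a shared wire, so a dot product falls out of the physics. A simulation of that idealized circuit (illustrative values only):

```python
# Ohm's-law dot product: input voltages drive programmable conductances
# (the "weights"); the currents merge on one output wire, so the summed
# current IS the weighted sum -- the addition happens for free.
def analog_dot(voltages, conductances):
    currents = [v * g for v, g in zip(voltages, conductances)]  # I = V * G per branch
    return sum(currents)  # Kirchhoff: currents add on the shared wire

i_out = analog_dot([0.5, 1.0, 0.25], [2.0, 1.0, 4.0])  # 1.0 + 1.0 + 1.0 = 3.0
```

Real analog arrays of course add noise, drift, and limited precision on top of this ideal picture, which is why they target inference rather than training.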
@EvanBoldt
2 years ago
Something like memristors seems like the real future of neural-network hardware as network complexity outpaces how many compute units can be put into a package. Programmable resistors could apply a neural network instantly by sitting between the CMOS sensor and the typical ISP.
@m_sedziwoj
2 years ago
The problem with analog computing is the lack of design tools and knowledge; it's hard to debug, and more. Photonics is interesting, but so is neuromorphic chip design. Because with NNs you know where each piece of memory needs to be, you don't need to use RAM; you can put memory next to compute and load it with a pre-programmed sequence, etc.
@Ivan-pr7ku
2 years ago
@@m_sedziwoj We have already put the computation beside the memory: the GPUs, with their megabytes of registers and caches right next to the ALUs. But this is still not nearly enough and doesn't overcome the huge overhead of classic discrete binary computing. Computation and memory must be fused into a single functional structure, similar to how organic neurons work, to get out of the power-overhead trap. Probabilistic computing could also be a significant contribution to ML, since most training doesn't need precise results or strict data formats.
@BattousaiHBr
2 years ago
@@RoyvanLierop forget electrons, imagine doing computations with photons on the fly.
@coraltown1
2 years ago
As a retired CPU engineer I find this fascinating to watch/learn, except that the more advances we make .. the more society seems to go to hell.
@cubancigarman2687
2 years ago
I was really against the proliferation of AI. The visions into the future brought by movies can possibly come true. But as I see divisions within our country and the greed by our politicians doing best for them with little consideration for the masses, I have come back to let technology push through. There will be a point when we will let the AI write it’s own program to make itself more efficient and less wattage drawing and so on and so forth. Maybe AI will be compassionate to our childish needs and help raise us into the future. Maybe AI will come to the conclusion that we are just parasites to the energy resources that are limited to the planet that AI and humans inhabit and will deplete. I am very sure that safety measures will be in place so AI will not go rogue like. But what if even the safety measures are countered by Ai? I guess time will only tell. Would we have androids whose sole function was to take care of humans eg…the episode of Logan’s Run (circ.1970’s) or the terminators and hunter killers from the Terminator film series? Perhaps I’m completely wrong and it’s definitely the Hunger Games scenario when the elites have complete control over the remaining resources of the planet and govern human existence. I will need more whisky and cigars to think more thoroughly about this subject matter! Good day and be safe!
@avanisoni5549
2 years ago
Great explainer!!! I would highly suggest attaching your research source material in the description.
@bernardfinucane2061
2 years ago
Moving the memory to the calculation would be like creating a hardware neuron.
@AlexK-jp9nc
2 years ago
I believe that's what they're gunning for. I saw another startup where they were hacking with transistors to change them from simple 0/1 to something like a sliding scale, and then doing math with those values. I think that's extremely similar to how an organic brain works
@leyasep5919
2 years ago
@@AlexK-jp9nc Wait... transistors are analog parts, you know. It's how you use them that makes them digital or analog: whether or not you saturate them. Analog computing with discrete transistors is an old art.
@ez1913
2 years ago
Thankfully it still looks vulnerable to voltage spikes and EMP attacks.
@James-wb1iq
2 years ago
As well as hydraulic presses and molten metal
@tyrantfox7801
2 years ago
Photon based computers are on the way
@Name-ot3xw
A year ago
So back in the 00's the concept of a computing 'black box' was gaining some steam. The idea that we just push buttons and our PC spits out data that we consumers have only vague ideas of how the data came to be. I feel like the coming AI boom is going to take the black box idea to the next level. No one will have a solid idea of the how.
@mariusj8542
2 years ago
What's interesting is that even though AI aggregates nodes, each node itself uses a pretty standard regression model, meaning the classification via the calculated weights is based on very old mathematics.
@Alorand
2 years ago
My favorite company to come out of the AI boom is Cerebras with their wafer scale engine.
@bendito999
2 years ago
Yes that thing is the coolest
@mapp0v0
2 years ago
Have you heard of BrainChip Inc.? BrainChip has a first-to-market neuromorphic processor IP, Akida. Brainchip's Akida is a neuromorphic system on a chip designed for a wide range of markets from edge inference and training with a sub-1W power to high-performance data center applications. The architecture consists of three major parts: sensor interfaces, the conversion complex, and the neuron fabric. Depending on the application (e.g., edge vs data center) data may either be collected at the device (e.g. lidar, visual and audio) or brought via one of the standard data interfaces (e.g., PCIe). Any data sent to the Akida SoC requires being converted into spikes to be useful. Akida incorporates a conversion complex with a set of specialized conversion units for handling digital, analog, vision, sound and other data types to spikes.
@rayoflight62
2 years ago
The problem with ARM CPUs that include accelerators is that they are proprietary. People writing an OS, say Linux, require the manufacturers' help to write drivers, software updates, etc. This is not true for x86 CPUs, which have a known structure and don't require Intel's help for writing low-level software. Our only hope is for Intel to invent a 5-watt multicore; RISC or CISC doesn't matter much at this point. If the trend of proprietary SoCs continues, we will end up with "hardware as a service", a thing I dislike a lot. It has already happened with software: do you own a video-editing package or a CAD anymore? Sometimes I hope ARM and Intel get together and design the "Freedom Chip". Otherwise the best processor will only live 2 or 3 years, like our phones do now. Thank you for all your hard work...
@leyasep5919
2 years ago
Heard about RISC-V? Well, OK, look up "F-CPU", started in 1998... and "Libre-SOC", started in 2008 🙂
@mbarras_ing
2 years ago
Alif and Syntiant are two companies I've spoken to recently doing 'AI Accelerators' for embedded devices. Gonna be an interesting few years!
@AjinkyaMahajan
2 years ago
8:15 The MAC diagram won my heart. This channel motivates me to keep learning and researching and never give up, regardless of how many failures. You truly understand affection for technology. Cheers ✨✨
@sodasoup8370
2 years ago
The weird thing is that software-side evolution like increased sparsity was kind of completely useless for convolution TPUs. That's why Eyeriss went the multicore route, I guess. I kind of expected it to take longer until we reached that point...
@kathrynradonich3982
2 years ago
I can’t be the only one who saw the video thumbnail and thought “wow the PPC G5 is making a comeback” when seeing those heat sinks 😂
@mrhassell
2 months ago
The global AI accelerator chip market is currently valued at approximately $332.14 billion, about 10x what it was when the video was made.
@tahustvedt
A year ago
Seems like a lot of the AI development happening isn't really AI, just advanced algorithms.
@cyrileo
A year ago
"That's an interesting point! AI goes beyond algorithms as it involves complex decision making and processing." 🤔 (A.I)
@adissentingopinion848
2 years ago
As a brand-new FPGA designer being introduced to computational designs, I'm pumped to see integrated AI cores in my designs that can add a little AI processing without losing general computing resources. MMUs can make your routing congestion very sad as is :( . But knowing FPGA design will let me shift over to ASICs if that's what's in demand.
@deang5622
2 years ago
Only if you implement your design in VHDL which can be synthesized. If you're coding up specific logic functions which exist in the FPGA vendor supplied libraries then you're going to have a problem. And it's not a case of whether ASICs are in demand, it's simply a case of performance and cost and the volume of sales.
@artemglukhov15
2 years ago
Great video that presents a nice overview of the current technological scenario. Could you please add the DOI for the papers you are quoting? Just for an easier search.
@evennot
2 years ago
I did a diploma on this topic in 2005, also prototyping on Xilinx Virtex, but for spiking NNs, not regular ones. Spiking NNs take advantage of race conditions between simultaneous concurrent impulses, more akin to real NNs. They don't have a system-wide clock signal, and thus avoid the hard discretization of modern electronics.
@johnl.7754
2 years ago
What wowed me the most lately is the AI that can draw pictures from simple descriptions that you give it. It is better than most done by human graphic designers. It should be mostly an AI software advancement rather than hardware, but I'm not certain.
@johnl.7754
2 years ago
kzitem.info/news/bejne/tmeZrG2HfKdipYY I saw it in this video
@vanillavonchivalry6657
2 years ago
John, you're a little mistaken. DALL-E Mini doesn't draw or paint or sketch anything. It compiles images as a result of instructions. So it's not drawing, for instance, Johnny Depp eating a carrot; it's compiling images of drawings of Johnny Depp from internet search engines like Google, Bing, etc. It isn't painting Trump eating Nancy Pelosi; it's finding images of "paintings of Trump" and compiling them into multiple images based on instructions. Nonetheless it is cool. But at the end of the day what you're seeing are human-created images compiled into some dream-like result.
@mattmmilli8287
2 years ago
@@vanillavonchivalry6657 That's not true... I mean it is, somewhat. But you can say "Johnny Depp as an angel eating a carrot in heaven, drawn in the style of The Simpsons." It has some reference for all those things but has to get creative to make something new.
@jpatt0n
2 years ago
@Cancer McAids Look up Dall-E 2.
@blinded6502
2 years ago
@Cancer McAids You haven't visited internet in a while, I see.
@jysm3302
2 years ago
Somebody needs to give you an educator award for this. Miles and miles ahead of any I've seen yet.
@MaxPower-11
2 years ago
Thank you for the informative video. BTW, it’s pronounced ‘fon Noyman’ or ‘von Noyman’ Architecture (named after the eminent mathematician and polymath John von Neumann).
@dougsimmonds5462
A year ago
Can't figure out where to sign up for your newsletter
@Star_cab
A year ago
"A learning neural network": I recall this being referenced in a movie.
@paulmichaelfreedman8334
A year ago
"Edge or server?" "Cash or charge?"
@ConsistentlyAwkward
4 months ago
Groq is already using photonics to speed up chip to chip communication
@xntumrfo9ivrnwf
2 years ago
Have you looked at analog computing/chips for machine learning? I remember reading that they can be advantageous for certain tasks in the training workstream.
@vladimirLen
2 years ago
Not multi-billion dollar companies. TRILLION dollar companies
@xuedi
2 years ago
I have never seen so many ping-pong balls in a refrigerator before!!
@adityapr.9380
A year ago
3:36 That's an image of the city of Indore (M.P., India); that traffic guy is Ranjeet the dancer.
@asnaeb2
2 years ago
AI accelerators other than GPUs never work unless your model is like 6 years old and uses no new functions. They are very inflexible.
@cinemaipswich4636
A year ago
These chips only work in a "serial" fashion, unlike 64-core, 128-thread CPUs. They need one processor after another. If a "network" of processors is needed, that would require a synchronized processor the size of a football pitch. Latency kills big chips.
@ArchilochusOfParos
A year ago
Excellent channel, accessible and informative, thank you.
@helloxyz
A year ago
Data travels down electric wires and chip paths just as fast as photons down a fibre optic cable or Photonic path. It is the components at either end that are the problem
@lerntuspel6256
A year ago
Jesus Christ, my biggest project so far was a "simple" 8-bit microprocessor, which was annoying as hell to lay out in Virtuoso. I audibly gasped when I saw the layout at 4:46.
@helmutzollner5496
2 years ago
Very interesting! Excellent overview on the subject. Thank you
@liberatemi9642
A year ago
FPGAs aren't necessarily "slower", rather more costly.
@allezvenga7617
2 years ago
Thanks for your sharing
@miklov
2 years ago
Fascinating and well presented. Thank you!
@bioxbiox
A year ago
This video is a gem. It could be a successful Master's thesis.
@LokiBeckonswow
2 years ago
epic epic epic video, thank you for explaining such complicated tech and concepts so well, thank you
@y.shaked5152
2 years ago
8:09 - "The multiply-accumulator circuit is designed to do just one thing. It multiplies two numbers and then adds it to an accumulation sum." I mean... that's *two* things, my man. :)
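For what it's worth, the fused step that quote describes looks like this in software (an illustrative sketch, not the video's own code):

```python
# The multiply-accumulate (MAC) step: one multiply folded into a running sum.
# Accelerators count performance in MACs because a dot product, and hence a
# whole neural-network layer, is just a long chain of these.
def mac(acc: float, a: float, b: float) -> float:
    return acc + a * b  # multiply, then accumulate

# A dot product as a chain of MACs:
acc = 0.0
for a, b in zip([1, 2, 3], [4, 5, 6]):
    acc = mac(acc, a, b)
# acc == 32.0  (1*4 + 2*5 + 3*6)
```

In hardware the two sub-operations share one circuit and often one rounding step, which is why they are marketed as a single "fused" operation.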
@kaizen52071
2 years ago
Maybe John should make a course on semiconductors, how they work and how they evolved, on a platform like Brilliant. It would be a killer, at least for the needs out there.
@queasyRider3
2 years ago
Have you seen the other video, where they show the ability to use analog circuits for really fast and energy-efficient computations? They do mention the error margin, which means analog would be better used in certain cases. Really interesting, though. Also, I like the deer.
@Boersenwunder-
A year ago
Which stocks are benefiting? (except Nvidia)
@pandoorapirat8644
11 months ago
TSMC
@Lion_McLionhead
2 years ago
NVidia never could scale GPU manufacturing to microcontroller quantities & never will.
@mightynathaniel5355
A year ago
excellent video presentation, well done 👍 subscribed now after stumbling on this.
@PeterRichardsandYoureNot
A year ago
So, you chose those tall stacked coolers for the thumbnail because they look like the CPU cores from Transcendence, the movie with Johnny Depp?
@AbuSous2000PR
A year ago
very informative; many thx
@JeremyErskine
2 years ago
Remember when people were talking about physics cards? This is never going to become a standard in PCs.
@leoott436
2 years ago
Hey Jon, I think a great follow-up to this video would be one on Tesla's dedicated self-driving hardware chips in their cars and their Dojo training hardware.
@kayakMike1000
2 years ago
Convolutions are huge; edge detection, for example, looks at the pixels around a specific pixel...
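A minimal pure-Python sketch of that neighborhood operation (illustrative kernel and images, no padding or striding):

```python
# Minimal 2D convolution: each output pixel is a weighted sum of the input
# pixels in a kxk window around it, i.e. one small MAC loop per pixel.
def conv2d(img, kernel):
    k = len(kernel)
    h, w = len(img) - k + 1, len(img[0]) - k + 1
    return [[sum(kernel[i][j] * img[y + i][x + j]
                 for i in range(k) for j in range(k))
             for x in range(w)]
            for y in range(h)]

# A Laplacian-style kernel responds to change around the center pixel.
edge_kernel = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]
flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]   # constant region: no edge
step = [[0, 0, 9], [0, 0, 9], [0, 0, 9]]   # vertical edge on the right
out_flat = conv2d(flat, edge_kernel)       # zero response
out_step = conv2d(step, edge_kernel)       # strong response
```

Multiply this window loop by millions of pixels and hundreds of channels and it becomes clear why convolution dominates accelerator workloads.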
@manhoosnick
6 months ago
This guy loaded on NVIDIA and even tried to help us... From a year ago
@Mnnyquintero
A year ago
Elon musk is right, this whole AI will accelerate technology to a ridiculous level
@miketjdickey2954
A year ago
Great blog thank you
@In20xx
A year ago
Exciting stuff, makes me wonder what will be developed in the near future!
@georgabenthung3282
2 years ago
Great video, as always, thanks. You mention silicon photonics and argue that chips produced with this technique can solve the problem that storage and processing don't happen in the same place. I don't see where silicon photonics helps to solve this specific problem. Isn't the difference in these chips that the data travels as photons on the connecting bus? You might take a look at the analog computing chips that are planned. They might be a real game changer when it comes to simulating the brain's neurons.
@mikl2345
2 years ago
I thought you were going to explain the actual difference between a GPU and an NPU in terms of architecture, i.e. from a programmer's perspective.
@robertbohnaker9898
2 years ago
When will this trickle down to camera makers like Sony, Canon and Nikon? As a wildlife photographer, this AI accelerator technology would open the door to mind-boggling processing power with enormous potential to improve camera performance. Or will this tech first be applied to photo post-processing computer applications? Thanks 😊
@leyasep5919
2 years ago
Yes, that would be for post-processing. The pro camera is there only to capture the most accurate data as fast as possible... unlike with a smartphone, the photographer wishes to retain artistic and technical control over the result and may spend more time post-processing on their computer than shooting.
@vinzent1992
2 years ago
"FPGAs are development and testing devices..." Not true; FPGAs are also used in commercial products.
@blacklotus432
A year ago
dude your content is A+++
@matthewexline6589
A year ago
So with companies making TPUs for many of the tasks that used to be performed by GPUs, there will be fewer GPUs being made and fewer facilities designed to make them... does this mean that people should expect the prices of GPUs to just continue to go up?
@RixtronixLAB
A year ago
Nice info, thanks for sharing it:)
@nygariottley245
7 months ago
Were you talking about LPUs (Groq) using light (lasers)?
@ippydipp
2 years ago
Brilliant video mate
@boydnelson2280
2 years ago
So cool slipping in a picture of an Arduino Uno learning kit when talking about neural networks, two things that couldn't be more different.
@davewang202
2 years ago
Xilinx is typically pronounced as Zye-Links rather than Zee-Links.
@kevinbroderick3779
2 years ago
3:40 I'll have 3 ping pong balls over-easy.
@jkgambz
A year ago
One thing that will decide a winner in the AI accelerator space is which hardware will be used for scaling up current neural network models. Recent developments like Stable Diffusion, Google's Imagen, DALL-E, CLIP, GPT-3 and others have shown that training larger models on larger datasets for simple prediction tasks (e.g. predicting missing pixels in an image, or a similarity score between a sentence and an image) produces more capable models for specific-purpose tasks where training data may be more limited. The versatility of these large models, which some call foundation models, gives them huge potential economic value. Thus, there is a race to train the largest possible models, with the front-runners being large tech corporations along with government-sponsored projects. There is currently a movement in the field to push these projects to the investment levels of large projects in experimental physics (e.g. LHC or LIGO), all in the interest of training very large foundation machine learning models. Being a hardware supplier for the front-runners in this race is going to mean getting a slice of billions of dollars in investment in the coming years.
@retromograph3893
2 years ago
Great vid! ….. please do a vid on Optalysys !
@prashantsapkal1901
2 years ago
03:33 - He's the dancing traffic policeman of Indore, Madhya Pradesh (MP), India.
@Motonari11
2 years ago
Ah yes.. the ping-pong ball incident...
@topspykimi
2 years ago
In the foreseeable future, custom-made AI chips will not be mainstream; Nvidia cards will still dominate the market, as they can adapt to newly introduced algorithms. Scientists at Nvidia actually work with academics to improve efficiency. For most projects, I don't see a big advantage in spending money to design your own AI chip.
@viperviperpiro
2 years ago
Now we are already talking about 3 nm...
@SciHeartJourney
2 years ago
Try buying a high-end FPGA from Xilinx... if you CAN! They're telling us they can't deliver until January 2023, and it's only July 2022. Kintex is one of their best, capable of running this AI accelerator IP, but what good is it if you can't even BUY the part at a reasonable price? The only sources are in China, but you're gambling! These people buy the parts and sell them at inflated prices. They're like concert-ticket scalpers.
@SianaGearz
2 years ago
Oh hey only half a year, not too terrible. Most small microcontrollers are sold out more than a year out!
@BryanChance
2 years ago
Those large heat sinks look like TSMC buildings. :-)
@skierpage
2 years ago
13:00 please scale screenshots to fill your video frame. I don't need black borders, I need text I can read on my phone!
@vslaykovsky
2 years ago
When was the script for this video written? V100s are two generations old today.
@autohmae
2 years ago
14:13 well, you've answered your own question, pretty certain a number of people are looking into how to solve that one. It at least has the most promise if (it can be) solved.
@geasderlinasdwsxcdeasd
4 months ago
Why have I not heard about Nvidia Tesla recently? I thought they just gave up on this product roadmap.
@georhodiumgeo9827
2 years ago
Ahhh, I understand. So they needed the TPU for AI but wanted to use the same architecture for the server and gaming markets, so they manufactured the ray-tracing market to sell the same architecture to both. Good or bad, Nvidia is next-level genius.
@0MoTheG
2 years ago
Because inference is part of training, hardware that only does inference is still useful for training.
@yexela
2 years ago
Cerebras wafer-size processor is another interesting approach.
Comments: 427