To try everything Brilliant has to offer, free, for a full 30 days, visit brilliant.org/ArtemKirsanov/. The first 200 of you will get 20% off Brilliant’s annual premium subscription.
@KnowL-oo5po
A year ago
your videos are amazing you are the Einstein of today
@RegiJatekokMagazin
A year ago
@@KnowL-oo5po Business of today.
@ironman5034
A year ago
I would be interested to see code for this, if it is available of course
@muneebdev
A year ago
I would love to see a more technical video explaining how a TEM transformer would work.
@waylonbarrett3456
A year ago
I have many mostly "working" "TEM transformer" models, although I've never called them that. This idea is not new; just its current synthesis is. Basically, all of the pieces have been around for a while and I've been building models out of them. I don't ever have enough time or help to get them off the ground.
@jonahdunkelwilker2184
A year ago
Yes, same, I would love a more technical video on how this works too! Ur content is so awesome; currently studying CogSci and I wanna get into neuroscience and AI/AGI development. Thank u for all the amazing content :))
@mryan744
A year ago
Yes please
@Arthurein
A year ago
+1, yes please!
@StoutProper
A year ago
Predictive coding sounds a bit like what LLMs do.
@tmarshmellowman
A year ago
In answer to your question at 21:55, yes please. Our brains light up in all kinds of delight thanks to you.
@kevon217
A year ago
top notch visualizations! great video!
@raimo7911
A year ago
I think I just found my passion and purpose in life. this is what the world should be focusing on
@trejohnson7677
A year ago
In the end, the implementation will be lots of LOT’s of LOTS of LOTS.
@666shemhamforash93
A year ago
A more technical video exploring the architecture of the TEM and how it relates to transformers would be amazing - please give us a part 3 to this incredible series!
@kyle5519
9 months ago
It's a path integrating recurrent neural network feeding into a Hopfield network
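A toy sketch of that combination (illustrative only, not the actual TEM code): a dead-reckoning path integrator that tracks location from self-motion, plus a classical Hopfield network that cleans up a corrupted, location-tagged sensory cue.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1) Path integration: update a location estimate from self-motion alone.
def path_integrate(start, velocities):
    pos = np.array(start, dtype=float)
    for v in velocities:
        pos += v  # dead-reckoning update
    return pos

# 2) Hopfield network: Hebbian storage and iterative recall of +/-1 patterns.
def hopfield_store(patterns):
    P = np.array(patterns)
    W = P.T @ P / len(P)       # outer-product (Hebbian) learning rule
    np.fill_diagonal(W, 0.0)   # no self-connections
    return W

def hopfield_recall(W, probe, steps=20):
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)     # synchronous update toward a stored attractor
        s[s == 0] = 1.0
    return s

# Walk east, north, east from the origin
print(path_integrate([0.0, 0.0], [[1, 0], [0, 1], [1, 0]]))  # → [2. 1.]

# Store three sensory patterns, then recall one from a corrupted cue
N = 64
patterns = [rng.choice([-1.0, 1.0], size=N) for _ in range(3)]
W = hopfield_store(patterns)
cue = patterns[0].copy()
cue[rng.choice(N, size=8, replace=False)] *= -1.0  # flip 8 of 64 bits
recalled = hopfield_recall(W, cue)
print(np.array_equal(recalled, patterns[0]))  # → True
```

With only three stored patterns in 64 units (well below Hopfield capacity), recall converges back to the stored pattern despite the flipped bits.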
@al3k
A year ago
Finally, someone talking about "real" artificial intelligence... I've been so bored of the ML models... just simple algos... What we are looking for is something far more intricate... Goals... 'Feelings' about memories and current situations... Curiosity... Real learning and new assumptions... A need to grow and survive... and a solid basis for benevolence, and a fundamental understanding of sacrifice and erring...
@xenn4985
7 months ago
What the video is talking about is using simple algos to build an AI, you reductive git.
@DaleIsWigging
28 days ago
LLMs are an attempt to add semantics to words so a computer can understand meaning based on context. This is only one aspect of the brain. If you add memory to this (usually through a vector database or through a knowledge graph) you end up simulating most of the functionality of the brain (when it comes to text inputs and outputs). If you are bored it's because you are waiting for someone to solve it for you instead of programming it yourself. Plenty of awesome tutorials, libraries and APIs to get started. Make what you mean, release it for everyone, then you can make a video on that!
@GiRR007
A year ago
This is what I feel like current machine learning models are, different primitive sections of a full brain. Once all the pieces are brought together you get actual artificial general intelligence.
@josephlabs
A year ago
I totally agree like a 3D net
@aaronyu2660
A year ago
Well, we’re still miles off
@jeffbrownstain
A year ago
@@aaronyu2660 Closer than you might think
@cosmictreason2242
A year ago
@@jeffbrownstain No, you need to see the neuron videos. Computers are binary and neurons are not. Besides, each bit of storage can be used to store multiple different files.
@didack1419
A year ago
@@cosmictreason2242 you can simulate the behavior of neurons in computers. There are still advantages to physical-biological neural networks but those could be simulated with a sufficient number of transistors. If it's too difficult they will end up using physical artificial neurons. What I understand that you mean by "each bit of storage is able to be used to store multiple different files" is that biological NNs are very effective at compressing data (ANNs also compress data in that basic sense), but there's no reason to think that carbon-based physical-biological NNs are unmatchable. I'm not gonna say that I have a conviction that it will happen sooner rather than later, and people here are also really vague regardless. What I could say is that I know of important technologists who think that it will happen sooner (others say that it will happen later).
@---capybara---
A year ago
I just finished my final for behavioral neuroscience, lost like 30% of my grade to late work due to various factors this semester, but this is honestly inspiring and makes me wonder how the fields of biology and computer science will intersect in the coming years. Cheers, to the end of a semester!
@joesmith4546
A year ago
Computer scientist here: they do! I’m absolutely no expert on neuroscience, but computer science (a subfield of mathematics) has many relevant topics. One very interesting result is that if you start from the perspective of automata (directed graphs with labeled transitions and defined start and “accept” states) and you try to characterize the languages that they recognize, you very quickly find, as you layer on more powerful models of memory, that language recognition and computation are essentially the exact same process, even though they seem distinct. If you want to learn more about this topic, I have a textbook recommendation: Michael Sipser's Introduction to the Theory of Computation, 3rd edition. Additionally, you may be interested in automated theorem proving as another perspective on machine learning that you may not be familiar with. Neither automata nor automated theorem proving directly describe the behavior of neural circuits, of course, but they may provide good theoretical foundations for understanding what is required for knowledge, memory, and signal processing in the brain, however obfuscated by evolution these processes may be.
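One way to see the "recognition is computation" point concretely (a toy example, not from the video): a deterministic finite automaton recognizing a language is literally just a small computation run over the input, one symbol at a time.

```python
# A minimal DFA: accepts binary strings with an even number of 1s.
# "Recognizing the language" = running the transition function over the input.

def make_dfa(transitions, start, accept):
    def recognize(s):
        state = start
        for ch in s:
            state = transitions[(state, ch)]  # one computation step per symbol
        return state in accept
    return recognize

even_ones = make_dfa(
    transitions={
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    },
    start="even",
    accept={"even"},
)

print(even_ones("1101"))  # → False (three 1s)
print(even_ones("1001"))  # → True (two 1s)
```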
@NeuraLevels
A year ago
"Perfection is enemy of efficiency" - they say, but in the long run, quality wins when we run for trascendent work instead of immediate rewards. BTW, the same happend to me. Mine was the best work in the class. the only which also incorporated beauty, and the most efficient design, but the professor took 9/20 points because a 3 days delay. His lessons I never learned. I am not an average genius. Nor are you! No one has achieved what I predicted on human brain internal synergy. Here the result (1min. video). kzitem.info/news/bejne/k2uN2Xeqr4F2bG0
@jeffbrownstain
A year ago
Look up Michael Levin and his TAME framework (Technological Approach to Mind Everywhere), cognitive light cones and the computational boundary of the self. He's due for an award of some type for his work very soon.
@DaleIsWigging
28 days ago
Mathematicians (including the specialised mathematicians we call computer scientists) have always been intimately connected with developing new routes for neuroscience to test. There is a newish field of math called "category theory" that seems better at linking the similar/equivalent theories/models in all these fields.
@Special1122
A year ago
Thank you. You're a master at explaining complex stuff. What's your opinion on criticism of LLMs like GPT-4 not having "understanding" and being just "stochastic parrots"? In a recent talk, someone from OpenAI talked about LLMs' ability to add 40-digit numbers, arguing that it could not be memorised because there are fewer atoms in the world than numbers up to 40 digits.
@ceoofsecularism8053
A year ago
"Gpt" contains the word "pretrained" .. they are just stochastic parrot in the sense that they just cannot move their understanding out of their trained data .. they just try to predict they behaviorist approch in the same context to the data which they were provided with ... intelligence has nothing to do with scale .. gpt and all other "LLMS" are predictive machine learning model not AI .. there is a long way before we get to it .. and agi is far off .. generailzation with adaptability is one course of intelligence. If we want to achieve intelligence will'ed to crack open generailzation first , which is beyond the knowledge of deep learning as whole of machine learning.
@SuperNovaJinckUFO
A year ago
Watching this I had a feeling there were some similarities to transformer networks. Basically what a transformer does is create a spatial representation of a word (with words of similar meaning being mapped closer together), and then the word is encoded in the context of its surroundings. So you basically have a position mapping and a memory mapping. It will be very interesting to see what a greater neuroscientific understanding will allow us to do with neural network architectures.
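That "meaning mapping plus position mapping" input can be sketched roughly as follows (illustrative only; the sinusoidal scheme is the one from the original transformer paper, and the tiny vocabulary is made up):

```python
import numpy as np

# Each token vector entering a transformer carries both "what" (a learned
# semantic embedding) and "where" (a positional code added on top).

def sinusoidal_positions(seq_len, d_model):
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model // 2)[None, :]
    angles = pos / (10000 ** (2 * i / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)  # even dims: sine
    pe[:, 1::2] = np.cos(angles)  # odd dims: cosine
    return pe

rng = np.random.default_rng(0)
vocab = {"the": 0, "cat": 1, "sat": 2}   # hypothetical toy vocabulary
d_model = 8
embeddings = rng.normal(size=(len(vocab), d_model))  # "what": meaning

tokens = ["the", "cat", "sat"]
tok_ids = [vocab[t] for t in tokens]
x = embeddings[tok_ids] + sinusoidal_positions(len(tokens), d_model)  # "what" + "where"
print(x.shape)  # → (3, 8)
```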
@cacogenicist
A year ago
That is rather reminiscent of the mental lexicon networks mapped out by psycholinguists -- using priming in lexical decision tasks, and such. But in human minds, there are phonological as well as semantic relationships.
@marcellopepe2435
A year ago
A more technical video sounds good!
@yeysoncano2002
A year ago
I want to create an ai that uses all of this, I'm studying on my own, if someone can recommend some websites or tools or books, I would appreciate it 😊👍
@yeysoncano2002
A year ago
@@boumedinebilal7566 Thanks, I appreciate the information. Good luck to you too.
@egor.okhterov
A year ago
Andrej Karpathy currently is THE guy if you want to learn Transformers: kzitem.info/news/bejne/t4Ogk2eJaqacqGU
@AlbertPerrienII
A year ago
Thanks! Does this line up with the research done by professor Theodore Berger on his artificial hippocampus implant? I understand that it is in use in several human test subjects presently.
@ArtemKirsanov
A year ago
Hmm, I'm not that familiar with Dr. Berger's work, to be honest. Thank you for pointing it out! From what I know, Berger's prosthesis uses electrical stimulation to essentially "amplify" specific patterns of neurons to boost memory encoding. This is an incredible application, but I feel like it tells us quite little about how the hippocampus works under the hood. TEM, on the other hand, is purely a conceptual/computational model that unifies several hippocampal phenomena, but hasn't been applied in practice yet. There is a good chance that there is indeed a link between the two, but it is too early to say for sure.
@silvomuller595
A year ago
Please don't stop making these videos. Your channel is the best! Neuroscience is underrepresented. Golden times are ahead.
@memesofproduction27
A year ago
A renaissance even... maybe
@lake5044
A year ago
But, at least in humans, there are at least two crucial things that this model of intelligence is missing.

First, the abstraction is not only applied to the sensory input; it's also applied to internal thoughts (and no, it's not just the same as running the abstraction on the prediction). For example, you could think of a letter (a symbol from the alphabet) and imagine what it would look like rotated or mirrored. And no recent sensory input has a direct relation to the letter you chose, what transformation you chose to imagine, or even to imagining all of this in the first place. (You can also think of this as the ability to execute algorithms in your mind: a sequence of transformations based on learned abstractions.)

Second, there is definitely a list of remembered structures/abstractions that we can run through when we're looking to find a good match for a specific problem or data. Sure, maybe this happens for the "fast thinking" (the perception part of thinking: you see a "3", you perceive it without thinking it has two incomplete circles), but also for the slow, deliberate thinking. Take the following example: you're trying to solve some math problem, trying to fit it onto abstractions you already learned, but then suddenly (whether someone gave you a hint or the hint popped into your mind) you find a new abstraction that better fits the problem. The input data didn't change, but now you decided to see it as a different structure. So there has to be a mechanism for trying any piece of data against any piece of structure/abstraction.
@brendawilliams8062
A year ago
It is a separate intelligence. It communicates with the other cookie cutters by a back propagation similar to telepathy. It is like a plate of sand making patterns on its plate by harmonics. It is not human. It is a machine.
@cobyiv
A year ago
This feels like what we should all be obsessed with as opposed to just pure AI. Top notch content!
@aw2031zap
A year ago
LLMs are not "AI"; they're just freaking good parrots that give too many people the "mirage" of intelligence. A truly "intelligent" model doesn't make up BS to make you go away. A truly "intelligent" model can draw hands FFS. This is what's BS.
@gorgolyt
10 months ago
idk what you think "pure AI" means
@astralLichen
A year ago
This is incredible! Thank you for explaining these concepts so well! A more detailed video would be great, especially if it went into the mathematics.
@Alex.In_Wonderland
A year ago
your videos floor me absolutely every time! You clearly put a LOT of work in to these and I can't thank you enough. These are genuinely a lot of fun to watch! :)
@ArtemKirsanov
A year ago
Thank you!!
@Mad3011
A year ago
This is all so fascinating. Feels like we are close to some truly groundbreaking discoveries.
@CharlesVanNoland
A year ago
Don't forget groundbreaking inventions too! ;)
@egor.okhterov
A year ago
The missing ingredient is how to make NN changes on the fly when we receive sensory input, without backpropagation. There's no backpropagation in our brain
@CharlesVanNoland
A year ago
@@egor.okhterov The best work I've seen so far in that regard is the OgmaNeo project, which explores using predictive hierarchies in lieu of backpropagation.
@egor.okhterov
A year ago
@Charles Van Noland The last commit on GitHub is from 5 years ago and the website hasn't been updated in quite a while. What happened to them?
@yangsong4318
A year ago
@@egor.okhterov There is an ICLR 2023 paper from Hinton: SCALING FORWARD GRADIENT WITH LOCAL LOSSES
@alexkonopatski429
A year ago
A technical video about TEM transformers would be amazing!!
@klaudialustig3259
A year ago
I was surprised to hear at the end that this is almost identical to the transformer architecture
@timothytyree5211
A year ago
I would also love to see a more technical video explaining how a TEM transformer would work.
@jasonabc
A year ago
For sure would love to see a video on the transformer/hopfield networks and the relationship to the hippocampus. Great stuff keep up the good work.
@444haluk
A year ago
if it is equivalent to transformers, then the model is definitely wrong.
@robertpfeiffer4686
A year ago
I would *love* to see a deeper dive into the technology of transformer networks as compared with hippocampal research! These videos are outstanding!!
@AlecBrady
A year ago
Yes, please, I'd love to know how GPT and TEM can be related to each other.
@arturgasparyan2523
A year ago
Hello Artem, would it be possible to get a PDF accompanying the video, to keep on the side while the video plays? Perhaps as a Patreon exclusive?
@tenseinobaka8287
A year ago
I am just learning about this and it sounds so exciting! A more technical video would be really cool!
@EmmanuelMess
A year ago
As an AI engineer I would like to see more of the models that are used in neuroscience and just a light touch of artificial models, as there are many others that explain how AI models work.
@ramanShariati
A year ago
YES PLEASE !!!!
@dinodinoulis923
A year ago
I am very interested in the relationships between neuroscience and deep learning and would like to see more details on the TEM-transformer.
@briankleinschmidt3664
A year ago
Memory isn't stored in the brain like data. It is integrated into the "world view". If the new information is incompatible, the world view is altered, or the info is altered or rejected. The recollection of the original input includes a host of other inputs. Often when you learn a new thing, it seems as if you are remembering something you already knew. After a while it is as if you always knew it.
@nova2577
A year ago
I would like to see a more technical video.
@GeoffryGifari
A year ago
How can the Tolman-Eichenbaum machine recognize an object as a reward, instead of a barrier or even danger?
@rb8049
A year ago
I’ve been hoping someone would pursue this topic. Great! GPT neocortex is not everything.
@arnau2246
A year ago
Please do a deeper dive into the relation between TEM and transformers
@TheSpyFishMan
A year ago
Would love to see the technical video describing the details of transformers and TEMs!
@michaelgussert6158
A year ago
Good stuff man! Your work is always excellent :D
@inar.timiryasov
A year ago
Amazing video! Both the content and production. Definitely looking forward for a TEM-transformer video!
@divided_by_dia446
23 days ago
I loved the video, well explained! One thing for future videos that might make it easier to understand: I don't think everyone, even in CS/bio/neuro sciences, knows all of the terms you are using. E.g. the term 'latent': I would not have known it if I hadn't gone to ML/neural networks classes at my uni.
@binxuwang4960
A year ago
Well explained!! The video is just sooooo beautiful... even more beautiful, visually, than the talk given by Whittington himself. How did you make such videos? Using Python or Unity? Just curious!
@petemoss3160
A year ago
Interesting! I've been looking at how to equip an agent with powers of observation via a vector database, to log the facts and judgements (including reward expectation) from what it observes of other agents and the environment. So far I'm figuring on a vector space of logs, clustering all the memories with strong positive and strong negative reward, as well as everything closely related to them. Perhaps generalization will be found this way, especially if using a decision transformer with linguistic pretraining.
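A minimal sketch of that logging/recall scheme (all names illustrative; a real vector database would replace the brute-force search): store (embedding, reward) pairs and retrieve the most similar memories whose reward magnitude is strong.

```python
import numpy as np

rng = np.random.default_rng(1)

class MemoryStore:
    """Toy vector store: log observation embeddings with a reward tag,
    recall by cosine similarity among strong-reward memories."""

    def __init__(self):
        self.vecs, self.rewards = [], []

    def log(self, vec, reward):
        v = np.asarray(vec, dtype=float)
        self.vecs.append(v / np.linalg.norm(v))  # unit-normalize for cosine
        self.rewards.append(reward)

    def recall(self, query, k=2, min_abs_reward=0.5):
        q = np.asarray(query, dtype=float)
        q = q / np.linalg.norm(q)
        # keep only memories with strong positive or negative reward
        idx = [i for i, r in enumerate(self.rewards) if abs(r) >= min_abs_reward]
        sims = [(float(self.vecs[i] @ q), i) for i in idx]
        sims.sort(reverse=True)
        return [i for _, i in sims[:k]]

mem = MemoryStore()
for reward in [0.9, -0.8, 0.1, 0.0]:          # two strong, two weak memories
    mem.log(rng.normal(size=16), reward)

hits = mem.recall(rng.normal(size=16), k=2)
print(all(abs(mem.rewards[i]) >= 0.5 for i in hits))  # → True
```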
@isaacgroen3692
A year ago
yes more technical video about transformers please and thank you
@josephlabs
A year ago
I was trying to build something similar, but I thought of the memory module as an event storage, where it would store events and the locations at which those events happened. Then we would be able to query things that happened by events, or locations, or things involved in events at certain locations. However, my idea was to take the memory storage away from the model and create a data structure (graph-like) uniquely for it. TEM transformers are really cool.
@egor.okhterov
A year ago
How to store location? Some kind of hash function of sensory input?
@josephlabs
A year ago
@@egor.okhterov that was the plan or some graph like data structure to denote relationships.
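The event/location store discussed in this thread might look like the following (all names hypothetical): a small bipartite index that can be queried from either side.

```python
from collections import defaultdict

class EventMemory:
    """Toy event storage: events linked to the locations where they happened,
    queryable by location or by event."""

    def __init__(self):
        self.events_at = defaultdict(set)     # location -> events
        self.locations_of = defaultdict(set)  # event -> locations

    def store(self, event, location):
        self.events_at[location].add(event)
        self.locations_of[event].add(location)

    def query_by_location(self, location):
        return self.events_at[location]

    def query_by_event(self, event):
        return self.locations_of[event]

mem = EventMemory()
mem.store("ate breakfast", "kitchen")
mem.store("read a paper", "kitchen")
mem.store("read a paper", "library")

print(sorted(mem.query_by_location("kitchen")))   # → ['ate breakfast', 'read a paper']
print(sorted(mem.query_by_event("read a paper")))  # → ['kitchen', 'library']
```

A hash of the sensory input (as asked above) could serve as the location key; a full graph database would generalize the two dictionaries to arbitrary relationships.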
@GabrielLima-gh2we
A year ago
What an amazing video, knowing that we can now understand how the brain works through these artificial models is incredible, neuroscience research might explode in discoveries right now. We might be able to fully understand how this memory process works in the brain by the end of this decade.
@mkteku
A year ago
Awesome knowledge! What app are you using for graphics, graphs and editing? Cheers
@user-zl4fp3ml4e
A year ago
Please also consider a video about the PFC and its interaction with the hippocampus.
@egor.okhterov
A year ago
Excellent video as always :) Do you have ideas on how to get rid of backpropagation to train a transformer and implement one-shot(online) life-long learning?
@GeoffryGifari
A year ago
How can the Tolman-Eichenbaum machine deal with cases where the same "thing" is found at multiple locations (like orchids in a garden), or where one location has multiple "things" (like the location of a dining table and the utensils on top)?
@foreignconta
A year ago
I really liked your video. And I would like to see a technical video on TEM transformer. Especially the difference. Subscribed
@Jonathan-ru9zl
A month ago
Keep up the great work!!❤
@markwrede8878
A year ago
It would need to host some sophisticated pattern recognition software. These would arise from values similar to phi, which, like phi itself, are described by dividing the square root of the first prime to host a specific sequential difference by that difference. For phi, square root of 5 by 2, then square root of 11 by 4, square root of 29 by 6, square root of 97 by 8, and so on. I have a box with the first 150 terms.
@silvomuller595
A year ago
Could you make a video about the Integrated Information Theory? Or the neuronal correlate of consciousness? I don't get the math behind IIT and I think you probably can :).
@ramanShariati
A year ago
please make the video about transformers / TEM / Hopfield networks
@jamessnook8449
A year ago
Yes, read Jeff Krichmar's work at UC Irvine, it is dramatically different than what people view as the traditional neural network approach.
@JoeTaber
A year ago
Nice video! You didn't mention the representational format that location and sensory nets were provided. Did location nets get cartesian coordinates? What was the representation for sensory input?
@goldnutter412
A year ago
8:35 think of the brain as just more software, but with a permanent token database that is constantly reoptimizing over time. We store this as we experience this data based reality, and make our own set of information every time. This parallelizes the learning, and all the variation and complexity of the soup of individual us, makes that exponential At the brain level, we store a specific version of events according to our intent, how much attention we are paying to the huge amount of detail.. thus we get the detective problem with asking eye witnesses and getting various stories about what someone looked like etc. awareness has a prioritization for data processing.. we call this focus Neuroplasticity to the extent of say, Einstein is very possible because it is just complex rule based data structures. You can't magically make your brain into a fault tolerant supercomputer, but over time biological processes tend toward the most economical, low entropy state.. a la muscle memory
@xavierhelluy3013
A year ago
So beautiful to watch once again, and very nice and very instructive. I would love a more technical video on the matter. I see a direct link to Jeff Hawkins' vision of how the neocortex works, since according to him cortical columns are a kind of stripped-down version of the hippocampal orientation system, but acting on concepts or sensory inputs depending on input/output connections. The link between LLMs and TEM remains amazing.
@egor.okhterov
A year ago
The thing is that Jeff Hawkins is also against backpropagation. That is the last puzzle to solve. We need to make changes in the network on the fly, at the same time as we are receiving sensory input. We learn new models in a few seconds and we don't need billions of samples
@anywallsocket
A year ago
Your visual aesthetic is SO smooth on my brain, I just LOVE it
@bluecup25
A year ago
The Hippocampus knows where it is at all times. It knows this because it knows where it isn't. By subtracting where it is from where it isn't, or where it isn't from where it is (whichever is greater), it obtains a difference, or deviation. The guidance subsystem uses deviations to generate corrective commands to drive the organism from a position where it is to a position where it isn't, and arriving at a position where it wasn't, it now is. Consequently, the position where it is, is now the position that it wasn't, and it follows that the position that it was, is now the position that it isn't. In the event that the position that it is in is not the position that it wasn't, the system has acquired a variation, the variation being the difference between where the missile is, and where it wasn't. If variation is considered to be a significant factor, it too may be corrected by the GEA. However, the Hippocampus must also know where it was. The Hippocampus works as follows. Because a variation has modified some of the information the Hippocampus has obtained, it is not sure just where it is. However, it is sure where it isn't, within reason, and it knows where it was. It now subtracts where it should be from where it wasn't, or vice-versa, and by differentiating this from the algebraic sum of where it shouldn't be, and where it was, it is able to obtain the deviation and its variation, which is called error.
@Wlodzislaw
A year ago
Great job explaining TEM, congratulations!
@bladekiller2766
A year ago
What software do you use for the animation? NVM, I just found it in the description :)
@ReyhanJoseph
A year ago
I want to see that technical video
@KonstantinosSamarasTsakiris
A year ago
The video that convinced me to become a patron! Super interested in a part 3 about TEM-transformers.
@ArtemKirsanov
A year ago
Thanks :3
@jamessnook8449
A year ago
This has already been done at The Neurosciences Institute back in 2005. We developed a model that not only led to place cell formation, but also prospective and retrospective memory, the beginning of episodic memory. We used the model to control a mobile device that ran the gold standard of spatial navigation, the Morris water maze. In fact, Professor Morris was visiting the Institute for other reasons and viewed our experiment and gave it his blessing.
@memesofproduction27
A year ago
Incredible. Were you on the Build-A-Brain team? Could you please direct me to anything you would recommend me read on your work there to familiarize myself and follow citations toward influence on present day research? Much respect, me
@lucyhalut4028
A year ago
I would love to see a more technical video! Amazing work, Keep it up!😃
@treydelbonis4028
A year ago
Would *love* a deep dive into how transformers *actually* work.
@SuperHddf
A year ago
Humanity needs your video about lTEM transformers. Please do it!
@gametophacker5047
A year ago
If I stop training one thing will I forget it? Like swimming? cycling
@TheRimmot
A year ago
I would love to see a more technical video about how the TEM transformer works!
@jopmens6960
A year ago
Wouldn't our model of the world probably look as strange as a homunculus? Not reflecting amounts of neural stimuli but importance? That would be interesting to visualize.
@itay0na
A year ago
Wow, this is just great! I believe it somehow contradicts the message of the AI & Neuroscience video. In any case, I really enjoyed this one; keep up the good work.
@alexharvey9721
A year ago
Definitely keen to see a more technical video, though I know it would be a lot of work!
@SeanDriver
A year ago
Great video… the moment you showed the function of the medial EC and lateral EC I thought… hey, transformers… so it was really nice to see that come out at the end, albeit for a different reason. My intuition for transformers came from the findings of the ROME paper, which suggested structure is stored in the higher attention layers and sensory information in the mid-level dense layers.
@dysphorra
A year ago
Actually, 10 years ago Berger built a prosthetic hippocampus with a much simpler architecture. It was tested in three different conditions. 1) Berger took input from a healthy rat's hippocampus and successfully predicted its output with his device. 2) He removed the hippocampus and replaced it with his prosthesis. Electrodes collected inputs to the hippocampus, sent them to a computer, then back to the output neurons. And it worked. 3) He connected the input of the device to the brain of a trained mouse and the output of the device to the brain of an untrained one. And he showed some sort of memory transfer (!!!). Notably, he used a very simple mathematical algorithm to convert input into output.
@waylonbarrett3456
A year ago
I've been building and revising this machine and machines very similar for about 10 years. I didn't know for a long time that they weren't already known.
@arasharfa
A year ago
If you want to think more about how to build a framework for how to construct emotions I suggest reading "How emotions are made" By Lisa Feldman Barret. I think you would find the book fascinating given your area of interest and I would love to see what you'd be able to abstract from it.
@BleachWizz
A year ago
Thanks man I might actually reference those papers! I just need to be able to actually become a researcher now. I hope I can do it.
@MrHichammohsen1
A year ago
This series should win an award or something!
@0pacemaker0
A year ago
Amazing video as always 🎉! Please do go over how Hopfield networks fit in the picture if possible. Thanks
@thebestofthequest7486
A year ago
please make a video about the transformer architecture
@alik7754
A year ago
Sir, your graphics presentations are really impressive! May I know which tool you utilize for creating them?
@ArtemKirsanov
A year ago
Thank you!! A combination of Adobe After Effects, Blender and Python (matplotlib or manim modules) :)
@donaldgriffin6383
A year ago
More technical video would be awesome! More BCI content in general would be great too
@tomaubier6670
A year ago
Such a nice video! A deep dive in TEM / transformers would be awesome!!
@thegloaming5984
6 months ago
Videos like this make me want to go back to school
@ginogarcia8730
A year ago
finally content I needed to fill my anxiety hole around AI
@oberonpanopticon
A year ago
I was just wondering earlier today what kind of file formats would be involved in the digitalization of biological memories!
@oberonpanopticon
A year ago
If my consciousness is reactivated 1000 years in the future and I find out they’ve been storing my memories in .jpg files I’m gonna unplug myself
@_sonu_
A year ago
I lo❤ your videos more than any videos nowadays.
@nazgulXVII
A year ago
I would appreciate a technical dive in the transformer architecture from the point of view of neurobiology!
@marcc16
A year ago
So TEM is just like a network router. Got it.
@quantumfineartsandfossils2152
A year ago
this is how ive memorized weather conditions gear my environment so I fly around like a bicycle skateboarder the outdoor boys ms josey are also 1200% like this have these skills pro social pro environmental skills
@GeoffryGifari
A year ago
Hmmm, so it's like understanding is just a byproduct of predicting?
@ArtemKirsanov
A year ago
In a way, yeah. It's actually similar to how GPTs were essentially trained to predict the next token (word) in a sequence, but over the course of optimization they turned into what they are today (one could argue that there is a spark of "understanding").
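Next-token prediction in its most stripped-down form (a toy bigram counter, nothing like GPT's actual architecture):

```python
from collections import Counter, defaultdict

# A bigram model: count which word follows which in a tiny made-up corpus,
# then predict the most frequent successor. GPTs do conceptually the same
# task, but with attention over long contexts instead of raw counts.

def train_bigram(corpus):
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def predict_next(counts, word):
    return counts[word].most_common(1)[0][0]

corpus = ["the cat sat on the mat", "the cat sat down", "the cat ran"]
model = train_bigram(corpus)
print(predict_next(model, "cat"))  # → sat
print(predict_next(model, "the"))  # → cat
```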
@En1Gm4A
A year ago
pls make a video about transformers
@arasharfa
A year ago
How fascinating that you talk about sensory, structural and constructed model/interpretation; those are the three base modalities of thinking I've been able to narrow all of our human experience down to in my artistic practice. I call them the "phenomenologic, collective and ideal" modalities of thinking.
@IdleBystander1
A year ago
Would love to see you go over the transformer!
@astha_yadav
A year ago
Please also share what software and utilities you use to make your videos ! I absolutely love their style and content 🌸
@porroapp
A year ago
I like how neurotransmitters and white matter formation in the brain are analogous to weights/biases and backprop in machine learning. Both are used to amplify the signal and reinforce activation based on rewards, be it neurons and synapses or convolution layers and the connections between nodes in each layer.
A year ago
This is cool! Thank you for sharing. The visualization is stunning; I'm curious to know if you do it yourself and which tools you use.
@ArtemKirsanov
A year ago
Thank you! Yeah, I do everything myself ;) Most of it is done in Adobe After Effects with the help of Blender (for rendering 3D scenes) and matplotlib (for animations of neural activity of TEM, random-walk etc)
Comments: 325