I don't usually comment on YouTube videos, but this one made such a complex subject easy and intuitive. The details in the animations and your clear explanation really helped me a lot!
@mathacademy-jeeimocuet6566
9 days ago
"Hello, I am living Dubai and from India and I have a very strong background in advanced mathematics across multiple disciplines. Recently, I started learning Data Science and AI. I came across your channel, and believe me, it has motivated me a lot. I feel like I am learning Algebra with you. You're doing a great job, and I enjoy all your videos. Nice work! May Allah bless you."
@BatBallBites
2 months ago
Perfect, but please continue this series and make a video on why we need Mamba as a transformer replacement.
@hackerborabora7212
12 days ago
I just discovered your channel. You are like a dream, thank you so much!
@mohamedfarrag3869
1 month ago
I was fascinated by the power of state space models in the field of control theory, and now they are finding their way into the new era of AI. I really love these models. Thank you, Mr. Serrano, for the easy and interesting explanations.
@MaartenGrootendorst
2 months ago
Great video! You always find an amazingly intuitive way to explain these technical and detailed subjects.
@SerranoAcademy
2 months ago
@@MaartenGrootendorst oh thank you! What an honor to hear from you, I love your articles and your recent book! It’s thanks to your article that I learned SSMs.
@sarthak.AiMLDL
2 months ago
Brilliant! Next video on "KAN", please.
@emiyake
2 months ago
Thank you for the video, very informative! It would be really interesting to see a video explaining the training phase of SSM. What are the trainable parameters and how does the training process work?
@IceMetalPunk
2 months ago
I'm not confident at all in this, so take this with a grain of salt, but I'd assume the parameters would be the entries of the three matrices A, B, and C.
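To make that guess concrete, here is a minimal sketch in PyTorch (my own assumption about the setup, not the video's exact formulation) of a discretized SSM layer whose only trainable parameters are the three matrices A, B, and C; training then works by ordinary backpropagation, like any other network.

```python
import torch
import torch.nn as nn

class TinySSM(nn.Module):
    """A toy discretized state space model: h_t = A h_{t-1} + B x_t, y_t = C h_t."""
    def __init__(self, input_dim: int, state_dim: int):
        super().__init__()
        # The trainable parameters are exactly the three SSM matrices.
        self.A = nn.Parameter(0.01 * torch.randn(state_dim, state_dim))
        self.B = nn.Parameter(0.01 * torch.randn(state_dim, input_dim))
        self.C = nn.Parameter(0.01 * torch.randn(input_dim, state_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (seq_len, input_dim); loop the recurrence over time steps.
        h = x.new_zeros(self.A.shape[0])
        ys = []
        for x_t in x:
            h = self.A @ h + self.B @ x_t   # update the hidden state
            ys.append(self.C @ h)           # read an output from the state
        return torch.stack(ys)

model = TinySSM(input_dim=8, state_dim=16)
x = torch.randn(32, 8)              # a toy 32-step input sequence
loss = model(x).pow(2).mean()       # placeholder loss, just for illustration
loss.backward()                     # gradients flow into A, B, and C
```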
@devkaranjoshi816
1 month ago
Hi, please make a video on the Samba model, just like this masterpiece. Thanks in advance.
@luisleal4169
1 month ago
One of the advantages of transformers, and something that helped train very big transformers on very big datasets, was parallelism (it was said to be an advantage compared to RNNs). Isn't that lost with SSMs? Maybe that's the reason why they have not been so widely adopted?
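For context on the parallelism point: a linear, time-invariant SSM can be trained in parallel by rewriting the recurrence as a convolution. The toy sketch below (all names and shapes are my own, and real implementations use an FFT) precomputes the kernel C A^k B and applies it to the whole sequence at once. Mamba's input-dependent matrices break this trick, which is why it relies on a parallel scan instead.

```python
import numpy as np

def ssm_as_convolution(A, B, C, x):
    """x: (seq_len,) scalar inputs; A: (n, n); B, C: (n,)."""
    L = len(x)
    # Kernel entry k is C A^k B: the weight of input t-k in output t.
    kernel, AkB = np.empty(L), B.copy()
    for k in range(L):
        kernel[k] = C @ AkB
        AkB = A @ AkB
    # Causal convolution over the whole sequence: no step-by-step recurrence,
    # so every time step can be computed independently and in parallel.
    return np.array([kernel[: t + 1][::-1] @ x[: t + 1] for t in range(L)])

rng = np.random.default_rng(0)
n = 4
A = 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)  # keep the state stable
B, C = rng.standard_normal(n), rng.standard_normal(n)
x = rng.standard_normal(10)
print(ssm_as_convolution(A, B, C, x))
```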
@AravindUkrd
1 month ago
Thanks for the explanation. Curious to know your thoughts: why is Mamba not already replacing transformers in mainstream large language models?
@SerranoAcademy
1 month ago
@@AravindUkrd thanks, great question! My guess is that implementing it is hard and may be disruptive. They would only do it if the performance is much better, and right now it’s comparable but not a lot better. But lemme find out and if it’s something different I’ll post it here.
@AravindUkrd
1 month ago
@@SerranoAcademy Thought so. Thanks for the reply 😊.
@nicolemerkle713
2 months ago
I would find a video about Kalman filters interesting.
@IceMetalPunk
2 months ago
Maybe I'm not understanding because it's getting pretty late here, but this seems like it's using a neural network to learn the transition functions (represented by the matrices A, B, and C) of a finite state machine, no? Also, I've heard a lot of people contrasting Mamba and SSMs with Transformers and claiming Mamba will replace Transformers, going so far as to say "we don't need attention after all!" But isn't the matrix A (or at least, the combination of A and B) basically acting similarly to an attention matrix anyway?
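One way to see the contrast raised here (my reading, not a claim from the video): attention builds an L x L table of input-dependent weights between all pairs of positions, while a plain SSM applies the same fixed A at every step and compresses the whole history into one fixed-size state. A toy sketch, with all shapes and names assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
L, d, n = 6, 8, 4                   # sequence length, embedding dim, state dim
X = rng.standard_normal((L, d))

# Attention view: an L x L table of input-dependent pairwise weights (O(L^2)).
scores = X @ X.T / np.sqrt(d)       # toy self-attention, no learned projections
scores -= scores.max(axis=1, keepdims=True)           # numerical stability
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)

# SSM view: one fixed transition A, applied identically at every step (O(L)),
# with the entire history compressed into a single fixed-size state h.
A = 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)
B = rng.standard_normal((n, d))
h = np.zeros(n)
for x_t in X:
    h = A @ h + B @ x_t             # same A for every token, unlike attention

print(weights.shape, h.shape)       # (6, 6) pairwise weights vs a (4,) state
```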
@mohammedshuaibiqbal5469
2 months ago
How will it know which word to focus more on? Is there any logic it uses in the backend?
@ArielGoesdeCastro
2 days ago
The matrix h_{t-1} is not easily read by us (not interpretable). This detail was also omitted in the video's brief comparison of Mamba with attention mechanisms, which I believe it resembles: the attention mechanisms of transformer networks. To understand it in more detail, you would need to read the disruptive work in the field of AI mentioned at the beginning of the video. 0:12
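A rough sketch of how Mamba addresses the "which word to focus on" question above (a simplification of the paper's selectivity mechanism; the projection weights and tanh gating here are hypothetical): B and C are computed from the current token instead of being fixed, so each step decides how strongly that word is written into, and read out of, the hidden state h.

```python
import numpy as np

rng = np.random.default_rng(2)
L, n = 10, 4                        # sequence length, state size
x = rng.standard_normal(L)          # toy scalar token features

A = np.diag(np.full(n, 0.9))        # fixed, stable transition matrix
w_B = rng.standard_normal(n)        # hypothetical per-token projections
w_C = rng.standard_normal(n)

h, ys = np.zeros(n), []
for x_t in x:
    B_t = w_B * np.tanh(x_t)        # input-dependent "write" strength
    C_t = w_C * np.tanh(x_t)        # input-dependent "read" strength
    h = A @ h + B_t * x_t           # a gated-down token barely changes h
    ys.append(C_t @ h)

print(np.round(ys, 3))
```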
Comments: 22