Why is this so underrated? This should be on everyone's playlist for linear regression. Hats off, man :)
@ajaytaneja111
1 year ago
Hi Ajay, great video, as always. One suggestion, with your permission ;) I think it might be worthwhile introducing the concept of regularization by comparing feature elimination (which is equivalent to making the weight zero) vs. reducing the weight (which is regularization), elaborating on this, and then drifting towards Lasso and Ridge. ;)
@paull923
1 year ago
I had to watch it twice to truly digest it, but I liked your approach to the contour plot in particular. I hope to boost your channel with my comments a tiny bit ;). tyvm! What I was taught and what is helpful to know, imo: 1) On an abstract level, what regularization achieves: it penalizes large weights and thus overly complex models. 2) The notion of L1 and L2 regularization: when you talk about "Gaussian" for Ridge, you could also talk about the "Laplace" distribution (rather than "double exponential") for Lasso regression.
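The prior connection mentioned above can be checked in a few lines: the negative log-density of a zero-mean Gaussian prior on a weight is an L2 (Ridge) penalty plus a constant, while a Laplace prior gives an L1 (Lasso) penalty. A minimal sketch (my own illustration, not from the video; the scale parameters are arbitrary):

```python
import math

# Negative log-density of a zero-mean prior on a weight w.
# Terms that don't depend on w are constants the optimizer ignores.

def neg_log_gaussian(w, sigma=1.0):
    # -log N(w; 0, sigma^2) = w^2 / (2 sigma^2) + const  -> L2 / Ridge penalty
    return w**2 / (2 * sigma**2) + 0.5 * math.log(2 * math.pi * sigma**2)

def neg_log_laplace(w, b=1.0):
    # -log Laplace(w; 0, b) = |w| / b + const  -> L1 / Lasso penalty
    return abs(w) / b + math.log(2 * b)

# Differences in the negative log-prior depend only on the penalty term:
w1, w2 = 0.5, 1.5
assert math.isclose(neg_log_gaussian(w2) - neg_log_gaussian(w1),
                    (w2**2 - w1**2) / 2)
assert math.isclose(neg_log_laplace(w2) - neg_log_laplace(w1),
                    abs(w2) - abs(w1))
```

So maximizing the posterior with a Gaussian prior is Ridge, and with a Laplace prior is Lasso, up to the choice of lambda.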
@CodeEmporium
1 year ago
Thanks so much for your comments Paul! And yea, I feel like I have seen similar contour plots in books but never truly understood “why” they were like that until I started diving into details myself. Hopefully in the future I can explain it in a way that you’d be able to get it in a single pass through the video too :)
@data_quest_studio4944
1 year ago
My man looks sharp and dapper
@CodeEmporium
1 year ago
Haha. Thanks! I think this shirt looked better on camera than in person. :)
@ivanalejandrogarciaramirez8976
1 month ago
Thank you very much for this answer; I have been looking for it for a while: 7:42
@blairnicolle2218
4 months ago
Excellent videos! Great graphing for intuition of L1 regularization where parameters become exactly zero (9:45) as compared with behavior of L2 regularization.
@NicholasRenotte
1 year ago
Well hello everyone right back at you Ajay! These are fire, the live viz is on point!
@CodeEmporium
1 year ago
Thank you for noticing ma guy. I will catch up to the 100K gang soon. Pls wait for me 😂
@NicholasRenotte
1 year ago
@@CodeEmporium 😂 you're one hunnit in my eyes 🙏
@cormackjackson9442
5 months ago
Such an awesome video! Can't believe I hadn't made the connection between Ridge and Lagrangians; it literally has a lambda in it lol!
@cormackjackson9442
5 months ago
With the lasso intuition, the stepwise function you get for theta: how do you get the conditions on the right, i.e. yi < lambda/2? I thought perhaps instead of writing theta < 0, you are just using the implied relationship between yi and lambda. E.g. if theta < 0, then |theta| = -theta, which after optimising gives theta = y - lambda/2, i.e. y = lambda/2 + theta; but then I get the opposite conditions to you, i.e. as theta is negative in this case, wouldn't that give y = lambda/2 + theta < lambda/2?
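For what it's worth, the 1D case being discussed can be checked numerically. Taking the toy problem minimize (y - theta)^2 + lambda*|theta| (my own setup, not verbatim from the video), the minimizer is the soft-threshold of y: on the theta < 0 branch, |theta| = -theta and stationarity gives theta = y + lambda/2, which is negative exactly when y < -lambda/2. A sketch comparing the closed form against a brute-force grid search:

```python
import numpy as np

def soft_threshold(y, lam):
    # Closed-form minimizer of (y - theta)^2 + lam * |theta|
    if y > lam / 2:
        return y - lam / 2       # theta > 0 branch: requires y > lam/2
    if y < -lam / 2:
        return y + lam / 2       # theta < 0 branch: requires y < -lam/2
    return 0.0                   # |y| <= lam/2: penalty pins theta to exactly zero

def brute_force(y, lam):
    # Dense grid search over the same objective as a sanity check
    grid = np.linspace(-5, 5, 200001)
    obj = (y - grid) ** 2 + lam * np.abs(grid)
    return grid[np.argmin(obj)]

for y in (-2.0, -0.3, 0.0, 0.4, 3.0):
    assert abs(soft_threshold(y, 1.0) - brute_force(y, 1.0)) < 1e-3
```

The middle case is the whole point of Lasso: for |y| up to lambda/2 the optimum is exactly zero, which is why L1 produces sparse coefficients while L2 only shrinks them.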
@lucianofloripa123
3 months ago
Good explanation!
@chadx8269
1 year ago
Nice explanation of the Bayesian view. Isn't regularization just a Lagrange multiplier? The optimum point is where the gradient of the constraint is proportional to the gradient of the cost function.
@abhirajarora7631
6 months ago
It is mathematically written in the same way, but they are not the same. Lagrange multipliers are used when you need to min/max a given function subject to a constraint, and then you solve for the value of lambda; in regularisation, we set the lambda value ourselves. Regularisation gives us a penalty when we take a step in the wrong direction, and thus lets us move back towards the correct direction in the following iteration.
@mathwithsidiqat
2 months ago
Thank you
@sivakrishna5530
1 year ago
Always find interesting things here. Keep going, good luck!
@CodeEmporium
1 year ago
Hah! Glad that is the case. I am here to pique that interest :)
@fujinzhou7150
1 year ago
Love your awesome videos! Salute! Thank you so much!
@CodeEmporium
1 year ago
You are so welcome! I am happy this helps
@TheRainHarvester
1 year ago
Great content on your channel, I just found it! Heh, I used Desmos to debug/visualize too! I just added a video explaining easy multilayer backpropagation. The book math with all the subscripts is confusing, so I did it without any. Much simpler to understand.
@CodeEmporium
1 year ago
Thank you! And solid work on that explanation :)
1 year ago
Nice video, thanks! The only thing I think is slightly off is describing polynomials with increasing degrees as "complex". Since you are talking about maths, I was expecting to see the imaginary unit when I first heard "complex".
@kakunmaor
1 year ago
AWESOME!!!!! thanks!
@jpatel0924
1 month ago
8:27 yi < -(lambda/2)
@lijinhui6902
1 year ago
thx !
@alexandergeorgiev2631
1 year ago
How does Gauss-Newton for nonlinear regression change with (L2) regularization?
Comments: 29