This knowledge is worth thousands of dollars. Thank you so much, Nitish sir. I hope I get to repay you sometime.
@rajsharma-bd3sl
10 months ago
Buy his DSMP2.0 course and repay him ... simple bro
@siyays1868
2 years ago
I'm out of words. Thank you very much, sir! I feel awful watching such quality content for free! I'm waiting for my debit card renewal; I've benefited from this channel, so I should contribute, and I will in a few months. Then I'll feel good.
@abhaykumaramanofficial
2 years ago
My understanding keeps improving because of the visualizations... what a brilliant way of teaching, simply awesome...
@fashionvella730
6 months ago
One more thing: the reason the coefficient values shrink toward zero is the position of lambda in the equation. If you look at the closed-form solution for the coefficients, you will find that the lambda term sits in the denominator; as we know, when the denominator grows relative to the numerator, the value decreases. So the lambda in the denominator dominates when its value is large.
@SMBEATS-cb7sh
3 months ago
I agree with your answer too.
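A minimal sketch (my own illustration, not part of the thread) of the shrinkage mechanism described above: in simple one-variable ridge regression without an intercept, the closed-form slope is w = sum(x*y) / (sum(x^2) + lambda), so a growing lambda in the denominator pulls the coefficient toward zero.

```python
# Hypothetical demo of ridge shrinkage: lambda sits in the denominator of
# the closed-form slope, so a larger lambda means a smaller coefficient.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + rng.normal(scale=0.5, size=100)  # true slope is 3

for lam in [0, 1, 10, 100, 1000]:
    w = (x * y).sum() / ((x ** 2).sum() + lam)  # ridge slope, no intercept
    print(f"lambda={lam:>4}: w = {w:.4f}")
```

With lambda = 0 this recovers the ordinary least-squares slope (about 3); by lambda = 1000 the printed coefficient has collapsed most of the way to zero.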
@ParthivShah
7 months ago
Thank You Sir.
@bangarrajumuppidu8354
3 years ago
Never seen this kind of explanation!
@tanmaygupta8288
5 months ago
Sir, you are a gem! I am loving data science only because of you.
@rockykumarverma980
15 days ago
Thank you so much sir🙏🙏🙏
@yashnatholia2332
A month ago
Excellent!!
@tanb13
2 years ago
Sir ji, just a gentle reminder through this comment: the detailed video on hard-constraint and soft-constraint ridge regression that you promised in this video is still pending.
@rajsharma-bd3sl
10 months ago
Then study it yourself...
@saurabhbarasiya4721
3 years ago
Please upload videos regularly.
@sober_22
A year ago
Seriously, your explanations are just WOW!
@rajsharma-bd3sl
10 months ago
so beautiful, so elegant, just looking like a wow
@balrajprajesh6473
2 years ago
Best Video ever!! Thank you sir.
@ali75988
9 months ago
If possible, kindly share the lecture on "hard constraint ridge regression" (as mentioned in the lecture).
@kadambalrajkachru8933
2 years ago
In-depth teaching method... Thanks
@shashankbangera7753
A year ago
What a wonderful explanation!
@shabak178
3 years ago
Best content... really, thanks a lot
@stevegabrial1106
3 years ago
Another great video, thx.
@parthshukla1025
3 years ago
Great teaching method, sir.
@ZuhaibAshraf-w8g
A year ago
Sir, kindly make a playlist on computer vision.
@nitinghumare8086
2 years ago
Sir, it is good for understanding, but please also write out proper answers; that would help with making notes.
@Ishant875
9 months ago
Do at least something yourself, brother.
@umendchandra4731
2 years ago
Greatest video ever
@mohitkushwaha8974
A year ago
Doubt: so can I say that the loss function increases as the lambda value increases?
@ronylpatil
A year ago
Same doubt
@TheAtulsachan1234
A year ago
I think as we increase the lambda/alpha value, the loss function converges toward zero. Please check the "Effect of Regularization on Loss Function" section in this video. So with an increasing lambda/alpha value, the loss/cost function decreases.
@casepoint10
A year ago
The U-shaped curve shows that as lambda increases, the loss initially decreases (reducing overfitting) until it reaches a minimum point. After the minimum, increasing lambda further leads to an increase in the loss function (increasing underfitting). [ASCII sketch: a U-shaped curve of loss (y-axis) versus lambda (x-axis), with the minimum loss at the bottom of the U.]
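A minimal sketch (my own, on assumed synthetic data; in sklearn's Ridge the alpha parameter plays the role of lambda) that traces the U-shape described above: a tiny alpha lets a high-degree polynomial overfit, a huge alpha underfits, and the validation error bottoms out somewhere in between.

```python
# Hypothetical demo: validation MSE vs. alpha for ridge on degree-15
# polynomial features; the printed errors should trace a rough U shape.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(42)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=60)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

for alpha in [1e-6, 1e-3, 0.1, 1, 10, 100, 1000]:
    # Degree-15 features so that a tiny alpha overfits the noise
    model = make_pipeline(PolynomialFeatures(degree=15), StandardScaler(),
                          Ridge(alpha=alpha))
    model.fit(X_tr, y_tr)
    mse = mean_squared_error(y_val, model.predict(X_val))
    print(f"alpha={alpha:>8}: validation MSE = {mse:.4f}")
```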
@stevegabrial1106
3 years ago
After Day 53 (Polynomial), the Day 54 video is missing, or do the Day 55 parts 1-4 include the Day 54 video? Please comment. Thanks.
@campusx-official
3 years ago
kzitem.info/news/bejne/mGp6u2Rof6ujm6A
@ajaykushwaha-je6mw
3 years ago
Best ever video as a takeaway for L2!
@rajsharma-bd3sl
10 months ago
A very good video indeed.
@travellingtart5845
2 years ago
Hey sir, can you suggest the best book for learning the logic behind machine learning algorithms?
@rajsharma-bd3sl
10 months ago
Pattern Recognition and Machine Learning by Bishop
@rohitdahiya6697
A year ago
Why is there no learning-rate hyperparameter in scikit-learn's Ridge/Lasso/ElasticNet? They do have a hyperparameter called max_iter, which suggests they use gradient descent, yet there is no learning rate among the hyperparameters. If anyone knows, please help me out with this.
@barryallen3051
A year ago
sklearn provides two ways to implement ridge/lasso/elastic net: first, from sklearn.linear_model import Ridge/Lasso/ElasticNet, and second, through SGDRegressor with the "penalty" hyperparameter ('l1' for lasso, 'l2' for ridge). Ridge can be solved with a closed-form equation, so it needs no iteration, while Lasso/ElasticNet use coordinate descent, which iterates but has no learning rate. SGDRegressor uses gradient descent, hence its iteration and learning-rate hyperparameters. I think you are mixing the two.
@rohitdahiya6697
A year ago
@@barryallen3051 I know this point, but my question is: what is the max_iter hyperparameter doing in plain Ridge if it uses a closed-form solution? max_iter should mean the number of epochs, as in SGD.
@YogaNarasimhaEpuri
A year ago
@@rohitdahiya6697 Ridge's default solver is 'auto', which picks a method based on the data; iterative solvers such as 'sag' (gradient-based) and 'sparse_cg' actually use max_iter, while the closed-form solvers like 'cholesky' and 'svd' ignore it. You need to specify the solver explicitly if you want the OLS-style closed-form solution. I hope you got the point...
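A minimal sketch (my own, not from the thread; defaults can differ across sklearn versions) contrasting the routes discussed above: closed-form Ridge, iterative Ridge via the 'sag' solver, and SGDRegressor, which is the only one of the three that exposes an explicit learning rate.

```python
# Hypothetical comparison of three routes to an L2-penalized linear model.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge, SGDRegressor

X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

# Closed-form solve: the 'cholesky' solver ignores max_iter entirely.
ridge_cf = Ridge(alpha=1.0, solver="cholesky").fit(X, y)

# Iterative solve: 'sag' is gradient-based, so max_iter matters, but the
# step size is chosen internally -- no learning-rate hyperparameter.
ridge_it = Ridge(alpha=1.0, solver="sag", max_iter=1000).fit(X, y)

# SGD route: here the learning rate (eta0) is explicit. SGDRegressor
# penalizes the average loss, so alpha is roughly Ridge's alpha / n_samples.
sgd = SGDRegressor(penalty="l2", alpha=1.0 / len(y), eta0=0.01,
                   max_iter=1000, random_state=0).fit(X, y)

print("closed-form:", ridge_cf.coef_.round(2))
print("sag solver: ", ridge_it.coef_.round(2))
print("SGD:        ", sgd.coef_.round(2))
```

All three should print similar coefficient vectors; only the SGD route asks for a learning rate, which is exactly the distinction this thread is circling.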
Comments: 42