Slides: www.slideshare.net/SebastienFischman/tab-netpresentation GitHub: github.com/dreamquark-ai/tabnet Thank you Sebastien for the great Talk!
@jacquepang
7 months ago
2:55 TabNet paper introduction
4:20 Main ideas from TabNet
7:21 Architecture
8:55 Feature transformer block
10:51 Attentive transformer block
14:25 Individual explainability intro
15:10 Self-supervised learning (pretraining)
17:10 PyTorch implementation intro (19:18 fastai wrapper available)
20:59 Demo from a notebook
29:34 Kaggle competition notebooks using TabNet PyTorch
29:55 Code base architecture
32:18 Tricky implementation tips!
34:36 Future work
40:52 Q&A session
41:09 Explainability
42:30 Computing resources
43:50 TabNet parameters explained
47:55 Feature selection (from sparse masks)
@solomonadeyemi53
1 year ago
Hi from South Africa... I have been using TabNet in RStudio for 2 years now and it works very well. I will give pytorch-tabnet a try.
@abhishekkrthakur
4 years ago
To give a talk in Talks, fill out this form here: bit.ly/AbhishekTalks
@davidvictor7124
4 years ago
Can you please post the link to the code in the description?
@sebastienfischman8671
4 years ago
@davidvictor7124 All the code is available here: github.com/dreamquark-ai/tabnet. I'll also add all the links and the presentation to that same page, so this is the place to go for any information!
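For anyone landing here from the repo link, a minimal usage sketch of the library discussed in the talk (the variable names X_train/y_train/X_valid/y_valid/X_test are hypothetical numpy arrays; check the repo README for the current API):

```python
# pip install pytorch-tabnet
from pytorch_tabnet.tab_model import TabNetClassifier

clf = TabNetClassifier()  # default hyperparameters
clf.fit(
    X_train, y_train,               # numpy arrays, not DataFrames
    eval_set=[(X_valid, y_valid)],  # validation set used for early stopping
)
preds = clf.predict(X_test)
```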
@risabb
4 years ago
This is the best Talk session! I learnt a lot, and it was a great explanation. Thanks Abhishek and Sebastien!
@ritamshome
4 years ago
An in-depth session indeed, and Sebastien answered most of the queries. Great work!
@AIPlayerrrr
4 years ago
After watching this video, I jumped right into implementing it on some Kaggle competitions and in my research. LGB still works better than TabNet in most of my implementations. pytorch-tabnet is really user-friendly, though, if you are new to deep learning for tabular data.
@林奕勳-c5t
4 years ago
Hi, Tony. Do you know how much better LGB performs than TabNet, and on what kinds of tasks LGB beats TabNet? Did you tune TabNet's parameters?
@matteomele3303
1 year ago
Thank you, excellent work for both of you!
@memories2692
3 years ago
Thanks so much, guys! It's a perfect architecture (and lecturer). I implemented it easily within a couple of days, and it works great!
@nirjharyou
4 years ago
Thank you so much, Abhishek, for this. I am also extremely happy to see my kernel and my name in your video, even if only for a flash :)
@FrankHerfert
4 years ago
This is great! Thank you both.
@ParsiadAzimzadeh
3 years ago
Great talk. You mentioned being uncertain about the origin of the sqrt(0.5) factor. I believe the authors use it because, given two IID random variables X and Y, Var(sqrt(0.5) X + sqrt(0.5) Y) = 0.5 Var(X) + 0.5 Var(Y) = Var(X). In the context of the GLU summation, it is a heuristic to ensure that the variance does not increase.
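A quick numerical sanity check of that argument (just a sketch with synthetic unit-variance Gaussians, not code from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1_000_000)   # two IID, unit-variance variables
y = rng.normal(size=1_000_000)

plain = x + y                    # variance roughly doubles (~2.0)
scaled = np.sqrt(0.5) * (x + y)  # variance stays ~1.0, as with TabNet's scaled GLU residual sum

print(plain.var(), scaled.var())
```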
@aditya_01
1 year ago
You are doing really great, thanks a lot for such awesome content!
@sayedathar2507
3 years ago
Amazing talk, thanks for sharing. Your channel is the best :)
@vslaykovsky
1 year ago
9:17 should be "element-wise multiplication" I guess
@JaskaranSingh-hp3zy
4 years ago
Great Session
@shrikantnarayankar4778
4 years ago
Hi Abhishek, I was trying to buy your book but the link said it will be available on 15 July. How can I buy it today? You held a session with Krish...
@tempdeltavalue
2 years ago
It's strange that the authors call these blocks "transformers", because (if I understand correctly) no attention in the usual sense (Q, K, V matrices) is used here.
@jacquepang
7 months ago
I have the same confusion. Do you have a clue?
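For what it's worth, my reading of the paper is that the "attentive transformer" is just an FC + batch-norm layer followed by sparsemax that produces a per-feature mask (soft feature selection), not QKV self-attention. A rough sketch of that idea, simplified and omitting the prior-scale term and ghost batch norm (softmax stands in for sparsemax here):

```python
import torch
import torch.nn as nn
from torch.nn.functional import softmax  # the paper uses sparsemax; softmax used here for simplicity

class AttentiveTransformerSketch(nn.Module):
    def __init__(self, d_in: int, n_features: int):
        super().__init__()
        self.fc = nn.Linear(d_in, n_features)
        self.bn = nn.BatchNorm1d(n_features)

    def forward(self, a: torch.Tensor) -> torch.Tensor:
        # a: output of the previous step's feature transformer, shape (batch, d_in)
        # returns a mask over the input features, shape (batch, n_features)
        return softmax(self.bn(self.fc(a)), dim=-1)
```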
@oculustech1904
4 years ago
Great talk, thank you Abhishek and Sebastien!!! You mentioned a copy of the book; how can I get that? Please share the link.
@abhishekkrthakur
4 years ago
Sebastien explains it at the end of the talk.
@deepaksadulla8974
4 years ago
Really good explanations...
@razzor_hero
4 years ago
Hey, do you know how to monitor and fit TabNet based on a metric other than accuracy, say roc_auc_score? I tried looking for this on GitHub but couldn't find it :/
@sebastienfischman8671
4 years ago
The default monitoring metric for binary classification is already roc_auc_score; for multi-class it's accuracy, and for regression it's MSE. An easy way of changing the early-stopping metric still needs to be added!
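In more recent versions of pytorch-tabnet, fit accepts an eval_metric list and custom Metric subclasses. A hedged sketch of how that might look (X_train/y_train/X_valid/y_valid are hypothetical numpy arrays; please check the current docs):

```python
from pytorch_tabnet.tab_model import TabNetClassifier
from pytorch_tabnet.metrics import Metric
from sklearn.metrics import roc_auc_score

class ValidAUC(Metric):
    def __init__(self):
        self._name = "valid_auc"   # name shown in the training log
        self._maximize = True      # higher is better for early stopping

    def __call__(self, y_true, y_score):
        # y_score holds class probabilities; take the positive-class column
        return roc_auc_score(y_true, y_score[:, 1])

clf = TabNetClassifier()
clf.fit(
    X_train, y_train,
    eval_set=[(X_valid, y_valid)],
    eval_metric=["accuracy", ValidAUC],  # the last metric drives early stopping
)
```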
@manelallani4746
3 years ago
@sebastienfischman8671 Is it possible now to use a customized loss function?
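As a follow-up, recent versions of pytorch-tabnet let you pass a custom loss via the loss_fn argument of fit. A hedged sketch, assuming loss_fn maps (y_pred, y_true) to a scalar and that X_train/y_train/X_valid/y_valid are hypothetical numpy arrays:

```python
import torch
from pytorch_tabnet.tab_model import TabNetClassifier

def weighted_ce(y_pred, y_true):
    # hypothetical example: up-weight the positive class in a binary problem
    weights = torch.tensor([1.0, 3.0], device=y_pred.device)
    return torch.nn.functional.cross_entropy(y_pred, y_true, weight=weights)

clf = TabNetClassifier()
clf.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], loss_fn=weighted_ce)
```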
@consistentthoughts826
3 years ago
I applied this to the Santander classification Kaggle dataset and got 81% accuracy without any preprocessing.