0:58 Overview 1:18 BN-ReLU common practice 2:50 Swish Activation from AutoML 3:34 The Power of Normalization Layers 3:58 Normalization-Activation Design Space 5:20 Note on Indexing Tensors 6:33 Connection with AutoML-Zero 7:10 Evolutionary Search 8:16 Mutations 9:10 Pareto Selection for Multi-Objective Optimization 9:52 Generalizing to Multiple Architectures 10:58 Products of the Search 11:43 Results 13:52 Testing the Design Space
@PeterOtt
4 years ago
Your videos are so information dense, I love it!
@connor-shorten
4 years ago
Thank you!! I hope you find this one useful, I thought this was a really interesting idea to couple normalization layer and activation function search!
@CristianGarcia
4 years ago
One part of me thinks it's neat to find these new layers/activations that give some boosts to performance (highly desirable when you're going to put a model in production); the other part thinks it brings in even more black magic and uncertainty, since it's hard to understand why these things work :s
@academicconnorshorten6171
4 years ago
Interesting ideas! My question to you though is, how well did we understand the BatchNorm-ReLU transform originally? I actually think EvoNorm-B0 is more interpretable than BN-ReLU, I recommend checking out the functions in Appendix B of the paper before we continue to debate the interpretability of each block.
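On the interpretability point: EvoNorm-B0 is a single closed-form expression, so it can be written out directly. Below is a minimal NumPy sketch of the B0 transform as described in the paper (x divided by the max of the batch standard deviation and v1*x plus the instance standard deviation, then an affine scale/shift). The NHWC layout, the epsilon placement, and training-mode batch statistics are assumptions of this sketch, not details confirmed by the video.

```python
import numpy as np

def evonorm_b0(x, v1, gamma, beta, eps=1e-5):
    """Sketch of EvoNorm-B0 (training mode) for an NHWC tensor.

    v1, gamma, beta are per-channel parameters of shape (C,).
    """
    # Batch variance: over batch and spatial dims, one value per channel.
    var_batch = x.var(axis=(0, 1, 2), keepdims=True)
    # Instance variance: over spatial dims only, per sample and channel.
    var_inst = x.var(axis=(1, 2), keepdims=True)
    # Denominator mixes batch and instance statistics with the raw input.
    denom = np.maximum(np.sqrt(var_batch + eps),
                       v1 * x + np.sqrt(var_inst + eps))
    return x / denom * gamma + beta
```

Compared with BN-ReLU, which is two stacked ops with separate reasoning, the whole normalization-and-nonlinearity behavior here sits in one expression, which is what the interpretability argument rests on.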
@SunilKumar-zd5kq
4 years ago
@@academicconnorshorten6171 Since these new transforms are trained with respect to a design space and the task at hand, how well do we understand EvoNorm?
@CristianGarcia
4 years ago
@@academicconnorshorten6171 You have a great point! At least cognitively, the appeal of most regular papers is that they build a "story" (unless they have an actual proof) of why the technique makes sense. In the case of BN it was the idea of reducing "covariate shift" between layers, but I've read that it might actually work for different reasons. I'll definitely look at the appendix you mention! Maybe they have better insights because they experimented so much :D
@muhammadbilal902
4 years ago
Hey Connor! Glad to finally be one of the first few viewers xD Thanks for the work.
@connor-shorten
4 years ago
Haha, thank you so much! Hope you like the video, I thought this was a really interesting paper!
@muhammadbilal902
4 years ago
@@connor-shorten Already in the queue for this weekend. You take off a lot of abstraction in the video, and reading the paper gets a lot easier afterwards. So glad you do this work. Keep it up, God bless you sir.
@citiblocsMaster
4 years ago
11:57 Woah woah WOOOOOOOOOOOOOOAAAAAAAAAAAAAAAAAAAAAAAAAAAAH calm down I wasn't ready for that :)
@connor-shorten
4 years ago
Lol! Sorry about that, I should have moved the slide down in advance!
@sayakpaul3152
4 years ago
Amazing video man!
@connor-shorten
4 years ago
Thank you so much!
@MartinFerianc
4 years ago
Thank you again for a great video! I noticed that you are talking a bit slower, and it really helps (at least for me)!
Comments: 17