Thank you for the clear explanation! But next time, please use screenshots of the actual formulas; that way it is much more readable.
@sordesderisor
2 years ago
If you have also read the TRPO and PPO papers, this video provides the perfect concise summary of PPO!
@alph4b3th
11 months ago
Sensational! Dude, you explain things in such a simple way! I was wondering what the difference was between deep Q-learning and PPO, and I was looking for exactly a video like this. Congratulations on your great didactic way of explaining the basic mathematical concepts and abstracting them into a more intuitive approach; you are really very good at this! Excellent video!
@GnuSnu
A year ago
4:25 "let me write it real quick" 💀💀
@James-qv1lh
A year ago
Insanely good video! Simple and straight to the point - thanks so much! :)
@sayyidj6406
6 months ago
I wish I had known about this channel sooner. Thanks for the video.
@carloscampo9119
A year ago
That was very, very well done. Thank you for the clear explanation.
@datonefaridze1503
2 years ago
Thank you for your effort, I really appreciate it. You put in this work so that we can learn. Thanks!
@anibus1106
6 months ago
Thank you so much, you saved my day.
@ivanwong863
3 years ago
DQN is not an offline method, is it?
@EdanMeyer
3 years ago
My bad, I meant to say it's an off-policy method; Q-learning performs very poorly in an offline setting.
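(To make the off-policy vs. offline distinction concrete, here is a minimal Python sketch; all names are hypothetical and not from the video. Off-policy means the agent still interacts with the environment but learns from data that older versions of its policy produced; offline means there is no interaction at all, only a fixed, pre-collected dataset.)

```python
import random
from collections import deque

# Off-policy (e.g. DQN): the agent keeps collecting data, but each update
# uses a replay buffer that also contains transitions from older policies.
replay_buffer = deque(maxlen=10_000)

def off_policy_update(collect_transition, train_on_batch, batch_size=32):
    replay_buffer.append(collect_transition())        # still gathering new data
    k = min(batch_size, len(replay_buffer))
    train_on_batch(random.sample(replay_buffer, k))   # batch may be stale

# Offline RL: the dataset is fixed before training and never grows;
# the learner has no access to the environment at all.
def offline_update(fixed_dataset, train_on_batch, batch_size=32):
    train_on_batch(random.sample(fixed_dataset, batch_size))
```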
@FlapcakeFortress
2 years ago
Much appreciated. Cheers!
@LatpateShubhamManikrao
2 years ago
Nicely explained, man
@hemanthvemuluri9997
9 months ago
For DQN you mean off-policy method, right? DQN is not an offline method.
@awaisahmad5908
6 months ago
Thanks
@vadimavkhimenia5806
2 years ago
Can you make a video on MADDPG with code?
@labreynth
A month ago
Damn. I learned nothing.
@alexkonopatski429
2 years ago
I really love your vids and how you explain things! Could you please make a video about TRPO? It's a really complex thing to understand, in my opinion, and the lack of available resources doesn't make the situation any better. Therefore I, and I think a lot of others, would be really glad to have a good explanation! Thanks in advance.
@canoksuzoglu6540
16 days ago
Thanks, dude. That was a perfect explanation.
@marcotroster8247
A year ago
Just evaluate the derivative of the policy gradient objective; only then can you really understand why PPO works. PPO adds the policy ratio as a factor to the derivative of the vanilla policy gradient. The clipping effectively erases samples with bad policy ratios from the dataset, because the derivative of a constant is zero. You also need to understand, from advantage actor-critic, that the sign of the advantage determines whether the action probabilities increase or decrease: given the same training data, positive advantages increase the probabilities of good actions and negative advantages decrease the probabilities of bad actions. The min always picks the clipped objective for bad policy ratios, so the objective becomes a constant there and its gradient is zero. Otherwise the two terms are identical, and the update only moves the policy ratio within the epsilon bound. And because the policy gradients are multiplied by the policy ratio, this works as expected and gives PPO its stability.
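(For anyone who wants to see this concretely, here is a minimal PyTorch sketch of the clipped surrogate loss the comment above describes; the function name and the epsilon default of 0.2 are illustrative, not from the video.)

```python
import torch

def ppo_clipped_loss(log_prob_new, log_prob_old, advantage, eps=0.2):
    # Policy ratio r(theta) = pi_new(a|s) / pi_old(a|s), computed from log-probs.
    ratio = torch.exp(log_prob_new - log_prob_old)
    unclipped = ratio * advantage                                  # vanilla surrogate
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # min selects the clipped term when the ratio has moved too far in the
    # direction the advantage favors; clamp is constant w.r.t. theta there,
    # so those samples contribute zero gradient, exactly as described above.
    return -torch.min(unclipped, clipped).mean()                   # negated for gradient ascent
```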
@boldizsarszabo883
A year ago
This video was super helpful and informative! Thank you so much for your effort!
Comments: 23