What would happen if, instead of survival time, all you ask the networks to do is:
- minimize prediction error (based on the current situation and the planned move, have it predict what the next frame's state will be)
- maximize exploration (the network gets extra reward for reaching a state it could not yet predict well)

I think the right way to balance those contradictory goals is to keep the average prediction error low but the error on the final frame high. That should lead the networks to explore the whole space. And since the space is potentially infinite, the only way to do that is to manage to fly for long sequences. But the goal also pushes them to do so in creative, surprising ways.
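The reward scheme described above could be sketched roughly as follows. This is a toy illustration, not anyone's actual implementation; `ForwardModel` and `intrinsic_reward` are hypothetical names, and the "model" here is just a running average rather than a learned network.

```python
class ForwardModel:
    """Toy forward model: predicts the next state for a (state, action)
    pair as the running average of the next states seen so far."""
    def __init__(self):
        self.memory = {}  # (state, action) -> (sum_of_next_states, count)

    def predict(self, state, action):
        total, count = self.memory.get((state, action), (0.0, 0))
        return total / count if count else 0.0

    def update(self, state, action, next_state):
        total, count = self.memory.get((state, action), (0.0, 0))
        self.memory[(state, action)] = (total + next_state, count + 1)


def intrinsic_reward(model, state, action, next_state):
    """Exploration bonus: squared prediction error, large for transitions
    the model cannot yet predict, shrinking as the model learns them."""
    error = (model.predict(state, action) - next_state) ** 2
    model.update(state, action, next_state)
    return error
```

The first visit to a transition yields a big bonus; revisiting it yields nothing, which is exactly the pressure toward novel states the comment describes.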
@martinagnardahl
3 years ago
This is the best video I have seen so far about NEAT, with a detailed explanation of how it works. Thank you. I still struggle to understand how speciation works in this case. Could you guide me to some other resources about this topic? The formula itself is difficult for me to understand.
@neatai6702
3 years ago
Glad it helped!.. I have a speciation deep dive planned for next week.. Any area specifically you're having difficulty with?
@martinagnardahl
3 years ago
@@neatai6702 Awesome! Well... I get the main idea behind speciation and crossover, but what I would like to see is an example of how exactly this works. You talked about the elements of the comparison equation; I would love to see that equation filled in with real data. The same goes for crossover. And finally, recurrent connections: those are black magic to me. Why do we need them and how do they work? Can connections go in any direction and to any layer? What about connections within the same layer? I am quite new to ML, so there is a lot I don't quite understand, but I feel that I have learned a ton from your videos, especially about the NEAT algorithm, which stays as close as possible to the original paper and isn't some simplified version. Looking forward to new videos, especially the speciation one :) Thank you!
@neatai6702
3 years ago
Hi Martin, I've gone back over the original paper and my own XOR implementation. My next videos will be a deep dive into each section with worked examples.. In fact I've made some improvements based on Ken Stanley's follow-up notes..
@martinagnardahl
3 years ago
Awesome, can't wait!
@vunpac5
2 years ago
@@martinagnardahl I used to think recurrent connections were black magic as well. Then I came across an article that explained them so simply it makes me think people overcomplicate it. I'll explain what I think it is (if I am wrong, please someone correct me). Basically, a recurrent connection is a connection from a node that saves its value; the next time you run data through the network, the recurrent connection sends that old value along. This gives the network a basic memory of sorts, because it retains data from previous runs. This is also why you can't connect a node back to itself with a large weight: the value keeps growing (positively or negatively, depending on the connection). Take a node value of 1 with a connection weight of 2: you get 1 * 2 = 2, then 2 is used for the next run so 2 * 2 = 4, then 4 for the next run, and so on. Hope this helps!
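The "memory" idea in that comment can be sketched in a few lines. This is an illustrative toy, not NEAT's actual data structures; `step`, `w_in`, and `w_rec` are made-up names, and the weights are arbitrary.

```python
def step(x, prev_hidden, w_in=0.5, w_rec=0.3):
    """One forward pass: the hidden node mixes the new input with its
    own activation from the PREVIOUS pass (the recurrent connection)."""
    return w_in * x + w_rec * prev_hidden

prev = 0.0
outputs = []
for x in [1.0, 1.0, 1.0]:   # feed the SAME input three times
    prev = step(x, prev)     # prev carries state between passes
    outputs.append(prev)
# Identical inputs produce different outputs because of the retained state.
```

Note also that keeping the recurrent weight below 1 in magnitude avoids the runaway growth the comment describes (a self-loop with weight 2 doubles the value every step).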
@npgabriel
2 years ago
Do you have any kind of activation for those added connections? If not, then any additional hidden node is pretty redundant, as the output is still just a linear combination of the inputs, and can be expressed without any of the hidden nodes.
@neatai6702
2 years ago
All hidden nodes have an activation function, otherwise, as you state, it's going to be redundant..
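The redundancy point above is easy to demonstrate: without a nonlinearity, two stacked linear "layers" collapse into one. A small sketch (weights are arbitrary examples, function names are illustrative):

```python
import math

def two_linear_layers(x, w1=2.0, w2=3.0):
    h = w1 * x            # "hidden node" with NO activation
    return w2 * h         # output is just (w1 * w2) * x

def single_layer(x, w=6.0):   # w = w1 * w2: the hidden node buys nothing
    return w * x

def with_activation(x, w1=2.0, w2=3.0):
    h = math.tanh(w1 * x)     # nonlinearity makes the hidden node matter
    return w2 * h
```

With `tanh` in place, the hidden node genuinely changes the function; without it, the network is a plain linear map no matter how many hidden nodes you add.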
@hothivanhanh5031
4 months ago
I wonder what programming language you implemented this in?
@rpraver1
3 years ago
No code for any of your videos?
@Nathouuuutheone
2 years ago
I got really lost at the part about spawning new neurons during the simulation and making them recurrent. Recurrent as in the neurons loop within the network? What was that about memory?
@neatai6702
2 years ago
as the signal flows from left to right.. RCs allow what happened in the last iteration to influence what's about to happen.. as it's contributing its signal from the last iteration..
@typicalhog
3 years ago
Why is it so important for nodes to be in the correct layer? I thought the purpose of that was to prevent them depending on a "later" node, but backflowing connections make that happen anyway, even if nodes are ordered correctly.
@neatai6702
3 years ago
That's correct; if the structure isn't stable you'll start to get some odd/unstable results. I've coded the recurrent connections but never used them as yet. In fact, Ken Stanley has the following to say on them: "Make sure recurrency is disabled for the XOR test. If NEAT is able to add recurrent connections, it may solve XOR by memorizing the order of the training set. (Which is why you may even want to randomize order to be most safe) All documented experiments with XOR are without recurrent connections."
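Stanley's "randomize order to be most safe" advice could look something like this in practice. A hedged sketch: `evaluate` and the fitness formula are hypothetical, not the channel's actual code.

```python
import random

# The four XOR cases. A recurrent net that memorized presentation order
# would stop scoring well once the order changes between evaluations.
XOR_CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def evaluate(net, rng=random):
    """Fitness = 4 minus the squared error over all cases,
    shuffled anew on every call so order cannot be memorized."""
    cases = XOR_CASES[:]
    rng.shuffle(cases)
    error = sum((net(inputs) - target) ** 2 for inputs, target in cases)
    return 4.0 - error
```

A genuine XOR solver scores 4.0 no matter how the cases are ordered, while an order-memorizer's fitness collapses under the shuffle.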
@typicalhog
3 years ago
@@neatai6702 I see, thanks for replying!
@henrivi330
2 years ago
@@neatai6702 Ken Stanley writes about the NEAT algorithm as if it's a kid trying to cheat on a test hahaha. Shows how far we've gotten with AI.
@palmberry5576
2 years ago
@@henrivi330 well… that’s because it kinda is
@dsaasdasf
3 years ago
Shouldn't there be a neural network that determines the structure and weights?
@neatai6702
3 years ago
That's an option as well, but in this case the evolutionary process tries out different options and favors the fittest. Although with Flappy Bird it's such an easy task, it doesn't have to do too much.
@typicalhog
3 years ago
It would be interesting to see NEAT parameters themselves evolved through GA.
@Kraus-
2 years ago
That's cool.
@zzador
1 year ago
Why work with layers? NEAT works completely without any layers; hidden units are layerless in NEAT. Just look at the hidden nodes as a potentially fully recurrent network where all the recurrent connections and the hidden units themselves are omitted at the start of the evolution.
@revimfadli4666
6 months ago
Looking at the source code, NEAT doesn't even seem to have a hidden layer, just hidden outputs fed back as inputs during the next time step.
Comments: 24