In the next video we’re going to be making a blockchain in JavaScript, so subscribe if you’re interested in that stuff!
@SoumilShah
5 years ago
great video, made everything so easy
@CrypticConsole
5 years ago
Dang, stupid schools blocked pip and zip archives so I can't install numpy
@itstatanka
5 years ago
Which compiler did you use?
@siddhant5697
4 years ago
Which software are you coding in??
@frankynakamoto2308
4 years ago
Polycode, can the neurons and inputs be placed together, like neurons with a lot of built-in data? Also, I need a very powerful neural network for several different purposes (speech, face ID, and math problem solving). Do you have something open source that you made that you can share with me?
@hfe1833
5 years ago
What the?... This is it, finally I found a good tutorial
@Pancake3000
4 years ago
same lol, I finally can actually flippin understand, thanks much, +1 sub (i can english)
@scottpatterson9136
4 years ago
I agree
@koksem
3 years ago
ye someone finally explains what it is XD
@mariomuysensual
3 years ago
same!
@johnc3403
4 years ago
"Stay with me, it's gonna be ok"... dude, that's such a lovely sentiment. You were born to teach, I think, with that ability to keep pupils on board. Very good video my man, thank you so much.
@mohamedsuhailirfankhazi6628
4 years ago
My friend, your explanation in 15 minutes gave me more clarity than hours of crash course tutorials online. So simple and well explained. Awesome stuff my man!
@calmo15
5 years ago
Amazing video, too few sources do the absolute basics. However, can you please crank your volume up!
@ciencialmente9969
4 years ago
1:39 "so we need a little meth"
@Loading-tr7yv
4 years ago
I think we all do
@astrainvictum9638
3 years ago
Adderall is good for that
@deekshithtirumala6474
3 years ago
It's math LoL 😛
@mattisaderp8929
5 years ago
"stay with me it's gonna be okay"
@wirly-
4 years ago
TypeError: '
@henil0604
4 years ago
@@wirly- Loll
@chinmayhattewar4456
4 years ago
@@wirly- hahaha
@morphman86
5 years ago
After watching hyper-advanced tensorflow/keras stock market prediction tutorials for a while, being completely lost, I stumbled on this. I finally, after weeks of trying to learn NNs and decades of practical programming experience, understand it. The iterative backpedaling was what confused me in all of those other videos, but taken down to its most simple form, like in this video, I can now see that it's merely looking at what it got and what it was trying to get, making adjustments to the appropriate synapses based on that, then trying again. It's not the maths that confused me, it's how the machine actually learned. And that was perfectly demonstrated in this video. Thank you!
@RandageJr
5 years ago
Do you know where I can find these tutorials? It would be very helpful for me, thanks!
@jacobokomo1880
4 years ago
Kindly feel free to share with us who the teacher was that took you through the previous tutorials. This teacher is doing well, though. Credits 💪
@GovindKumar-bt2ne
4 years ago
B
@morphman86
4 years ago
@Isaiah _ Neural Network
@KennTollens
4 years ago
I agree too. So many videos overcomplicate and dance around simple mechanics. Knowing the flow of the engine and the simple concept of what is happening, the other videos might make more sense now that I can put them into context.
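The loop morphman86 describes above (look at what the network got and what it wanted, adjust the synapses, try again) is small enough to sketch in full. This is a reconstruction from details scattered through these comments (the training table at 0:20, line 16's weight initialization), so treat it as an approximation of the video's code rather than a transcript:

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# training table from the video: the output equals the first input column
training_inputs = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
training_outputs = np.array([[0, 1, 1, 0]]).T

np.random.seed(1)
synaptic_weights = 2 * np.random.random((3, 1)) - 1  # random start in [-1, 1)

for _ in range(20000):
    outputs = sigmoid(np.dot(training_inputs, synaptic_weights))  # what it got
    error = training_outputs - outputs                            # vs. what it wanted
    # nudge each weight in proportion to the error, the sigmoid's slope,
    # and the input that weight saw
    synaptic_weights += np.dot(training_inputs.T, error * outputs * (1 - outputs))

print(outputs.round(3))  # approaches [[0], [1], [1], [0]]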
@TheLolfaceftwOfficial
3 years ago
I have no idea what I’m doing.
@djjjo6130
4 years ago
“Stay with me, it’s gonna be okay” that makes me feel like I’m actually learning something and not just being told something
@MC_MrOreo
3 years ago
(I know I’m late but) Literally came to the comment section about this 😂
@arifmeighan3162
3 years ago
This tutorial is a perfect blend of talking/programming and slides. It's also quick and to the point 8)
@paulschmidt8742
5 years ago
Bro, it was much easier than I thought. Thx for explaining.
@MCLooyverse
4 years ago
If Φ(x) = 1 / (1 + e^(-x)), then Φ'(x) = e^(-x) / (1 + e^(-x))^2, not x(1 - x). I'm curious about your Atom setup: are the text overview on the side and the code suggestions hidden in Atom somewhere, or are they plugins?
@gamescript6449
2 years ago
huh
@industrialdonut7681
4 years ago
15 minute video... takes me 2 hours to get through XD
@mdmarufhossainkhan2047
3 years ago
Wouldn't the derivative of the sigmoid function be def sigmoid_derivative(x): return sigmoid(x) * (1 - sigmoid(x)) ????
@maximilienchau696
3 years ago
You are right, but when he uses the sigmoid_derivative function he passes in the variable "outputs", which is already sigmoid(x)
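To make that reply concrete: x * (1 - x) is only the sigmoid's derivative when x is already a sigmoid output, because φ'(x) = φ(x)(1 - φ(x)). A quick numerical check (names are illustrative):

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

x = np.linspace(-5, 5, 11)
outputs = sigmoid(x)
true_derivative = np.exp(-x) / (1 + np.exp(-x)) ** 2
print(np.allclose(outputs * (1 - outputs), true_derivative))  # True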
@MrFrostsonic
5 years ago
In line 16, why have you multiplied the random weights by 2 and then subtracted 1? Great video, very helpful. Thank you very much.
@JonasBostoen
5 years ago
np.random.random returns floating point values between 0 and 1, but since we need values between -1 and 1, this is the way to do it.
@nurhaida1983
5 years ago
@@JonasBostoen thank you for this clarification. I was lost at this line but luckily stumbled on this comment. Thank you very much! Cheers!
@BiCool03
4 years ago
@@JonasBostoen I'm very late to the party, but since we need a random number between -1 and 1, wouldn't it be better to add two random numbers, then subtract 1, or does it matter?
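On both questions: np.random.random((3, 1)) is uniform on [0, 1), so 2*u - 1 simply rescales that to a uniform on [-1, 1). Adding two random numbers and subtracting 1 also lands in [-1, 1), but the sum of two independent uniforms is triangular (peaked at 0), not uniform, so the two initializations are not equivalent. A small demonstration:

import numpy as np

scaled = 2 * np.random.random(100000) - 1                         # uniform on [-1, 1)
summed = np.random.random(100000) + np.random.random(100000) - 1  # triangular, peaked at 0

# roughly equal counts per bin for scaled; middle-heavy counts for summed
print(np.histogram(scaled, bins=4, range=(-1, 1))[0])
print(np.histogram(summed, bins=4, range=(-1, 1))[0])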
@robertdraxel7175
5 years ago
Most useful video on the internet for a total beginner, for anyone new to AI. Thanks.
@nukzzz5652
4 years ago
There is something I'm not understanding: when it's time to change the weights, you're supposed to multiply the input with the adjustment and add it to the weights, right? Doesn't that mean that if the input is 0, the weights won't change at all? I noticed this when I tried different inputs and outputs. Your example works fine, but when I tried {0,0,0},{0,1,0},{0,1,1},{0,0,1} as inputs and {0,0,0,0} for outputs it was a mess, and no matter how many tests I did it couldn't figure out the correct answer
@sonic597s
4 years ago
It does; this is a mistake in the code, and it can be fixed if you add a learning rate variable to multiply by the adjustments, rather than using the training inputs.
@sonic597s
4 years ago
@@havoc3135 instead of dot-producting the (transposed) training inputs with the adjustments, multiply the adjustments by some scalar, so you can scale your adjustments manually. Hope this helps
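In terms of the loop sketched earlier in this section, the suggestion amounts to putting a constant scale on the weight update (the 0.5 below is an arbitrary illustrative value, not from the video). Note that the input term in the update is what makes a zero input leave its weight untouched; that behavior is inherent to the gradient, and the usual remedy for all-zero inputs is a bias term (see the bias sketch further down) rather than a learning rate:

learning_rate = 0.5  # arbitrary choice: smaller means slower but steadier updates
adjustments = error * outputs * (1 - outputs)
synaptic_weights += learning_rate * np.dot(training_inputs.T, adjustments)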
@sorooshnazem
5 years ago
The derivative of the sigmoid function is \phi*(1-\phi); x*(1-x) is wrong
@taravanova
5 years ago
Lol, spent like 10 min trying to get his result and then eventually googled it to find out I had the correct result the whole time. At least the correct version was used in the code.
@VoidFame
3 years ago
Yet somehow it gives the incorrect result when using the correct derivative. Something else is missing here.
@JonasBostoen
6 years ago
Coding starts at 2:30
@ChillGuyYoutube
4 years ago
Polycode ping your comment so others will see it!
@du42bz
3 years ago
@@ChillGuyYoutube maybe his firewall blocks ICMP packets
@rr.studios
3 years ago
@@du42bz I read that as "pimp packets"
@brehontechologies
5 years ago
Finally, a clear, straightforward tutorial to code along with. GREAT JOB!
@karim741
5 years ago
Thanks for the video. I tried to follow this, but I see the solution can go another way in binary logic: the first column is multiplied by the sum of the two other columns, so not only the first column decides the output, the others do too, as below. Taking the table at 0:20:
Example 1: 0x(0+1)=0
Example 2: 1x(1+1)=1
Example 3: 1x(0+1)=1
Example 4: 0x(1+1)=0
New situation: 1x(0+0)=0
@nocopyrightgameplaystockvi231
3 years ago
Line 16: synaptic_weights = 2 * np.random.random((3,1)) - 1
This line makes an array, or matrix, of size 3x1. I did not understand it until I tried the line separately; that made the random concept easy to grasp. But as I learned in Soft Computing during my BTech, you can also directly initialize the weights to 1, and they will then get adjusted during training. You can replace the line with: synaptic_weights = np.array([[1,1,1]]).T
THANKS TO YOU for making this short and easy tutorial!
@Retriiiii
7 months ago
Hey, can you tell me why we are multiplying by 2 and subtracting 1?
Output = array[1[1]].value Lol just kidding. This was a great video and I understood a ton
@flymeedrone6350
4 years ago
Are those examples enough to understand the rules of that problem? It could also be that if the first and the last are different the result is 0, and otherwise the result is 1. And probably there are others that I don't see. Wait, maybe that is the real purpose of the neural network... :D
@REVscape95
6 years ago
waiting for the next video, this type of explanation really helps
@JonasBostoen
6 years ago
I've uploaded it!
@brotheradamfromups
4 years ago
Can anyone help me with my AI (github.com/Gigaboy-01/Winter)? The sigmoid function causes it to only output values between 0 and 1, and I need values in Kelvin.
@ashrafbeshtawi3556
4 years ago
The derivative of the sigmoid you have used is wrong! S'(x) = S(x)(1 - S(x)), not S'(x) = x(1 - x). math.stackexchange.com/questions/78575/derivative-of-sigmoid-function-sigma-x-frac11e-x
@pantepember
4 years ago
Error: ValueError: non-broadcastable output operand with shape (3,1) doesn't match the broadcast shape (3,3)
Solution: stackoverflow.com/questions/47493559/valueerror-non-broadcastable-output-operand-with-shape-3-1-doesnt-match-the
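A common cause of errors like this one (a hedged guess; the linked answer covers the specifics) is building the training outputs as a row instead of a column, so the update step broadcasts the shapes apart. The usual fix mirrors the video's use of .T:

import numpy as np

training_outputs = np.array([0, 1, 1, 0])      # shape (4,): invites broadcasting trouble
training_outputs = np.array([[0, 1, 1, 0]]).T  # shape (4, 1): matches the (4, 1) outputs
print(training_outputs.shape)                  # (4, 1)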
@hdluktv3593
4 years ago
I watched a lot of videos about machine learning because I wanted to understand how it works. None of those videos explained how a neuron and the adjustment actually work as well as yours did. Good work, now I finally understand it.
@fiveoneecho
5 years ago
Great tutorial, but I might have used a different approximation for d-sigmoid. I'm not sure where you got x(1-x) from as an approximation: it does not share a derivative with d-sigmoid, and the vertex is off in space. I'm not sure if it is a standard to use and I'm just misunderstanding (I'm watching this tutorial to learn, after all), but I did a quick Taylor polynomial approximation and got the function:
d-sigmoid ~= (2 - x^2) / 8
This won't work very well for things not centered at x = 0. It is about the same in terms of typing effort and computer processing, but a little more accurate. It is also based around x = 0, so it won't be biased towards one outcome (unless you built a weight into your function, in which case it makes a lot of sense). You can continue on to the 4th derivative in the series and add a third term, which doesn't factor as nicely but is extremely accurate (+/- 0.001) on the domain -1
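For anyone curious, a quick numerical comparison of the exact derivative against the polynomial suggested above, near x = 0 (just a check, not an endorsement of either approximation):

import numpy as np

x = np.linspace(-1, 1, 5)
exact = np.exp(-x) / (1 + np.exp(-x)) ** 2  # exact sigmoid derivative
approx = (2 - x ** 2) / 8                   # the suggested polynomial
print(np.round(exact, 3))   # [0.197 0.235 0.25  0.235 0.197]
print(np.round(approx, 3))  # [0.125 0.219 0.25  0.219 0.125]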
@shimuk8
5 years ago
I joined my university 2 months late and had absolutely no idea how to catch up on the neural network project topic I had missed, and then I saw your video!!! Thanks a lot dude!!! You saved my semester HAHAHA
@JonasBostoen
5 years ago
meaaaww hahaha nice, share it with any of your buddies if you think they need it ;-)
@shimuk8
5 years ago
@@JonasBostoen Oh yes, already did that... right now you have the blessings of many helpless students LOL
@prathameshjoshi9199
3 years ago
Please help me, I have a doubt. When calculating the slope of the cost function, if we don't know the cost function beforehand, how can we calculate its slope? I mean, if I know that my cost function looks like a sigmoid (for example), then I can use the sigmoid derivative to find the slope. But if I don't know what my cost function looks like, how can I decide which derivative formula to use to calculate the slope? Please help me, I'm stuck.
@jstdntcr5419
3 years ago
Hello everyone! I don't understand why we multiply np.random.random((3,1)) by 2 and subtract 1? Please help me ):
@k.chriscaldwell4141
5 years ago
Superb! Using the seeded weights so that you and the viewer get the same results was a brilliant touch. It helps the viewer know whether he miscoded or not. Thanks.
@Patrick-ed7hd
5 years ago
Great tutorial! I don't code in Python but in Java, and I translated the code. If anyone is interested, I uploaded it to a git repository: github.com/PatrickRic/SimplePerceptron-Polycode-Tut-
@Awesomer5696
4 years ago
What a fantastic way of explaining it. Whilst this is obviously not immediately useful, it's a sort of toy approach that gives you a building block to understand the greater scope.
@lifeisstr4nge
2 years ago
50 seconds in - already more clear than most """""explainers"""""
@davidsorensen4400
4 years ago
9:42 That is not the derivative; the derivative is e^(-x) / (1 + e^(-x))^2
@SoumilShah
5 years ago
Can you make a video on how to create Python code with a hidden layer, using a Python class?
@ankitds1369
5 years ago
For the output after training, you can use this, which will round off the decimals: print(np.round(outputs,1))
@MW-rb5fs
2 years ago
Great video! Thanks for creating it! I am totally new to coding neural nets. Question: after training the neural net, when I supply the inputs x1=0, x2=0 and x3=0, the perceptron output is sigmoid(0)=0.5. Can you explain what extra complexity would need to be included in the code to get a correct predicted output of 0 for a 0,0,0 input?
@skippy1130
2 years ago
I have the same question. Looking forward to hearing ideas.
@warrenkuah4314
3 years ago
Incredible! I think this is the first video that has helped me understand the formulas behind a neural network! However, I was wondering how you would implement the calculation of biases into the actual code and into the backpropagation steps and formula?
@NandoRooster
4 years ago
ValueError: shapes (4,1) and (3,1) not aligned: 1 (dim 1) != 3 (dim 0). How did you do that dot product? It is not possible
@manosbouzetos4132
4 years ago
If the input is 0,0,0 I always get a 0.5 output, can someone explain?
@aizej9896
4 years ago
Thx for the tutorial, gave the neural network my own training data and it worked great!
@HISEROD
4 years ago
There's a pretty major problem with this AI. If the input is [0, 0, 0] the output is 0.5 no matter what the weights are.
@mostrengo
4 years ago
using ReLU would fix this, wouldn't it?
@HISEROD
4 years ago
@@mostrengo That's a good point. For those who don't know, ReLU is an alternative to the sigmoid function which returns 0 for any negative input and returns x for any input of 0 or more: _f(x) = max(0, x)._ This would make the output of [0, 0, 0] always 0, which is correct for this case's training data, but if the desired output were 1, no possible set of weights would achieve the correct output. So the problem is still present.
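The structural fix this thread is circling is a bias: a constant extra input of 1 with its own learned weight, so an all-zero example no longer forces the pre-activation to 0 (and the sigmoid output to 0.5). A hedged sketch, not the video's code; to actually learn an answer for [0, 0, 0] you also add that case to the training data:

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

training_inputs = np.array([[0, 0, 0], [0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
training_outputs = np.array([[0, 0, 1, 1, 0]]).T

# append a constant 1 to every example; its weight acts as the bias
biased_inputs = np.hstack([training_inputs, np.ones((5, 1))])

np.random.seed(1)
weights = 2 * np.random.random((4, 1)) - 1  # 3 input weights plus 1 bias weight

for _ in range(20000):
    outputs = sigmoid(np.dot(biased_inputs, weights))
    adjustments = (training_outputs - outputs) * outputs * (1 - outputs)
    weights += np.dot(biased_inputs.T, adjustments)

# [0, 0, 0] now maps to sigmoid(bias weight), which training pushes toward 0
print(sigmoid(np.dot(np.array([0, 0, 0, 1]), weights)))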
@peregudovoleg
4 years ago
It took some time to understand your derivative choice, but it is all good now. The question: any idea why, if we use the "real" derivative instead of pre-computing sigmoid(x) and inserting it into x*(1-x), we could get better results sooner, with fewer loops? If you simply change sigmoid_derivative to the "real" one, wouldn't you "sigmoid" the results twice? First you "sigmoid" np.dot(input_layer, weights), then you "sigmoid" the result of the previous "sigmoid". I can't say it is better, but the results are somehow better.
@pluronic123
4 years ago
I have noticed the same. It's really interesting. If you sigmoid the sigmoid, the result is always between about 0.2 and 0.25 (almost at the peak of the derivative curve). So I think if you "double sigmoid", the correction is always aggressive; you are always at the peak. I am not sure if I assume correctly.
@siddharthsinghchauhan8664
5 years ago
The derivative of φ(x) is φ(x)*(1-φ(x)) and not x(1-x), at 10:00
@frostlessful
5 years ago
I tried using a test input of [0, 0, 0] and the result is 0.5? And it does not seem to improve with more iterations
@ethanwintill9865
3 years ago
It's because zeros dotted with any synaptic weights result in a zero, which always becomes 0.5 once it's plugged into the sigmoid function. If you adjust the sigmoid function to be 1 / (1 + e^(-x+8)), it should take care of it
@sermuns
4 years ago
9:37 Why did you write x * (1 - x) as the sigmoid derivative? Isn't it sigmoid(x)(1 - sigmoid(x))?
@adamsamulak780
4 years ago
That is why he needed to train it so many times.
@adamsever7084
5 years ago
should bump up the font a little
@abdechafineji8782
5 years ago
The best explanation you will find of creating a neural network from scratch.
@needywallaby2030
5 years ago
1:39 >"so we need a little math"
@Alashure6
4 years ago
to anyone that actually took calc 1, that is a tiny bit of math
@Adriano70911
4 years ago
@@Alashure6 no, it's not
@alexrawson8492
4 years ago
I'm in Geometry 1, this is black magic math. If anyone can explain it that would be great.
@baylorwarrick7826
4 years ago
@@alexrawson8492 It is a summation. The summation goes from i = 1 (on the bottom) to i = 3, so i will be 1, 2, and then 3. The i is plugged into the expression each iteration: the first iteration gives x1 * w1, the second x2 * w2, and the third x3 * w3. The summation adds all these up, which is why it expands to what he put below it. It looks fancy but it's not too bad. For example, the summation from i = 1 to i = 5 of i (the expression inside is just i) is just 1 + 2 + 3 + 4 + 5, because i takes on those values each iteration and they are all added up.
@youngtrader6968
4 years ago
Actually it is pretty simple, just watch a video on how to read math notation on YouTube and you will understand. It took me 6 min to learn, don't worry
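The summation expanded a couple of replies up is exactly what the video's np.dot call computes. A tiny illustration with made-up numbers:

import numpy as np

x = np.array([1, 0, 1])          # inputs x1, x2, x3
w = np.array([0.5, -0.2, 0.9])   # weights w1, w2, w3

manual = sum(x[i] * w[i] for i in range(3))  # x1*w1 + x2*w2 + x3*w3
print(manual, np.dot(x, w))                  # both print 1.4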
@Vic378
1 year ago
This is so cool! One question though: shouldn't the sigmoid derivative be sigmoid(x)•(1-sigmoid(x)) instead of x•(1-x)? I tried following along with a calculator and that's what I get by taking the derivative of the sigmoid function. Thanks in advance!!
@KomputasiStatistik
3 years ago
The best neural network hands-on
@ycart_tech6726
4 years ago
Anyway, I've been wondering: what if you built an array of interacting perceptrons (modular, I am guessing, so that each individual perceptron can be rearranged in almost infinite ways)? Only, instead of plugging in arbitrary binary values for I/O and a generic RNG, like you did, you plug it into certain SELECT aspects (depending on what the aim of one's research would be) of, say, the Google algorithm, in real time, while hundreds of millions over the entire world use it for their own personal purposes?
@mwont
5 years ago
Just a note: sigmoid_derivative is based on the exact analytical formula for the sigmoid derivative (it is applied to sigmoid's output, not to x).
@sonic597s
4 years ago
thanks so much for this, I was really confused during that bit!
@pluronic123
4 years ago
@@sonic597s I don't get it. He still uses x(1-x), which has nothing to do with the sigmoid; it is just an approximation to the shape of the curve (signs are opposite)
@sonic597s
4 years ago
@@pluronic123 A derivative finds the slope of a function at a given point. The sigmoid derivative being the formula x(1-x) (where x is the sigmoid output) means that if you plug the sigmoid's output at some value z in as x, you get the slope of the sigmoid at z
@pluronic123
4 years ago
@@sonic597s thanks, precious internet dude
@povmaster235
3 years ago
At last... the video that doesn't just explain stuff, but actually tells you what to do too!
@MsRAJDIP
5 years ago
So far the best, simplest, and most practical tutorial I have found. You cleared up all my doubts, and a little background in Python helped me a lot.
@EricCanton
4 years ago
Just a note on sigmoid_derivative, for myself as much as anyone else. Since you're inputting the output of sigmoid into sigmoid_derivative, he's using the fact that sigmoid satisfies the differential equation y'(x) = y * (1 - y), so we can compute the derivative sigmoid'(x) by plugging sigmoid(x) into [y --> y(1-y)]. That's very clever!
@victoryfirst06
1 year ago
But you should run the outputs through the sigmoid derivative, right? And the outputs are already sigmoided by default, so wouldn't you be applying the sigmoid twice?
@Oleg-kk6xv
4 years ago
Thank you very much. I constantly see these videos about the theory of machine learning and AI, but I had never found an in-depth, from-scratch tutorial with no libraries that explains everything. Thank you!
@joesminis
5 years ago
At the 10 minute mark and I just wanted to say that your explanations are clicking left and right with me thank you!!!!
@notyourtypicalanime7475
3 years ago
This is what I was looking for: how to train your datasets by adjusting weights. Thank you so much!
@sreedeepsreedeep2260
5 years ago
Best tutorial on neural networks I have seen till now... thanks buddy 😘
@rohan1002
4 years ago
I think φ'(x) = φ(x)*(1-φ(x)), not x(1-x)
@Ryan_Parmelee
5 years ago
I know Python but I don't know what the F**K is going on...
@harjitsingh7308
5 years ago
Haha, the math isn't too bad for this. Just brush up on some calculus and linear algebra, then this will make more sense.
@hhhgggds
5 years ago
Of course you don't, because "knowing" Python doesn't mean anything.
@BenchAnimes
4 years ago
Help me sir, I'm new to programming and don't know what this means:
Traceback (most recent call last):
File "C:/Users/KRV-PC/PycharmProjects/TESTING/NN.py", line 22, in
outputs = sigmoid(np.dot(input_layer, synaptic_weights))
File "", line 5, in dot
ValueError: shapes (4,1) and (3,1) not aligned: 1 (dim 1) != 3 (dim 0)
@pluronic123
4 years ago
Maybe you have forgotten to add the T for matrix transpose... check your code
@povmaster235
3 years ago
It says: ModuleNotFoundError: No module named 'numpy'
@kelpdock8913
3 years ago
same
@backflipbro790
4 years ago
I don't really understand anything in this video. When I try to change anything, I get an error. What should I do?
@btheether1635
5 years ago
Cuzz I need you to put my status quo to be at least to be average, but seems how I've learned this much can I get a boost
@alpeshpatil1621
5 years ago
I am getting an error: "ValueError: non-broadcastable output operand with shape (3,1) doesn't match the broadcast shape (3,4)". What should I do?
@alan5506
5 years ago
learn matrix math. then your error will be blatant
@alpeshpatil1621
5 years ago
@@alan5506 Thank you, although I just needed a single alteration in my code. I got it, though.
@yerhing6406
5 years ago
@@alpeshpatil1621 I got that error, how did you fix it?
@pluronic123
4 years ago
@@yerhing6406 Maybe you have forgotten to add the T for transpose
@ichamatitales9459
5 years ago
What platform did you use for coding?
@JonasBostoen
5 years ago
Atom Editor by Github
@inigo8740
5 years ago
Instead of "actual output", I find it more fitting to say "expected output".
@portalsrule1239
3 years ago
I assume this method (no hidden layer and no biases) does not work for more complicated applications like fitting the MNIST dataset? No matter what I do, I can't get it to converge
@th69100
5 years ago
There is an error when I import numpy. Where do you get the numpy package? I have it on PyCharm 2019.1.3 64-bit.
@seddaouiyassine7814
5 years ago
pip install numpy on the command line :)
@th69100
5 years ago
@@seddaouiyassine7814 Did that already, along with scipy, but the error is still there.
@samayvarjangbhay8987
5 years ago
finally a properly structured tutorial
@0siiris
5 years ago
Nice profile pic 😂
@BrandoAli
3 years ago
Great video! I have only one question: why do we multiply by the input when computing the adjustments? Don't we also want to adjust the weights when we have 0 as input?
@NuevoVR
6 years ago
holy shit this is way above me
@JonasBostoen
6 years ago
insecto you should check out 3Blue1Brown's video series on neural networks, I imagine he does a better job than me at explaining it
@aakarshan01
5 years ago
I changed the sigmoid derivative function to this and got better results in fewer tries, and this is the actual derivative of the sigmoid function:
def sigmoid_derivative(x): return np.exp(-x) / pow(1 + np.exp(-x), 2)
@JonasBostoen
5 years ago
This is indeed a better derivative, good job! For the purposes of simplicity, though, I have kept the less complicated function, since it's almost the same shape. Yours is better though.
@aakarshan01
5 years ago
@@JonasBostoen thanks but I didn't understand the use of the 2 * random.random(3,1) in the beginning of the class initialisation
@paulferner2826
5 years ago
Why, when I run the same code, do I get to 19 iterations before I start dividing by zero? It fills my outputs and weights with NaN instead of numbers. It looks like that's what happens when you divide by zero, but I didn't think I would actually hit it that quickly. In the video, he iterates thousands of times without getting close. I am running Python on my iPad using Pythonista; I wonder if there is some background stuff happening that causes this.
@JonasBostoen
5 years ago
If your code is exactly the same as mine and you've made no syntax mistakes, it must be Pythonista
@walterjorgemazzoni
5 years ago
Thanks a lot, great tutorial! Can you explain why, even though [1,0,0] predicts correctly, [0,1,0] or [0,0,0] predicts around 0.5? I'm using the synaptic weights generated after 100K iterations of training.
@unixgaming6880
5 years ago
Probably based on weight initialization and complexity? Remember, he said that weight initialization of a neural network can essentially take you down a rabbit hole.
@1iMaGiiK1
4 years ago
I ran it 2 million times in the for loop. Why do the outputs after training start going above one? Here are my outputs after 2 million loops of training:
[[6.71693477e-04]
[9.99451618e-01]
[9.99552373e-01]
[5.48295402e-04]]
@shahidlatif5871
3 years ago
That is not > 1. All those values are less than one; e-04 and e-01 are scientific notation: 0.0006717, 0.9994516, 0.9995524, 0.0005483
@AllenReviews
4 years ago
TypeError: list indices must be integers or slices, not tuple
@ransikerandeni9086
4 years ago
I had the exact same error, and later I found out that I had missed a comma where the arrays are
@fernandojackson7207
4 years ago
Nice job. Would it be possible for you to use lighter colors for contrast? It is difficult to tell apart an * and =. Or, if possible, use a larger font? Thanks for the cool video.
@timothec.8216
5 years ago
Thanks a lot. This is much more comprehensible than everything else I have watched and read
@kooltyme
4 years ago
Um, aren't you basically just generating a formula such that x*w1 + y*w2 + z*w3 = x? So wouldn't it obviously just eventually make w1 = 1, w2 = 0, w3 = 0?
@this-is-bioman
5 years ago
Holy cow! I love this video!
@edwardfeldman3533
4 years ago
Wait, are you going to code it in Python or are you going to code it in Scratch?
@ibnbattuta1304
5 years ago
Dumb question, but if the weights are random, why are the results always the same instead of varying a little? I know changing the number of iterations changes the results, but they also stay the same for a given iteration count. Anyway, thanks for the videos. I've done both and the scripts are working as shown.
@JonasBostoen
5 years ago
I've seeded the random numbers with np.random.seed(); this is so that you guys get the same results as I do. Remove that line and your results will vary.
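A quick illustration of what the seed does: it fixes the generator's starting state, so the "random" initial weights come out identical on every run:

import numpy as np

np.random.seed(1)
first = 2 * np.random.random((3, 1)) - 1
np.random.seed(1)
second = 2 * np.random.random((3, 1)) - 1
print(np.array_equal(first, second))  # True: same seed, same "random" weights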
@learningverse123
10 months ago
Thank you for the very helpful video! I have a question: for the sigmoid_derivative, should it be e^(-x)/(1+e^(-x))^2, which actually is the derivative of the sigmoid function? I wonder why we chose x(1-x) for sigmoid_derivative. Any clarification will help. Thank you!
@alexjando4880
7 months ago
I'm thinking the same thing. Have you found out why this is?
@chessprogramming591
3 years ago
Man, this was so to the point! Thanks for your efforts. Best NN basics tutorial I've found so far! Very, very useful!
@PatricioContrerasCea
5 years ago
OK. Once you feel that you understand how a machine learns, try this from scikit-learn, a machine learning library in Python that implements a decision tree for solving this kind of problem:
from sklearn import tree
X = [[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]]
Y = ["0", "1", "1", "0"]
# classifier
clf = tree.DecisionTreeClassifier()
clf = clf.fit(X, Y)
prediction = clf.predict([[1, 0, 0]])
print(prediction)
@KennTollens
4 years ago
I tried to change the rules so that the last number in the input determined the output, then ran it a million times, and the network never learned it. I used the same code and only changed the training input/output:
training_input = np.array([[0,0,0],[1,1,1],[0,0,1],[0,1,0]])
training_out = np.array([[0,1,1,0]]).T
New input to run through the trained network: [[1, 1, 0], [0, 0, 0], [1, 0, 0], [0, 1, 1]]
Answer from the trained network:
[[0.29213096]
[0.5]
[0.99762949]
[0.77790092]]
It thinks the 3rd input is the most likely to have a 1 output. Why doesn't it learn that the last number determines the output?
@christernilsson1
6 months ago
The sigmoid derivative should be np.exp(-x) / (1 + np.exp(-x)) ** 2.
[0 1 0] converges to 0.4. It should be zero or one; maybe 0.4 indicates "I don't know". Maybe [0 1 0] should be part of the training set.
Comments: 710