Watch my latest "YOLO Object Detection Series" videos for an in-depth understanding of YOLO architectures: kzitem.info/door/PL1u-h-YIOL0sZJsku-vq7cUGbqDEeDK0a
@taj-ulislam6902
6 months ago
Exceptional in every respect. Very detailed explanation in just about 30 minutes. My deepest appreciation.
@AkashRaj-qu2bb
1 year ago
Sir, why does it give wrong output when I pass it my own data (created using Paint)? When testing with the test data it works fine.
@MLForNerds
1 year ago
Sorry for the late reply. Can you check the format of the data you are passing? You can also paste the actual error message for better clarity. Thanks.
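A minimal preprocessing sketch for a digit drawn in Paint, assuming Pillow and NumPy are available; the file name digit.png, the color inversion, and the flattened 784-value input in [0, 1] are assumptions about how the network expects its input, not the video's exact code:

```python
from PIL import Image
import numpy as np

img = Image.open("digit.png").convert("L")   # load and convert to grayscale
img = img.resize((28, 28))                   # MNIST images are 28x28
x = np.array(img, dtype=np.float64)
x = 255.0 - x                                # Paint is black-on-white; MNIST is white-on-black
x = (x / 255.0).flatten()                    # scale to [0, 1] and flatten to a 784-vector
# x can now be fed to the network's forward pass (method name depends on the implementation)
```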
@pouriaravesh5089
1 year ago
UFuncTypeError: ufunc 'subtract' did not contain a loop with signature matching types (dtype('
@MLForNerds
1 year ago
On which line are you facing this issue? Can you paste the full error message?
@pouriaravesh5089
1 year ago
@@MLForNerds
in backward_pass(self, y_train, output)
     40     change_w = {}
     41
---> 42     error = 2 * (output - y_train) / output.shape[0] * self.softmax(params['Z1'], derivative=True)
     43     change_w['W1'] = np.outer(error, params['A0'])
     44
UFuncTypeError: ufunc 'subtract' did not contain a loop with signature matching types (dtype('
@pouriaravesh5089
1 year ago
@@MLForNerds Can you please share your email, Telegram ID, or WhatsApp number so we can get in touch?
@ohm7163
2 months ago
@@pouriaravesh5089
def backward_pass(self, y_train, output):
    change_w = {}
    y_train = y_train.astype(float)
    output = output.astype(float)
    error = 2 * (output - y_train) / output.shape[0] * self.softmax(params['Z1'], derivative=True)
    change_w['W1'] = np.outer(error, params['A0'])

Since you are trying to subtract different data types (a float array and a string array), it raises that type error; casting both to float fixes it.
@HarshRaj-e6z4n
6 months ago
Hi, I was trying to implement a regression task of predicting car prices from car features using a deep neural network, modifying your implementation. I have normalized the data properly and left the price column as the ground truth. I am able to start the training successfully, but I am getting exploding gradients. How can I avoid this? I have used ReLU activation, and the numbers of neurons in the layers are [10, 8, 4, 1], as there are 10 features.
@MLForNerds
6 months ago
Reduce the learning rate and check. Also use gradient clipping by setting maximum limits on the gradients.
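A minimal sketch of elementwise gradient clipping, assuming the gradients are stored in a dictionary like the change_w used elsewhere in this thread; the threshold of 1.0 and the method names in the usage comments are illustrative assumptions:

```python
import numpy as np

def clip_gradients(change_w, max_value=1.0):
    # Limit every gradient entry to [-max_value, max_value] to stop exploding updates.
    return {name: np.clip(grad, -max_value, max_value) for name, grad in change_w.items()}

# Illustrative use inside a training step (method names are assumptions):
# change_w = self.backward_pass(y_train, output)
# change_w = clip_gradients(change_w, max_value=1.0)
# self.update_parameters(change_w)
```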
@Harsh-s2d
6 months ago
Hi, I am able to train the model successfully now. My best-performing architecture has layers [10, 16, 8, 4, 1], a learning rate of 0.01, and an accuracy tolerance of 0.02. The training accuracy is 79% and the test accuracy is 51%. The problem is that the target label (price) has a wide range, from 29,999 to 10,000,000; it needs such a high tolerance and the test accuracy is still low. I wanted to know what else I could do to improve the training and test accuracy. The learning rate is well set, and I have observed that the accuracies tank at around 100 epochs. The dataset has 10 features, some of which I had to label-encode. The correlations among features are mostly around 0, and some anomalous data is also present.
@Harsh-s2d
6 months ago
By the way, this tutorial is the best implementation of an MLP possible. You must be at a professional level with artificial neural network concepts. Much appreciated!
@21beit3oo10omchaudhari
3 months ago
@@Harsh-s2d Try normalizing the dataset (the wide-range price target as well) and then train again.
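A minimal sketch of one way to tame the wide price range: log-scale the target before training and invert the transform on the predictions. The example values are illustrative, not from the actual dataset:

```python
import numpy as np

# Illustrative prices spanning the range mentioned above (29,999 .. 10,000,000).
y = np.array([29_999.0, 250_000.0, 10_000_000.0])

y_scaled = np.log1p(y)             # compressed target to train the network on
# ... train on y_scaled instead of y ...

y_pred_scaled = y_scaled           # placeholder for the network's predictions
y_pred = np.expm1(y_pred_scaled)   # map predictions back to price units
```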
@Jawahar534
1 year ago
How do I add biases to this?
@MLForNerds
1 year ago
Hi Jawahar, you can add the biases as a vector whose length equals the number of neurons in that layer; each neuron usually has one bias.
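A minimal sketch of per-neuron biases in a two-layer network; the layer sizes, the params dictionary, and the sigmoid activation mirror the style of this thread but are assumptions, not the video's exact code:

```python
import numpy as np

params = {
    'W1': np.random.randn(128, 784) * 0.01,
    'b1': np.zeros(128),                   # one bias per hidden neuron
    'W2': np.random.randn(10, 128) * 0.01,
    'b2': np.zeros(10),                    # one bias per output neuron
}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    z1 = params['W1'] @ x + params['b1']   # bias is added after the weighted sum
    a1 = sigmoid(z1)
    z2 = params['W2'] @ a1 + params['b2']
    return z2                              # feed into softmax for class scores

logits = forward(np.random.rand(784))
# In backprop, each bias gradient is simply that layer's error term (no outer product needed).
```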
@Harsh-s2d
6 months ago
Are biases really needed? I mean, in which cases are they helpful?
@mrjamp9549
3 months ago
@@Harsh-s2d Biases are very useful for training the network, since they let each neuron shift its output independently of the inputs. If you don't believe me, you should learn more about neural networks and their general structure.
@dragonfly1487
11 months ago
The Indian accent makes me more confused, but the content is good :((
@taj-ulislam6902
6 months ago
The Indian accent is not a problem - better than Australian, British, or Irish! I have not seen another video that develops a complete basic neural network solution in 30 minutes. Brilliantly done, with very lucid commentary.
@oudarjatanmoy384
1 year ago
imgf.read(16), labelf.read(8): why have you done this? Please explain.
@ohm7163
2 months ago
They are used to skip the header information of the files, i.e., the metadata at the start of those files.
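For context, the MNIST IDX image file starts with a 16-byte header (magic number, image count, rows, cols as four 4-byte integers) and the label file with an 8-byte header (magic number, label count), so read(16) and read(8) just move past that metadata. A minimal sketch, assuming the standard MNIST file names:

```python
import numpy as np

with open("train-images-idx3-ubyte", "rb") as imgf:
    imgf.read(16)                                   # skip magic, count, rows, cols
    images = np.frombuffer(imgf.read(), dtype=np.uint8).reshape(-1, 28 * 28)

with open("train-labels-idx1-ubyte", "rb") as labelf:
    labelf.read(8)                                  # skip magic, count
    labels = np.frombuffer(labelf.read(), dtype=np.uint8)
```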
@TihanyiPéter-b3b
3 months ago
Very cool project! I have only one question: based on the video, you achieve 71.6% accuracy after 10 epochs, but I can't go above 52-54%. However, you also only have 56.94% accuracy in your GitHub project. What is the reason for this? I'm very new to DL/ML, that's why I'm asking. I even went up to 20 epochs, but the end result was almost exactly the same as with 10 epochs, even though the accuracy increased by half per epoch... Thank you very much for any suggestions!
@ohm7163
2 months ago
You can try increasing the learning rate, e.g. to 0.1 or 0.5; it will gradually increase the accuracy, but note that an extreme learning rate can also make the optimizer skip over minima. Also, don't run the entire program every time: rerunning everything resets all the values, so in a Jupyter notebook run only the training part again. The more times you run the training part, the more the loss decreases and the accuracy increases. If the accuracy still stays that low, there might be a logic error.
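A toy sketch of why re-running only the training step keeps improving the same parameters, while re-running the whole program would reset them; this uses a throwaway linear model, not the video's network:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=2)                 # initialised once, like running the whole notebook

x = rng.normal(size=(100, 2))
y = x @ np.array([3.0, -2.0])          # synthetic regression target

def train(w, x, y, lr=0.1, epochs=50):
    # Plain gradient descent on mean squared error.
    for _ in range(epochs):
        grad = 2.0 * x.T @ (x @ w - y) / len(y)
        w = w - lr * grad
    return w

w = train(w, x, y)   # first run of the "training cell"
w = train(w, x, y)   # re-running only this cell keeps improving the same weights
```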
@vishwa_randunu
2 years ago
Amazing, sir... that project helped me continue my projects... thank you!
@MLForNerds
2 years ago
Thanks @Vishwa, glad it helped you!
@parzvalishere
1 year ago
From the output layer I am getting a 60000 x 10 matrix, but I can't subtract the correct values from it to get the difference. Should I take the highest value of each row, or what should I do?
Comments: 27