Sunday 7 January 2018

Is Using Dropout in Training a Deep Neural Network a Waste of GPU Computation?




Let's break this statement down first.


Deep Neural Network: It's a simulated network of nodes where each layer of the network performs some operation, mostly a matrix multiplication or something similar. It's called "deep" when it consists of many layers (hidden layers). Such a network is trained with back propagation: we move forward and backward through the layers while updating the weights of those nodes.
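To make that concrete, here is a minimal NumPy sketch of a two-layer network: the forward pass is just matrix multiplications, and the backward pass propagates the error and updates the weights. The data and sizes here are made up purely for illustration.

```python
import numpy as np

# Toy data: 8 samples with 4 features each, random regression targets.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))
y = rng.normal(size=(8, 1))

W1 = rng.normal(size=(4, 16)) * 0.1   # hidden-layer weights
W2 = rng.normal(size=(16, 1)) * 0.1   # output-layer weights
lr = 0.01

for step in range(100):
    # Forward pass: matrix multiplication layer by layer.
    h = np.maximum(0, X @ W1)         # hidden layer with ReLU activation
    pred = h @ W2                     # output layer
    loss = ((pred - y) ** 2).mean()

    # Backward pass: propagate the error back and update the weights.
    grad_pred = 2 * (pred - y) / len(y)
    grad_W2 = h.T @ grad_pred
    grad_h = grad_pred @ W2.T
    grad_h[h <= 0] = 0                # gradient of ReLU
    grad_W1 = X.T @ grad_h

    W1 -= lr * grad_W1
    W2 -= lr * grad_W2
```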







[Image: a simple neural network]



Dropout: Dropout is a mechanism used in the training phase of a deep neural network where we intentionally and randomly throw away a fraction, generally 50%, of the total activation values.
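A minimal sketch of that idea, assuming the common "inverted dropout" formulation (the function name and values here are illustrative, not any particular library's implementation):

```python
import numpy as np

def dropout(activations, p=0.5, training=True):
    """Inverted dropout: randomly zero a fraction p of the activations,
    scaling the survivors so the expected value stays the same."""
    if not training:
        return activations
    mask = np.random.rand(*activations.shape) >= p  # keep with prob 1-p
    return activations * mask / (1.0 - p)

h = np.random.rand(1, 10)   # pretend these are 10 activation values
print(dropout(h))           # roughly half of them come out as zero
```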

Neural Network: Wikipedia definition for you: "Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn (progressively improve performance on) tasks by considering examples, generally without task-specific programming."



Activation: Activation functions are applied during neural network training to the result of each operation, to determine from that calculation whether the value is valid enough to be activated as the output for the next operation in the model. Think of it as a threshold: if the value of a certain function is above the threshold, declare it activated; if it's less than the threshold, then say it's not.
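For illustration, here is a small sketch of that threshold idea next to ReLU, the activation most deep networks actually use (the example values and function names are just made up):

```python
import numpy as np

def step(x, threshold=0.0):
    # The thresholding idea from above: above the threshold -> activated (1),
    # otherwise not activated (0).
    return (x > threshold).astype(float)

def relu(x):
    # The activation commonly used in deep networks: negative values are
    # suppressed, positive values pass through unchanged.
    return np.maximum(0, x)

x = np.array([-2.0, -0.5, 0.3, 1.7])
print(step(x))   # [0. 0. 1. 1.]
print(relu(x))   # [0.  0.  0.3 1.7]
```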


Now let's talk about our main topic of the day, and that is:

DOES USING DROPOUT IN NEURAL NETWORK TRAINING WASTE GPU COMPUTATION?
The ANSWER to this question is both YES and NO,
and here is my explanation of why I said what I said in the last line.

YES, it is wasting the GPU resources that we have, because what we are basically doing is throwing out 50% of the good, probably valid activation values that we compute while training, so our GPU has to work roughly twice as hard to get the model to the desired accuracy level we want it to reach.
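A rough PyTorch illustration of that argument: the GPU computes a full matrix of activations, and dropout immediately zeroes about half of them. The tensor sizes here are arbitrary.

```python
import torch

# Activations we paid the GPU to compute (values in [0, 1), so none are
# zero before dropout is applied).
h = torch.rand(1024, 4096)

drop = torch.nn.Dropout(p=0.5)
drop.train()                 # dropout is only active in training mode
out = drop(h)

# Roughly half of the freshly computed values are thrown away.
wasted = (out == 0).float().mean().item()
print(f"fraction of computed activations discarded: {wasted:.2f}")
```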

NO - yes, I said no, because we use dropout in our models, for example the VGG16 model for image classification, to prevent over-fitting of our trained model for the desired application. It is very necessary that we use dropout for this purpose, because in the deep learning world right now there is hardly any other solution to the over-fitting problem that is as simple and practical.
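For example, here is a sketch of a VGG16-style classifier head in PyTorch, with dropout in the two places it conventionally appears. Treat this as an illustration of the pattern, not the full model.

```python
import torch.nn as nn

# Classifier head in the style of VGG16: the two Dropout layers are
# exactly where the "wasted" computation buys us protection against
# over-fitting. Layer sizes follow the usual VGG16 configuration.
classifier = nn.Sequential(
    nn.Linear(512 * 7 * 7, 4096),
    nn.ReLU(inplace=True),
    nn.Dropout(p=0.5),           # drop half the activations
    nn.Linear(4096, 4096),
    nn.ReLU(inplace=True),
    nn.Dropout(p=0.5),           # and again before the final layer
    nn.Linear(4096, 1000),       # 1000 ImageNet classes
)
```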

Now, if you don't know what over-fitting and under-fitting mean, here is an explanation in a few lines for you.

This was not always such a prominent concern: over-fitting and under-fitting are not new concepts in machine learning, but they have become a front-and-center practical problem in deep learning only in roughly the last 4 years.

Under-fitting is when our model has not been trained to the point where it can do what it was trained to do, for example classifying between a ball and a chair; in other words, its accuracy is very low.

Over-fitting is when our trained model is over-trained and only classifies the exact kind and size of ball whose data it was trained on, which means it does not work for the general classification of other kinds of balls and chairs, which was the original motive behind training the model.
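One common way to tell the two apart is to compare training accuracy against validation accuracy. Here is a hypothetical sketch; the thresholds and numbers are made up for illustration.

```python
def diagnose(train_acc, val_acc, gap=0.15, floor=0.60):
    # Heuristic: low training accuracy means the model never learned its
    # task; a large train/validation gap means it memorized the training
    # data instead of generalizing.
    if train_acc < floor:
        return "under-fitting: the model has not even learned the training data"
    if train_acc - val_acc > gap:
        return "over-fitting: great on training data, poor on new data"
    return "looks reasonably fit"

print(diagnose(train_acc=0.55, val_acc=0.53))  # under-fitting
print(diagnose(train_acc=0.99, val_acc=0.70))  # over-fitting
print(diagnose(train_acc=0.92, val_acc=0.89))  # reasonable
```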



