Overfitting occurs when a neural network memorises the training data instead of learning the relationship underlying it. More precisely, an overfitted model maps training inputs to outputs point by point and ends up learning from noise. As a result, the network performs well on the training dataset (high training accuracy) but fails to generalise (low accuracy on new, unseen data).
During training, you can easily detect an overfitting problem by comparing the training loss with the validation loss. The larger the gap between the two values, the more severe the overfitting.
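As a concrete illustration, the per-epoch gap between the two losses can be computed directly from a training history. The dictionary layout below (keys `loss` and `val_loss`) mirrors what libraries such as Keras return, but the helper name and data are purely illustrative:

```python
def overfitting_gap(history):
    """Return the per-epoch difference between validation loss and
    training loss. A gap that grows over the epochs is a typical
    sign of overfitting.

    history: dict with equal-length lists "loss" and "val_loss",
    as found in e.g. a Keras History.history object.
    """
    return [val - train
            for train, val in zip(history["loss"], history["val_loss"])]


# Illustrative history: training loss keeps falling while
# validation loss starts rising -- the gap widens.
history = {"loss": [0.50, 0.30, 0.20],
           "val_loss": [0.55, 0.50, 0.60]}
gaps = overfitting_gap(history)
```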
Generally, a neural network overfits when the available dataset is too small for the complexity of the model. One way to reduce overfitting is therefore to add more data. Depending on the logic of the relationships in your data, you may also remove or combine some features and thereby lower the number of inputs to your model.
However, removing features is not always the best solution when dealing with a scientific dataset, especially if every attribute is equally important. The following options are then available to make your model less prone to overfitting:
Applying regularisation:
By selecting an appropriate regularisation type and increasing the regularisation rate, you can force your neural network to stay simple and thus avoid point-to-point mapping.
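For example, L2 regularisation adds a penalty proportional to the squared weights to the loss, so large weights become expensive and the model is pushed toward simpler fits. A minimal sketch, with illustrative names and data:

```python
import numpy as np

def l2_regularised_loss(w, X, y, lam):
    """Mean-squared-error loss for a linear model plus an L2
    (weight-decay) penalty. Larger lam means stronger pressure
    toward small weights, i.e. a simpler model."""
    residual = X @ w - y
    mse = np.mean(residual ** 2)
    penalty = lam * np.sum(w ** 2)  # the regularisation term
    return mse + penalty


# Tiny illustrative example: identity inputs, zero targets.
w = np.array([1.0, 2.0])
X = np.eye(2)
y = np.zeros(2)
loss = l2_regularised_loss(w, X, y, lam=0.1)
```

The same penalty with `lam = 0` recovers the plain MSE, which is how the regularisation rate lets you dial the effect up or down.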
Adding a dropout layer:
Increasing the dropout rate can improve the model's performance on unseen data. By randomly switching off a fraction of the neurons during training, the model is forced to learn more independent representations of the same data and therefore tends to generalise better.
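The mechanism can be sketched in a few lines. This is the common "inverted dropout" formulation: surviving activations are scaled up so the expected output stays the same, and at inference time the layer is a no-op. The function name and shapes are illustrative:

```python
import numpy as np

def dropout(activations, rate, rng, training=True):
    """Inverted dropout: during training, zero each activation with
    probability `rate` and rescale the survivors by 1/(1-rate) so the
    expected value is unchanged. At inference time, pass through."""
    if not training or rate == 0.0:
        return activations
    keep = 1.0 - rate
    mask = rng.random(activations.shape) < keep
    return activations * mask / keep


rng = np.random.default_rng(0)
a = np.ones(1000)
out = dropout(a, rate=0.5, rng=rng)
```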
In any case, note that while regularisation and dropout help prevent overfitting, they can also slow the learning process and reduce prediction accuracy if applied too aggressively.
Stopping the training early:
Using the validation loss history, you can stop the training when the loss reaches its minimum, that is, just before it starts to rise again.
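In practice this is usually automated with a patience rule: training stops once the validation loss has failed to improve for a given number of epochs, and the best epoch is kept. A minimal sketch of that rule, with an illustrative loss history:

```python
def best_stop_epoch(val_losses, patience=3):
    """Scan a validation-loss history and return the epoch with the
    lowest loss, giving up once `patience` epochs pass without a
    new minimum (the early-stopping rule)."""
    best_loss = float("inf")
    best_epoch = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            break  # no improvement for `patience` epochs: stop
    return best_epoch


# Illustrative history: the minimum is at epoch 2, after which
# the validation loss climbs back up.
history = [0.90, 0.70, 0.60, 0.65, 0.70, 0.80, 0.90]
stop_at = best_stop_epoch(history, patience=2)
```

Frameworks such as Keras ship an equivalent callback (`EarlyStopping`), so in real code you would typically configure that rather than write the loop yourself.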