Improving the Generalization of Deep Neural Networks Through Regularization Techniques

EasyChair Preprint 15810
12 pages • Date: February 11, 2025

Abstract

Deep neural networks (DNNs) have demonstrated impressive performance across various domains, from computer vision to natural language processing. However, they are prone to overfitting, especially when the size of the training data is limited. Regularization techniques play a crucial role in improving the generalization ability of DNNs. In this paper, we explore various regularization methods, including L2 regularization, dropout, and batch normalization, to mitigate overfitting and improve model performance. We provide a mathematical analysis of each technique and evaluate their effectiveness on benchmark datasets such as CIFAR-10 and MNIST. Our results show that combining multiple regularization techniques significantly enhances the model's ability to generalize, achieving better performance on unseen data while maintaining computational efficiency.

Keyphrases: Algorithms, DNN, NLP, deep learning
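The abstract names three regularizers (L2 regularization, dropout, batch normalization) and reports that combining them works best. As a rough illustration of how such a combination is typically wired together, here is a minimal sketch in PyTorch; the paper does not state its framework or architecture, so the framework choice, the `RegularizedCNN` class, and all hyperparameter values below are assumptions for illustration only, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a small CNN combining the three regularizers the
# abstract discusses. Batch normalization and dropout live inside the
# network; the L2 penalty is applied via the optimizer's weight decay.
class RegularizedCNN(nn.Module):
    def __init__(self, num_classes: int = 10, dropout_p: float = 0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),           # batch normalization
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(dropout_p),        # dropout before the dense layer
            # 32x32 input (e.g. CIFAR-10) -> 8x8 after two 2x2 poolings
            nn.Linear(64 * 8 * 8, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = RegularizedCNN()
# weight_decay adds an L2 penalty on the weights during optimization;
# the value 5e-4 is an illustrative assumption, not from the paper.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9,
                            weight_decay=5e-4)
```

Note that dropout and batch normalization behave differently in training and evaluation mode, so calling `model.train()` and `model.eval()` at the appropriate stages is required for the regularization to take effect as intended.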