Abstract:
With the rapid development of artificial intelligence, deep learning has been widely applied in fields such as medical diagnosis and autonomous driving. However, the black-box nature and structural complexity of deep learning models mean that their uncertainty can neither be derived from physical principles nor expressed through explicit functional formulas, which has become a critical bottleneck for deploying such models in high-reliability scenarios. Monte Carlo Dropout is a widely used method for estimating the uncertainty of deep learning models; in this study, we use it as the core tool in experiments that explore how different dropout rates affect model uncertainty and performance. Focusing on three representative architectures (ResNet18, LeNet, and ViT), we systematically analyze how their uncertainty characteristics and performance evolve as the dropout rate varies. Experiments on the MNIST dataset show that the model uncertainty captured by Monte Carlo Dropout changes significantly as the dropout rate is adjusted. In addition, this study reviews the core sources of uncertainty in deep learning models and the current state of research on uncertainty quantification methods. Combining this review with the experimental results, we summarize how model uncertainty characteristics evolve with the dropout rate. Finally, in view of the limitations of Monte Carlo Dropout, we propose directions for future work.
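The core idea behind Monte Carlo Dropout, as used throughout this study, is to keep dropout active at inference time and treat the spread of predictions across repeated stochastic forward passes as an uncertainty estimate. The following is a minimal NumPy sketch of that procedure; the toy two-layer network, its random weights, and the pass count `T` are illustrative assumptions, not the models or settings used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy weights standing in for a trained two-layer network.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def mc_dropout_forward(x, p, T=100):
    """Run T stochastic forward passes with dropout active at test time.

    Returns the predictive mean and standard deviation over the passes;
    the standard deviation serves as the Monte Carlo Dropout
    uncertainty estimate, and varying p changes its magnitude.
    """
    outs = []
    for _ in range(T):
        h = np.maximum(x @ W1, 0.0)        # ReLU hidden layer
        mask = rng.random(h.shape) >= p    # Bernoulli dropout mask
        h = h * mask / (1.0 - p)           # inverted-dropout scaling
        outs.append(h @ W2)
    outs = np.stack(outs)
    return outs.mean(axis=0), outs.std(axis=0)

x = rng.normal(size=(4,))
mean, std = mc_dropout_forward(x, p=0.5)
```

In practice the same loop is applied to a trained network (e.g. with `model.train()` in PyTorch so dropout layers stay stochastic), and the per-class standard deviation or predictive entropy is recorded as the uncertainty measure.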