
This AI Research from UCLA Radically Changes Uncertainty Quantification in Deep Neural Networks Through Cycle Consistency

Deep learning has myriad applications, including data mining, natural language processing, and inverse imaging problems such as image denoising and super-resolution. However, deep neural networks do not always produce accurate outputs and can yield unreliable results. This has led researchers to investigate methods for making these networks more trustworthy.

Research conducted at the University of California, Los Angeles, has shown that uncertainty quantification (UQ) can be incorporated into deep learning models to assess how much confidence should be placed in their predictions. UQ also helps identify unusual circumstances such as anomalous data and malicious attacks. Despite this, most deep learning models lack strong UQ capabilities that can detect shifts in the data distribution at test time.

To address this challenge, the UCLA researchers designed a new UQ method based on cycle consistency. The technique improves the reliability of deep neural networks in inverse imaging problems by quantitatively estimating the uncertainty of the network's outputs and automatically detecting distribution shifts and corruption in unknown input data. The method executes forward-backward cycles between input and output data, pairing a physical forward model, a computational representation of the underlying imaging process, with the trained neural network, and estimates the accrued uncertainty from the resulting cycle-consistency errors.
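The core computation can be illustrated with a short sketch. The Python example below assumes `network` is the trained inverse-problem solver (e.g., a super-resolution model) and `forward_model` is the known physical degradation (e.g., downsampling); the function names and the mean-squared-error metric are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def cycle_consistency_errors(network, forward_model, x_input, n_cycles=5):
    """Run forward-backward cycles and record the input-space discrepancy at each cycle."""
    errors = []
    x = x_input
    for _ in range(n_cycles):
        y_hat = network(x)             # inverse estimate (e.g., the super-resolved image)
        x_hat = forward_model(y_hat)   # re-degrade the estimate with the physical forward model
        errors.append(float(np.mean((x_hat - x_input) ** 2)))  # cycle-consistency error
        x = x_hat                      # feed the re-degraded image into the next cycle
    return errors
```

For a well-behaved, in-distribution input, the re-degraded image stays close to the original measurement, so the cycle errors remain small and stable; corrupted or out-of-distribution inputs tend to produce larger or growing errors across cycles.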

The researchers established upper and lower bounds on cycle consistency to clarify its relationship to a neural network's output uncertainty. They also developed a machine learning model that classifies images based on the disturbances revealed by the forward-backward cycles, which improved the accuracy of their final classifications.
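A rough sketch of how such a classifier might be wired up is shown below, using summary features of the cycle-error sequence and a logistic-regression classifier; the specific features, the toy data, and the scikit-learn model are illustrative assumptions rather than the authors' exact design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def cycle_features(errors):
    """Summarize one forward-backward cycle-error sequence (illustrative feature choice)."""
    errors = np.asarray(errors, dtype=float)
    return np.array([errors[0], errors[-1], np.mean(np.diff(errors))])

# Toy data standing in for measured cycle errors: in-distribution inputs (label 0)
# tend to keep small, stable errors; OOD or corrupted inputs (label 1) drift upward.
rng = np.random.default_rng(0)
in_dist = [np.cumsum(rng.normal(0.01, 0.005, 5)) for _ in range(50)]
ood = [np.cumsum(rng.normal(0.10, 0.020, 5)) for _ in range(50)]
X = np.stack([cycle_features(e) for e in in_dist + ood])
y = np.array([0] * 50 + [1] * 50)

clf = LogisticRegression().fit(X, y)   # flags inputs whose cycle behavior looks anomalous
```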

The team addressed the problem of out-of-distribution (OOD) image detection in image super-resolution by collecting three types of low-resolution images: manga, microscopy, and human faces. They trained a separate super-resolution neural network for each image type and compared the outcomes across all three networks. The model triggered alerts for OOD instances whenever an input image did not match the category a network was trained on, and its accuracy in identifying OOD images was higher than that of other methods.
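In practice, such an alert can be as simple as thresholding each network's cycle-consistency error against a value calibrated on images from its own training category; the calibration percentile and the numbers below are placeholder assumptions for illustration.

```python
import numpy as np

def calibrate_threshold(in_distribution_errors, percentile=99):
    """Choose an alert threshold from cycle errors measured on known in-distribution images."""
    return np.percentile(in_distribution_errors, percentile)

def flag_ood(cycle_error, threshold):
    """Trigger an out-of-distribution alert when the error exceeds the calibrated threshold."""
    return cycle_error > threshold

# Example with placeholder numbers: calibrate on held-out in-distribution errors,
# then test an image from a different category.
threshold = calibrate_threshold(np.array([0.012, 0.009, 0.015, 0.011, 0.013]))
print(flag_ood(0.08, threshold))   # True -> likely out-of-distribution for this network
```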

In conclusion, the UCLA researchers’ cycle-consistency-based UQ method can improve the reliability of neural networks in inverse imaging problems and can be applied to other areas requiring uncertainty estimates. This model may play a crucial role in addressing uncertainty in neural networks and could lead to more reliable deployment of deep learning models in real-world applications.
