Behold the power of Coherent Diffractive Imaging (CDI)! This revolutionary technique uses the diffraction of a beam of light or electrons to reconstruct images of specimens without the need for complex optics. With applications ranging from nanoscale imaging to X-ray ptychography and astronomical wavefront sensing, CDI promises to revolutionize imaging.
Yet a major issue with CDI is the phase retrieval problem: detectors record only the intensity of the diffracted wave, not its phase, so information is lost. A considerable amount of research has gone into addressing this, much of it focused on artificial neural networks. Although these methods are far faster than conventional iterative methods, they require large volumes of labeled training data, which is experimentally burdensome, and they tend to degrade the quality of the reconstructed images.
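To get a feel for the problem, here is a tiny illustrative NumPy sketch (not taken from the paper): the detector only ever records the squared magnitude of the far-field diffraction pattern, and the complex phase vanishes.

```python
import numpy as np

# Toy exit wave with a random amplitude and phase (purely illustrative).
rng = np.random.default_rng(0)
amplitude = rng.random((64, 64))
phase = 2 * np.pi * rng.random((64, 64))
exit_wave = amplitude * np.exp(1j * phase)

# Far-field diffraction is modeled by a Fourier transform of the exit wave.
far_field = np.fft.fftshift(np.fft.fft2(exit_wave))

measured = np.abs(far_field) ** 2   # what the detector records
lost_phase = np.angle(far_field)    # what the detector cannot record

# Reconstructing the object from `measured` alone is the phase retrieval problem.
```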
That’s why researchers at SLAC National Accelerator Laboratory, USA, have introduced PtychoPINN, an unsupervised neural-network reconstruction method that retains the dramatic speedup of earlier deep-learning approaches while simultaneously improving reconstruction quality. By combining physics-based CDI modeling with a neural network, it allows the network to learn the physics of diffraction directly from the data.
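The key idea is sketched below with a hypothetical mean-squared-error placeholder (the paper's actual objective is the Poisson likelihood described next): the network's predicted object is pushed through a known diffraction forward model and compared against the measured pattern itself, so no ground-truth images are needed.

```python
import numpy as np

def simulated_intensity(obj_patch, probe):
    # Simple ptychographic forward model for one scan position:
    # exit wave = probe * object patch; detector intensity = |FFT(exit wave)|^2.
    exit_wave = probe * obj_patch
    return np.abs(np.fft.fft2(exit_wave)) ** 2

def self_supervised_loss(predicted_obj, probe, measured_intensity):
    # Train against the measured diffraction itself: no labeled
    # ground-truth objects are required.
    simulated = simulated_intensity(predicted_obj, probe)
    return np.mean((simulated - measured_intensity) ** 2)
```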
PtychoPINN leverages an autoencoder architecture built from convolutional, average-pooling, upsampling, and custom layers. Its predicted diffraction intensities are treated as Poisson rates, and the corresponding negative log-likelihood objective accounts for the Poisson noise intrinsic to the experimental data. Three distinct datasets were used to train and evaluate the model: ‘Lines’ (randomly oriented lines), a Gaussian Random Field (GRF), and ‘Large Features’ (experimentally derived data).
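A minimal Keras sketch of such an autoencoder and a Poisson negative log-likelihood loss might look like the following; the layer counts, filter sizes, and the 64x64 patch size are assumptions for illustration, not the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

N = 64  # assumed diffraction-pattern size (illustrative)

# Encoder: convolutions with average pooling; decoder: convolutions with upsampling.
inp = layers.Input(shape=(N, N, 1))
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
x = layers.AveragePooling2D(2)(x)
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = layers.AveragePooling2D(2)(x)
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = layers.UpSampling2D(2)(x)
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
x = layers.UpSampling2D(2)(x)
# Two output channels standing in for the object's real-space amplitude and phase.
out = layers.Conv2D(2, 3, padding="same")(x)
autoencoder = Model(inp, out)

def poisson_nll(observed_counts, predicted_intensity, eps=1e-8):
    # Poisson negative log-likelihood (dropping the constant log(k!) term):
    # NLL = mean(lambda - k * log(lambda)) over detector pixels.
    # `predicted_intensity` would come from applying a diffraction forward model
    # (as in the previous sketch) to the network's object estimate.
    return tf.reduce_mean(
        predicted_intensity - observed_counts * tf.math.log(predicted_intensity + eps)
    )
```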
The researchers compared PtychoPINN’s performance with that of the supervised learning baseline PtychoNN. The former showed minimal degradation in real-space amplitude and phase, while the latter suffered significant blurring. PtychoPINN also achieved a higher peak signal-to-noise ratio (PSNR), and on the reconstruction of the ‘Large Features’ amplitude it scored better on the Fourier ring correlation at the 50% threshold (FRC50), a common resolution criterion.
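For reference, PSNR is the standard pixel-level fidelity metric; here is a quick sketch using the usual definition (the paper's exact normalization may differ):

```python
import numpy as np

def psnr(reference, reconstruction):
    # Peak signal-to-noise ratio in decibels: higher means a more faithful reconstruction.
    mse = np.mean((reference - reconstruction) ** 2)
    peak = np.max(np.abs(reference))
    return 10.0 * np.log10(peak ** 2 / mse)
```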
All in all, PtychoPINN is a groundbreaking autoencoder framework for coherent diffractive imaging. It incorporates physical principles to improve accuracy, resolution, and generalization while requiring less training data, and it significantly outperforms the supervised learning baseline PtychoNN on metrics such as PSNR and FRC50. Although a promising tool, it is still far from perfect, and the researchers are working on further improving its capabilities. Nonetheless, PtychoPINN is an incredibly exciting development with the potential to enable real-time, high-resolution imaging that exceeds the resolution of lens-based systems without compromising imaging throughput. Don’t miss out on this amazing opportunity to revolutionize imaging with the power of PtychoPINN!