BVM 2018
Abstract: Fundus fluorescein angiography yields image information complementary to conventional fundus imaging. Angiographic imaging, however, may pose risks of harm to the patient. The outputs of the two modalities have different characteristics, but the most prominent features of the fundus are shared by both. Thus, the question arises whether conventional fundus images alone provide enough information to synthesize an angiographic image. Our research analyzes the capacity of deep neural networks to synthesize virtual angiographic images from their conventional fundus counterparts.
We investigate the synthesis of angiographic images from conventional color fundus images using deep neural networks, aiming to provide a safer diagnostic alternative given the potential risks associated with angiographic imaging.
Using CycleGAN, we translate between conventional and angiographic fundus images. The model comprises generators and discriminators trained jointly so that the generated images become nearly indistinguishable from real ones. The images were preprocessed and augmented to enlarge the dataset, and the network was trained with a learning rate that decayed over epochs.
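The epoch-dependent learning rate mentioned above can be sketched as a linear decay schedule. The base rate and epoch counts below are illustrative assumptions in the spirit of the standard CycleGAN setup (constant rate for an initial phase, then linear decay to zero); they are not values reported in this work.

```python
def linear_decay_lr(epoch, base_lr=2e-4, n_const=100, n_decay=100):
    """Hypothetical schedule: hold base_lr for the first n_const epochs,
    then decay linearly to zero over the following n_decay epochs."""
    if epoch < n_const:
        return base_lr
    # Clamp at zero once the decay phase is over.
    return base_lr * max(0.0, 1.0 - (epoch - n_const) / n_decay)
```

For example, with these assumed defaults the rate is 2e-4 up to epoch 100, halves by epoch 150, and reaches zero at epoch 200.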
The synthesized images closely resembled the real angiographic images and enhanced certain structures such as vessels. However, brightness and contrast varied, and some fine details were not accurately synthesized.
This study demonstrates the potential of using synthetic angiographic images for developing robust algorithms, but the practical utility for medical practitioners needs further exploration. Future research will focus on refining image resolution and exploring advanced data-augmentation methods.