We have all seen documentaries or news clips of night-vision systems being used to see in dark or low-light conditions. Night-vision cameras are remarkable pieces of equipment, but the lack of detail in the images they acquire can hamper decision-making. Some night-vision systems illuminate a scene with infrared light, which is invisible to humans, and convert the captured signal into a monochrome image on a digital display in the visible range. The monochrome output, however, also inhibits interpretation of the footage: objects are hard to tell apart and shading is poor.
Recently, scientists at the University of California, Irvine used a deep learning system to reconstruct night-vision images in color. The system works from infrared pictures, which are undetectable to the naked eye: humans perceive only light with wavelengths between roughly 400 and 700 nanometers (violet to red), while infrared light lies beyond 700 nanometers. The scientists then merged the infrared data with an AI color-prediction system to render the images as they would appear under visible light.
According to the team, image enhancement and deep learning have previously been applied to low-illuminance computer vision tasks to aid object recognition and characterization from the infrared spectrum, but not to produce an accurate rendering of the same scene in the visible spectrum. The researchers explained that each dye and pigment that gives an object its visible color reflects a characteristic set of visible wavelengths as well as a characteristic set of infrared wavelengths. If a night-vision system could be trained to recognize the infrared fingerprint of each dye and pigment, it could render images using the visible hues associated with them.
To achieve this, the scientists used a monochromatic camera sensitive to visible and near-infrared light to collect an image dataset of printed photographs of faces under multispectral illumination spanning visible red (604 nm), green (529 nm), and blue (447 nm) wavelengths as well as infrared wavelengths (718, 777, and 807 nm). They then trained a convolutional neural network to predict visible-spectrum images using only the near-infrared data. The training procedure yielded three architectures: a baseline linear regression, a U-Net-inspired CNN (UNet), and an enhanced U-Net (UNet-GAN), all of which could generate three images per second.
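The simplest of the three architectures, the baseline linear regression, can be sketched as a per-pixel least-squares fit that maps the three near-infrared intensities to red, green, and blue values. The sketch below uses synthetic data under an assumed linear NIR-to-RGB relationship; it is an illustration of the baseline idea, not the paper's actual implementation or dataset.

```python
import numpy as np

# Illustration of a per-pixel linear-regression baseline: map three
# near-infrared channel intensities (e.g. 718, 777, 807 nm) to visible
# red/green/blue values. All data here is synthetic -- the real study
# trained on photographs of printed faces under multispectral light.

rng = np.random.default_rng(0)

# Assumed (hypothetical) ground-truth linear map from NIR to RGB.
true_W = rng.uniform(0.0, 1.0, size=(3, 3))

n_pixels = 10_000
nir = rng.uniform(0.0, 1.0, size=(n_pixels, 3))          # NIR channels per pixel
rgb = nir @ true_W + rng.normal(0, 0.01, (n_pixels, 3))  # noisy visible channels

# Least-squares fit: find W minimizing ||nir @ W - rgb||^2.
W, *_ = np.linalg.lstsq(nir, rgb, rcond=None)

# Predict visible colors for previously unseen NIR measurements.
nir_test = rng.uniform(0.0, 1.0, size=(5, 3))
rgb_pred = nir_test @ W   # one predicted RGB triple per pixel
```

The U-Net variants in the study replace this pixel-wise map with a convolutional encoder-decoder, letting the prediction at each pixel draw on spatial context rather than the three channel values alone.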
Once the neural network had generated color pictures, the scientists presented them to human graders, who chose which outputs looked most similar to the ground-truth image. This feedback helped the researchers determine the most effective neural network architecture. They found that a deep U-Net-based architecture could convert three infrared photos into a full-color photo nearly indistinguishable from a conventional visible-light photo of the same scene.
According to the University of California scientists, the system requires two significant upgrades before it can be employed in medical applications. The first is to increase the data-capture rate beyond the present three frames per second in order to enable video imaging. The second is to adapt the system to biologically relevant specimens, such as retinal tissues, which have different infrared reflectance spectra from the printed pictures used in this research. Beyond medicine, the scientists believe the approach may also be useful in security and military operations, as well as animal monitoring.
The University of California, Irvine team published their findings on April 6 in the journal PLOS ONE under the title “Deep learning to enable color vision in the dark.”