Available Technology

Automated video data fusion method

A method is described for mitigating image distortions induced by optical wave propagation through a random medium (e.g., atmospheric turbulence or a volume of water), operating on a stream of video data from a single short-exposure image sensor. The method comprises two sequential steps: (1) enhancement of the raw video stream and (2) fusion of the enhanced stream using the lucky region fusion (LRF) technique. The first step enhances the image features on which the success of the LRF method depends, and in particular mitigates the effects of low light levels, aerosol pollution, dust, haze, and other deteriorating factors. The second step, fusion of the enhanced stream, is realized by merging the highest-quality image regions within a temporal buffer into a single image, then sliding the temporal window forward. This process is repeated continuously to generate a stream of fused images. The resulting fused stream therefore has an image quality superior to that of any single image within the buffer, with improved contrast and increased detail visualization. In addition, the disclosed invention offers a method for automated extraction of the random-medium characteristics (e.g., atmospheric turbulence strength) needed to optimize LRF performance. Because this extraction is based solely on analysis of the enhanced video stream, it eliminates the need for separate turbulence-characterization devices (e.g., a scintillometer) and allows the invention to deliver an optimal fused stream even when operating in an evolving environment.
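The sliding-buffer fusion step described above can be sketched in a few lines. This is a minimal illustration, not the patented algorithm: the block-based partitioning, the gradient-energy quality metric, the buffer length, and all function names here are assumptions chosen for clarity.

```python
import numpy as np

def sharpness_map(frame, block=8):
    """Per-block quality metric: mean local gradient energy.
    (A common sharpness proxy; the patent's actual metric may differ.)"""
    gy, gx = np.gradient(frame.astype(float))
    energy = gx**2 + gy**2
    h, w = frame.shape
    q = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            q[i, j] = energy[i*block:(i+1)*block, j*block:(j+1)*block].mean()
    return q

def lucky_region_fuse(frames, block=8):
    """Merge a temporal buffer of frames: for each block, keep the pixels
    from whichever frame scored highest ("luckiest") in that block."""
    h, w = frames[0].shape
    fused = np.zeros((h, w))
    scores = np.stack([sharpness_map(f, block) for f in frames])
    best = scores.argmax(axis=0)  # index of sharpest frame per block
    for i in range(h // block):
        for j in range(w // block):
            k = best[i, j]
            fused[i*block:(i+1)*block, j*block:(j+1)*block] = \
                frames[k][i*block:(i+1)*block, j*block:(j+1)*block]
    return fused

def fuse_stream(stream, buffer_len=5, block=8):
    """Slide a temporal window over the stream, emitting one fused
    frame per window position (a continuous fused output stream)."""
    return [lucky_region_fuse(stream[t:t + buffer_len], block)
            for t in range(len(stream) - buffer_len + 1)]
```

Each fused frame draws its regions from the sharpest members of the buffer, so its quality is at least that of any single frame in the window.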
Inventors: Mikhail A. Vorontsov, Gary W. Carhart, Mathieu Aubailly

Patent Number: US8611691
Patent Issue Date: January 17, 2014
Email: ORTA@arl.army.mil