Deep HDR Imaging via A Non-local Network


One of the most challenging problems in reconstructing a high dynamic range (HDR) image from multiple low dynamic range (LDR) inputs is the ghosting artifacts caused by object motion across different inputs. When the object motion is slight, most existing methods can suppress ghosting artifacts well by aligning the LDR inputs based on optical flow or by detecting anomalies among them. However, they often fail to produce satisfactory results in practice, since real object motion can be very large. In this study, we present a novel deep framework, termed NHDRRnet, which adopts an alternative direction and attempts to remove ghosting artifacts by exploiting the non-local correlation in the inputs. In NHDRRnet, we first adopt a U-Net architecture to fuse all inputs and map the fusion results into a low-dimensional deep feature space. Then, we feed the resultant features into a novel global non-local module, which reconstructs each pixel as a weighted average of all the other pixels, with weights determined by their correspondences. By doing this, the proposed NHDRRnet is able to adaptively select useful information (e.g., information not corrupted by large motions or adverse lighting conditions) from the whole deep feature space to accurately reconstruct each pixel. In addition, we incorporate a triple-pass residual module to capture more powerful local features, which proves effective in further boosting the performance. Extensive experiments on three benchmark datasets demonstrate the superiority of the proposed NHDRRnet in suppressing ghosting artifacts in HDR reconstruction, especially when objects exhibit large motions.
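The core idea of the global non-local module (reconstructing each pixel as a similarity-weighted average of all other pixels) can be sketched as follows. This is a minimal NumPy illustration of generic non-local averaging, not the authors' exact architecture: the learned embedding projections, residual connection, and U-Net context of the actual NHDRRnet are omitted.

```python
import numpy as np

def non_local_average(feats):
    """Generic non-local operation: each output pixel is a weighted
    average of all pixels, weighted by pairwise feature similarity.

    feats: (N, C) array of deep features, one row per pixel
           (N = H * W pixels, C feature channels).
    """
    # Pairwise dot-product similarities between every pair of pixels.
    sim = feats @ feats.T                       # (N, N)
    # Row-wise softmax turns similarities into averaging weights.
    sim -= sim.max(axis=1, keepdims=True)       # numerical stability
    weights = np.exp(sim)
    weights /= weights.sum(axis=1, keepdims=True)
    # Weighted average over all pixels for each output pixel.
    return weights @ feats                      # (N, C)

# Toy usage: 4 pixels with 3-channel features.
x = np.random.default_rng(0).normal(size=(4, 3))
y = non_local_average(x)
```

Because the weights form a convex combination per pixel, corrupted regions (e.g., pixels affected by large motion) contribute little when their features correlate poorly with the pixel being reconstructed.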

In IEEE Transactions on Image Processing