Two-stream convolutional networks for blind image quality assessment

Abstract

Traditional image quality assessment (IQA) methods do not perform robustly because they rely on shallow, hand-designed features. It has been demonstrated that deep neural networks can learn more effective features than hand-crafted ones. In this paper, we describe a new deep neural network that predicts image quality accurately without relying on a reference image. To learn more effective feature representations for no-reference IQA, we propose a two-stream convolutional network that includes two sub-networks for the image and its gradient image. The motivation for this design is to use a two-stream scheme to capture different levels of information from the inputs and to ease the difficulty of extracting features with a single stream. The gradient stream focuses on extracting detailed structural features, while the image stream pays more attention to intensity information. In addition, to account for the locally non-uniform distribution of distortion in images, we add a region-based fully convolutional layer that exploits the information around the center of the input image patch. The final score of the overall image is calculated by averaging the patch scores. The proposed network operates in an end-to-end manner in both the training and testing phases. Experimental results on a series of benchmark datasets, e.g., LIVE, CSIQ, IVC, TID2013, and the Waterloo Exploration Database, show that the proposed algorithm outperforms state-of-the-art methods, which verifies the effectiveness of our network architecture.
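To make the high-level design concrete, below is a minimal PyTorch sketch of the two-stream idea: one stream takes the image patch, the other its gradient image, their features are fused into a single patch score, and the overall image score is obtained by averaging the patch scores. All layer sizes, the Sobel-based gradient approximation, and the concatenation-based fusion are illustrative assumptions rather than the paper's exact configuration, and the region-based layer is omitted here.

```python
# Illustrative sketch only; not the paper's exact architecture or hyperparameters.
import torch
import torch.nn as nn
import torch.nn.functional as F


def gradient_image(x: torch.Tensor) -> torch.Tensor:
    """Approximate the gradient magnitude of grayscale patches with Sobel filters (an assumption)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=x.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(x, kx, padding=1)
    gy = F.conv2d(x, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)


class Stream(nn.Module):
    """One convolutional stream; the same backbone shape is reused for image and gradient inputs."""
    def __init__(self, out_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(64, out_dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),  # global pooling over the patch
        )

    def forward(self, x):
        return self.features(x).flatten(1)


class TwoStreamIQA(nn.Module):
    """Image stream + gradient stream, fused and regressed to one quality score per patch."""
    def __init__(self):
        super().__init__()
        self.image_stream = Stream()
        self.gradient_stream = Stream()
        self.regressor = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(inplace=True),
            nn.Linear(128, 1),
        )

    def forward(self, patches):
        # patches: (N, 1, H, W) grayscale patches cropped from one image
        grad = gradient_image(patches)
        feats = torch.cat([self.image_stream(patches),
                           self.gradient_stream(grad)], dim=1)
        return self.regressor(feats).squeeze(1)  # one score per patch


if __name__ == "__main__":
    model = TwoStreamIQA()
    patches = torch.rand(16, 1, 32, 32)   # 16 patches sampled from one image
    patch_scores = model(patches)
    image_score = patch_scores.mean()     # overall score = average of patch scores
    print(image_score.item())
```

Because each patch is scored independently and the image score is a simple mean, the whole pipeline can be trained end-to-end on patch-level targets and evaluated on images of arbitrary size.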

Publication
In IEEE Transactions on Image Processing