Current document blur/quality estimation algorithms rely on OCR accuracy to measure their success. A sharp document image, however, may at times yield lower OCR accuracy owing to factors independent of the blur or capture quality. This reliance on OCR stems mainly from the difficulty of quantifying image quality directly. In this work, we overcome this limitation by proposing a novel dataset for document blur estimation, for which we physically quantify the blur using a capture set-up that computationally varies the focal distance of the camera. We also present a selective search mechanism that improves upon the recently successful patch-based learning approaches (using codebooks or convolutional neural networks). We present a thorough analysis of the improved blur estimation pipeline using correlation with both OCR accuracy and the actual amount of blur. Our experiments demonstrate that our method outperforms the current state of the art by a significant margin.
We present a novel dataset for document blur estimation, in which the ground truth is obtained by physically measuring the blur radius.
We propose a selective search algorithm for extracting appropriate patches from the document image, which we argue are crucial for estimating focus blur. We further propose an improved pipeline for document blur estimation that combines this selective search algorithm with a recently proposed CNN-based regression network.
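To make the patch-based idea concrete, the sketch below scores a document by tiling it into patches, keeping only the most textured patches, and averaging a standard focus measure (variance of the Laplacian) over them. This is a generic illustration only: the function names `laplacian_variance` and `score_document`, the top-k heuristic, and the focus measure are our assumptions for exposition, not the paper's actual selective search algorithm or CNN regressor.

```python
def laplacian_variance(patch):
    """Variance of the discrete Laplacian over a grayscale patch
    (a common focus measure; higher means sharper). `patch` is a
    2D list of pixel intensities."""
    h, w = len(patch), len(patch[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour discrete Laplacian at (y, x)
            lap = (patch[y - 1][x] + patch[y + 1][x]
                   + patch[y][x - 1] + patch[y][x + 1]
                   - 4 * patch[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def score_document(image, patch=8, top_k=4):
    """Tile `image` into non-overlapping patches, keep the top_k
    highest-scoring (most textured) patches -- a crude stand-in for
    selective patch extraction -- and average their focus scores."""
    h, w = len(image), len(image[0])
    scores = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = [row[x:x + patch] for row in image[y:y + patch]]
            scores.append(laplacian_variance(p))
    top = sorted(scores, reverse=True)[:top_k]
    return sum(top) / len(top)
```

A sharp, high-contrast document scores far higher than a flat (defocused) one under this measure, which is the property a patch-level regressor would learn to exploit.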
We present extensive experiments on two different datasets to validate the proposed pipeline. The results demonstrate that it improves the learning procedure, yielding about a 4% improvement over the state of the art in estimating physical blur and an 8-10% improvement in cross-dataset experiments using OCR accuracy as ground truth.