Content-Based Image Retrieval by Fusion of Multilevel Results

Document Type: Original Article

Authors

Faculty of Engineering, Bu-Ali Sina University, Hamedan, Iran

Abstract

Content-based image retrieval (CBIR) applies machine vision techniques to retrieve images similar to a given query image. The main challenge of CBIR is the semantic gap between low-level pixel- and segment-based features and the high-level concepts in an image. One approach to reducing this gap is to use high-level region- and object-based features. However, low-level features describe image details and strengthen discrimination between images. Accordingly, using both feature types is expected to yield better results. This paper attempts to reduce the semantic gap by combining retrieval decisions at four granularities, namely the pixel, region, object, and concept levels. Pixel-level retrieval adopts SIFT features and local binary patterns. The region-level subsystem partitions the image into a set of segments and extracts their color and texture features using the hue descriptor and Gabor filters for subsequent processing. The AlexNet convolutional neural network is employed for object-based retrieval. A word2vec embedding is used for concept-level retrieval, which exploits the conceptual relations between objects to enhance the retrieval results. Experiments on the Wang and GHIM datasets confirm the feasibility of the proposed combination and show that it improves the overall performance of the retrieval system.
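The abstract does not specify how the decisions of the four levels are combined, so the following minimal Python sketch only illustrates one plausible score-level fusion scheme. The function names, the min-max normalization, and the equal default weights are illustrative assumptions, not the authors' actual method.

import numpy as np

def minmax_normalize(scores):
    # Scale similarity scores to [0, 1]; a constant vector maps to zeros.
    scores = np.asarray(scores, dtype=float)
    rng = scores.max() - scores.min()
    return np.zeros_like(scores) if rng == 0 else (scores - scores.min()) / rng

def fuse_multilevel_scores(level_scores, weights=None):
    # level_scores: dict mapping level name -> per-image similarity scores
    #               (one score per database image, higher = more similar).
    # weights:      optional dict of per-level weights; equal by default.
    levels = list(level_scores)
    if weights is None:
        weights = {lvl: 1.0 / len(levels) for lvl in levels}
    fused = sum(weights[lvl] * minmax_normalize(level_scores[lvl]) for lvl in levels)
    ranking = np.argsort(fused)[::-1]  # best-matching database images first
    return ranking, fused

# Toy usage: similarities of five database images to one query at each level
# (hypothetical numbers, for illustration only).
scores = {
    "pixel":   [0.10, 0.80, 0.30, 0.55, 0.20],  # e.g. SIFT + LBP matching
    "region":  [0.20, 0.70, 0.40, 0.60, 0.10],  # e.g. hue descriptor + Gabor
    "object":  [0.05, 0.90, 0.25, 0.50, 0.15],  # e.g. AlexNet features
    "concept": [0.15, 0.85, 0.35, 0.45, 0.25],  # e.g. word2vec similarity
}
ranking, fused = fuse_multilevel_scores(scores)
print("Fused ranking (best first):", ranking)

In practice, the per-level weights could be tuned on a validation set, and rank-level fusion could be substituted for score-level fusion; either choice would fit the multilevel combination described above.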

Keywords

