Correcting Multi-focus Images via Simple Standard Deviation for Image Fusion

Full Text (PDF, 266KB), pp. 56-61


Author(s)

Firas A. Jassim 1,*

1. Management Information Systems Department, Irbid National University, Irbid 2600, Jordan

* Corresponding author.

DOI: https://doi.org/10.5815/ijigsp.2013.12.08

Received: 20 Jun. 2013 / Revised: 19 Jul. 2013 / Accepted: 23 Aug. 2013 / Published: 8 Oct. 2013

Index Terms

Image fusion, Multi-focus, Multi-sensor, Standard deviation

Abstract

Image fusion is one of the recent trends in image registration, which is an essential field of image processing. The basic principle of this paper is to fuse multi-focus images using the simple statistical standard deviation. First, the standard deviation of each k×k window inside each of the multi-focus images is computed. The contribution of this paper comes from the observation that the focused part of an image contains more detail than the unfocused part; hence, the dispersion between pixels inside the focused part is higher than the dispersion inside the unfocused part. Second, the standard deviations of the corresponding k×k windows in the multi-focus images are compared, and the window with the highest standard deviation is treated as the optimal block to be placed in the fused image. The experimental visual results show that the proposed method produces very satisfactory results in spite of its simplicity.
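The block-selection rule described in the abstract can be sketched in a few lines of code. The following is a minimal illustration rather than the author's original implementation: it assumes grayscale input images of identical size stored as NumPy arrays, and the block size k is a hypothetical parameter.

```python
import numpy as np

def fuse_multifocus(images, k=8):
    """Block-wise multi-focus fusion by standard deviation.

    For each k x k window, copy the block with the highest standard
    deviation (i.e. the most pixel dispersion, assumed to correspond to
    the in-focus region) into the fused image. `images` is a list of
    equally sized 2-D grayscale arrays.
    """
    h, w = images[0].shape
    fused = np.zeros((h, w), dtype=images[0].dtype)
    for r in range(0, h, k):
        for c in range(0, w, k):
            # corresponding window from every source image
            blocks = [img[r:r + k, c:c + k] for img in images]
            # keep the block whose pixels are most dispersed
            best = max(blocks, key=lambda b: b.std())
            fused[r:r + k, c:c + k] = best
    return fused
```

For example, fuse_multifocus([img_a, img_b], k=8) would combine a foreground-focused and a background-focused photograph of the same scene into a single all-in-focus result, under the assumptions stated above.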

Cite This Paper

Firas A. Jassim, "Correcting Multi-focus Images via Simple Standard Deviation for Image Fusion", IJIGSP, vol. 5, no. 12, pp. 56-61, 2013. DOI: 10.5815/ijigsp.2013.12.08
