Graduate Student: Imran Ali
Thesis Title: Pixel-based Approach to improve the Classification Performance of Hyperspectral Image for Taiwan Agriculture by using PCA Edge Preserving Features
Advisors: 張以全 (Peter I-Tsyuen Chang), 柯正浩 (Cheng-Hao Ko)
Committee Members: 沈志霖 (Ji-Lin Shen), 李敏凡 (Min-Fan Lee), 柯正浩 (Cheng-Hao Ko)
Degree: Master
Department: College of Engineering, Department of Mechanical Engineering
Publication Year: 2020
Graduation Academic Year: 108
Language: English
Number of Pages: 134
Keywords: Edge Preserving Filters, Principal Component Analysis, Support Vector Machine, Hyperspectral Data, Taiwan Agriculture, Image Classification
Access count: 206 views, 0 downloads
Hyperspectral data processing has gained growing importance in remote sensing. However, appropriate approaches for fusing the features of the hyperspectral data cube are still lacking. A hyperspectral image is a cube in which two dimensions contain spatial information while the third dimension carries detailed spectral information. The large number of spectral bands allows different materials to be distinguished with high specificity. Moreover, spatial image features such as shape, texture, and geometric form enhance land-cover discrimination. By integrating spatial and spectral details, the classification accuracy of hyperspectral images can be improved dramatically. Therefore, this thesis proposes a novel pixel-based approach to improve the classification performance of hyperspectral images for Taiwan agriculture using PCA edge-preserving features (PCA-EPFs).
    How is this pixel-based approach to hyperspectral image classification carried out? First, standard edge-preserving features (EPFs) are constructed with different parameter settings by applying edge-preserving filters to the input image, and the resulting EPFs are stacked together.
    Second, the spectral dimension of the stacked EPFs is reduced with PCA, which not only represents the EPFs optimally in the mean-square sense but also highlights the separability of pixels in the EPFs. Third, the resulting PCA-EPFs are classified with a support vector machine (SVM) classifier.
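The three steps above can be sketched as follows. This is a minimal illustration, not the thesis implementation: the cube, labels, filter settings, and component count are hypothetical, and a per-band Gaussian filter stands in for a true edge-preserving filter (bilateral or guided) purely to keep the example self-contained.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def pca_epf_features(cube, sigmas=(1.0, 2.0, 4.0), n_components=20):
    """Build stacked EPFs from a (rows, cols, bands) cube, then reduce with PCA."""
    rows, cols, bands = cube.shape
    # Step 1: filter the image with several parameter settings and stack
    # the resulting EPFs (sigma applies spatially only, not across bands).
    epfs = [gaussian_filter(cube, sigma=(s, s, 0)) for s in sigmas]
    stacked = np.concatenate(epfs, axis=2)   # (rows, cols, bands * len(sigmas))
    # Step 2: reduce the spectral dimension of the stacked EPFs with PCA.
    flat = stacked.reshape(-1, stacked.shape[2])
    pca = PCA(n_components=min(n_components, flat.shape[1]))
    return pca.fit_transform(flat)           # one feature row per pixel

# Step 3: classify the PCA-EPF features with an SVM (toy data throughout).
rng = np.random.default_rng(0)
cube = rng.random((10, 10, 30))              # small synthetic hyperspectral cube
y = rng.integers(0, 3, size=100)             # one label per pixel
X = pca_epf_features(cube)
clf = SVC(kernel="rbf").fit(X, y)
pred = clf.predict(X)
```

In practice the SVM would be trained only on labeled training pixels and evaluated on held-out pixels; fitting and predicting on the same pixels here simply keeps the sketch short.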
    These steps are first applied to hyperspectral data in the visible and near-infrared range (VNIR, 400-1000 nm), and their performance is assessed in terms of classification accuracy. The same steps are then applied to hyperspectral data in the short-wave infrared range (SWIR, 950-1700 nm) to assess the classification accuracy for SWIR. Finally, spatial and spectral fusion of the VNIR and SWIR hyperspectral data (FuSI) is performed and its classification accuracy is assessed.
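The spectral part of the fusion step can be pictured as follows: once geometric correction has resampled both sensors onto the same pixel grid, the VNIR and SWIR bands of each pixel are concatenated into one fused spectrum. The cube shapes and band counts below are illustrative placeholders, not the thesis data.

```python
import numpy as np

rng = np.random.default_rng(1)
vnir = rng.random((50, 60, 150))   # co-registered VNIR cube, e.g. 150 bands
swir = rng.random((50, 60, 100))   # co-registered SWIR cube, e.g. 100 bands

# Spectral fusion: stack the band axes so each pixel carries both spectra.
fused = np.concatenate([vnir, swir], axis=2)
```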
    Several classification accuracy measures, including individual class accuracy, average accuracy, overall accuracy, and the kappa coefficient, are compared for the hyperspectral data in the VNIR, SWIR, and fused (VNIR-SWIR-FuSI) ranges. The classification accuracy of the fused data (VNIR-SWIR-FuSI) is found to be significantly better than that of SWIR and VNIR alone.
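The accuracy measures named above can be computed with scikit-learn; the label vectors here are toy data for illustration, not the thesis results. Overall accuracy is the fraction of correctly classified pixels, average accuracy is the mean of the per-class recalls, and kappa corrects the agreement for chance.

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

y_true = np.array([0, 0, 1, 1, 2, 2, 2, 0])  # toy ground-truth labels
y_pred = np.array([0, 1, 1, 1, 2, 2, 0, 0])  # toy classifier output

oa = accuracy_score(y_true, y_pred)              # Overall Accuracy
cm = confusion_matrix(y_true, y_pred)
per_class = cm.diagonal() / cm.sum(axis=1)       # Individual Class accuracy
aa = per_class.mean()                            # Average Accuracy
kappa = cohen_kappa_score(y_true, y_pred)        # Kappa coefficient
```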
    In summary, the thesis presents a comprehensive approach to improving the classification performance of hyperspectral data for Taiwan agriculture using the PCA edge-preserving-features method.



    ABSTRACT
    ACKNOWLEDGEMENT
    TABLE OF CONTENTS
    LIST OF FIGURES
    LIST OF TABLES
    Chapter 1 Introduction
    Chapter 2 Parameters of PCA-EPFs
      2.1 Edge-Preserving Filters
      2.2 Principal Component Analysis
      2.3 Spectral dimension reduction
      2.4 Feature extraction with multiple EPFs
      2.5 Feature fusion with PCA
      2.6 Parameter and component analysis
      2.7 Experiments
        2.7.1 1st way of training
        2.7.2 2nd way of training
      2.8 Data set
    Chapter 3 Hyperspectral Data Set in VNIR & SWIR range
      3.1 Hyperspectral VNIR data set
      3.2 Envi software
      3.3 Geometric correction of HSI in VNIR range
        3.3.1 Georeference from IGM
      3.4 Region of interest of HSI in VNIR range
      3.5 Supervised classification for HSI VNIR range
      3.6 Ground truth data set for HSI VNIR range
      3.7 Hyperspectral SWIR data set
      3.8 Region of interest of HSI in SWIR range
      3.9 Supervised classification for HSI SWIR range
    Chapter 4 Fusion of Hyperspectral Data Set (VNIR, SWIR, FuSI)
      4.1 Geometric correction of VNIR & SWIR Images
      4.2 Spatial fusion of VNIR & SWIR HS Images
      4.3 Region of interest of Fusion for HSI in VNIR range
      4.4 Spectral fusion of Hyperspectral image in VNIR & SWIR range
      4.5 Supervised classification on fusion image
    Chapter 5 Results and Discussion
      5.1 VNIR classification result with 1st way of training
        5.1.1 Spectral signatures of classes in VNIR range
        5.1.2 EPFs / PCA-based EPFs classification results in VNIR range
        5.1.3 VNIR classification result with training 2
      5.2 SWIR classification result with 1st way of training
        5.2.1 Spectral signatures of classes in SWIR range
        5.2.2 EPFs / PCA-based EPFs classification results in SWIR range
        5.2.3 SWIR classification result with training 2
      5.3 Fusion (VNIR & SWIR) classification result with 1st way of training
        5.3.1 Signatures of classes in fusion (VNIR & SWIR) range
        5.3.2 EPFs / PCA-based EPFs classification results of fusion for HSI in (VNIR & SWIR) range
        5.3.3 Fusion (VNIR & SWIR) classification result with training 2
      5.4 Combined results of classification for all methods
    Chapter 6 Conclusion
    References

