Graduate Student: Shimaa Amin Ali Ahmed Bergies
Thesis Title: Vision Based Dirt Detection with Deep Learning for Floor Cleaning Robots
Advisors: Shun-Feng Su (蘇順豐), Chung-Hsien Kuo (郭重顯)
Oral Defense Committee: Meng-Kun Liu (劉孟昆), Shu-Hao Liang
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Publication Year: 2022
Graduation Academic Year: 110 (2021–2022)
Language: English
Number of Pages: 47
Keywords: Dirt detection
Access Count: 147 views, 5 downloads

  • Abstract
    Indoor dirt-area detection and localization based on a modified YOLOv4 object detection algorithm and a depth camera is the main goal of this research. Autonomous cleaning of wide environments is challenging because of its energy and time consumption. This work introduces a novel experimental vision strategy that enables a cleaning robot to clean indoor dirt areas. A modified deep learning algorithm, named YOLOv4-Dirt, classifies whether the floor is clean and detects the positions of dirt areas. Built on real-time object detection with the deep learning YOLOv4 algorithm and a RealSense depth camera, the system reduces the energy consumption of the autonomous cleaning machine and shortens the cleaning process, which extends the machine's service life, especially in wide buildings. The YOLOv4 algorithm is modified by adding upsampling layers so that it can detect trash and wet areas successfully; the RealSense depth camera then calculates the distance between the cleaning machine and the dirt area from the point cloud, using the Point Cloud Library under the Robot Operating System (ROS). Various classes of trash are used to demonstrate the performance of the developed cleaning system. The experiments confirm that the proposed autonomous cleaning system handles detected dirt areas with less effort and time than other cleaning systems.
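
    As an illustration of the localization step described above, the following Python sketch combines darknet_ros bounding boxes with an aligned RealSense depth image and back-projects the centre of each detected box through the pinhole camera model to obtain a 3-D dirt position. The thesis itself derives the distance from the point cloud via the jsk_pcl node; this per-pixel depth variant is only an assumed equivalent, and the topic names and millimetre depth encoding follow common realsense2_camera and darknet_ros defaults rather than anything stated in the thesis.

    #!/usr/bin/env python
    # Sketch only: topic names and depth encoding are assumptions
    # (realsense2_camera and darknet_ros defaults), not taken from the thesis.
    import math
    import rospy
    from cv_bridge import CvBridge
    from sensor_msgs.msg import Image, CameraInfo
    from darknet_ros_msgs.msg import BoundingBoxes

    class DirtLocalizer:
        def __init__(self):
            self.bridge = CvBridge()
            self.depth = None      # latest depth frame (uint16, millimetres)
            self.fx = self.fy = self.cx = self.cy = None
            rospy.Subscriber('/camera/color/camera_info',
                             CameraInfo, self.info_cb)
            rospy.Subscriber('/camera/aligned_depth_to_color/image_raw',
                             Image, self.depth_cb)
            rospy.Subscriber('/darknet_ros/bounding_boxes',
                             BoundingBoxes, self.boxes_cb)

        def info_cb(self, msg):
            # Pinhole intrinsics from the flattened K = [fx 0 cx; 0 fy cy; 0 0 1]
            self.fx, self.fy = msg.K[0], msg.K[4]
            self.cx, self.cy = msg.K[2], msg.K[5]

        def depth_cb(self, msg):
            self.depth = self.bridge.imgmsg_to_cv2(msg,
                                                   desired_encoding='passthrough')

        def boxes_cb(self, msg):
            if self.depth is None or self.fx is None:
                return
            for box in msg.bounding_boxes:
                u = (box.xmin + box.xmax) // 2     # box centre, pixels
                v = (box.ymin + box.ymax) // 2
                z = self.depth[v, u] / 1000.0      # depth in metres
                if z == 0.0:                       # zero means no valid reading
                    continue
                # Back-project the pixel into the camera frame
                x = (u - self.cx) * z / self.fx
                y = (v - self.cy) * z / self.fy
                dist = math.sqrt(x * x + y * y + z * z)
                rospy.loginfo('%s at (%.2f, %.2f, %.2f) m, range %.2f m',
                              box.Class, x, y, z, dist)

    if __name__ == '__main__':
        rospy.init_node('dirt_localizer')
        DirtLocalizer()
        rospy.spin()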

    Table of Contents
    ABSTRACT
    LIST OF TABLES
    LIST OF FIGURES
    NOMENCLATURE
    CHAPTER 1  INTRODUCTION
      1.1 Overview
      1.2 Problem Statement
      1.3 Research Contribution and Novelty
      1.4 Thesis Organization
      1.5 Cleaning Robots Literature Review
        1.5.1 Cleaning Robots Based on Navigation
        1.5.2 Cleaning Robots Based on Vision Systems
        1.5.3 Cleaning Robot Vision Systems Based on Deep Learning
        1.5.4 Cleaning Robots Based on the YOLOv4 Algorithm
    CHAPTER 2  CLEANING ROBOT SYSTEM DESCRIPTION
      2.1 Cleaning Robot Hardware System
      2.2 Differential Drive Kinematics Model
      2.3 Cleaning Robot Software System
        2.3.1 Robot Operating System (ROS)
          2.3.1.1 Nodes
          2.3.1.2 Master
          2.3.1.3 Topics
          2.3.1.4 Messages
          2.3.1.5 Services
          2.3.1.6 Bags
        2.3.2 Google Colab
        2.3.3 CUDA
        2.3.4 NVIDIA Driver
        2.3.5 Darknet ROS
        2.3.6 OpenCV
        2.3.7 Point Cloud
        2.3.8 JSK_PCL Node
          2.3.8.1 Viewpoint Feature Histogram (VFH)
    CHAPTER 3  REALSENSE (RGB-D) CAMERA POINT CLOUD AND DEPTH CALCULATION
      3.1 Determination of the Coordinates of a Point in the Image from a Scene
      3.2 Technical Specification
    CHAPTER 4  YOLO OBJECT DETECTION ALGORITHMS
      4.1 YOLO Object Detection Overview
        4.1.1 YOLOv2 (YOLO9000)
        4.1.2 YOLOv3 Algorithm
      4.2 YOLOv4 Algorithm
      4.3 YOLOv4 Loss Function
        4.3.1 GIoU Loss (Generalized IoU Loss)
        4.3.2 DIoU Loss (Distance IoU Loss)
      4.4 YOLOv4_Trash Model
      4.5 Dataset Construction
        4.5.1 Data Preprocessing
        4.5.2 Dataset Annotating
        4.5.3 Training, Validation, and Testing Sets
      4.6 Training and Testing Process
      4.7 Dirt Localization in Real Time: Discussion
    CHAPTER 5  CONCLUSION AND FUTURE WORKS
      5.1 Conclusion
      5.2 Future Works
    REFERENCES
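
    For reference, the GIoU and DIoU losses named in Sections 4.3.1 and 4.3.2 are the standard bounding-box regression losses from the object-detection literature; as a brief restatement (standard definitions, not quoted from the thesis), for a predicted box $B$, a ground-truth box $B^{gt}$, and their smallest enclosing box $C$:

    \[
    \mathcal{L}_{\mathrm{GIoU}} = 1 - \mathrm{IoU} + \frac{\lvert C \setminus (B \cup B^{gt}) \rvert}{\lvert C \rvert},
    \qquad
    \mathcal{L}_{\mathrm{DIoU}} = 1 - \mathrm{IoU} + \frac{\rho^{2}\!\left(\mathbf{b}, \mathbf{b}^{gt}\right)}{c^{2}},
    \]

    where $\rho$ is the Euclidean distance between the centres $\mathbf{b}$ and $\mathbf{b}^{gt}$ of the two boxes and $c$ is the diagonal length of $C$.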
