
Author: Martin Suda
Thesis Title: Infrastructure of Indoor VLP System (實現可見光通訊的室內定位系統)
Advisor: Jenq-Shiou Leu (呂政修)
Committee: Shanq-Jang Ruan (阮聖彰), Ray-Guang Cheng (鄭瑞光), Tian-Wei Huang (黃天偉)
Degree: Master
Department: Department of Electronic and Computer Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2022
Academic Year of Graduation: 110
Language: English
Pages: 72
Keywords: Visible Light Positioning (VLP), VLC fingerprinting, LoRaWAN, MQTT, Circle Hough Transform, edge detection, web applications

The increasing demand for indoor positioning systems and the poor accuracy of the Global
Positioning System (GPS) in enclosed premises have necessitated other viable solutions.
A widely explored technique in this field is Visible Light Positioning (VLP), whose promising properties allow high accuracy at a lower cost than competing techniques.

Academic research has proposed various VLP-based indoor navigation algorithms that
tackle the issues inherent to Visible Light Positioning. Unfortunately, no algorithm has
yet solved these issues well enough to achieve wide market adoption.
Recent studies propose unique localization algorithms built on different methods, yet despite their differences they share one common characteristic: each paper assembles its own infrastructure prototype to test its approach.

This step seems unnecessary, since the core infrastructure is the same and
could be shared. We believe a shared assembly would enable rapid prototyping of
algorithms and let teams concentrate on the main goal: the localization itself.

Therefore, this thesis proposes a VLP system that addresses this issue. The system
supports the development of localization algorithms under different system settings,
and thus it serves developers' needs.

Firstly, the thesis presents the system infrastructure and discusses its elements. Secondly, it reviews common fingerprinting techniques and selects Pulse Width Modulation (PWM) and sinusoidal modulation to demonstrate the functionality; custom
firmware is proposed to drive the VLP transmitters with a fingerprinting method. Thirdly,
the thesis proposes a system application that configures the whole system and provides
a user-friendly environment. Lastly, we propose a detection algorithm that localizes VLP
transmitters within a single-frame image captured by a mobile robot with a CMOS camera.
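Since PWM is one of the selected fingerprinting techniques, a minimal sketch may help illustrate the idea: each transmitter is assigned a distinctive duty cycle, and a receiver that samples the light intensity over time (as a rolling-shutter CMOS sensor effectively does, row by row) can recover that duty cycle and thereby identify the transmitter. All names, frequencies, and duty-cycle values below are illustrative assumptions, not the parameters used in the thesis firmware.

```python
def pwm_sample(freq_hz, duty, t):
    """Return 1 if an LED driven by PWM at freq_hz with the given
    duty cycle is on at time t (seconds), else 0."""
    phase = (t * freq_hz) % 1.0
    return 1 if phase < duty else 0

def estimate_duty(freq_hz, duty, sample_rate=100_000, n=100_000):
    """Recover the duty cycle from uniformly spaced intensity samples:
    the fraction of 'on' samples converges to the duty cycle."""
    on = sum(pwm_sample(freq_hz, duty, i / sample_rate) for i in range(n))
    return on / n

# Hypothetical fingerprint table: each transmitter gets a unique duty cycle.
fingerprints = {"node_a": 0.25, "node_b": 0.60, "node_c": 0.75}
```

A real receiver would additionally need to handle exposure, noise, and unknown phase, but the principle of mapping a recovered duty cycle back to a transmitter identity is the same.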
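The detection step pairs edge detection with the Circle Hough Transform (CHT). The following self-contained sketch shows the CHT voting idea for a known radius: every edge point votes for all candidate centres lying one radius away from it, and the accumulator cell with the most votes is taken as the detected centre. The synthetic "edge image" and all parameters are illustrative assumptions; the thesis's implementation details may differ.

```python
import math
from collections import defaultdict

def hough_circles(edge_points, radius, grid=1):
    """Minimal Circle Hough Transform for a known radius: each edge
    point votes for candidate centres at distance `radius` from it;
    the most-voted accumulator cell is the detected centre."""
    acc = defaultdict(int)
    for (x, y) in edge_points:
        for deg in range(0, 360, 4):  # sample the circle of candidate centres
            a = x - radius * math.cos(math.radians(deg))
            b = y - radius * math.sin(math.radians(deg))
            acc[(round(a / grid), round(b / grid))] += 1
    (ca, cb), votes = max(acc.items(), key=lambda kv: kv[1])
    return ca * grid, cb * grid, votes

# Synthetic edge points on a circle of radius 10 centred at (50, 40),
# standing in for the bright disc of an LED transmitter after edge detection.
edges = [(50 + 10 * math.cos(math.radians(t)),
          40 + 10 * math.sin(math.radians(t))) for t in range(0, 360, 10)]
```

In practice the radius is also unknown, so the accumulator gains a third dimension (one vote plane per candidate radius), which is why CHT is usually run on a sparse edge map rather than on every pixel.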



Table of Contents
1 Introduction
   1.1 Motivation
   1.2 Contribution
   1.3 Organization
2 System Infrastructure
   2.1 Nodes
   2.2 Gateway
   2.3 System Application
   2.4 Robot
3 VLP Fingerprinting Techniques
   3.1 Related work
      3.1.1 On-Off Keying Modulation
      3.1.2 Pulse-Width Modulation
      3.1.3 Sinusoidal Modulation
   3.2 Discussion
   3.3 Conclusion
4 Node Firmware
   4.1 Functional requirements
   4.2 Architecture
      4.2.1 I-CUBE-LRWAN package
      4.2.2 Custom Firmware
   4.3 Implementation
      4.3.1 Main program
      4.3.2 Configuration files
      4.3.3 Indoor Navigation library
   4.4 Functional testing
5 System Application
   5.1 Current state & revision
   5.2 Functional requirements
   5.3 Non-functional requirements
   5.4 Architecture types
      5.4.1 Static Web Application
      5.4.2 Dynamic Web Application
      5.4.3 Single-Page Application
      5.4.4 Multiple-Page Application
      5.4.5 Progressive Web Application
   5.5 Architecture
   5.6 Implementation
      5.6.1 Back End
      5.6.2 Front End
      5.6.3 Functionality overview
6 Node Detection
   6.1 Circle Object Detection
      6.1.1 Circle Hough Transform
   6.2 Edge Detection
      6.2.1 Gradient-Based detection
      6.2.2 Laplacian-Based detection
      6.2.3 Canny detector
   6.3 Implementation
      6.3.1 Detection module
      6.3.2 Detection algorithm
      6.3.3 Test environment
      6.3.4 Algorithm evaluation
7 Conclusions
   7.1 Future work
A List of Abbreviations
B B-L072Z-LRWAN1 Extension connectors
C Bibliography

