
Graduate Student: Hendrik Tampubolon (譚恆力)
Thesis Title: Privacy-Preserving Deep Learning Approaches: Toward Privacy-Aware for Pervasive Human Interaction Recognition
Advisors: Kai-Lung Hua (花凱龍), Chao-Lung Yang (楊朝龍)
Committee Members: Yung-Yao Chen (陳永耀), Yi-Hui Chen (陳宜惠), Yu Tsao (曹昱), Jun-Cheng Chen (陳駿丞)
Degree: Doctor
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2023
Academic Year of Graduation: 111 (ROC calendar)
Language: English
Number of Pages: 100
Keywords: Security and Privacy, Privacy-Preserving HAR/HIR, Human Interaction Recognition, Edge-Fog-Cloud Computing, Pervasive Healthcare Monitoring

Abstract:
Robust human action/interaction recognition (HAR/HIR) is crucial for immediately understanding the actions and interactions of people in cloud-based Artificial Intelligence as a Service (AIaaS), particularly in pervasive systems such as surveillance, the remote Internet of Medical Things, and human-robot interaction. While deep learning (DL) approaches have gained prominence in HAR/HIR, privacy concerns are increasingly significant, especially when video data is captured in public places and used directly by DL models without additional protection. Beyond privacy, security, fast inference, and low latency also play a vital role in the successful adoption of AIaaS. To address these challenges, this dissertation presents a comprehensive privacy-aware framework for pervasive HAR/HIR. First, a skeleton-based model is studied for HAR/HIR tasks. Second, a secure and privacy-preserving HIR framework, called STGCN-PAM-EFCC, is applied to pervasive healthcare monitoring tasks. In STGCN-PAM-EFCC, human action video data are obscured into skeleton data using human pose estimation (PoseNet); a security layer with a lightweight encryption scheme is then added. Although running two DL models collaboratively adds extra cost, the framework preserves data privacy while maintaining recognition performance, low latency, and secure pervasive HAR/HIR. Detailed descriptions, experimental results, and analysis are provided in the subsequent sections.
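As a rough illustration of the data flow the abstract describes, here is a minimal edge-side sketch in Python. It is an assumption-laden mock-up rather than the dissertation's implementation: extract_skeletons is a hypothetical stand-in for PoseNet inference, and Fernet (from the third-party cryptography package) is used only as an example of a lightweight symmetric cipher, not the exact scheme evaluated in STGCN-PAM-EFCC.

```python
# Minimal sketch of the edge-side privacy step: raw video frames are reduced
# to skeleton keypoints on-device, and only an encrypted serialization of the
# keypoints travels onward for skeleton-based (ST-GCN) recognition.
# extract_skeletons() is a hypothetical placeholder for PoseNet inference;
# Fernet is an illustrative lightweight cipher, not the thesis's exact scheme.
import json

import numpy as np
from cryptography.fernet import Fernet


def extract_skeletons(frame):
    """Placeholder for pose estimation: returns (persons, joints, xy) keypoints."""
    return np.random.rand(2, 17, 2)  # e.g., two interacting persons, 17 joints


key = Fernet.generate_key()        # assumed pre-shared with the fog/cloud node
cipher = Fernet(key)

frame = "captured-video-frame"     # stand-in for a frame from the camera
skeletons = extract_skeletons(frame)  # raw RGB never leaves the edge device

# Serialize the keypoint tensor and encrypt it before transmission.
token = cipher.encrypt(json.dumps(skeletons.tolist()).encode("utf-8"))

# Fog/cloud side: decrypt and rebuild the tensor, then feed it to the
# recognition model (omitted here).
restored = np.asarray(json.loads(cipher.decrypt(token)))
assert restored.shape == skeletons.shape
```

The privacy property rests on the first step: pose estimation discards appearance information before anything is transmitted, so the cloud-side recognizer only ever receives encrypted joint coordinates rather than identifiable video.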

Contents
  Recommendation Letter
  Approval Letter
  Abstract in Chinese
  Abstract in English
  Acknowledgements
  Contents
  List of Figures
  List of Tables
  List of Algorithms
  1 Introduction
    1.1 Overview
    1.2 Human Action Recognition
    1.3 Two-Person Interaction Recognition
    1.4 Typical Approaches to Privacy-Preserving Deep Learning
      1.4.1 Existing Privacy-Preserving Human Action/Interaction Recognition
    1.5 Towards Privacy-Preserving Pervasive Human Interaction Recognition
  2 Literature Review
    2.1 Two-Person Interaction Recognition (TPIR)
    2.2 Emerging Internet of Things, Edge, Fog, and Cloud Computing, and Applications to Pervasive HAR/HIR
      2.2.1 Cloud-Based Artificial Intelligence as a Service
      2.2.2 Issues and Characteristics of Internet of Medical Things, Edge, Fog, and Cloud Computing Paradigms
    2.3 Privacy-Preserving Pervasive Human Interaction Recognition: Issues, Requirements, and Challenges
  3 Preliminaries
    3.1 Graph Fundamentals
      3.1.1 Graph Data and Adjacency Matrix
      3.1.2 Pairwise Graph Connectivity
    3.2 Spatial Temporal Graph Convolutional Network
      3.2.1 Pairwise Adjacency Matrix
      3.2.2 Discussion
  4 Secure and Privacy-Preserving Human Interaction Recognition for Pervasive Healthcare Monitoring
    4.1 Background Overview
      4.1.1 Motivation
      4.1.2 Contribution
    4.2 Framework Design and Implementation
      4.2.1 Overview
      4.2.2 Data Acquisition via Pose Estimation Model
      4.2.3 STGCN-PAM-EFCC Model for TPIR
      4.2.4 Security and Privacy Scheme in STGCN-PAM-EFCC
      4.2.5 Skeleton Data Encryption Algorithm in STGCN-PAM-EFCC
  5 Experimental Results
    5.1 Experimental Settings
      5.1.1 Datasets
      5.1.2 Environmental Settings
      5.1.3 STGCN-PAM-EFCC Model for TPIR Setup
      5.1.4 Deployment Cases Setup
      5.1.5 Skeleton Data Encryption Scheme Settings
    5.2 Results and Discussion
      5.2.1 Model Performance
      5.2.2 Security Scheme Complexity Analysis
      5.2.3 Study on Scheme Selection for STGCN-PAM-EFCC Skeleton Data Encryption
      5.2.4 Evaluation of EFC, ECC, and EFCC Deployments
      5.2.5 Evaluation on Multiple-Stream Video Applications
  6 Conclusions
    6.1 Future Work
  References
  Autobiography


Full-text release date: 2025/08/28 (campus network)
Full-text release date: 2026/08/28 (off-campus network)
Full-text release date: 2026/08/28 (National Central Library: Taiwan Dissertations and Theses System)