
Graduate Student: 張宜平 (Yi-Ping Chang)
Thesis Title: 智能股市預測應用程式之介面資訊呈現及系統反饋程度之研究 (A Study on the Interface Information Presentation and Degree of System Feedback of Intelligent Stock Market Prediction Applications)
Advisor: 陳建雄 (Chien-Hsiung Chen)
Committee Members: 柯志祥 (Chih-Hsiang Ko), 陳詩捷 (Shih-Chieh Chen)
Degree: Master
Department: Department of Design (設計學院 - 設計系)
Thesis Publication Year: 2023
Graduation Academic Year: 112 (ROC calendar)
Language: Chinese
Number of Pages: 141
Chinese Keywords: 股市預測應用程式、智能應用程式、介面設計、系統反饋、使用性研究、信任度
English Keywords: Stock market prediction applications, Intelligent applications, Interface design, System feedback, Usability study, Trust
Access Counts: Views: 284; Downloads: 3


    In 2023, artificial intelligence technology proliferated rapidly and was applied widely across fields, allowing anyone to access it through applications. According to Google's research predictions, by 2025, 90% of enterprises will use applications with embedded artificial intelligence. Research in AI has likewise recognized that establishing user experience and trust in intelligent applications is essential to effectively promoting human-AI collaboration and the development of future technologies, underscoring the importance of user experience research on AI applications. The purpose of this study is to identify user experience problems and ways to improve them through research on the interface information design and user experience of a stock market prediction application, and to propose suggestions for interface information design and user experience as a reference for future intelligent application interface design.
    The study's experiment was divided into a pilot experiment and a validation experiment: (1) the pilot experiment used expert interviews and a user questionnaire to investigate how AI application tools are currently used and how they operate; (2) the validation experiment, building on the pilot experiment's conclusions, aimed to improve the information interface design and user experience of intelligent prediction tools and thereby enhance the collaborative experience of AI application tools. Two interface information presentation methods (i.e., text-only presentation/graphical presentation) and four levels of system feedback (i.e., no feedback/low feedback/medium feedback/high feedback) served as independent variables, yielding eight experimental sample interfaces. A 2 (information presentation method) × 4 (system feedback level) between-subjects factorial design was used to investigate interface operation performance, operation satisfaction, perceived system feedback level, trust, overall satisfaction, overall value satisfaction, and System Usability Scale (SUS) scores.
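    Because the System Usability Scale is one of the dependent measures above, its standard scoring procedure (Brooke, 1996) is worth recalling: each of the ten 1-5 Likert items contributes (response - 1) for odd-numbered items and (5 - response) for even-numbered items, and the summed contributions are multiplied by 2.5 to yield a 0-100 score. The following Python sketch illustrates this standard computation; the sample responses are hypothetical, since the thesis's raw data are not reproduced in this record.

        # Standard SUS scoring (Brooke, 1996); the example responses are hypothetical.
        def sus_score(responses):
            """Convert ten 1-5 Likert responses (item 1 first) into a 0-100 SUS score."""
            assert len(responses) == 10, "SUS has exactly ten items"
            total = 0
            for i, r in enumerate(responses, start=1):
                # Odd items are positively worded, even items negatively worded.
                total += (r - 1) if i % 2 == 1 else (5 - r)
            return total * 2.5

        # One hypothetical participant's responses to items 1-10.
        print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 3]))  # 77.5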
    The validation experiment yielded the following results. (1) For the main effect of interface information presentation method, graphical presentation significantly outperformed text-only presentation in operation satisfaction, trust, overall satisfaction, overall value satisfaction, and SUS scores, indicating that participants perceived graphical information presentation as providing better usability, trust, overall satisfaction, and value. (2) For the main effect of system feedback level, low feedback performed significantly better in operation performance and operation satisfaction, showing that a low level of system feedback helps improve both; no feedback and high feedback produced superior value satisfaction, while the mean value satisfaction of medium feedback fell below the scale midpoint of 3, indicating that the highest and lowest feedback levels gave participants a higher sense of value and medium feedback a lower one. (3) Regarding the interaction between the two main effects: operation satisfaction was higher with graphical presentation under no, low, and high feedback, but higher with text-only presentation under medium feedback; trust was higher with graphical presentation under no, low, and medium feedback, but higher with text-only presentation under high feedback; and overall satisfaction was higher with graphical presentation under low, medium, and high feedback, but higher with text-only presentation under no feedback.
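    A 2 × 4 between-subjects design of this kind is commonly analyzed with a two-way ANOVA testing both main effects and their interaction. The sketch below shows one way to run such an analysis in Python with pandas and statsmodels; it is a minimal illustration under assumed names, and the file and column names (responses.csv, presentation, feedback, satisfaction) are hypothetical placeholders rather than the thesis's actual materials.

        # Hypothetical sketch: two-way between-subjects ANOVA for a
        # 2 (presentation) x 4 (feedback) design. Names are illustrative.
        import pandas as pd
        import statsmodels.formula.api as smf
        from statsmodels.stats.anova import anova_lm

        df = pd.read_csv("responses.csv")  # one row per participant

        # C(...) marks both factors as categorical; '*' expands to the main
        # effects plus the presentation-by-feedback interaction term.
        model = smf.ols("satisfaction ~ C(presentation) * C(feedback)", data=df).fit()
        print(anova_lm(model, typ=2))  # F and p for each effect and the interaction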
    Integrating the results of the pilot and validation experiments, this study proposes the following suggestions for the interface information design and user experience of AI applications: (1) use graphical information presentation in interface design; (2) provide a user satisfaction feedback function; (3) adopt a clear, concise interface design style; and (4) control the level of system feedback and validate it through experimentation.

    Table of Contents
    Chinese Abstract ii
    ABSTRACT iv
    Acknowledgments vii
    Table of Contents viii
    List of Figures xii
    List of Tables xiv
    Chapter 1 Introduction 1
        1.1 Research Background and Motivation 1
        1.2 Research Objectives 2
        1.3 Research Framework 3
        1.4 Research Scope and Limitations 5
    Chapter 2 Literature Review 6
        2.1 Current Development of Artificial Intelligence Applications 6
            2.1.1 Explainable AI Research 6
            2.1.2 User Experience Issues in AI Applications 7
        2.2 User Experience of Intelligent Applications 8
            2.2.1 Cognitive Psychology of Intelligent Applications 10
            2.2.2 Mental Models of Intelligent Applications 11
        2.3 User Experience Goals of Intelligent Applications 12
            2.3.1 Human-Computer Interaction 12
            2.3.2 Usability Goals 13
            2.3.3 User Experience Goals 14
        2.4 Interface Design Guidelines for Intelligent Applications 15
        2.5 Innovative Design Methods for Intelligent Applications 18
        2.6 Intelligent Stock Market Prediction Applications 19
            2.6.1 Interface Information Presentation 19
            2.6.2 Functions and Services 20
        2.7 User Experience Evaluation Methods for Intelligent Applications 22
            2.7.1 Usability Engineering Evaluation Methods 22
            2.7.2 Trust Evaluation of Intelligent Applications 25
        2.8 Literature Summary 29
    Chapter 3 Research Methods 31
        3.1 Research Steps and Process 31
        3.2 Experimental Process and Framework 33
        3.3 Experimental Methods 35
            3.3.1 Usability Engineering Evaluation Methods 35
            3.3.2 Remote Usability Testing 36
    Chapter 4 Pilot Experiment 39
        4.1 Pilot Experiment Framework 39
        4.2 Pilot Experiment Design 40
            4.2.1 Expert Interviews 40
            4.2.2 Current Usage Survey 41
        4.3 Pilot Experiment Results and Analysis 44
            4.3.1 Analysis of Expert Interview Results 44
            4.3.2 Analysis of Current Usage Survey Results 49
        4.4 Pilot Experiment Conclusions 53
            4.4.1 Key Factors Affecting the User Experience of AI Applications 53
            4.4.2 Operation Modes of AI Applications 54
            4.4.3 Pilot Experiment Conclusions and Suggestions 55
    Chapter 5 Validation Experiment 57
        5.1 Validation Experiment Framework 57
        5.2 Research Variable Design 58
            5.2.1 Control Variables 58
            5.2.2 Independent Variables 60
        5.3 Validation Experiment Design 63
            5.3.1 Experimental Task Design 63
            5.3.2 Experimental Sample Design 66
            5.3.3 Validation Experiment Questionnaire Design 79
            5.3.4 Validation Experiment Procedure 82
        5.4 Validation Experiment Results and Analysis 84
            5.4.1 Demographic Data Analysis 84
            5.4.2 Analysis of Intelligent Application Cognition Test Results 85
            5.4.3 Analysis of Task Operation Performance Results 85
            5.4.4 Analysis of Task Operation Satisfaction Results 91
            5.4.5 Analysis of Perceived Engagement Results 101
            5.4.6 Analysis of Trust Evaluation Results 103
            5.4.7 Analysis of Overall Satisfaction Evaluation Results 106
            5.4.8 Analysis of Value Satisfaction Evaluation Results 109
            5.4.9 Analysis of System Usability Scale (SUS) Evaluation Results 111
            5.4.10 Analysis of Open-Ended Question Results 112
    Chapter 6 Conclusions and Suggestions 114
        6.1 Research Conclusions 114
            6.1.1 Summary of Pilot Experiment Conclusions 115
            6.1.2 Validation Experiment Research Results 116
        6.2 Interface and User Experience Design Suggestions 122
        6.3 Suggestions for Future Research 123
    References 126
        English References 126
        Chinese References 131
        Online References 131
    Appendices 133
        Appendix 1: Pilot Experiment Interview Outline (Expert Interviews) 133
        Appendix 2: Pilot Experiment Questionnaire (Current Usage Survey) 135
        Appendix 3: Validation Experiment Questionnaire 137

    English References
    [1] Abdul, A., Vermeulen, J., Wang, D., Lim, B. Y., & Kankanhalli, M. (2018, April). Trends and trajectories for explainable, accountable and intelligible systems: An hci research agenda. In Proceedings of the 2018 CHI conference on human factors in computing systems (pp. 1-18).
    [2] Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., ... & Horvitz, E. (2019, May). Guidelines for human-AI interaction. In Proceedings of the 2019 CHI conference on human factors in computing systems (pp. 1-13).
    [3] Banathy, B. H. (1996). Conversation in social systems design. Educational Technology, 36(1), 39-41.
    [4] Bangor, A., Kortum, P., & Miller, J. (2009). Determining what individual SUS scores mean: Adding an adjective rating scale. Journal of usability studies, 4(3), 114-123.
    [5] Barria-Pineda, J., & Brusilovsky, P. (2019, March). Making educational recommendations transparent through a fine-grained open learner model. In Proceedings of the Workshop on Intelligent User Interfaces for Algorithmic Transparency in Emerging Technologies at the 24th ACM Conference on Intelligent User Interfaces, IUI 2019, Los Angeles, USA, March 20, 2019 (Vol. 2327).
    [6] Böckle, M., Yeboah-Antwi, K., & Kouris, I. (2021, July). Can You Trust the Black Box? The Effect of Personality Traits on Trust in AI-Enabled User Interfaces. In International Conference on Human-Computer Interaction (pp. 3-20). Springer, Cham.
    [7] British Design Council. (2004). What is the framework for innovation? Design Council's evolved Double Diamond. Accessed 30 May 2022.
    [8] Brooke, J. (1996). SUS: A "quick and dirty" usability scale. Usability evaluation in industry, 189(3).
    [9] Chander, A., Srinivasan, R., Chelian, S., Wang, J., & Uchino, K. (2018, January). Working with beliefs: AI transparency in the enterprise. In IUI Workshops.
    [10] ACM U.S. Public Policy Council. (2017). Statement on algorithmic transparency and accountability. Communications of the ACM.
    [11] Das, A., & Rad, P. (2020). Opportunities and challenges in explainable artificial intelligence (xai): A survey. arXiv preprint arXiv:2006.11371.
    [12] Datta, A., Sen, S., & Zick, Y. (2016, May). Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. In 2016 IEEE symposium on security and privacy (SP) (pp. 598-617). IEEE.
    [13] Draelos, T. J., Miner, N. E., Lamb, C. C., Cox, J. A., Vineyard, C. M., Carlson, K. D., ... & Aimone, J. B. (2017, May). Neurogenesis deep learning: Extending deep networks to accommodate new classes. In 2017 International Joint Conference on Neural Networks (IJCNN) (pp. 526-533). IEEE.
    [14] Ehsan, U., Tambwekar, P., Chan, L., Harrison, B., & Riedl, M. O. (2019, March). Automated rationale generation: a technique for explainable AI and its effects on human perceptions. In Proceedings of the 24th International Conference on Intelligent User Interfaces (pp. 263-274).
    [15] Ferreira, J. J., & Monteiro, M. S. (2020, July). What are people doing about XAI user experience? A survey on AI explainability research and practice. In International Conference on Human-Computer Interaction (pp. 56-73). Springer, Cham.
    [16] Fontana, A., & Frey, J. H. (1994). Interviewing: The art of science. In N. K. Denzin (Ed.), The Handbook of Qualitative Research. Thousand Oaks, CA: Sage Publications.
    [17] Goebel, R., Chander, A., Holzinger, K., Lecue, F., Akata, Z., Stumpf, S., ... & Holzinger, A. (2018, August). Explainable ai: the new 42?. In International cross-domain conference for machine learning and knowledge extraction (pp. 295-303). Springer, Cham.
    [18] Goetz, J., & LeCompte, M. (1984). Ethnography and qualitative design in educational research. New York: Academic Press.
    [19] Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., & Yang, G. Z. (2019). XAI—Explainable artificial intelligence. Science robotics, 4(37), eaay7120.
    [20] Hewett, T. T., Baecker, R., Card, S., Carey, T., Gasen, J., Mantei, M., ... & Verplank, W. (1992). ACM SIGCHI curricula for human-computer interaction. ACM.
    [21] Holzinger, A., Langs, G., Denk, H., Zatloukal, K., & Müller, H. (2019). Causability and explainability of artificial intelligence in medicine. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 9(4), e1312.
    [22] Jacovi, A., Marasović, A., Miller, T., & Goldberg, Y. (2021, March). Formalizing trust in artificial intelligence: Prerequisites, causes and goals of human trust in AI. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency (pp. 624-635).
    [23] Kahn, R., & Cannell, C. (1957). The Dynamics of Interviewing. New York: John Wiley and Sons
    [24] Liao, Q. V., & Varshney, K. R. (2021). Human-Centered Explainable AI (XAI): From Algorithms to User Experiences. arXiv preprint arXiv:2110.10790.
    [25] Liao, Q. V., Gruen, D., & Miller, S. (2020, April). Questioning the AI: informing design practices for explainable AI user experiences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-15).
    [26] Nielsen, J. (1994). Usability engineering. Morgan Kaufmann.
    [27] Norman, D. A. (1983). Some observations on mental models. Mental Models, 7(112), 7-14.
    [28] Norman, D. A. (1988). The psychology of everyday things. Basic books.
    [29] Paudyal, P., Lee, J., Kamzin, A., Soudki, M., Banerjee, A., & Gupta, S. K. (2019, March). Learn2Sign: Explainable AI for Sign Language Learning. In IUI Workshops.
    [30] Pearl, J., & Mackenzie, D. (2018). The book of why: the new science of cause and effect. Basic books.
    [31] Preece, J., Sharp, H., & Rogers, Y. (2004). Interaction design. Oltre l'interazione uomo-macchina. Apogeo Editore.
    [32] Ribeiro, M. T., Singh, S., & Guestrin, C. (2016, August). "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (pp. 1135-1144).
    [33] Rotsidis, A., Theodorou, A., & Wortham, R. H. (2019). Robots that make sense: transparent intelligence through augmented reality. In 2019 IUI Workshop in Intelligent User Interfaces for Algorithmic Transparency in Emerging Technologies (IUI-ATEC). CEUR Workshop Proceedings.
    [34] Schleith, J., & Tsar, D. (2022). Triple Diamond Design Process. In International Conference on Human-Computer Interaction (pp. 136-146). Springer, Cham.
    [35] Smith, A., & Nolan, J. (2018). The Problem of Explanations without User Feedback. In IUI Workshops.
    [36] Springer, A., & Whittaker, S. (2018). Progressive disclosure: designing for effective transparency. arXiv preprint arXiv:1811.02164.
    [37] Stumpf, S., Skrebe, S., Aymer, G., & Hobson, J. (2018, March). Explaining smart heating systems to discourage fiddling with optimized behavior. In CEUR Workshop Proceedings (Vol. 2068).
    [38] Stumpf, S. (2019). Horses for courses: Making the case for persuasive engagement in smart systems. In Joint Proceedings of the ACM IUI 2019 Workshops (Vol. 2327). CEUR.
    [39] Szeli, L. (2020). UX in AI: trust in algorithm-based investment decisions. Junior Management Science, 5(1), 1-18.
    [40] Thebault-Spieker, J., Terveen, L., & Hecht, B. (2017). Toward a geographic understanding of the sharing economy: Systemic biases in UberX and TaskRabbit. ACM Transactions on Computer-Human Interaction (TOCHI), 24(3), 1-40.
    [41] Tsai, C. H., & Brusilovsky, P. (2019). Designing Explanation Interfaces for Transparency and Beyond. In IUI Workshops.
    [42] van Oosterhout, A. (2019, June). Understanding the benefits and drawbacks of shape change in contrast or addition to other modalities. In Companion Publication of the 2019 on Designing Interactive Systems Conference 2019 Companion (pp. 113-116).
    [43] Vereschak, O., Bailly, G., & Caramiaux, B. (2021). How to evaluate trust in AI-assisted decision making? A survey of empirical methodologies. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1-39.
    [44] Zhao, J., Wang, T., Yatskar, M., Ordonez, V., & Chang, K. W. (2017). Men also like shopping: Reducing gender bias amplification using corpus-level constraints. arXiv preprint arXiv:1707.09457.
    [45] Zhao, R., Benbasat, I., & Cavusoglu, H. (2019). Transparency in Advice-Giving Systems: A Framework and a Research Model for Transparency Provision. In IUI Workshops.

    Chinese References
    [1] 陳建雄 (Trans.). (2009). 互動設計 [Interaction design] (2nd ed.; original authors: Preece, J., Rogers, Y., & Sharp, H.). 全華圖書.
    [2] 張紹勳. (2007). 研究方法 [Research methods] (3rd ed.). 台中: 滄海.
    [3] 鄭麗玉. (2006). 認知心理學:理論與應用 [Cognitive psychology: Theory and applications]. 五南圖書出版股份有限公司.

    Online References
    [1] Google. (2023). 2023 Data and AI Trends Report. Retrieved August 2, 2023, from https://services.google.com/fh/files/misc/data_and_ai_trends.pdf
    [2] IDC Corporate. (2022). IDC預測亞太區2025年人工智慧支出將達到320億美元,台灣市場持續成長 [IDC forecasts that Asia/Pacific spending on artificial intelligence will reach US$32 billion in 2025, with continued growth in the Taiwan market]. Retrieved May 20, 2022, from https://www.idc.com/getdoc.jsp?containerId=prAP49016222
    [3] Nielsen Norman Group. (2013). Remote usability tests: Moderated and unmoderated. Retrieved October 2, 2023, from https://www.nngroup.com/articles/remote-usability-tests/
    [4] USACM. (2017). 2017 statement on algorithmic transparency and accountability. Retrieved October 2, 2023, from https://www.acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf
    [5] 設計大舌頭. (2018). 使用者體驗於人工智慧時代的挑戰 [Challenges for user experience in the age of artificial intelligence]. Retrieved May 13, 2022, from https://reurl.cc/2ZLVa4

    Full-text release date: 2024/11/20 (off-campus network)
    Full-text release date: 2024/11/20 (National Central Library: Taiwan Theses and Dissertations System)