
Graduate student: 昝亭瑜 (Ting-Yu Tsan)
Thesis title: 探討聊天機器人個性化對使用感知與意願的影響 (A Study on the Influence of Chatbot Personalization on User Perception and Willingness)
Advisor: 董芳武 (Fang-Wu Tung)
Oral defense committee: 唐玄輝 (Hsien-Hui Tang), 張永儒 (Yung-Ju Chang)
Degree: Master
Department: Department of Design, College of Design
Year of publication: 2020
Graduating academic year: 108
Language: Chinese
Number of pages: 76
Chinese keywords: 聊天機器人、對話式介面、個性化、社會臨場感、信賴度
English keywords: Chatbot, Conversational User Interface, Personalization, Social presence, Trust
    互聯網愈趨成熟之發展與行動裝置之普及,且使用者對於社群媒體與即時通訊軟體的高黏著度,讓許多品牌與企業開始把產品服務,從既定介面操作轉變到人與人最基本的互動方式:「交談」上,繼而開啟聊天機器人之對話式介面的契機與商機。
    如何建構符合使用情境之聊天機器人個性,繼而提高對話式互動體驗之使用意願,為一重要議題。本研究以「外放」--「內斂」人格維度之語言風格的相關文獻作為聊天機器人個性語句的參考,設計與建構個性不同的腳本式聊天機器人實驗原型,而機器人的使用情境為基礎理財知識查詢之「資訊查詢」情境。再根據文獻探討與統整,提出四項影響對話式互動體驗之相關構面:聊天機器人功能性、社會臨場感、信賴度與聊天機器人性別,作為本研究架構與評估聊天機器人個性和使用感知的相關依據。實驗以年輕族群為研究對象,採用觀察、問卷調查與訪談等研究方法收集資料。
    透過文獻探討、量表調查與訪談、實驗分析,彙整歸納本研究之結論為:(1)受測者的確能透過不同個性之聊天機器人的對話語句,感知到其個性差異,因此當研究者需要建構聊天機器人個性特徵時,找到合適之人格模型的語言風格作為語句的參考,是非常有效率與適宜使用的工具;(2)影響對話式互動體驗之相關構面中,以信賴度影響感知與使用意願之程度最高。內斂機器人語句的語言風格,可讓使用者感知到較高的功能性與信賴度;外放機器人語句的語言風格,則讓使用者感知到較高的社會臨場感;(3)聊天機器人個性化確實會影響使用感知與使用意願,且以個性特徵之對話語句與功能性、社會臨場感、信賴度一同探討感知與意願時,可得出在理財知識查詢的使用情境下,內斂機器人的個性特徵,較適合作為此情境下可運用於機器人個性化的人格模型。
    根據研究成果,本研究可作為在建構聊天機器人個性化時的設計參考架構,為聊天機器人之對話式互動體驗提供有用的建議。


    Because of the advancement of Internet technology and the wide diffusion of mobile devices, smartphone users have become strongly attached to social media and instant messaging applications. Under such circumstances, various brands and businesses have shifted their products and services from conventional interface operation to chat, the most fundamental form of interaction between human beings. This change has fostered the development of conversational user interfaces and the business opportunities that come with them.
    This change also highlights the importance of designing chatbot personalities that fit their usage scenarios, so as to increase users' willingness to engage in conversational interactions. In this study, literature on the language styles associated with the extraversion and introversion personality dimensions was used as the reference for the chatbots' utterances. Scripted chatbot prototypes were designed and built, one representing each personality dimension, with dialogues centered on queries about basic wealth-management knowledge. Based on the literature review, four dimensions influencing the conversational interaction experience were proposed as the research framework and as the basis for assessing chatbot personality and user perception: chatbot functionality, social presence, trust, and chatbot gender. The experiment targeted young adults, and data were collected through observation, questionnaire surveys, and interviews.
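    To make the scripted prototype design concrete, the sketch below shows one possible way a personality-styled, script-based chatbot could map a single query intent to either an extraverted or an introverted reply. It is a minimal illustration only: the intents, wording, and function names are hypothetical and are not taken from the thesis prototype.

```python
# Minimal sketch of a scripted, personality-styled chatbot.
# Intents and replies are hypothetical examples, not the thesis script.

RESPONSES = {
    "what_is_etf": {
        "extraverted": (
            "Great question!! ETFs are super popular -- they bundle lots of stocks "
            "into one fund, so you can jump right in and spread your risk!"
        ),
        "introverted": (
            "An ETF is a fund that tracks an index and trades like a stock. "
            "It offers diversification at a relatively low cost."
        ),
    },
    "what_is_compound_interest": {
        "extraverted": (
            "Compound interest is amazing! Your earnings keep earning more "
            "earnings -- it really adds up fast!"
        ),
        "introverted": (
            "Compound interest means interest is calculated on both the principal "
            "and the interest that has already accumulated."
        ),
    },
}

def reply(intent: str, personality: str) -> str:
    """Return the scripted reply for a known intent in the chosen personality style."""
    try:
        return RESPONSES[intent][personality]
    except KeyError:
        return "Sorry, I can only answer basic wealth-management questions right now."

if __name__ == "__main__":
    print(reply("what_is_etf", "extraverted"))
    print(reply("what_is_etf", "introverted"))
```

    Keying both personality variants to the same intents keeps the information content constant while varying only the language style, which is what allows the two prototypes to be compared.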
    The literature review, scale questionnaires, and experimental analysis yielded three findings: (1) Participants were indeed able to perceive personality differences through the dialogue of chatbots scripted with different personality dimensions. Texts reflecting the language styles of a given personality dimension are therefore an efficient and suitable tool for researchers who want to construct a chatbot's personality traits. (2) Among the dimensions influencing the conversational interaction experience, trust had the strongest impact on perception and willingness to use. The language style of the introverted chatbot led to higher perceived functionality and trust, while that of the extraverted chatbot led to higher perceived social presence. (3) Chatbot personalization does affect user perception and willingness to use. When the personality-based dialogue was examined together with functionality, social presence, and trust, the results indicated that the personality traits of the introverted chatbot are better suited to a personalized chatbot for querying wealth-management information.
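    The regression analysis behind finding (2) could be sketched as follows. The data, column names, and model below are hypothetical stand-ins for the questionnaire data, shown only to illustrate how the relative contributions of functionality, social presence, and trust to willingness to use might be estimated.

```python
# Illustrative regression sketch with made-up Likert-scale data
# (column names and values are hypothetical, not the thesis data).
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant: mean Likert-scale score for each construct.
df = pd.DataFrame({
    "functionality":   [5.1, 4.3, 6.0, 5.5, 3.9, 4.8, 5.7, 4.1],
    "social_presence": [3.2, 4.0, 2.8, 3.5, 4.4, 3.1, 3.9, 4.2],
    "trust":           [5.4, 4.1, 6.1, 5.2, 3.6, 4.9, 5.8, 3.8],
    "willingness":     [5.0, 4.2, 6.2, 5.3, 3.5, 4.7, 5.9, 3.7],
})

# Ordinary least squares: which construct contributes most to willingness to use?
model = smf.ols("willingness ~ functionality + social_presence + trust", data=df).fit()
print(model.summary())
```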
    This study serves as a reference for chatbot personality designs and provides useful suggestions for improving user experience of conversational interaction with chatbots.

    Abstract (Chinese)
    Abstract (English)
    Acknowledgements
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1 Introduction
      1.1 Research Background
      1.2 Research Motivation
      1.3 Research Objectives
      1.4 Research Scope
      1.5 Research Process
    Chapter 2 Literature Review
      2.1 Chatbots
        2.1.1 Chatbot Types and Usage Scenarios
        2.1.2 Conversational Interface Design
        2.1.3 Chatbot Personalization
        2.1.4 Chatbot Evaluation Metrics
      2.2 Other Dimensions Influencing the Conversational Interaction Experience
        2.2.1 Social Presence
        2.2.2 Trust
        2.2.3 Gender
    Chapter 3 Research Methods
      3.1 Pilot Study
      3.2 Research Framework
      3.3 Experimental Design
        3.3.1 Experimental Prototype Design
        3.3.2 Measurement Instruments
        3.3.3 Experimental Procedure
    Chapter 4 Analysis of Results
      4.1 Descriptive Statistics of the Sample
      4.2 Reliability and Validity
      4.3 Correlation and Regression Analysis
      4.4 Difference Analysis
        4.4.1 Difference Analysis of Perceived Extraversion
        4.4.2 Difference Analysis of Functionality
        4.4.3 Difference Analysis of Social Presence
        4.4.4 Difference Analysis of Trust
        4.4.5 Difference Analysis of Satisfaction and Willingness to Use
    Chapter 5 Conclusions and Suggestions
      5.1 Research Conclusions and Practical Suggestions
        5.1.1 Planning and Design of Personalized Chatbot Utterances
        5.1.2 Influence of the Four Dimensions of the Conversational Interaction Experience on User Perception and Willingness
        5.1.3 Influence of Chatbots with Different Personalities on User Perception and Willingness
      5.2 Research Limitations
      5.3 Suggestions for Future Research
    References
    Appendix 1: Online Questionnaire
    Appendix 2: Qualitative Interview Guide

    List of Figures
    Figure 2-2-1 Option-type conversational interface
    Figure 2-1-2 Confirmation-type conversational interface
    Figure 2-2-2 List-type conversational interface
    Figure 3-1 Design framework and process of the pilot-study chatbot
    Figure 3-2 Research framework
    Figure 3-3-1-1 Design process of the experimental prototype
    Figure 3-3-1-2 Information architecture of the experimental prototype
    Figure 3-3-1-3 Character names and avatars of the experimental prototype
    Figure 3-3-1-4 Visual interface of the experimental prototype
    Figure 3-3-1-5 Conversational interface of the experimental prototype

    List of Tables
    Table 3-1-1 User target tasks in the pilot study
    Table 3-1-2 Extraverted personalized utterances in the pilot-study script
    Table 3-1-3 Introverted personalized utterances in the pilot-study script
    Table 3-3-1-1 Extraverted personalized utterances in the experimental script
    Table 3-3-1-2 Introverted personalized utterances in the experimental script
    Table 3-3-1-3 Comparative analysis of personalized utterance differences in the experimental script
    Table 3-3-2 Participants' personality semantic differential scale
    Table 4-1 Descriptive statistics of participants
    Table 4-2 Reliability analysis of the experimental scales
    Table 4-3-1 Correlation analysis of the dimensions with satisfaction and willingness to use
    Table 4-3-2 Regression analysis of satisfaction
    Table 4-3-3 Regression analysis of willingness to use
    Table 4-4-1-1 Descriptive statistics of chatbot personality
    Table 4-4-1-2 Two-way ANOVA of chatbot personality
    Table 4-4-1-3 Interview analysis of chatbot personality
    Table 4-4-2-1 Descriptive statistics of functionality
    Table 4-4-2-2 Two-way ANOVA of functionality
    Table 4-4-2-3 Interview analysis of functionality
    Table 4-4-3-1 Descriptive statistics of social presence
    Table 4-4-3-2 Two-way ANOVA of social presence
    Table 4-4-3-3 Interview analysis of social presence
    Table 4-4-4-1 Descriptive statistics of trust
    Table 4-4-4-2 Two-way ANOVA of trust
    Table 4-4-4-3 Paired-samples t-test of trust
    Table 4-4-4-4 Interview analysis of trust
    Table 4-4-5-1 Descriptive statistics of satisfaction
    Table 4-4-5-2 Two-way ANOVA of satisfaction
    Table 4-4-5-3 Descriptive statistics of willingness to use
    Table 4-4-5-4 Two-way ANOVA of willingness to use
    Table 4-4-5-5 Interview analysis of chatbot attributes in relation to satisfaction and willingness to use
    Table 4-4-5-6 Interview analysis of the extraverted chatbot in relation to satisfaction and willingness to use
    Table 4-4-5-7 Interview analysis of the introverted chatbot in relation to satisfaction and willingness to use

    -English References-
    Gill, A., & Oberlander, J. (2002). Taking care of the linguistic features of extraversion. In Proceedings of the 24th Annual Conference of the Cognitive Science Society, 363-368.

    Shevat, A. (2018). Designing Bots: Creating Conversational Experiences.

    Lund, A. (2001). Measuring usability with the USE questionnaire. Usability Interface, 8(2), 3-6.

    Thorne, A. (1987). The press of personality: A study of conversations between introverts and extroverts. Journal of Personality and Social Psychology, 53, 718-726. doi:10.1037/0022-3514.53.4.718

    Biocca, F., & Harms, C. (2002). Defining and measuring social presence: Contribution to the Networked Minds Theory and Measure. ISPR Presence conference, 2002(517), 1-36.

    Biocca, F., Harms, C., & Gregg, J. (2001, May). The networked minds measure of social presence: Pilot test of the factor structure and concurrent validity. Paper presented at the 4th Annual International Workshop, Pennsylvania, USA.

    Beukeboom, C. J., Tanis, M., & Vermeulen, I. E. (2012). The language of extraversion: Extraverted people talk more abstractly, introverts are more concrete. Journal of Language and Social Psychology, 32(2), 191-201.

    Nordheim, C. B. (2018). Trust in chatbots for customer service: Findings from a questionnaire study (Master's thesis). Department of Psychology, University of Oslo.

    Coniam, D. (2014). The linguistic accuracy of chatbots: usability from an ESL perspective. Text & Talk, 34(5), 545-567.

    Corritore, C. L., Kracher, B., & Wiedenbeck, S. (2003). On-line trust: Concepts, evolving themes, a model. International Journal of Human-Computer Studies, 58(6), 737-758. doi:10.1016/s1071-5819(03)00041-7

    Costa, P. T., Jr., Terracciano, A., & McCrae, R. R. (2001). Gender differences in personality traits across cultures: Robust and surprising findings. Journal of Personality and Social Psychology, 81, 322-331. doi:10.1037/0022-3514.81.2.322

    Dautenhahn, K.; Ogden, B.; and Quick, T. (2002). From embodied to socially embedded agents–implications for interaction-aware robots. Cognitive Systems Research 3(3):397-428.

    De Angeli, A. & Brahnam, S. (2008). ‘I hate you! disinhibition with virtual partners’, Interacting With Computers 20(3), 302-310.

    Norman, D. A. (2007). Emotional design: Why we love (or hate) everyday things. New York: Basic Books.

    Doney, P. M., & Cannon, J. P. (1997). An examination of the nature of trust in buyer-seller relationships. The Journal of Marketing, 35-51.

    Eagle, M., Goldberger, L., & Breitman, M. (1969). Field dependence and memory for social vs. neutral and relevant vs. irrelevant incidental stimuli. Perceptual and Motor Skills, 29(3), 903-910.

    Eeuwen, M. (2017). Mobile conversational commerce: messenger chatbots as the next interface between businesses and consumers (Master's thesis, University of Twente). Retrieved on February 1, 2017 from http://essay.utwente.nl/71706/1/van%20Eeuwen_MA_BMS.pdf

    Fast, L. A., & Funder, D. C. (2008). Personality as manifest in word use: Correlations with self-report, acquaintance report, and behavior. Journal of Personality and Social Psychology, 94, 334-346. doi:10.1037/0022-3514.94.2.334

    Feingold, A. (1994). Gender differences in personality: A meta-analysis. Psychological Bulletin, 116, 429-456. doi:10.1037/0033-2909.116.3.429

    Fogg, B. J. (2002). Persuasive technology: Using computers to change what we think and do. Ubiquity, 2002, December, 5.

    Heylighen, F., & Dewaele, J.-M. (2002). Variation in the contextuality of language: An empirical measure. Foundations of Science, 7, 293-340.

    Gefen, D., & Straub, D. W. (2004). Consumer trust in B2C e-commerce and the importance of social presence: Experiments in e-products and e-services. Omega, 32(6), 407-424.

    Gefen, D. (2002). Reflections on the dimensions of trust and trustworthiness among online consumers. ACM SIGMIS Database: the DATABASE for Advances in Information Systems, 33(3), 38-53.

    Goldberg, L. R. (1992). The development of markers for the Big-Five factor structure. Psychological Assessment, 4, 26-42.

    Isaac, A. M., & Bridewell, W. (2014). Mindreading deception in dialog. Cognitive Systems Research, 28, 12-19.

    Hirsh, J. B., & Peterson, J. B. (2009). Personality and language use in self-narratives. Journal of Research in Personality, 43, 524-527. doi:10.1016/j.jrp.2009.01.006

    Pennebaker, J. W., & King, L. A. (1999). Linguistic styles: Language use as an individual difference. Journal of Personality and Social Psychology, 77, 1296-1312. doi:10.1037/0022-3514.77.6.1296

    Jarvenpaa, S. L., Tractinsky, N., & Saarinen, L. (1999). Consumer Trust in an Internet Store: A Cross-Cultural Validation. Journal of Computer-Mediated Communication, 5(2), 45-71. doi:10.1111/j.1083-6101.1999.tb00337.x

    Dewaele, J.-M., & Furnham, A. (1999). Extraversion: The unloved variable in applied linguistic research. Language Learning, 19, 509-544. doi:10.1111/0023-8333.00098

    Pavlus, J. (2016). The next phase of UX: Designing chatbot personalities. Fast Company. Retrieved May 1, 2016, from https://www.fastcompany.com/3054934/the-next-phase-of-ux-designing-chatbot-personalities

    Short, J., Williams, E., & Christie, B. (1976). The social psychology of telecommunications. New York: Wiley Press.

    Oberlander, J., & Gill, A. (2006). Language with character: A stratified corpus comparison of individual differences in e-mail communication. Discourse Processes, 42, 239-270. doi:10.1207/s15326950dp4203_1

    Finstad, K. (2010). The Usability Metric for User Experience. Interacting with Computers, 22, 323-327.

    Kulms, P., Krämer, N. C., Gratch, J. & Kang, S.-H. (2011). ‘It’s in their eyes: A study on female and male virtual humans’ gaze’, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 6895, 80–92.

    Lowenthal, P. R. (2009). Social presence. In P. Rogers, G. Berg, J. Boettcher, C. Howard, L. Justice, & K. Schenk (Eds.), Encyclopedia of distance and online learning. PA, Hershey: IGI Global.

    McDonnell, M., & Baxter, D. (2019). Chatbots and gender stereotyping. Interacting with Computers, 31(2), 116-121. https://doi.org/10.1093/iwc/iwz007

    Galligan, M. (2016). "Bot" is a hilariously over-simplified buzzword. Let's fix that. Retrieved April 13, 2016, from https://medium.com/@mg/bot-is-a-hilariously-over-simplified-buzzword-let-s-fix-that-f1d63abb8ba7

    Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. The Academy of Management Review, 20(3), 709-734. doi:10.2307/258792

    McKnight, D. H., Choudhury, V., & Kacmar, C. (2002). Developing and validating trust measures for e-commerce: An integrative typology. Information Systems Research, 13(3), 334-359.
    doi:10.1287/isre.13.3.334.81

    Meira, M. O., & Canuto, A. M. P. (2015). Evaluation of Emotional Agents' Architectures: an Approach Based on Quality Metrics and the Influence of Emotions on Users. In Proceedings of the World Congress on Engineering (Vol. 1).

    McTear, M. F., Callejas, Z., & Griol, D. (2016). Affective conversational interfaces. In The Conversational Interface: Talking to Smart Devices. Springer International Publishing, Switzerland.

    Neff, M., Wang, Y., Abbott, R., & Walker, M. (2010). Evaluating the effect of gesture and language on personality perception in conversational agents.

    Miner, A. S., Milstein, A., Schueller, S., Hegde, R., Mangurian, C., & Linos, E. (2016). Smartphone-based conversational agents and responses to questions about mental health, interpersonal violence, and physical health. JAMA internal medicine, 176(5), 619-625.

    Mori, M. (1970). ‘The uncanny valley’, Energy 7(4), 33–35.

    Morrissey, K., & Kirakowski, J. (2013, July). ‘Realness’ in Chatbots: Establishing Quantifiable Criteria. In International Conference on Human-Computer Interaction (pp. 87-96). Springer Berlin Heidelberg.

    Nass, C. and Moon, Y. (2000). Machines and Mindlessness: Social Responses to Computers. Journal of Social Issues 56, 1, 81-103.

    Nass, C., Steuer, J., and Tauber, E.R. (1994). Computers are Social Actors. Proceedings of the ACM CHI Conference on Human Factors in Computing Systems, ACM Press, 72-78.

    Radziwill, N. M., & Benton, M. C. (2017). Evaluating quality of chatbots and intelligent conversational agents.

    Pauletto, S., Balentine, B., Pidcock, C., Jones, K., Bottaci, L., Aretoulaki, M., & Balentine, J. (2013). Exploring expressivity and emotion with artificial voice and speech technologies. Logopedics Phoniatrics Vocology, 38(3), 115-125.

    Ramos, R. (2017, February 3). Screw the Turing Test - Chatbots don't need to act human. VentureBeat. Retrieved on March 13, 2017 from https://venturebeat.com/2017/02/03/screw-the-turing-test-chatbots-dont-need-to-act-human/

    Robinette, P., Wagner, A. R., & Howard, A. M. (2014). Assessment of robot guidance modalities conveying instructions to humans in emergency situations. Robot and Human Interactive Communication, 2014 RO-MAN: The 23rd IEEE International Symposium on., 1043-1049.

    Rourke, L., Anderson, T., Garrison, D. R., & Archer, W. (2001). Assessing social presence in asynchronous text-based computer conferencing. Journal of Distance Education, 14(2), 50-71.

    Schuetzler, R. M., Grimes, M. G., & Giboney, J. S. (2018). An investigation of conversational agent relevance, presence, and engagement. Twenty-fourth Americas Conference on Information Systems, New Orleans.

    Solomon, M. (2017, March 23). If chatbots win, customers lose, says Zappos customer service expert. Forbes. Retrieved on March 24, 2017 from https://www.forbes.com/sites/micahsolomon/2017/03/23/customers-lose-if-chatbots-win-says-zappos-customer-service-expert

    Yarkoni, T. (2010). Personality in 100,000 words: A large-scale analysis of personality and word use among bloggers. Journal of Research in Personality, 44, 363-373. doi:10.1016/j.jrp.2010.04.001

    Thieltges, A., Schmidt, F., & Hegelich, S. (2016, March). The Devil’s Triangle: Ethical Considerations on Developing Bot Detection Methods. In 2016 AAAI Spring Symposium Series.

    Smestad, T. L. (2018). Personality matters! Improving the user experience of chatbot interfaces (Master's thesis). Department of Design, Norwegian University of Science and Technology, Norway.

    Vala, M., Blanco, G., & Paiva, A. (2011). Providing gender to embodied conversational agents. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 6895, 148-154.

    Walker, J., Sproull, L., & Subramani, R. (1994). Using a human face in an interface. Proceedings of the Conference on Human Factors in Computing Systems, 85-91.

    De Angeli, A., Johnson, G. I., & Coventry, L. (2001). The unfriendly user: Exploring social reactions to chatterbots. In Helander, Khalid, & Tham (Eds.), Proceedings of The International Conference on Affective Human Factors Design. London: Asean Academic Press.

    Wallace, R. (2003). The elements of AIML style. ALICE AI Foundation. Retrieved on March 15, 2017 from
    http://www.alicebot.org/style.pdf

    Weizenbaum, J. (1966). ELIZA--a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36-45.

    Wilson, H. J., Daugherty, P. R., & Morini-Bianzino, N. (2017, March 23) Will AI Create as Many Jobs as it Eliminates? MIT Sloan Management Review. Retrieved on March 24, 2017 from
    http://sloanreview.mit.edu/article/will-ai-create-as-many-jobs-as-it-eliminates/

    Weisberg, Y. J., DeYoung, C. G., & Hirsh, J. B. (2011). Gender differences in personality across the ten aspects of the Big Five. Frontiers in Psychology, 2, 178. doi:10.3389/fpsyg.2011.00178

    Zimmerman, J., Ayoob, E., Forlizzi, J., & McQuaid, M. (2005). Putting a face on embodied interface agents.

    -Chinese References-
    李長潔,〈為什麼視覺化很重要?〉,shucidi.mystrikingly.com›blog,2017

    房美玉,〈儲備幹部人格特質甄選量表之建立與應用-以某高科技公司為例〉,人力資源管理學報,2,001-018,2002

    張春興,《張氏心理學辭典》,東華出版社,1989

    張明琳,〈消費者人格對於品牌個性與品牌關係形成的影響與探討-以品牌體驗為觀點〉,東海大學企業管理學系碩士班碩論,2004

    許雅婷,〈人格特質與團隊組合對知識分享、創新績效的影響〉,東吳大學企業管理學系碩士班碩論,2002

    楊美雪、蔡雯婷,〈智慧型手機通訊軟體使用者之社會臨場感與愉悅感研究--以LINE即時通訊軟體為例〉,國立虎尾科技大學學報,32卷1期 P35-50,2014

    -Online References-
    Chris, K. (2018),〈App使用時間趨飽和、購物型應用大幅成長〉
    https://www.smartm.com.tw/article

    Chris Klotzbach, Lali Kesiraju. (2017),〈2017 App年度關鍵報告〉,【Flurry】
    https://www.smartm.com.tw/article/34363233cea3

    Emmet Connolly. (2019),〈8 principles of bot design〉
    https://medium.com/intercom-inside/8-principles-of-bot-design-51f03df1d84c

    Matt, G. (2016),〈Bot is a hilariously over-simplified buzzword. Let’s fix that〉
    https://medium.com

    Paul Adams. (2011),〈Grouped: How small groups of friends are the key to influence on the social web〉
    https://www.amazon.com
    https://www.intercom.com/blog/bots-versus-humans/?utm_medium=article&utm_source=medium&utm_campaign=botdesign

    Simon, K.(2018),〈Global Digital Statshot 2018〉,【全球網路報告】
    https://www.funp.com›news
    http://wearesocial.cn/blog/2019/10/25/the-global-state-of-digital-in-october-2019

    Ting, H. (2018),〈如何用豐富的訊息格式,吸引使用者的目光?〉,【Super 8 對話式商務平台】
    https://medium.com›8-interactive

    〈Market Research Report〉,【GRAND VIEW RESEARCH】
    https://www.grandviewresearch.com

    〈Customer Experience: Can Chatbots Jump The Uncanny Valley?〉
    http://www.brandquarterly.com/customer-experience-can-chatbots-jump-uncanny-valley

    〈什麼是Chatbot聊天機器人?它能幫你導入客流量,是行銷自動化的必備工具〉
    https://blog.gogopartners.com
