
Graduate Student: Chun-Yu Chang (張君伃)
Thesis Title: Threat or Opportunity? Exploring the Perception and Collaborative Willingness of University and Master's Students towards ChatGPT
Advisor: Yu-Qian Zhu (朱宇倩)
Committee Members: Shih-Chen Huang (黃世禎); Hsiao-Lan Wei (魏小蘭)
Degree: Master
Department: College of Management, Department of Information Management
Year of Publication: 2023
Graduation Academic Year: 111 (2022–2023)
Language: Chinese
Pages: 100
Keywords (Chinese): ChatGPT、大型語言模型、聊天機器人、COR理論、協作意願、感知機會、感知威脅、一般自我效能感
Keywords (English): ChatGPT, large language model, chatbot, Conservation of Resources Theory, willingness to collaborate, perceived opportunities, perceived threats, general self-efficacy


    ChatGPT has achieved unprecedented growth as an application, amassing over a hundred million users in just two months since its launch. It has transformed from a conventional chatbot into a powerful Generative Artificial Intelligence (GAI) tool that is accessible to the general public. This versatile tool goes beyond engaging in natural conversations and interactions with humans; it also possesses the ability to perform intricate language tasks, such as composing music, writing novels, and even coding programs. Since the introduction of ChatGPT, news outlets and media platforms have been abuzz with reports on its potential to replace human jobs, pose threats and challenges to society, and give rise to new forms of employment. Its remarkable capabilities have the potential to influence individuals across various professions, potentially resulting in some people losing their previous job opportunities.
    The cognitive and emotional capabilities of ChatGPT bring numerous opportunities to people's lives, while also introducing significant challenges and potential threats. This study examines whether students perceive the emergence of ChatGPT as an opportunity or a threat, and whether these perceptions in turn influence their willingness to collaborate with it. Additionally, the research examines how subjective evaluations of one's own abilities and differences in academic disciplines may affect these perceptions. Grounded in the Conservation of Resources (COR) theory framework, the study investigates six constructs: cognitive abilities, emotional abilities, perceived opportunities, perceived threats, general self-efficacy, and the willingness to collaborate with ChatGPT.
    In this study, an online questionnaire was administered, and a total of 242 responses were collected. After filtering, 189 valid responses remained; the data were validated and analyzed using SPSS and SmartPLS. The results indicate that ChatGPT's "cognitive abilities" significantly and positively influence students' "perceived opportunities," while its "emotional abilities" significantly and positively influence students' "perceived threats." Moreover, the "perceived opportunities" that students associate with ChatGPT significantly and positively affect their "willingness to collaborate" with it. Additionally, the relationship between "perceived opportunities" and "willingness to collaborate" is significantly moderated by "general self-efficacy": for individuals with higher general self-efficacy, the positive effect of perceived opportunities on their willingness to collaborate with ChatGPT is smaller than for those with lower general self-efficacy.
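    The moderation pattern reported above can be illustrated with a standard interaction regression and simple-slopes check. This is a hedged sketch on simulated data: the thesis used SmartPLS on its own survey data, whereas here all values are generated with NumPy purely to show the form of the analysis (variable names and coefficient values are assumed for illustration, not taken from the study).

```python
import numpy as np

# Simulated illustration of a moderation analysis -- NOT the study's data.
rng = np.random.default_rng(0)
n = 189  # matches the study's valid sample size

opportunity = rng.normal(size=n)     # perceived opportunity (standardized)
self_efficacy = rng.normal(size=n)   # general self-efficacy (standardized)

# Simulate the reported pattern: the opportunity -> willingness slope
# shrinks as general self-efficacy rises (a negative interaction).
willingness = (0.5 * opportunity + 0.2 * self_efficacy
               - 0.3 * opportunity * self_efficacy
               + rng.normal(scale=0.5, size=n))

# Fit y = b0 + b1*opp + b2*gse + b3*(opp*gse) by ordinary least squares.
X = np.column_stack([np.ones(n), opportunity, self_efficacy,
                     opportunity * self_efficacy])
beta, *_ = np.linalg.lstsq(X, willingness, rcond=None)
b0, b1, b2, b3 = beta

# Simple slopes of opportunity at +/- 1 SD of the moderator.
slope_high = b1 + b3 * 1.0   # high self-efficacy
slope_low = b1 - b3 * 1.0    # low self-efficacy
print(f"interaction b3 = {b3:.2f}; "
      f"slope at high GSE = {slope_high:.2f}, at low GSE = {slope_low:.2f}")
```

    A negative interaction coefficient (b3 < 0) reproduces the thesis's finding: the simple slope of perceived opportunity on willingness to collaborate is flatter for high-self-efficacy respondents than for low-self-efficacy ones.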

    Table of Contents:
    Abstract (Chinese); Abstract (English); Acknowledgements; Table of Contents; List of Tables; List of Figures
    I. Introduction
      1. Research Background and Motivation
      2. Research Questions and Objectives
      3. Research Framework
      4. Research Process
    II. Literature Review
      1. Conservation of Resources (COR) Theory
      2. Cognitive Ability
      3. Emotional Ability
      4. Perceived Threat
      5. Perceived Opportunity
      6. Willingness to Collaborate
      7. General Self-Efficacy
      8. Students' Experiences with and Evaluations of ChatGPT
    III. Research Model and Hypotheses
      1. Research Framework
      2. Research Hypotheses
        (1) Cognitive Ability and Perceived Threat/Opportunity
        (2) Emotional Ability and Perceived Threat/Opportunity
        (3) Perceived Threat and Willingness to Collaborate
        (4) Perceived Opportunity and Willingness to Collaborate
        (5) The Moderating Role of General Self-Efficacy
    IV. Research Methods
      1. Research Design
      2. Research Subjects
      3. Definitions of Research Variables
      4. Questionnaire Constructs and Items
    V. Data Analysis
      1. Descriptive Statistics
      2. Reliability and Validity Analysis
        (1) Reliability Analysis
        (2) Validity Analysis
      3. Structural Model Analysis
        (1) Path Analysis and Hypothesis Testing
        (2) Mediation Effect Testing
        (3) Moderation Effect Testing and Simple Slope Analysis
        (4) Post Hoc Testing
    VI. Conclusions and Recommendations
      1. Research Findings and Discussion
        (1) Effects of Cognitive Ability on Perceived Threat and Perceived Opportunity
        (2) Effects of Emotional Ability on Perceived Threat and Perceived Opportunity
        (3) Effects of Perceived Threat and Perceived Opportunity on Willingness to Collaborate
        (4) General Self-Efficacy as a Moderator of the Effects of Perceived Opportunity and Threat on Willingness to Collaborate
      2. Academic and Practical Contributions
        (1) Academic Contributions
        (2) Practical Contributions
      3. Research Limitations and Suggestions for Future Research
    References

    Agarwal, R., & Karahanna, E. (2000). Time flies when you're having fun: Cognitive absorption and beliefs about information technology usage. MIS quarterly, 665-694.
    Aiken, L. S., West, S. G., & Reno, R. R. (1991). Multiple regression: Testing and interpreting interactions. sage.
    Aldholay, A. H., Isaac, O., Abdullah, Z., & Ramayah, T. (2018). The role of transformational leadership as a mediating variable in DeLone and McLean information system success model: The context of online learning usage in Yemen. Telematics and Informatics, 35(5), 1421-1437.
    Ali, J. K. M., Shamsan, M. A. A., Hezam, T. A., & Mohammed, A. A. (2023). Impact of ChatGPT on learning motivation: teachers and students' voices. Journal of English Studies in Arabia Felix, 2(1), 41-49.
    Alkaissi, H., & McFarlane, S. I. (2023). Artificial hallucinations in ChatGPT: implications in scientific writing. Cureus, 15(2).
    Allen, R., & Choudhury, P. (2022). Algorithm-augmented work and domain experience: The countervailing forces of ability and aversion. Organization Science, 33(1), 149-169.
    Ameen, N., Tarhini, A., Reppel, A., & Anand, A. (2021). Customer experiences in the age of artificial intelligence. Computers in Human Behavior, 114, 106548.
    Aoki, N. (2020). An experimental study of public trust in AI chatbots in the public sector. Government Information Quarterly, 37(4), 101490.
    Araujo, T. (2018). Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in Human Behavior, 85, 183-189.
    Ashfaq, M., Yun, J., Yu, S., & Loureiro, S. M. C. (2020). I, Chatbot: Modeling the determinants of users’ satisfaction and continuance intention of AI-powered service agents. Telematics and Informatics, 54, 101473.
    Bakker, A. B., & Demerouti, E. (2007). The job demands‐resources model: State of the art. Journal of managerial psychology, 22(3), 309-328.
    Bakker, A. B., Demerouti, E., & Sanz-Vergel, A. I. (2014). Burnout and work engagement: The JD–R approach. Annu. Rev. Organ. Psychol. Organ. Behav., 1(1), 389-411.
    Bandura, A. (1997). Self-efficacy: The exercise of control. W H Freeman/Times Books/ Henry Holt & Co.
    Beale, R., & Creed, C. (2009). Affective interaction: How emotional agents affect users. International journal of human-computer studies, 67(9), 755-776.
    Belanche, D., Casaló, L. V., & Flavián, C. (2019). Artificial Intelligence in FinTech: understanding robo-advisors adoption among customers. Industrial Management & Data Systems, 119(7), 1411-1430.
    Benbya, H., Davenport, T. H., & Pachidi, S. (2020). Artificial intelligence in organizations: Current state and future opportunities. MIS Quarterly Executive, 19(4).
    Benke, I., Gnewuch, U., & Maedche, A. (2022). Understanding the impact of control levels over emotion-aware chatbots. Computers in Human Behavior, 129, 107122.
    Bennett, C. C., & Hauser, K. (2013). Artificial intelligence framework for simulating clinical decision-making: A Markov decision process approach. Artificial intelligence in medicine, 57(1), 9-19.
    Bhattacherjee, A. (2001). Understanding information systems continuance: An expectation-confirmation model. MIS quarterly, 351-370.
    Blue, F. H. B. D. (2004). Building the computer that defeated the World Chess Champion. Princeton, Princeton Univ Pr.
    Brandtzaeg, P. B., & Følstad, A. (2017). Why people use chatbots. In Internet Science: 4th International Conference, INSCI 2017, Thessaloniki, Greece, November 22-24, 2017, Proceedings 4 (pp. 377-392). Springer International Publishing.
    Brey, P. A. (2012). Anticipatory ethics for emerging technologies. NanoEthics, 6(1), 1-13.
    Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., ... & Amodei, D. (2020). Language models are few-shot learners. Advances in neural information processing systems, 33, 1877-1901.
    Brougham, D., & Haar, J. (2020). Technological disruption and employment: The influence on job insecurity and turnover intentions: A multi-country study. Technological Forecasting and Social Change, 161, 120276.
    Brynjolfsson, E., McAfee, A. (2011). Race against the machine: how the digital revolution is accelerating innovation, driving productivity, and irreversibly transforming employment and the economy. Digital Frontier Press, Lexington, MA.
    Brynjolfsson, E., & Mitchell, T. (2017). What can machine learning do? Workforce implications. Science, 358(6370), 1530-1534.
    Califf, C. B., Brooks, S., & Longstreet, P. (2020). Human-like and system-like trust in the sharing economy: The role of context and humanness. Technological Forecasting and Social Change, 154, 119968.
    Calo, R. (2015). Robotics and the Lessons of Cyberlaw. California Law Review, 513-563.
    Charles, S. T., Piazza, J. R., Mogle, J., Sliwinski, M. J., & Almeida, D. M. (2013). The wear and tear of daily stressors on mental health. Psychological science, 24(5), 733-741.
    Chattaraman, V., Kwon, W. S., Gilbert, J. E., & Ross, K. (2019). Should AI-Based, conversational digital assistants employ social-or task-oriented interaction style? A task-competency and reciprocity perspective for older adults. Computers in Human Behavior, 90, 315-330.
    Chaves, A. P., & Gerosa, M. A. (2021). How should my chatbot interact? A survey on social characteristics in human–chatbot interaction design. International Journal of Human–Computer Interaction, 37(8), 729-758.
    Chen, G., Gully, S. M., & Eden, D. (2001). Validation of a new general self-efficacy scale. Organizational research methods, 4(1), 62-83.
    Cheng, X., Bao, Y., Zarifis, A., Gong, W., & Mou, J. (2021). Exploring consumers' response to text-based chatbots in e-commerce: the moderating role of task complexity and chatbot disclosure. Internet Research, 32(2), 496-517.
    Chen, G., Gully, S. M., Whiteman, J. A., & Kilcullen, R. N. (2000). Examination of relationships among trait-like individual differences, state-like individual differences, and learning performance. Journal of applied psychology, 85(6), 835.
    Cheng, J. W., Chang, S. C., Kuo, J. H., & Cheung, Y. H. (2014). Ethical leadership, work engagement, and voice behavior. Industrial Management & Data Systems, 114(5), 817-831.
    Chen, M., Tworek, J., Jun, H., Yuan, Q., Pinto, H.P. d O., Kaplan, J., Edwards, H., Burda, Y., Joseph, N., Brockman, G. (2021). Evaluating large language models trained on code. arXiv Preprint arXiv:2107.03374.
    Chiu, W., Cho, H., & Chi, C. G. (2020). Consumers' continuance intention to use fitness and health apps: an integration of the expectation–confirmation model and investment model. Information Technology & People, 34(3), 978-998.
    Choi, J.H., Hickman, K.E., Monahan, A., Schwarcz, D. (2023). ChatGPT goes to law school. Available at SSRN.
    Chung, M., Ko, E., Joung, H., & Kim, S. J. (2020). Chatbot e-service and customer satisfaction regarding luxury brands. Journal of Business Research, 117, 587-595.
    Coeckelbergh, M. (2010). Robot rights? Towards a social-relational justification of moral consideration. Ethics and information technology, 12, 209-221.
    Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological bulletin, 52(4), 281.
    Cuyper, N. D., Bernhard‐Oettel, C., Berntson, E., Witte, H. D., & Alarco, B. (2008). Employability and employees’ well‐being: Mediation by job insecurity 1. Applied Psychology, 57(3), 488-509.
    Danckwerts, S., Meißner, L., & Krampe, C. (2019). Examining user experience of conversational agents in hedonic digital services–antecedents and the role of psychological ownership. SMR-Journal of Service Management Research, 3(3), 111-125.
    Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS quarterly, 319-340.
    Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management science, 35(8), 982-1003.
    Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1992). Extrinsic and intrinsic motivation to use computers in the workplace 1. Journal of applied social psychology, 22(14), 1111-1132.
    DeLone, W. H., & McLean, E. R. (1992). Information systems success: The quest for the dependent variable. Information systems research, 3(1), 60-95.
    Deng, X., & Yu, Z. (2023). A meta-analysis and systematic review of the effect of chatbot technology use in sustainable education. Sustainability, 15(4), 2940.
    D'Mello, S., Olney, A., Williams, C., & Hays, P. (2012). Gaze tutor: A gaze-reactive intelligent tutoring system. International Journal of human-computer studies, 70(5), 377-398.
    Dolcos, F., Katsumi, Y., Moore, M., Berggren, N., de Gelder, B., Derakshan, N., ... & Dolcos, S. (2020). Neural correlates of emotion-attention interactions: From perception, learning, and memory to social cognition, individual differences, and training interventions. Neuroscience & Biobehavioral Reviews, 108, 559-601.
    Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., ... & Wright, R. (2023). “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642.
    Ebadi, S., & Amini, A. (2022). Examining the roles of social presence and human-likeness on Iranian EFL learners’ motivation using artificial intelligence technology: A case of CSIEC chatbot. Interactive Learning Environments, 1-19.
    Eden, D. (1988). Pygmalion, goal setting, and expectancy: Compatible ways to boost productivity. Academy of Management Review, 13(4), 639-652.
    Efron, B., & Tibshirani, R. J. (1994). An introduction to the bootstrap. CRC press.
    Eloundou, T., Manning, S., Mishkin, P., Rock, D. (2023). GPTs are GPTs: an early look at the labor market impact potential of large language models. arXiv Preprint arXiv: 2303.10130.
    Fernandes, T., & Oliveira, E. (2021). Understanding consumers’ acceptance of automated technologies in service encounters: Drivers of digital voice assistants adoption. Journal of Business Research, 122, 180-191.
    Fethi, M. D., & Pasiouras, F. (2010). Assessing bank efficiency and performance with operational research and artificial intelligence techniques: A survey. European journal of operational research, 204(2), 189-198.
    Floridi, L. (2019). The logic of information: A theory of philosophy as conceptual design. Oxford University Press.
    Folkman, S., Lazarus, R. S., Dunkel-Schetter, C., DeLongis, A., & Gruen, R. J. (1986). Dynamics of a stressful encounter: cognitive appraisal, coping, and encounter outcomes. Journal of personality and social psychology, 50(5), 992.
    Følstad, A., Nordheim, C. B., & Bjørkli, C. A. (2018). What makes users trust a chatbot for customer service? An exploratory interview study. In Internet Science: 5th International Conference, INSCI 2018, St. Petersburg, Russia, October 24–26, 2018, Proceedings 5 (pp. 194-208). Springer International Publishing.
    Følstad, A., & Skjuve, M. (2019). Chatbots for customer service: user experience and motivation. In Proceedings of the 1st international conference on conversational user interfaces (pp. 1-9).
    Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of marketing research, 18(1), 39-50.
    Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation?. Technological forecasting and social change, 114, 254-280.
    Kim, J. K., Chua, M., Rickard, M., & Lorenzo, A. (2023). ChatGPT and large language model (LLM) chatbots: the current state of acceptability and a proposal for guidelines on utilization in academic medicine. Journal of Pediatric Urology.
    Gallix, B., & Chong, J. (2019). Artificial intelligence in radiology: who’s afraid of the big bad wolf?. European radiology, 29, 1637-1639.
    Gao, L., & Waechter, K. A. (2017). Examining the role of initial trust in user adoption of mobile payment services: an empirical investigation. Information Systems Frontiers, 19, 525-548.
    Gefen, D., Karahanna, E., & Straub, D. W. (2003). Trust and TAM in online shopping: An integrated model. MIS quarterly, 51-90.
    Ghazali, Badruddin Bin. (2023). Utilising ChatGPT. BDJ student. Nature 30.2.
    Go, E., & Sundar, S. S. (2019). Humanizing chatbots: The effects of visual, identity and conversational cues on humanness perceptions. Computers in Human Behavior, 97, 304-316.
    Goleman, D. (1998). Working with emotional intelligence. Bantam.
    Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. science, 315(5812), 619-619.
    Gray, K., & Wegner, D. M. (2012). Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition, 125(1), 125-130.
    Guha, A., Grewal, D., Kopalle, P. K., Haenlein, M., Schneider, M. J., Jung, H., ... & Hawkins, G. (2021). How artificial intelligence will affect the future of retailing. Journal of Retailing, 97(1), 28-41.
    Gursoy, D., Chi, O. H., Lu, L., & Nunkoo, R. (2019). Consumers acceptance of artificially intelligent (AI) device use in service delivery. International Journal of Information Management, 49, 157-169.
    Guzman, A. L. (2020). Ontological boundaries between humans and computers and the implications for human-machine communication. Human-Machine Communication, 1, 37-54.
    Hackbarth, G., Grover, V., & Mun, Y. Y. (2003). Computer playfulness and anxiety: positive and negative mediators of the system experience effect on perceived ease of use. Information & management, 40(3), 221-232.
    Hair, J. F., Anderson, R. E., Babin, B. J., & Black, W. C. (2010). Multivariate data analysis: A global perspective (Vol. 7).
    Halbesleben, J. R., Neveu, J. P., Paustian-Underdahl, S. C., & Westman, M. (2014). Getting to the “COR” understanding the role of resources in conservation of resources theory. Journal of management, 40(5), 1334-1364.
    Hassanein, K., & Head, M. (2007). Manipulating perceived social presence through the web interface and its impact on attitude towards online shopping. International journal of human-computer studies, 65(8), 689-708.
    Heerink, M., Kröse, B., Evers, V., & Wielinga, B. (2010). Assessing acceptance of assistive social agent technology by older adults: the almere model.
    Heide, J. B., & Wathne, K. H. (2006). Friends, businesspeople, and relationship roles: A conceptual framework and a research agenda. Journal of Marketing, 70(3), 90-103.
    Herring, S. C. (2004). Computer-mediated discourse analysis: An approach to researching online behavior. Designing for virtual communities in the service of learning, 338, 376.
    Hew, J. J., Lee, V. H., Ooi, K. B., & Lin, B. (2016). Mobile social commerce: The booster for brand loyalty?. Computers in Human Behavior, 59, 142-154.
    Hill, J., Ford, W. R., & Farreras, I. G. (2015). Real conversations with artificial intelligence: A comparison between human–human online conversations and human–chatbot conversations. Computers in human behavior, 49, 245-250.
    Hobfoll, S. E. (1989). Conservation of resources: A new attempt at conceptualizing stress. American psychologist, 44(3), 513.
    Hobfoll, S. E. (2001). The influence of culture, community, and the nested‐self in the stress process: Advancing conservation of resources theory. Applied psychology, 50(3), 337-421.
    Hobfoll, S. E., Johnson, R. J., Ennis, N., & Jackson, A. P. (2003). Resource loss, resource gain, and emotional outcomes among inner city women. Journal of personality and social psychology, 84(3), 632.
    Hobfoll, S. E., & Freedy, J. (2018). Conservation of resources: A general stress theory applied to burnout. In Professional burnout (pp. 115-129). CRC Press.
    Hobfoll, S. E., Halbesleben, J., Neveu, J. P., & Westman, M. (2018). Conservation of resources in the organizational context: The reality of resources and their consequences. Annual review of organizational psychology and organizational behavior, 5, 103-128.
    Hoffman, D. L., & Novak, T. P. (2009). Flow online: lessons learned and future prospects. Journal of interactive marketing, 23(1), 23-34.
    Höge, T., Sora, B., Weber, W. G., Peiró, J. M., & Caballer, A. (2015). Job insecurity, worries about the future, and somatic complaints in two economic and cultural contexts: A study in Spain and Austria. International Journal of Stress Management, 22(3), 223.
    Ho, W. L. J., Koussayer, B., & Sujka, J. (2023). CHATGPT: FRIEND OR FOE IN MEDICAL WRITING? AN EXAMPLE OF HOW CHATGPT CAN BE UTILIZED IN WRITING CASE REPORTS. Surgery in Practice and Science, 100185.
    Hsiao, C. H., Chang, J. J., & Tang, K. Y. (2016). Exploring the influential factors in continuance usage of mobile social Apps: Satisfaction, habit, and customer value perspectives. Telematics and Informatics, 33(2), 342-355.
    Hunter, J. E., & Schmidt, F. L. (1996). Intelligence and job performance: Economic and social implications. Psychology, Public Policy, and Law, 2(3-4), 447.
    Jeong, N., Yoo, Y., & Heo, T. Y. (2009). Moderating effect of personal innovativeness on mobile-RFID services: Based on Warshaw's purchase intention model. Technological Forecasting and Social Change, 76(1), 154-164.
    Jex, S. M., & Gudanowski, D. M. (1992). Efficacy beliefs and work stress: An exploratory study. Journal of organizational behavior, 13(5), 509-517.
    Jolliffe, D., & Farrington, D. P. (2006). Development and validation of the Basic Empathy Scale. Journal of adolescence, 29(4), 589-611.
    Judd, C. M., James-Hawkins, L., Yzerbyt, V., & Kashima, Y. (2005). Fundamental dimensions of social judgment: understanding the relations between judgments of competence and warmth. Journal of personality and social psychology, 89(6), 899.
    Judge, T. A., & Bono, J. E. (2001). Relationship of core self-evaluations traits—self-esteem, generalized self-efficacy, locus of control, and emotional stability—with job satisfaction and job performance: A meta-analysis. Journal of applied Psychology, 86(1), 80.
    Kasilingam, D. L. (2020). Understanding the attitude and intention to use smartphone chatbots for shopping. Technology in Society, 62, 101280.
    Keiper, M. C., Fried, G., Lupinek, J., & Nordstrom, H. (2023). Artificial intelligence in sport management education: Playing the AI game with ChatGPT. Journal of Hospitality, Leisure, Sport & Tourism Education, 33, 100456.
    Kevin Roose (2022). The brilliance and weirdness of ChatGPT. The New York Times Company.
    Kim, B. (2010). An empirical investigation of mobile data service continuance: Incorporating the theory of planned behavior into the expectation–confirmation model. Expert systems with applications, 37(10), 7033-7039.
    Koufaris, M. (2002). Applying the technology acceptance model and flow theory to online consumer behavior. Information systems research, 13(2), 205-223.
    Lawshe, C. H. (1975). A quantitative approach to content validity. Personnel psychology, 28(4), 563-575.
    Lazarus, R. S. (1991). Progress on a cognitive-motivational-relational theory of emotion. American psychologist, 46(8), 819.
    Lee, C., Huang, G. H., & Ashford, S. J. (2018). Job insecurity and the changing workplace: Recent developments and the future trends in job insecurity research. Annual Review of Organizational Psychology and Organizational Behavior, 5, 335-359.
    Lee, S., & Choi, J. (2017). Enhancing user experience with conversational agent for movie recommendation: Effects of self-disclosure and reciprocity. International Journal of Human-Computer Studies, 103, 95-105.
    Lee, S., Lee, N., & Sah, Y. J. (2020). Perceiving a mind in a chatbot: effect of mind perception and social cues on co-presence, closeness, and intention to use. International Journal of Human–Computer Interaction, 36(10), 930-940.
    Lee, S. Y., Petrick, J. F., & Crompton, J. (2007). The roles of quality and intermediary constructs in determining festival attendees' behavioral intention. Journal of Travel Research, 45(4), 402-412.
    Lent, R. W., Brown, S. D., & Hackett, G. (1994). Toward a unifying social cognitive theory of career and academic interest, choice, and performance. Journal of vocational behavior, 45(1), 79-122.
    Lin, W. S. (2012). Perceived fit and satisfaction on web learning performance: IS continuance intention and task-technology fit perspectives. International Journal of Human-Computer Studies, 70(7), 498-507.
    Liu, B. (2021). In AI we trust? Effects of agency locus and transparency on uncertainty reduction in human–AI interaction. Journal of Computer-Mediated Communication, 26(6), 384-402.
    Li, X., Chan, K. W., & Kim, S. (2019). Service with emoticons: How customers interpret employee use of emoticons in online service encounters. Journal of Consumer Research, 45(5), 973-987.
    Lowry, P. B., Gaskin, J., Twyman, N., Hammer, B., & Roberts, T. (2012). Taking ‘fun and games’ seriously: Proposing the hedonic-motivation system adoption model (HMSAM). Journal of the association for information systems, 14(11), 617-671.
    Lucas, G. M., Gratch, J., King, A., & Morency, L. P. (2014). It’s only a computer: Virtual humans increase willingness to disclose. Computers in Human Behavior, 37, 94-100.
    Lu, H. P., & Chiou, M. J. (2010). The impact of individual differences on e‐learning system satisfaction: A contingency approach. British Journal of Educational Technology, 41(2), 307-323.
    Lund, B. D., & Wang, T. (2023). Chatting about ChatGPT: how may AI and GPT impact academia and libraries?. Library Hi Tech News, 40(3), 26-29.
    Luo, X., Tong, S., Fang, Z., & Qu, Z. (2019). Frontiers: Machines vs. humans: The impact of artificial intelligence chatbot disclosure on customer purchases. Marketing Science, 38(6), 937-947.
    Lu, S. F., Rui, H., & Seidmann, A. (2018). Does technology substitute for nurses? Staffing decisions in nursing homes. Management Science, 64(4), 1842-1859.
    Luszczynska, A., Gutiérrez‐Doña, B., & Schwarzer, R. (2005). General self‐efficacy in various domains of human functioning: Evidence from five countries. International journal of Psychology, 40(2), 80-89.
    Luszczynska, A., Scholz, U., & Schwarzer, R. (2005). The general self-efficacy scale: multicultural validation studies. The Journal of psychology, 139(5), 439-457.
    Macdonald, C., Adeloye, D., Sheikh, A., & Rudan, I. (2023). Can ChatGPT draft a research article? An example of population-level vaccine effectiveness analysis. Journal of global health, 13.
    Madhavan, P., Wiegmann, D. A., & Lacson, F. C. (2006). Automation failures on tasks easily performed by operators undermine trust in automated aids. Human factors, 48(2), 241-256.
    Maedche, A., Morana, S., Schacht, S., Werth, D., & Krumeich, J. (2016). Advanced user assistance systems. Business & Information Systems Engineering, 58, 367-370.
    Mauno, S., Leskinen, E., & Kinnunen, U. (2001). Multi‐wave, multi‐variable models of job insecurity: applying different scales in studying the stability of job insecurity. Journal of Organizational Behavior: The International Journal of Industrial, Occupational and Organizational Psychology and Behavior, 22(8), 919-937.
    Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of management review, 20(3), 709-734.
    McCoy, S., Galletta, D. F., & King, W. R. (2005). Integrating national culture into IS research: The need for current individual level measures. Communications of the Association for Information Systems, 15(1), 12.
    McDuff, D., & Czerwinski, M. (2018). Designing emotionally sentient agents. Communications of the ACM, 61(12), 74-83.
    Metz, A. (2023). exciting ways to use ChatGPT–from coding to poetry. TechRadar.< ht t ps.
    M. Jovanović and M. Campbell (2022). Generative Artificial Intelligence: Trends and Prospects, in Computer, vol. 55, no. 10, pp. 107-112, doi: 10.1109/MC.2022.3192720.
    Mitchell, A. (2023). ChatGPT could make these jobs obsolete:‘The wolf is at the door.’. New York Post.
    Mohammad Hosseini, Serge PJM. Horbach, Fighting Reviewer Fatigue or Amplifying Bias? Considerations and Recommendations for Use of ChatGPT and Other Large Language Models in Scholarly Peer Review, 2023.
    Moriuchi, E. (2019). Okay, Google!: An empirical study on voice assistants on consumer engagement and loyalty. Psychology & Marketing, 36(5), 489-501.
    Moussawi, S., Koufaris, M., & Benbunan-Fich, R. (2021). How perceptions of intelligence and anthropomorphism affect adoption of personal intelligent agents. Electronic Markets, 31, 343-364.
    Mou, Y., & Xu, K. (2017). The media inequality: Comparing the initial human-human and human-AI social interactions. Computers in Human Behavior, 72, 432-440.
    Muntinga, D. G., Moorman, M., & Smit, E. G. (2011). Introducing COBRAs: Exploring motivations for brand-related social media use. International Journal of advertising, 30(1), 13-46.
    Neff, A., Sonnentag, S., Niessen, C., & Unger, D. (2015). The crossover of self-esteem: A longitudinal perspective. European Journal of Work and Organizational Psychology, 24(2), 197-210.
    Neisser, U. (1976). Cognition and Reality: Principles and Implications of Cognitive Psychology. New York: WH Freeman and Company.
    Nisar, S., & Aslam, M. S. (2023). Is ChatGPT a Good Tool for T&CM Students in Studying Pharmacology?. Available at SSRN 4324310.
    O'Connor, S. (2022). Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse?. Nurse Education in Practice, 66, 103537-103537.
    Okon-Singer, H., Hendler, T., Pessoa, L., & Shackman, A. J. (2015). The neurobiology of emotion–cognition interactions: fundamental questions and strategies for future research. Frontiers in human neuroscience, 9, 58.
    O'Neill, B. S., & Mone, M. A. (1998). Investigating equity sensitivity as a moderator of relations between self-efficacy and workplace attitudes. Journal of Applied Psychology, 83(5), 805.
    Ones, D. S., Dilchert, S., & Viswesvaran, C. (2012). Cognitive abilities. N. Schmitt (Ed.). Handbook of personnel assessment and selection (pp. 179-224).
    O. Pappas, I., G. Pateli, A., N. Giannakos, M., & Chrissikopoulos, V. (2014). Moderating effects of online shopping experience on customer satisfaction and repurchase intentions. International Journal of Retail & Distribution Management, 42(3), 187-204.
    Patel, V. L., Shortliffe, E. H., Stefanelli, M., Szolovits, P., Berthold, M. R., Bellazzi, R., & Abu-Hanna, A. (2009). The coming of age of artificial intelligence in medicine. Artificial intelligence in medicine, 46(1), 5-17.
    Pavlou, P. A., & Gefen, D. (2004). Building effective online marketplaces with institution-based trust. Information systems research, 15(1), 37-59.
    Pelau, C., Dabija, D. C., & Ene, I. (2021). What makes an AI device human-like? The role of interaction quality, empathy and perceived psychological anthropomorphic characteristics in the acceptance of artificial intelligence in the service industry. Computers in Human Behavior, 122, 106855.
    Pensgaard, A. M., & Roberts, G. C. (2000). The relationship between motivational climate, perceived ability and sources of distress among elite athletes. Journal of sports sciences, 18(3), 191-200.
    Pessoa, L. (2013). The cognitive-emotional brain: From interactions to integration. MIT press.
    Pettinato Oltz, T. (2023). ChatGPT, Professor of Law.
    Piazza, J. R., Charles, S. T., Sliwinski, M. J., Mogle, J., & Almeida, D. M. (2013). Affective reactivity to daily stressors and long-term risk of reporting a chronic physical health condition. Annals of Behavioral Medicine, 45(1), 110-120.
    Preacher, K. J., & Hayes, A. F. (2008). Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models. Behavior research methods, 40(3), 879-891.
    Prentice, C., & Nguyen, M. (2021). Robotic service quality–Scale development and validation. Journal of Retailing and Consumer Services, 62, 102661.
    Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training.
    Radziwill, N. M., & Benton, M. C. (2017). Evaluating quality of chatbots and intelligent conversational agents. arXiv preprint arXiv:1704.04579.
    Ramadan, Z. B. (2021). “Alexafying” shoppers: The examination of Amazon's captive relationship strategy. Journal of Retailing and Consumer Services, 62, 102610.
    Rao, A., Kim, J., Kamineni, M., Pang, M., Lie, W., & Succi, M. D. (2023). Evaluating ChatGPT as an adjunct for radiologic decision-making. medRxiv, 2023-02.
    Reddy, T. (2017). How chatbots can help reduce customer service costs by 30%. The Analytics Maturity Model (IT Best Kept Secret Is Optimization).
    Reed, L. (2022). ChatGPT for Automated Testing: From conversation to code. Sauce Labs.
    Russell, S. & Norvig, P. (2010) Artificial Intelligence: A Modern Approach. 3rd Edition, Prentice-Hall, Upper Saddle River.
    Saks, A. M. (1994). Moderating effects of self‐efficacy for the relationship between training method and anxiety and stress reactions of newcomers. Journal of organizational behavior, 15(7), 639-654.
    Salovey, P., & Mayer, J. D. (1990). Emotional intelligence. Imagination, Cognition and Personality, 9(3), 185-211.
    Samek, W., Montavon, G., Vedaldi, A., Hansen, L. K., & Müller, K. R. (Eds.). (2019). Explainable AI: interpreting, explaining and visualizing deep learning (Vol. 11700). Springer Nature.
    Savage, N. (2020). How AI is improving cancer diagnostics. Nature, 579(7800), S14–S16. https://doi.org/10.1038/d41586-020-00847-2.
    Schanke, S., Burtch, G., & Ray, G. (2021). Estimating the impact of “humanizing” customer service chatbots. Information Systems Research, 32(3), 736-751.
    Schaubroeck, J., & Merritt, D. E. (1997). Divergent effects of job control on coping with work stressors: The key role of self-efficacy. Academy of Management Journal, 40(3), 738-754.
    Scholz, U., Doña, B. G., Sud, S., & Schwarzer, R. (2002). Is general self-efficacy a universal construct? Psychometric findings from 25 countries. European journal of psychological assessment, 18(3), 242.
    Schuetzler, R. M., Grimes, G. M., & Scott Giboney, J. (2020). The impact of chatbot conversational skill on engagement and perceived humanness. Journal of Management Information Systems, 37(3), 875-900.
    Schwarzer, R. (Ed.). (1992). Self-efficacy: Thought control of action. Hemisphere Publishing Corp.
    Schwarzer, R., & Jerusalem, M. (1995). Generalized Self-Efficacy Scale. In J. Weinman, S. Wright, & M. Johnston (Eds.), Measures in health psychology: A user's portfolio. Causal and control beliefs (pp. 35-37).
    Schwarzer, R., & Jerusalem, M. (1995). Optimistic self-beliefs as a resource factor in coping with stress. In Extreme stress and communities: Impact and intervention (pp. 159-177). Dordrecht: Springer Netherlands.
    Schwarzer, R., Schmitz, G. S., & Tang, C. (2000). Teacher burnout in Hong Kong and Germany: A cross-cultural validation of the Maslach Burnout Inventory.
    Sender, A., Arnold, A., & Staffelbach, B. (2017). Job security as a threatened resource: Reactions to job insecurity in culturally distinct regions. The International Journal of Human Resource Management, 28(17), 2403-2429.
    Shane, S., & Venkataraman, S. (2000). The promise of entrepreneurship as a field of research. Academy of management review, 25(1), 217-226.
    Sheehan, B., Jin, H. S., & Gottlieb, U. (2020). Customer service chatbots: Anthropomorphism and adoption. Journal of Business Research, 115, 14–24.
    Shelton, S. H. (1990). Developing the construct of general self-efficacy. Psychological Reports, 66(3), 987-994.
    Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies, 146, 102551.
    Shoss, M. K. (2017). Job insecurity: An integrative review and agenda for future research. Journal of management, 43(6), 1911-1939.
    Sibley, C. G., Osborne, D., & Duckitt, J. (2012). Personality and political orientation: Meta-analysis and test of a Threat-Constraint Model. Journal of Research in Personality, 46(6), 664-677.
    Sohn, K., & Kwon, O. (2020). Technology acceptance theories and factors influencing artificial Intelligence-based intelligent products. Telematics and Informatics, 47, 101324.
    Srite, M. (2006). Culture as an explanation of technology acceptance differences: An empirical investigation of Chinese and US users. Australasian Journal of Information Systems, 14(1).
    Stevenson, C., Smal, I., Baas, M., Grasman, R., & van der Maas, H. (2022). Putting GPT-3's Creativity to the (Alternative Uses) Test. arXiv preprint arXiv:2206.08932.
    Straub, D., Keil, M., & Brenner, W. (1997). Testing the technology acceptance model across cultures: A three country study. Information & management, 33(1), 1-11.
    Strogatz, S. (2018). One giant step for a chess-playing machine. The New York Times.
    Sun, H., & Zhang, P. (2006). The role of moderating factors in user technology acceptance. International journal of human-computer studies, 64(2), 53-78.
    Sundar, S. S. (2008). The MAIN model: A heuristic approach to understanding technology effects on credibility (pp. 73-100). Cambridge, MA: MacArthur Foundation Digital Media and Learning Initiative.
    Sutoyo, R., Chowanda, A., Kurniati, A., & Wongso, R. (2019). Designing an emotionally realistic chatbot framework to enhance its believability with AIML and information states. Procedia Computer Science, 157, 621-628.
    Sverke, M., & Hellgren, J. (2002). The nature of job insecurity: Understanding employment uncertainty on the brink of a new millennium. Applied Psychology, 51(1), 23-42.
    Tamkin, A., Brundage, M., Clark, J., & Ganguli, D. (2021). Understanding the capabilities, limitations, and societal impact of large language models. arXiv preprint arXiv:2102.02503.
    Terwiesch, C. (2023). Would Chat GPT get a Wharton MBA? A prediction based on its performance in the operations management course. Mack Institute for Innovation Management at the Wharton School, University of Pennsylvania.
    Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379(6630), 313-313.
    Trippi, R. R., & Turban, E. (Eds.). (1992). Neural networks in finance and investing: Using artificial intelligence to improve real world performance. McGraw-Hill, Inc.
    Tsai, W. H. S., Liu, Y., & Chuan, C. H. (2021). How chatbots' social presence communication enhances consumer engagement: the mediating role of parasocial interaction and dialogue. Journal of Research in Interactive Marketing, 15(3), 460-482.
    Tung, L. (2023). ChatGPT can write code. Now researchers say it’s good at fixing bugs, too. ZDNet.
    Van den Broeck, E., Zarouali, B., & Poels, K. (2019). Chatbot advertising effectiveness: When does the message get through? Computers in Human Behavior, 98, 150-157.
    Van Doorn, J., Mende, M., Noble, S. M., Hulland, J., Ostrom, A. L., Grewal, D., & Petersen, J. A. (2017). Domo arigato Mr. Roboto: Emergence of automated social presence in organizational frontlines and customers’ service experiences. Journal of service research, 20(1), 43-58.
    Vargo, S. L., Maglio, P. P., & Akaka, M. A. (2008). On value and value co-creation: A service systems and service logic perspective. European management journal, 26(3), 145-152.
    Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS quarterly, 425-478.
    Vimalkumar, M., Sharma, S. K., Singh, J. B., & Dwivedi, Y. K. (2021). ‘Okay google, what about my privacy?’: User's privacy perceptions and acceptance of voice based digital assistants. Computers in Human Behavior, 120, 106763.
    Wang, Y. S. (2008). Assessing e‐commerce systems success: a respecification and validation of the DeLone and McLean model of IS success. Information systems journal, 18(5), 529-557.
    Weil, P. (2017). The blurring test. Socialbots and their friends: Digital media and the automation of sociality, 19-46.
    Wenzlaff, K., & Spaeth, S. (2022). Smarter than Humans? Validating how OpenAI’s ChatGPT model explains Crowdfunding, Alternative Finance and Community Finance (December 22, 2022).
    Wirtz, J., Patterson, P. G., Kunz, W. H., Gruber, T., Lu, V. N., Paluch, S., & Martins, A. (2018). Brave new world: service robots in the frontline. Journal of Service Management, 29(5), 907-931.
    Wu, D., Zhou, C., Li, Y., & Chen, M. (2022). Factors associated with teachers' competence to develop students’ information literacy: A multilevel approach. Computers & Education, 176, 104360.
    Wunderlich, P., Veit, D. J., & Sarker, S. (2019). Adoption of sustainable technologies: A mixed-methods study of German households. MIS Quarterly, 43(2).
    Xanthopoulou, D., Bakker, A. B., Demerouti, E., & Schaufeli, W. B. (2007). The role of personal resources in the job demands-resources model. International journal of stress management, 14(2), 121.
    Xames, M. D., & Shefa, J. (2023). ChatGPT for research and publication: Opportunities and challenges. Available at SSRN 4381803.
    Xu, F., & Du, J. T. (2018). Factors influencing users’ satisfaction and loyalty to digital libraries in Chinese universities. Computers in Human Behavior, 83, 64-72.
    Yan, D. (2023). Impact of ChatGPT on learners in a L2 writing practicum: An exploratory investigation. Education and Information Technologies, 1-25.
    Young, A. G., Majchrzak, A., & Kane, G. C. (2021). Organizing workers and machine learning tools for a less oppressive workplace. International Journal of Information Management, 59, 102353.
    Yudkowsky, E. (2008). Artificial intelligence as a positive and negative factor in global risk. Global catastrophic risks, 1(303), 184.
    Zamora, J. (2017). I'm sorry, Dave, I'm afraid I can't do that: Chatbot perception and expectations. In Proceedings of the 5th International Conference on Human Agent Interaction (pp. 253-260).
    Zhou, X., Kim, S., & Wang, L. (2019). Money helps when money feels: Money anthropomorphism increases charitable giving. Journal of Consumer Research, 45(5), 953-972.
    Agrawal, A., Gans, J., & Goldfarb, A. (2022). ChatGPT and how AI disrupts industries. Harvard Business Review. Retrieved from https://hbr.org/2022/12/chatgpt-and-how-ai-disrupts-industries.
    Christof Koch (2016). How the Computer Beat the Go Master. Retrieved from https://www.scientificamerican.com/article/how-the-computer-beat-the-go-master/.
    Daniel Ruby (2023). 59+ ChatGPT Statistics — Users, Trends & More (July 2023). Retrieved from https://www.demandsage.com/chatgpt-statistics/.
    Gartner (2019). Gartner Predicts 25 Percent of Digital Workers Will Use Virtual Employee Assistants Daily by 2021. Retrieved from https://www.gartner.com/en/newsroom/press-releases/2019-01-09-gartner-predicts-25-percent-of-digital-workers-will-u.
    LaleEIM (2023). How does ChatGPT work, and how does it differ from a chatbot? Retrieved from https://news.lale.im/_news/article/7c1109a6.
    Mollick, E. (2022). ChatGPT is a tipping point for AI. Harvard Business Review. Retrieved from https://hbr.org/2022/12/chatgpt-is-a-tipping-point-for-ai.
    Steve Andriole (2018). AI: The Good, the Disruptive, and the Scary. Retrieved from https://www.cutter.com/article/ai-good-disruptive-and-scary-498936.
    The Future of Chatbots: 10 Trends, Latest Stats & Market Size. (2023). Retrieved from https://onix-systems.com/blog/6-chatbot-trends-that-are-bringing-the-future-closer.
    Ubisend (2017). Chatbot Survey: We now live in an on-demand society, time to get prepared. Retrieved from https://www.slideshare.net/PatricioCornejoA/chatbot-survey-2017-ubisend.
    Storm Media (風傳媒) (2023). 80% of jobs will be affected by ChatGPT! OpenAI research identifies the 12 occupations most at risk; those worried about being replaced need to know. Retrieved from https://www.storm.mg/lifestyle/4776766.
    Commercial Times (工商時報) (2023). Facing a development inflection point, Taiwan should establish a "trustworthy AI" paradigm. Retrieved from https://view.ctee.com.tw/technology/49124.html.
    Chen, Yu-Hsuan (陳淯萱) (2023). [Feature report] Are robots taking over? A brief look at industrial applications of ChatGPT and chatbots. Retrieved from https://www.ectimes.org.tw/2023/03/%E3%80%90%E5%B0%88%E9%A1%8C%E5%A0%B1%E5%B0%8E%E3%80%91%E6%A9%9F%E5%99%A8%E4%BA%BA%E7%95%B6%E9%81%93%EF%BC%9F-%E6%B7%BA%E8%AB%87chatgpt-%E5%92%8Cchatbot%E7%9A%84%E7%94%A2%E6%A5%AD%E6%87%89%E7%94%A8/.
    Wikipedia (2023). Chatbot. Retrieved from https://zh.wikipedia.org/zh-tw/%E8%81%8A%E5%A4%A9%E6%A9%9F%E5%99%A8%E4%BA%BA.
    Wikipedia (2023). ChatGPT. Retrieved from https://zh.wikipedia.org/zh-tw/ChatGPT.

    Full text available from 2025/08/20 (off-campus network)
    Full text available from 2025/08/20 (National Central Library: Taiwan Dissertations and Theses System)