The dark side of generative artificial intelligence: A critical analysis of controversies and risks of ChatGPT
Objective: This article aims to comprehensively identify and explain the challenges and opportunities associated with the use of generative artificial intelligence (GAI) in business. The study sought to develop a conceptual framework that gathers the negative aspects of GAI development in management and economics, with a focus on ChatGPT.
Research Design & Methods: The study employed a narrative and critical literature review and developed a conceptual framework grounded in prior literature. We followed a line of deductive reasoning in formulating the theoretical framework so that the study's overall structure would be coherent and productive. This article should therefore be read as a conceptual article that highlights the controversies and threats of GAI in management and economics, with ChatGPT as a case study.
Findings: Based on a deep and extensive query of the academic literature, as well as the professional press and Internet portals, we identified various controversies, threats, defects, and disadvantages of GAI, in particular ChatGPT. We then grouped the identified threats into clusters, arriving at seven main threats. In our opinion, they are as follows: (i) an unregulated AI market and an urgent need for regulation; (ii) poor quality, lack of quality control, disinformation, deepfake content, and algorithmic bias; (iii) automation-spurred job losses; (iv) personal data violation, social surveillance, and privacy violation; (v) social manipulation and the weakening of ethics and goodwill; (vi) widening socio-economic inequalities; and (vii) AI technostress.
Implications & Recommendations: Regulating the AI/GAI market is essential. Advocating for such regulation is crucial to ensure a level playing field, promote fair competition, protect intellectual property rights and privacy, and prevent potential geopolitical risks. The changing job market requires workers to continuously acquire new (digital) skills through education and retraining; as the training of AI systems becomes a prominent job category, it is important to adapt and seize new opportunities. To mitigate the risks of personal data violation, social surveillance, and privacy violation, GAI developers must prioritize ethical considerations and build systems that put user privacy and security first. To counter social manipulation and the weakening of ethics and goodwill, it is important to implement responsible AI practices and ethical guidelines: transparency in data usage, bias mitigation techniques, and monitoring of generated content for harmful or misleading information.
Contribution & Value Added: By drawing attention to the controversies and hazards associated with GAI and ChatGPT, this article may help underscore the importance of resolving the ethical and legal issues that arise from the use of these technologies.
Keywords: artificial intelligence (AI); generative artificial intelligence (GAI); ChatGPT; technology adoption; digital transformation; OpenAI; chatbot industry; technostress
This work is licensed under a Creative Commons Attribution 4.0 International License.