05/2025 – Generating Tomorrow’s Me: How Collaborating with Generative AI Changes Humans

Published: 26.02.2024 | Categories: Call For Papers


Generative artificial intelligence (GenAI) is artificial intelligence that uses generative models to create text, images, or other data (e.g., Banh and Strobel 2023; Feuerriegel et al. 2024). GenAI learns the patterns and structure of its training data and then, typically in response to textual inputs (i.e., prompts), generates new, synthetic data with similar characteristics. The recent boom around GenAI emerged at the beginning of the 2020s with the rise of large language models in the form of chatbots, such as ChatGPT, Copilot, and Bard, and text-to-image transformers, such as Stable Diffusion, Midjourney, and DALL-E. Given the wide range of applications across industries and use cases, companies such as OpenAI, DeepL, Microsoft, Google, and Baidu have developed their own GenAI systems, further accelerating the technology's development and dissemination (e.g., Teubner et al. 2023). The predicted impact of GenAI is enormous: it is expected to generate over 600 billion dollars in revenue by 2030 (Fortune 2023) and to affect up to 80% of current jobs (Eloundou et al. 2023).

Human-GenAI collaboration studies how humans and GenAI agents work together to accomplish a human-desired goal (e.g., Anthony et al. 2023; Baptista et al. 2020; Jarvenpaa and Klein 2024). GenAI can aid humans in various domains, ranging from decision-making tasks and idea generation to innovation and art creation (e.g., Benbya et al. 2024). In collaboration with humans, GenAI generates output such as code, text, images, or videos in response to prompts. Humans can then use this output to elevate their capabilities and improve desired outcomes, e.g., by becoming more productive in creative (Zhou and Lee 2023) or coding tasks (Peng et al. 2023). However, such collaboration may also have problematic and as yet unclear consequences, such as a decrease in overall creativity (Zhou and Lee 2023) or the dissemination of misinformation across organizations and digital platforms (e.g., Sabherwal and Grover 2024; Susarla et al. 2023; Wessel et al. 2023).

This call for papers focuses on one of these critical consequences, namely how humans will change due to their collaborations with GenAI. These changes can apply to various objects of analysis, such as human cognitive processes, perceptions, emotions, beliefs, and behaviors toward GenAI systems or toward other humans. How individuals learn, adapt, and influence others through AI collaboration has already gained recognition in research on human collaboration with non-generative, predictive AI. Examples of such domains are medicine (e.g., Jussupow et al. 2021; Jussupow et al. 2022), sales (e.g., Adam et al. 2021; Adam et al. 2023; Gnewuch et al. 2023), system development (e.g., Adam et al. 2024), and non-specialized image classification (e.g., Fügener et al. 2021; Fügener et al. 2022). In this vein, previous studies have discussed, for instance, that people change their own beliefs through processing the explanations of AI (e.g., Bauer et al. 2021; Bauer et al. 2023), adapt their behavior in response to observing AI predictions about themselves (e.g., Bauer and Gill 2023), become more selfish in their interactions with AI systems than in their interactions with humans (March 2021), or develop more negative attitudes towards algorithmic than towards human errors (e.g., Burton et al. 2020; Berger et al. 2021; Jussupow et al. 2020). Yet, dedicated studies on how GenAI – and its particularities – affect the humans collaborating with it are only beginning to emerge.

Focus and Possible Topics

The focus of this call for papers is to stimulate innovative research on how humans change due to their collaborations with GenAI. While within-individual changes (e.g., regarding cognitive processes, perceptions, emotions, beliefs, and behaviors) are of primary interest, we also invite submissions at the group or organizational level with reference to the individual level. The research setting should be human-GenAI collaborations in either professional or private contexts.

Papers that address GenAI alone, without attention to collaborations with humans or to human changes, are outside the scope of this call for papers. Likewise, papers that examine only human perceptions or consequences of collaborating with GenAI (e.g., user satisfaction, acceptance, or performance changes) without a deeper investigation of human changes are also outside the scope of this CfP.

Possible research areas include, but are not limited to:

  • Creativity and Innovation: Changes in the creative processes of humans through the automatic creation or curation of text and images through GenAI
  • Communication and Personalization: Changes in the communication styles of humans toward GenAI (e.g., prompts, politeness of their expressions)
  • Errors and Biases: Humans adopting errors and biased worldviews due to misinformation (e.g., hallucinations) or over-reliance on GenAI outputs
  • Learning and Competencies: Erosion and elevation of human skills due to the capabilities of GenAI
  • Aversion and Appreciation: Changing relationships with other humans or technologies due to collaborations with GenAI
  • Affordances and Possibilities: Humans collaborating with GenAI in predictable and unpredictable ways
  • Humanistic Outcomes: Increases and decreases in the psychological well-being of humans through the workings of GenAI
  • Ethics: Corrupting and purging the ethical views and practices of humans due to collaborations with GenAI (e.g., engaging in plagiarism, fact-checking or spreading GenAI-generated misinformation and deepfakes)


We welcome various research approaches, including, but not limited to:

  • Conceptual/theoretical articles (also formal models and simulations)
  • Qualitative studies (e.g., interviews and case studies)
  • Quantitative studies (e.g., surveys, lab and field experiments, and trace data)
  • Design science (e.g., GenAI artifacts implemented in collaboration with humans)
  • Combinations of these approaches (i.e., multi- and mixed-methods)


All papers must be submitted by 15 October 2024 via the journal’s online submission system (http://www.editorialmanager.com/buis/). Please observe the instructions regarding the format and size of submissions to BISE. Papers should adhere to the general BISE author guidelines (https://www.bise-journal.com/?page_id=18).

Submissions will be reviewed anonymously in a double-blind process by at least two referees with regard to relevance, originality, and research quality. In addition to the editors of the journal, distinguished international scholars will be involved in the review process.

Given the timeliness and importance of this topic, we aim to publish meaningful contributions through fast and limited decision cycles. The editorial timeline will proceed as follows:

  • Deadline for Submission: 15 Oct 2024
  • Notification of the Authors, 1st Round: 07 Jan 2025
  • Completion of Revision 1: 15 Mar 2025
  • Notification of the Authors, 2nd Round: 15 May 2025
  • Completion of Revision 2: 15 Jun 2025
  • Notification of the Authors, Final Round: 30 Jun 2025
  • Online Publication: as soon as possible
  • Print Publication: October 2025

Editors of the Special Issue

Martin Adam,
University of Goettingen, Germany
martin.adam@uni-goettingen.de (corresponding)

Kevin Bauer,
University of Mannheim, Germany

Ekaterina Jussupow,
Darmstadt University of Technology, Germany

Alexander Benlian,
Darmstadt University of Technology, Germany

Mari-Klara Stein,
Tallinn University of Technology, Estonia


References

Adam M, Diebel C, Goutier M, Benlian A (2024) Navigating autonomy and control in human-AI delegation: User responses to technology- versus user-invoked task allocation. Decision Support Systems.

Adam M, Roethke K, Benlian A (2023) Human vs. automated sales agents: How and why customer responses shift across sales stages. Information Systems Research 34(3):1148-1168.

Adam M, Wessel M, Benlian A (2021) AI-based chatbots in customer service and their effects on user compliance. Electronic Markets 31(2):427-445.

Anthony C, Bechky BA, Fayard A-L (2023) “Collaborating” with AI: Taking a system view to explore the future of work. Organization Science 34(5):1672-1694.

Banh L, Strobel G (2023) Generative artificial intelligence. Electronic Markets 33(1):1-17.

Baptista J, Stein MK, Klein S, Watson-Manheim MB, Lee J (2020) Digital work and organisational transformation: Emergent digital/human work configurations in modern organisations. The Journal of Strategic Information Systems 29(2):101618.

Bauer K, Hinz O, van der Aalst W, Weinhardt C (2021) Expl(AI)n it to me – explainable AI and information systems research. Business & Information Systems Engineering 63:79-82.

Bauer K, Gill A (2023) Mirror, mirror on the wall: Algorithmic assessments, transparency, and self-fulfilling prophecies. Information Systems Research.

Bauer K, von Zahn M, Hinz O (2023) Expl(AI)ned: The impact of explainable artificial intelligence on users’ information processing. Information Systems Research.

Benbya H, Strich F, Tamm T (2024) Navigating generative artificial intelligence promises and perils for knowledge and creative work. Journal of the Association for Information Systems 25(1):23-36.

Berger B, Adam M, Rühr A, Benlian A (2021) Watch me improve—algorithm aversion and demonstrating the ability to learn. Business & Information Systems Engineering 63:55-68.

Burton JW, Stein MK, Jensen TB (2020) A systematic review of algorithm aversion in augmented decision making. Journal of Behavioral Decision Making 33(2):220-239.

March C (2021) Strategic interactions between humans and artificial intelligence: Lessons from experiments with computer players. Journal of Economic Psychology 87:102426.

Eloundou T, Manning S, Mishkin P, Rock D (2023) GPTs are GPTs: An early look at the labor market impact potential of large language models. arXiv preprint arXiv:2303.10130.

Feuerriegel S, Hartmann J, Janiesch C, Zschech P (2024) Generative AI. Business & Information Systems Engineering 66:111-126.

Fortune (2023) Generative AI market size, share and industry analysis. Retrieved from https://www.fortunebusinessinsights.com/generative-ai-market-107837

Fügener A, Grahl J, Gupta A, Ketter W (2021) Will humans-in-the-loop become borgs? Merits and pitfalls of working with AI. MIS Quarterly 45(3).

Fügener A, Grahl J, Gupta A, Ketter W (2022) Cognitive challenges in human–artificial intelligence collaboration: Investigating the path toward productive delegation. Information Systems Research 33(2):678-696.

Gnewuch U, Morana S, Hinz O, Kellner R, Maedche A (2023) More than a bot? The impact of disclosing human involvement on customer interactions with hybrid service agents. Information Systems Research.

Jarvenpaa S, Klein S (2024) New Frontiers in Information Systems Theorizing: Human-gAI Collaboration. Journal of the Association for Information Systems 25(1):110-121.

Jussupow E, Benbasat I, Heinzl A (2020) Why are we averse towards algorithms? A comprehensive literature review on algorithm aversion. In Proceedings of the 28th European Conference on Information Systems (ECIS) June 15-17.

Jussupow E, Spohrer K, Heinzl A (2022) Radiologists’ usage of diagnostic AI systems: The role of diagnostic self-efficacy for sensemaking from confirmation and disconfirmation. Business & Information Systems Engineering 64(3):293-309.

Jussupow E, Spohrer K, Heinzl A, Gawlitza J (2021) Augmenting medical diagnosis decisions? An investigation into physicians’ decision-making process with artificial intelligence. Information Systems Research 32(3):713-735.

Peng S, Kalliamvakou E, Cihon P, Demirer M (2023) The impact of AI on developer productivity: Evidence from GitHub Copilot. arXiv preprint arXiv:2302.06590.

Sabherwal R, Grover V (2024) The societal impacts of generative artificial intelligence: A balanced perspective. Journal of the Association for Information Systems 25(1):13-22.

Sturm T, Gerlach JP, Pumplun L, Mesbah N, Peters F, Tauchert C, …, Buxmann P (2021) Coordinating human and machine learning for effective organizational learning. MIS Quarterly 45(3):1581-1602.

Susarla A, Gopal R, Thatcher JB, Sarker S (2023) The janus effect of generative AI: Charting the path for responsible conduct of scholarly activities in information systems. Information Systems Research 34(2):399-408.

Teubner T, Flath CM, Weinhardt C, van der Aalst W, Hinz O (2023) Welcome to the era of ChatGPT et al.: The prospects of large language models. Business & Information Systems Engineering 65(2):95-101.

Wessel M, Adam M, Benlian A, Thies F (2023) Generative AI and its transformative value for digital platforms. Journal of Management Information Systems.

Zhou E, Lee D (2023) Generative AI, human creativity, and art. SSRN preprint https://ssrn.com/abstract=4594824