Call for Papers
ARTIFICIAL INTELLIGENCE: ORGANIZATIONAL POSSIBILITIES AND PITFALLS
Submission Deadline: 1st August 2023
Guest editors:
Dominic Chalmers, Adam Smith Business School, University of Glasgow, UK
Richard Hunt, Pamplin College of Business, Virginia Tech, USA
Stella Pachidi, Judge Business School, University of Cambridge, UK
David Townsend, Pamplin College of Business, Virginia Tech, USA
JMS editor:
Kristina Potočnik, University of Edinburgh Business School, University of Edinburgh, UK.
CALL FOR PAPERS
After more than a half-century of frustrating false starts and expensive disappointments, Artificial Intelligence (AI) is now impacting business and society in ways that technologists and futurists have long predicted (Chalmers, MacKenzie, & Carter, 2021; Obschonka & Audretsch, 2020; Townsend & Hunt, 2019). Such is the pervasiveness of AI, defined as “the ability of machines to perform human-like cognitive tasks, including the automation of physical processes such as manipulating and moving objects, sensing, perceiving, problem solving, decision making and innovation” (Benbya, Davenport, & Pachidi, 2020: 9), that it is being simultaneously used to control nuclear fusion (Katwala, 2022), revolutionize cancer therapy (Ho, 2020) and wage automated war (Johnson, 2019). AI has also been adopted within organizational settings, transforming a range of common workplace tasks. Recruitment processes now routinely use facial recognition to screen candidates (van den Broek, Sergeeva, & Huysman, 2021), sales functions are being automated (Pachidi, Berends, Faraj, & Huysman, 2021), and new forms of employee surveillance are deployed, often with harmful consequences, to optimize labor (Rahman, 2021). Perhaps most significantly, activities previously thought to be the preserve of human cognition are increasingly being penetrated by AI tools and functionalities. For example, large language models such as GPT-3 and PaLM are being experimentally applied to tasks that require abstract reasoning (Narang & Chowdhery, 2022) and creativity (Amabile, 2019).
The diffusion of these technologies into daily organizational life is stimulating a range of new practices that require theoretical exploration and explanation. Notably, for every advance brought about by AI, there is often a countervailing harm that tends to affect the more vulnerable members of the workforce and society (Bender, Gebru, McMillan-Major, & Shmitchell, 2021; Crawford, 2021; Pasquale, 2019). Thus, while evidence shows that AI can empower individuals to achieve remarkable feats, this is balanced against numerous unintended consequences, such as the use of AI as a means of unprecedented and unchecked managerial control (Kellogg, Valentine, & Christin, 2020; Zuboff, 2019), the wholesale transformation of organizing regimes (Faraj & Pachidi, 2021), humans doubting their own judgement (Lebovitz, Lifshitz-Assaf, & Levina, 2022), and workers trying to game the system (Cameron, 2022). To understand the conflicted nature of AI in organizations, our special issue therefore seeks to critically examine how the benefits and harms of AI can be navigated to achieve a range of positive organizational outcomes.
AIMS AND SCOPE
Analysts estimate that, by 2030, AI systems will generate more than $15 trillion in economic value through productivity enhancements and the development of new products and services (PwC, 2017). Yet despite the rapid diffusion of AI in practice, academic research on the organizational and managerial issues it raises remains fragmented and piecemeal. Notwithstanding some excellent contributions (e.g., Bailey et al., 2022; Kellogg et al., 2020; Raisch & Krakowski, 2021), there is a need for greater cohesion around core concepts and ideas. Accordingly, this timely special issue aims to bring together scholars from across disciplinary specialisms to develop conceptual and empirical foundations for an important and expanding field of research. Our special issue seeks to address four core areas.
First, we call for research that enhances our theoretical understanding of AI in organizations. The fields of artificial intelligence and organization theory (OT) share deep, common historical roots (e.g., Herbert Simon’s work on AI and organizations), yet they have diverged in recent years. What new opportunities exist for building and enhancing theories of organized intelligence at the intersection of AI and OT research? We invite scholars to explore:
- How will different types of AI inform the development of emerging theories of organized intelligence?
- How does the penetration of AI across all kinds of industries and organizational functions impact the study of work and organizing? What paradigms should organizational scholars consider for studying AI in organizations?
- What new insights can organizational scholars offer to technology theorists regarding the possibilities and problems of machine agency and autonomy?
- In terms of organizational design, how will AI technologies impact information processing and knowledge transfer, both within and between organizations, through existing or new network ties? How will AI transform existing job roles and the organization of tasks?
- How does the use of AI challenge or transform the boundaries and nature of rationality and rational choice in organizations? Will the growing use of AI lead to a ‘hyperrationality’ in organizational decision-making? And how does the growing use of creative AI tools challenge these rationality narratives in organization and management theory?
Second, we seek a rich empirical understanding of how AI is experienced by workers in organizations. A limitation of some current research is its reliance on empirical materials drawn from a small number of well-established cases (e.g., well-known gig-work platforms). We suggest that many interesting and potentially novel organizational phenomena are being overlooked, and that granular research into how AI is used across a wide range of organizational contexts can refine existing organizational theories and support the development of new ones. Some questions that could be addressed are:
- How is AI altering the nature of work, including how people make judgements, make decisions, and create knowledge?
- What happens when AI outperforms humans? How will this impact the evaluation of performance, power structures, and individuals’ careers?
- How do coordination and collaboration change when teams include both humans and machines?
- How is AI affecting creativity and innovation? In what ways do humans maintain the creative and craft aspects of work?
- How are AI systems configured along the spectrum from augmentation to full automation? For example, is there value in the “augmentation thesis”, which rejects the oppositional framing of AI versus human decision-making (i.e., humans versus the machine) in favor of a collaborative model?
- What are the effects of surveillance technologies on worker behavior? For example, how do workers adapt and even subvert AI-enabled systems of control?
- Does the automation of skilled tasks reduce the individual capabilities and absorptive capacity of skilled workers?
- What new job roles/families are emerging to embed and operationalize AI systems in the workplace?
- How does the black-box, unexplainable nature of AI decision-making affect organizations’ ability to learn from failure?
Third, we call for normative and meta-ethical theories of AI in organizations that reflect on how AI should be used to achieve specific organizational ends. We are specifically interested in research that explores fundamental philosophical questions around when and why AI is appropriate in an organizational setting, and what trade-offs are made to justify the use of a particular AI-enabled system or process. Potential research questions to be addressed are:
- How do corporate efforts to introduce AI principles and AI accountability frameworks influence on-the-ground organizational work?
- What is the nature of AI agency and personhood? Should robots be treated as moral agents within an organization? What are the consequences of doing so?
- How do organizations navigate issues associated with algorithmic bias (e.g., racial bias, gender bias)? Are there empirical case studies of different organizational approaches to addressing bias that can be compared?
- How do organizational actors make decisions in high-stakes scenarios (e.g., medical emergencies, unfolding terrorist attacks, financial trading) where their expert judgement diverges from AI judgement?
- How do normative frameworks for applied AI vary across cultures? Can lessons be learned by comparing Western and Eastern approaches to managing AI in organizations?
Finally, we invite scholars to submit papers that critically examine the societal implications of AI at the organization and management nexus. While AI’s domination of organizational life can seem inevitable and inescapable, we are particularly interested in studies that push back against the technology. For example, scholars have fruitfully advanced structural critiques of AI (Crawford, 2021), and there are interesting Marxist and neo-Luddite perspectives that frame waves of AI development in terms of broader historical labor trends (Roszak, 1986; Sadowski, 2020; Sadowski & Andrejevic, 2020). We believe these critical perspectives are vital, and encourage submissions that:
- Critically evaluate the tangible capabilities of AI systems in varied organizational contexts, contrasting them against widespread waves of hype. Do AI services deliver the organizational benefits promised by technologists and software entrepreneurs, or is there an element of technological solutionism at play?
- Examine how AI is enabling potentially harmful new forms of practice (e.g., hypernudging, deepfakes, disinformation), and explore the organizational consequences of these practices.
- Assess the ecological and sustainability costs of AI adoption in firms. For example, how do organizations legitimize the harms associated with training AI models, labeling data, and excessive energy consumption?
This is by no means an exhaustive set of questions, and we invite submissions that cover other issues and topics falling within the aims and scope of this special issue. We also note that we deliberately adopt a broad understanding of AI, partly because we seek a ‘big tent’ approach to theory development, and partly because the concept is widely contested in theory and practice. For instance, computing science experts consider machine learning to be a more accurate term for what most people call AI, though they recognize that artificial intelligence has popular currency and therefore do not push back too hard against the looser terminology. For clarity, we will only accept papers addressing technologies that have some form of learning or adaptive capacity; we will not accept papers that analyze basic automation alone.
SUBMISSION PROCESS AND DEADLINES
• Submission deadline: 1st August 2023.
• Expected publication: 2025
• Submissions should be prepared using the JMS Manuscript Preparation Guidelines (http://www.socadms.org.uk/journal-management-studies/submission-guidelines/)
• Manuscripts should be submitted using the JMS ScholarOne system (https://mc.manuscriptcentral.com/jmstudies)
• Articles will be reviewed according to the JMS double-blind review process.
• We welcome informal enquiries relating to the Special Issue, proposed topics, and potential fit with the Special Issue objectives. Please direct any questions on the Special Issue to the Guest Editors:
Dominic Chalmers: dominic.chalmers@glasgow.ac.uk
Rick Hunt: rickhunt@vt.edu
Stella Pachidi: s.pachidi@jbs.cam.ac.uk
David Townsend: dtown@vt.edu
SPECIAL ISSUE EVENTS
Online Information Session: The editorial team will host an online information session on 30th August at 4pm (BST) / 11am (EDT) to launch the special issue call. Prospective contributors will be able to ask questions about the call, and a recording will be uploaded to the JMS website shortly afterwards.
Pre-submission Online Workshop: The editorial team will organize a virtual special issue pre-submission workshop in Fall 2022 to work with authors in developing studies for submission to the Special Issue. Ph.D. students and junior faculty are especially welcome to attend the pre-submission workshop. Participation in the workshop does not guarantee acceptance of the paper in the Special Issue and attendance is not a prerequisite for publication. More details about this workshop will be announced at a later date.
Post-submission Workshop: The editorial team will organize a special issue in-person revision workshop (contingent on COVID-19 travel restrictions). Authors who receive a “revise and resubmit” (R&R) decision on their manuscript will be invited to attend this workshop. Participation in the workshop does not guarantee acceptance of the paper in the Special Issue and attendance is not a prerequisite for publication. More details about this workshop will be announced at a later date.
References
Amabile, T. (2019). 'GUIDEPOST: Creativity, Artificial Intelligence, and a World of Surprises'. Academy of Management Discoveries. doi:10.5465/amd.2019.0075
Bailey, D. E., Faraj, S., Hinds, P. J., Leonardi, P. M. and von Krogh, G. (2022). 'We are all theorists of technology now: A relational perspective on emerging technology and organizing'. Organization Science, 33, 1-18.
Benbya, H., Davenport, T. H. and Pachidi, S. (2020). 'Special Issue Editorial: Artificial Intelligence in Organizations: Current State and Future Opportunities'. MIS Quarterly Executive, 19, 9-21.
Bender, E. M., Gebru, T., McMillan-Major, A. and Shmitchell, S. (2021). 'On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜'. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency.
Cameron, L. D. (2022). '“Making Out” While Driving: Relational and Efficiency Games in the Gig Economy'. Organization Science, 33, 231-52.
Chalmers, D., MacKenzie, N. G. and Carter, S. (2021). 'Artificial intelligence and entrepreneurship: Implications for venture creation in the fourth industrial revolution'. Entrepreneurship Theory and Practice, 45, 1028-53.
Crawford, K. (2021). The Atlas of AI. Yale University Press.
Faraj, S. and Pachidi, S. (2021). 'Beyond Uberization: The co-constitution of technology and organizing'. Organization Theory, 2, 2631787721995205.
Fotheringham, D. and Wiles, M. A. (2022). 'The effect of implementing chatbot customer service on stock returns: an event study analysis'. Journal of the Academy of Marketing Science, 1-21.
Ho, D. (2020). 'Artificial intelligence in cancer therapy'. Science, 367(6481), 982-83.
Johnson, J. (2019). 'Artificial intelligence & future warfare: implications for international security'. Defense & Security Analysis, 35, 147-69.
Kaplan, A. and Haenlein, M. (2019). 'Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence'. Business Horizons, 62, 15-25.
Katwala, A. (2022). 'DeepMind Has Trained an AI to Control Nuclear Fusion'. Wired. Retrieved from https://www.wired.com/story/deepmind-ai-nuclear-fusion/
Kellogg, K. C., Valentine, M. A. and Christin, A. (2020). 'Algorithms at Work: The New Contested Terrain of Control'. Academy of Management Annals, 14, 366-410. doi:10.5465/annals.2018.0174
Lebovitz, S., Lifshitz-Assaf, H. and Levina, N. (2022). 'To engage or not to engage with AI for critical judgments: How professionals deal with opacity when using AI for medical diagnosis'. Organization Science, 33, 126-48.
Narang, S. and Chowdhery, A. (2022). Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance. Retrieved from https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html
Newlands, G. (2021). 'Algorithmic surveillance in the gig economy: The organization of work through Lefebvrian conceived space'. Organization Studies, 42, 719-37.
Obschonka, M. and Audretsch, D. B. (2020). 'Artificial intelligence and big data in entrepreneurship: a new era has begun'. Small Business Economics, 55, 529-39.
Pachidi, S., Berends, H., Faraj, S. and Huysman, M. (2021). 'Make way for the algorithms: Symbolic actions and change in a regime of knowing'. Organization Science, 32, 18-41.
Pasquale, F. (2019). 'The Second Wave of Algorithmic Accountability'. LPE Project. Retrieved from https://lpeproject.org/blog/the-second-wave-of-algorithmic-accountability/
Rahman, H. A. (2021). 'The invisible cage: Workers’ reactivity to opaque algorithmic evaluations'. Administrative Science Quarterly, 66, 945-88.
Raisch, S. and Krakowski, S. (2021). 'Artificial Intelligence and Management: The Automation–Augmentation Paradox'. Academy of Management Review, 46, 192-210. doi:10.5465/amr.2018.0072
Roszak, T. (1986). The Cult of Information: The Folklore of Computers and the True Art of Thinking. Pantheon.
Sadowski, J. (2020). Too Smart: How Digital Capitalism Is Extracting Data, Controlling Our Lives, and Taking over the World. Cambridge, MA: MIT Press.
Sadowski, J. and Andrejevic, M. (2020). 'More than a few bad apps'. Nature Machine Intelligence, 2, 655-57. doi:10.1038/s42256-020-00246-2
Townsend, D. M. and Hunt, R. A. (2019). 'Entrepreneurial action, creativity, & judgment in the age of artificial intelligence'. Journal of Business Venturing Insights, 11, e00126.
van den Broek, E., Sergeeva, A. and Huysman, M. (2021). 'When the Machine Meets the Expert: An Ethnography of Developing AI for Hiring'. MIS Quarterly, 45.
van Esch, P., Black, J. S. and Ferolie, J. (2019). 'Marketing AI recruitment: The next phase in job application and selection'. Computers in Human Behavior, 90, 215-22. doi:10.1016/j.chb.2018.09.009
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Profile Books.