AI Is the Opportunity that Academic Research Will Not Seize (But Should)
An Open Letter of Sorts to my Peers
We have only seen the beginning of the fundamental changes that generative AI will cause to how we do things. Within the academy, the first obvious impact of AI was students' use of it to produce term papers, essays, and reports, and to answer questions on quizzes and exams for them. (A.k.a. cheating.) The academy responded in the way that human beings typically respond to the threat of change: by prohibiting, cracking down on, and seeking to control it. (After all, it is cheating.) Syllabi and student guidebooks have recently added "AI policies" to clarify terms, but they often do little more than state that "thou shalt not" use it. (Many students of course use it anyway because it offers a very-low-effort way of getting a passing grade, which tends to be students' real objective - not learning.)
Somewhat later, and at a seemingly slower pace, AI is also finding its way into academic research. The initial reaction here is mostly the same as in teaching: to protect the status quo. But as this becomes untenable, which it soon will, we enter a new phase in which we can either embrace this new tool and use it to make research better, faster, and more useful - or not.
My money is on "not."
At present, we are seeing great resistance from researchers, journal editors, and publishers alike. They are certainly embracing AI as a topic used to produce more research, edit/publish special issues, and engage in discussions on its possible impact and implications. That's low-hanging fruit that fits perfectly with the publish-or-perish status quo. But it changes little in any meaningful sense. It is business as usual.
I will not waste space discussing how generative AI works, which I have done elsewhere (and many others have too), but note that it can already conduct several stages of the academic research process. And it can do so almost flawlessly. For example, it can inductively analyze data, test hypotheses, choose and perform whatever robustness checks are needed, and draw appropriate conclusions from the data. It can also write up reasonably good papers based on the findings. And it can do all of this much faster and more effectively than any human can.
This means we have three options for how we as a research community deal with AI: (1) we can choose to see it as a threat and fight it tooth and nail; (2) we can adopt it as a (limited) tool to speed up the research process, much like how we use statistical software and online questionnaires; or (3) we can embrace it and use it to strengthen and update the scholarly process (i.e., innovate). I suggest that these options are also, unfortunately, stages: we are currently at (1), when the battle is lost we will attempt (2), and then desperately fight to not get to (3). And this will be disastrous for research and scholarship.
Academic research desperately needs an overhaul. To use an analogy, we are holding on to how we make buggies and buggy whips whereas Henry Ford has already produced the Model T. That's obviously unsustainable regardless of how proud we are of our trade and "how we do things." To use another historic analogy, smashing the Spinning Jenny is not a viable strategy; it is here to stay and will change our trade forever.
To be perfectly clear, I am not suggesting that research needs an overhaul *because* AI can do the same things that researchers currently do. In this sense we are in a very different situation than the analogies above. This is not a change that is forced by technology, but a flaw that technology is (or soon will be) uncovering. In fact, AI will both make it very clear what the problem is and, at the same time, offer an opportunity to solve it. We either seize this opportunity or risk the same fate as the buggy whip makers and spinners. Choosing the wrong approach risks the research enterprise, not merely our jobs. It would be disastrous.
The real issue we are facing is not the "AI threat" but the poor quality of research in the social sciences generally and, especially, the business disciplines. Core to the quality issue is that we are generally not very good at producing theory. (To be blunt, we suck.) This is devastating for a field of study, because, as I put it in a recent paper (Bylund & Packard, 2022), "'Theory' is explanation of reality and how it works - a causal account of observations." A scholarly field's body of theory is not merely a set of hypotheses or stack of papers, but the accumulated and amassed explanation and understanding that we have collectively accomplished through careful, systematic study.
No body of theory is perfect, but the alpha and omega of the research enterprise is to advance it. That advance toward truth is not steady or consistent; mistakes will be made and later corrected. The problem, however, is not the process but that we have, through poor theorizing, riddled the body of theory with errors, contradictions, and unaddressed gaps.
A partial explanation of this issue is that the business disciplines were originally practical, not theoretical, fields. Unlike economics, they have no consistent framework for study and no shared, abundantly tested, and broadly applied assumptions. The business disciplines lack the advanced and long-exercised theory tradition that economics and other older disciplines developed and still benefit from. But these are problems to overcome, not issues that must haunt us if we do research properly. Unfortunately, this is not the direction in which we have been moving.
Business scholars tend to have a skepticism toward theory and theorizing. Many of them argue that there is "too much" theorizing going on in the journals, apparently ignorant of the meaning and use of theory in social science. A field that collects and analyzes data but does not generate theory has accomplished nothing (other than a mass of data). As Ronald Coase once put it (about American institutionalists), "Without a theory they had nothing to pass on except a mass of descriptive material waiting for a theory, or a fire." This applies to business scholarship as much as to any other field.
The problem with such theory skepticism is that it stands in the way of developing and adopting good theorizing practices, feeds resistance to refining and challenging the body of theory, and perpetuates a relative inability to recognize good and sound theory.
This is also what AI will make abundantly clear: that theories in the business disciplines are overall not very good, lack a sound foundation in proper assumptions and logic, and are built somewhat ad hoc and are therefore inconsistent and/or contradictory. This has not been obvious because we produce a lot of studies that test a jumble of hypotheses that are typically formulated after the results are known (because they must meet the peculiar requirements of being derived from previous theory, yet surprising, yet supported by the data). Analyses, assessments, and comparisons of theories are largely absent from the literature. There is no consistent framework and no fundamental, generally accepted assumptions.
I expect that we as a collective will, after first attempting to resist AI, use it more broadly as a tool - but without changing how we do research. But this does not constitute a move from abacus to calculator or from calculator to statistics software. It is not merely automation of a manual task. AI adds (granted, imperfect) intelligence that can replace human scholars in most of the research practice. AI can, or will soon be able to, do a better job than human researchers in most or all types of data collection, statistical analysis, and hypothesis testing. We should let it.
What AI cannot do is what we in the business disciplines have largely refrained from doing: construct theory by developing explanations for the mechanisms and causalities that to some extent may be revealed in empirical data. This is the core of scholarship and it is a creative, imaginative, yet deeply insightful enterprise. In an ideal world, one in which the value of what we accomplish trumps all other concerns, we would embrace AI as a means of reengineering the research process such that we can fully take advantage of its power to release us from the tasks that can be carried out by AI.
But this would mean that we also embrace the fact that we and our scholarship are never better than the theories we have generated. And, consequently, that we as scholars must focus on true scholarship: creating explanations and deeper understanding of the world, which can then be communicated and applied to improve people's lives. Herein lies the promise and threat of AI. It is a promise, because the opportunity is nearly without end - the value of good, relevant theory is nothing short of enormous. It is a threat, because most of us have focused not on theory development and theorizing but on developing skills in manipulating and analyzing data. The latter will, in the age of AI, be as valuable a skill as hand-spinning in the age of the Spinning Jenny.
To be sure, a scholar must know and understand data analysis and statistics. I am not suggesting that we abandon such knowledge. On the contrary, we need deep knowledge in order to oversee and direct AI to properly conduct those tasks. The problem arises if we expect to continue to be paid to carry out the tasks that AI can do faster, cheaper, and better. There is plenty that a scholar in the age of AI can and should do, but it is the creative, imaginative work rather than the semi-manual processing and analysis of data: formulating research questions, developing hypotheses, and, above all, theorizing.
PER L BYLUND
Associate Professor, Johnny D. Pope Chair
School of Entrepreneurship
424 Business Building, Stillwater, OK 74078
405-744-4301
per.bylund@okstate.edu | business.okstate.edu