AI Is the Opportunity that Academic Research Will Not Seize (But Should)
An Open Letter of Sorts to My Peers
We have only seen the beginning of the fundamental changes that generative AI will bring to how we do things. Within the academy, the first obvious impact of AI was students' use of it to produce term papers, essays, and reports, and to answer questions on quizzes and exams for them. (A.k.a. cheating.) The academy responded in the way that human beings typically respond to the threat of change: by prohibiting it, cracking down on it, and seeking to control it. (After all, it is cheating.) Syllabi and student guidebooks have recently added "AI policies" to clarify terms, but these often do little more than state that "thou shalt not" use it. (Many students of course use it anyway, because it offers a very-low-effort way of getting a passing grade - which tends to be students' real objective, not learning.)
Somewhat later, and at a seemingly slower pace, AI is also finding its way into academic research. The initial reaction here is mostly the same as in teaching: to protect the status quo. But as this becomes untenable, which it soon will, we enter a new phase in which we can either embrace this new tool and use it to make research better, faster, and more useful - or not.
My money is on "not."
At present, we are seeing great resistance from researchers, journal editors, and publishers alike. They are certainly embracing AI as a research topic: producing more papers about it, editing and publishing special issues on it, and engaging in discussions of its possible impact and implications. That's low-hanging fruit that fits perfectly with the publish-or-perish status quo. But it changes little in any meaningful sense. It is business as usual.
I will not waste space discussing how generative AI works, which I have done elsewhere (as have many others), but note that it can already carry out several stages of the academic research process. And it can do so almost flawlessly. For example, it can inductively analyze data, test hypotheses, choose and perform whatever robustness checks are needed, and draw appropriate conclusions from the data. It can also write up reasonably good papers based on the findings. And it can do all of this much faster and more effectively than any human can.
This means we have three options for how we as a research community deal with AI: (1) we can choose to see it as a threat and fight it tooth and nail; (2) we can adopt it as a (limited) tool to speed up the research process, much as we use statistical software and online questionnaires; or (3) we can embrace it and use it to strengthen and update the scholarly process (i.e., innovate). I suggest that these options are also, unfortunately, stages: we are currently at (1), when that battle is lost we will attempt (2), and then fight desperately to avoid (3). And this will be disastrous for research and scholarship.
Per L. Bylund
Associate Professor, Johnny D. Pope Chair
School of Entrepreneurship
424 Business Building, Stillwater, OK 74078
405-744-4301 | per.bylund@okstate.edu | business.okstate.edu