Don’t assume students are eager AI adopters

Faculty and administrators shouldn’t assume students are jumping on the AI bandwagon, Andrea L. Guzman writes.

By Andrea L. Guzman | April 27, 2023
Inside Higher Ed

For the past few months, faculty and administrators have been trying to wrap their heads around what ChatGPT and other technologies of generative artificial intelligence mean for higher education. I, like many other scholars of technology and society, think that we are in the midst of a profound technological and social shift on the scale of the internet, if not bigger.

Yet, as someone who studies human-machine communication, specifically people’s perceptions of and interactions with AI, and who has been teaching undergraduate and graduate students about artificial intelligence in journalism and communication for nearly a decade, I have found much of the discourse to be based on faulty assumptions surrounding students’ general knowledge of and attitudes toward AI.

My fear is that the current knee-jerk reaction to ChatGPT will shape the higher ed response to AI writ large, with potentially devastating effects in the long term.

Misunderstanding Students

The introduction of ChatGPT has been met with a rush to police its use to prevent academic misconduct.

Let me say that as an educator for 16 years, I am not so naïve as to think that generative AI will not be used to cheat. Students throughout history have used a variety of technologies to do so, and I have an automated-writing technology policy in my course.

But what I find disconcerting is that undergirding much of the conversation among educators is the not-so-implicit assumption that, when given the opportunity, students will use AI to cheat or in other ways that are ultimately detrimental to their learning. In other words, students will use a technology, no questions asked, if it makes their lives easier.

The problem with such an assumption is that it falsely oversimplifies students’ perceptions and use of technology generally and AI specifically.

Such an attitude is an extension of the “students as digital natives” fallacy that incorrectly paints students as “natural,” “eager” and “adept” users of any technology. Research has repeatedly demonstrated that people’s attitudes toward and behavior with technology are highly complex and shaped by a variety of factors.

Yes, students use technology to cheat, but they also use it to learn, and history has shown us repeatedly that far more students use it to learn than to cheat.

In this vein and in a rejoinder to the calls for limiting AI in the classroom, other educators have advocated for integrating and embracing AI in teaching to better prepare students for life in a world shaped by artificial intelligence.

I wholeheartedly agree that teaching students how to use AI for learning and within the context of their future careers is important and that helping them to understand AI’s far-reaching implications for self and society, including ethical issues, is just as critical.

At the same time, the discourse surrounding this goal seems to indicate that faculty members simply need to show students how to correctly use AI and future generations will be on their way to success.

Similar to the narrative surrounding AI and cheating, this conversation also is built upon simplistic assumptions regarding student attitudes toward technology, as well as misconceptions regarding the complexities of teaching with and about AI.

Failing to Grasp the AI Difference

Arguments surrounding what should and should not be done about AI in the classroom overlook the cultural and philosophical baggage that greatly complicates people’s understanding of and reactions to AI technology.

The process of discovery for most technologies takes place when a device or application is released to the public and people learn about it through marketing, news reports, word of mouth or direct use.

Artificial intelligence, by contrast, has had a cultural presence almost since the term was coined in the mid-20th century in the form of characters and plot lines in science fiction. Research has demonstrated that such fictional portrayals have played a role in shaping people’s perceptions and expectations of AI.

When students first walk into my course on AI in media and journalism, many of them, although not all, have heard the term "AI" and have wide-ranging opinions about its effects, good and bad, often built upon and expressed through the lens of media portrayals.

What makes artificial intelligence so intriguing for science fiction plot lines also further complicates people’s perspectives of the capabilities of AI.

The panic and excitement surrounding ChatGPT is an emotional reaction to a technology, a thing, carrying out what had seemingly been a human-only process. Within the context of the college classroom, the questions that may follow in the minds of students are what the capabilities of AI mean for them personally and professionally.

And so, contrary to the assumptions undergirding both sides of the discourse surrounding AI in the classroom, students are not jumping en masse on the AI bandwagon, no matter how efficiently it can do homework.

While some students may be excited about the prospects of AI, others are deeply concerned and uneasy about it. Still others aren’t quite sure what to think about AI and may not care. It is worth noting that apathy also is a routine response to technology, even AI. All three perspectives and the shades between have consequences for how educators approach AI in the classroom.

Whose Reactions to AI Are We Putting First: Students’ or Our Own?

Long-standing misperceptions of student technology use aside, I think much of the debate surrounding ChatGPT and generative AI in the classroom and initial drafting of policies on AI-related academic misconduct say more about faculty and administrators’ own reactions to AI than about students.

Indeed, many of the general markers of the complexities of making sense of AI—drawing upon or making references to science fiction, and deeply felt emotional responses—can be found in how faculty and administrators have talked about and created policies around generative AI.

I’m not making this claim to sound glib or dismissive of the efforts of my own colleagues or of peers at other institutions. Far from it. My goal is to make the assumptions fueling the faculty response to AI salient so that they can be addressed and kept in check.

A key lesson I have learned in studying and teaching about AI is that a person can quickly become distracted and overwhelmed by the hype surrounding AI—and there is a great deal of it—and by the emotional elements of watching machines become ever better at a task than a person who has trained at it their entire life.

As scholars and educators, we have to consciously fight against knee-jerk responses and simplistic assumptions. Admittedly this is extremely hard to put into practice when a disruptive technology, such as ChatGPT, is suddenly thrust upon us instead of slowly being introduced over time, as with the internet. Moving forward will require a conscious effort on the part of those at all levels of higher education to look beyond the fictions of AI toward its realities and guide our students to do the same.


Andrea L. Guzman is an associate professor of communication at Northern Illinois University and lead editor of The SAGE Handbook of Human-Machine Communication (forthcoming, June 2023).