Alarmed but hopeful? What AAC&U’s new GenAI report tells us

AAC&U’s new survey, The AI Challenge: Faculty Concerns About Generative AI in Higher Education, doesn’t just measure opinions about generative AI; it captures a moment of deep uncertainty about what teaching, learning, and academic work are becoming. Drawing on responses from 1,057 faculty members nationwide, the report paints a complicated picture of faculty attitudes toward GenAI. What the results make clear is that the field of higher education is at a moment of transition, and that GenAI is not just another tech trend.

Faculty aren’t just worried about cheating—they’re worried about the future

What stands out most in the data is not fear of misconduct but concern that generative AI is reshaping the meaning and value of education itself. Nearly half of faculty (49%) believe GenAI will harm their students’ career prospects, 74% believe it will damage degree integrity, and 54% expect a net negative impact on students’ lives at their institution.

These concerns come at a time when the high cost of higher education is already being weighed against students’ employability after graduation and a broader erosion of public trust in higher education. Within that context, faculty anxiety about generative AI is not just about whether students are learning; it’s about whether academic credentials will continue to signal expertise in a labor market increasingly skeptical of what a degree represents. When the work that underpins those credentials can be easily automated by AI, the credibility of the credential itself is what feels most at risk.

Core contradiction: personalization vs. intellectual erosion

Faculty aren’t solely focused on the negative aspects of generative AI, however. Many simultaneously believe that AI could make learning more responsive while also hollowing out the cognitive work that makes learning meaningful. Sixty-one percent of faculty think AI will enhance or customize learning. At the same time, majorities believe it will lead to declines in students’ critical thinking (90%), attention spans (83%), and learning outcomes (62%). While this may look like inconsistency in faculty perceptions, what it really reflects is competing visions of what “learning” means.

Learning can mean many things to faculty. On the one hand, learning encompasses instructional efficiency, access, and individualized support—e.g., just-in-time explanations, examples, and feedback. From this perspective, faculty may see AI as a powerful learning support and scaffolding tool. On the other hand, learning can also be defined by the slow, effortful, uncomfortable work of grappling with new ideas, practicing skills, making mistakes, and building intellectual stamina. From that perspective, outsourcing certain kinds of cognitive work and learning tasks (such as idea generation, drafting, analysis, reasoning, synthesis, decision-making) to generative AI risks impeding the very processes that develop understanding, judgment, and creativity by bypassing the productive struggle involved in learning. It’s not that faculty are torn about the technology; they’re wrestling internally with two competing ideas of what education is supposed to do.

Academic integrity has already shifted

For most faculty, generative AI is no longer a hypothetical risk; it’s an active force reshaping classroom norms. An important faculty concern is GenAI’s perceived negative impact on academic integrity: 78% say that cheating has increased, and 73% have personally dealt with it (40% a few times; 33% “a lot”). This signals that, in many cases, the classroom has already changed faster than institutional policy.

There are two issues at play here. One is that institutions are not keeping pace with the reality of ubiquitous generative AI use among students, whether by preparing faculty to address AI in their teaching or by establishing institutional guidelines for AI use, particularly in regard to academic integrity. The other is that definitions of acceptable student AI use vary wildly among faculty members, which makes navigating academic integrity and GenAI a proverbial minefield for students.

Institutions are not keeping pace with faculty reality

The survey reveals a system where, from faculty’s perspective, responsibility has been pushed downward while guidance remains fragmented. Of the faculty surveyed, 87% have created their own rules for generative AI use in their courses, compared with just 35% of departments and 48% of institutions that have developed guidelines.

When institutions don’t set norms, faculty end up absorbing student complaints, grade disputes, accusations of unfairness, and the emotional labor of policing student AI use. That situation is especially precarious for adjunct faculty, early-career faculty, faculty of color, and faculty teaching gateway courses, and it creates equity and labor issues. Moreover, this fragmentation actually worsens cheating concerns. When AI policies are unclear, contradictory, or buried in syllabi, students are more likely to guess, push boundaries, or rationalize violations. This can amplify academic integrity conflicts, feeding faculty burnout and interpersonal conflict.

Some faculty may see setting their own AI policies as an exercise of academic freedom. Unfortunately, students may experience this not as their professors’ academic freedom but as inconsistency. Academic freedom protects intellectual and pedagogical judgment; it doesn’t mean that every faculty member should invent their own ethical framework for each emerging technology. Without shared guardrails, freedom can become chaos.

“Legitimate use” is a moving target

One of the most destabilizing findings in the report is that neither students nor faculty agree on what ethical or appropriate AI use actually looks like. The report reveals the murkiness of “legitimate” GenAI use, which has the potential to undermine trust, assessment, and accountability. From the student perspective, this policy patchwork makes an already labyrinthine higher-education system even harder to navigate, especially when expectations shift from course to course.

A single student may be required to disclose AI use in one class, forbidden from using it in another, encouraged to use it in a third, and restricted to specific uses but not others in a fourth. The same acts—using AI to brainstorm, revise, summarize, or check grammar—could be framed as cheating, acceptable assistance, or good practice depending on the professor. To students, that doesn’t come off as academic freedom; it can read as arbitrariness, and it may undermine their ability to become ethically competent users of AI, particularly if the rationale behind each professor’s policy isn’t made clear. If students see policies as arbitrary, they learn compliance rather than judgment, and risk management instead of ethical reasoning. This undermines the very AI literacy faculty say their students need.

Everyone wants AI literacy—but do we know what that is?

About half of faculty say that AI literacy is either extremely or very important for students to learn, and 69% are already addressing it in their teaching; only 13% say it is irrelevant. The emergence of yet another form of “literacy” should sound familiar. We have seen this pattern before with writing across the curriculum, information literacy, and digital literacy: institutions declare a competency essential, but leave it loosely defined and unevenly supported, pushing responsibility onto individual faculty before shared frameworks are established. The result is fragmented experimentation without institutional coherence—a pattern the survey suggests is now repeating with generative AI.

At the same time, the concept of “AI literacy” is being stretched to cover too many purposes. For some, it means learning to use the tools effectively; for others, it means understanding ethical risks such as bias, privacy, and authorship; for still others, it involves critically evaluating AI outputs or interrogating the power structures embedded in these technologies. All of these are legitimate aims, but they are not the same learning outcome. This is where teaching and learning centers (like CITL), campus libraries, and academic support units have a critical role to play: not just in offering workshops on tools, but in helping institutions develop shared definitions of AI literacy, align them with institutional and departmental goals, and integrate them into teaching and workflows in ways that support both learning and ethical use.

What faculty think must be addressed

Faculty concerns about generative AI are not abstract—they map directly onto ethical, legal, and educational responsibilities, including:

  • Knowledge reliability (hallucinations, fabricated citations, misinformation, inability to verify sources or trace claims, erosion of trust in academic work)
  • Rights and ownership (copyright infringement, authorship and intellectual credit, rights in training data and models)
  • People and power (student and faculty data privacy, bias in training data and outputs, surveillance, equity in access to tools and models, labor impacts)

In other words, faculty concerns about AI are really concerns about what happens to the foundations of academic work: knowledge, authorship, and equity. Resolving these concerns requires coordinated, collaborative, institution-wide approaches.

What this means for teaching and learning centers

The report makes clear that we cannot treat AI as just another instructional technology. Faculty, staff, and administration need to work together expeditiously to develop ethical guidance for faculty use and pedagogical guidance for student use; to ensure data privacy and regulatory compliance; and to weigh the equity, labor, and environmental costs of GenAI and distribute them responsibly.

This is where teaching and learning centers can help. Centers like CITL are not meant to mandate tools or write policy; they exist to build shared pedagogical language, help faculty navigate ethical questions, and support programs in developing coherent approaches to teaching and learning. In this way, CITL functions as institutional infrastructure that supports alignment, reflection, and capacity-building rather than enforcement.

Where does this leave us?

AAC&U’s findings point to a field that knows generative AI is here to stay but has not yet agreed on what it should be allowed to do to learning. Different instructors adopting different approaches to generative AI is not, in itself, a problem; what matters is whether those differences grow out of a shared institutional framework grounded in pedagogy, ethics, and disciplinary goals. Without such a framework, AI use will continue to be governed by a patchwork of individual course policies, leaving students to navigate contradictory expectations that may make ethical use harder to discern.

At the same time, the absence of clear institutional guidance increases the likelihood that faculty and students will inadvertently run afoul of copyright, privacy, FERPA, HIPAA, or research ethics requirements. Perhaps most importantly, failing to act allows the meaning of “AI literacy” to be shaped by the tools themselves and the companies that build them, rather than by educators and educational values. Taken together, these findings underscore the need for shared language, the risks of doing nothing (or too little), and the opportunity for teaching and learning center staff to help faculty and administrators shape what “AI literate” will mean in higher education.