
When homework isn’t proof of learning: Assessing foundational math in the AI era

If you teach a foundational math course right now, you have likely experienced the following disconnect: polished homework submissions paired with shaky quiz and exam performance. In introductory mathematics courses, where comfort with symbols and the ability to solve problems quickly and accurately are prerequisites for later success, the rise of AI has introduced a new layer of complexity.

In faculty meetings across campuses, the same concern keeps resurfacing—students in foundational math courses are turning to AI to complete their homework. If homework can no longer be trusted as evidence of understanding, how do we identify authentic signals of learning, and how must course design change to ensure students develop real mathematical competence?

The Problem with Student AI Use in Foundational Math Courses

So, what’s the problem with students using AI to complete homework assignments in a foundational math course? Is this just another round of hand-wringing, like we saw before with the calculator and the internet?

In order to provide quicker turnaround on feedback so students can learn from their mistakes, many faculty use publisher courseware for homework assignments. Courseware platforms can provide instant feedback on students’ work, which is beneficial for learning and growth. But what happens if students use AI to complete those homework assignments? What is the feedback based on? If the student is using AI, they aren’t getting feedback on their own work, so that would seem to defeat the purpose of the courseware. In the age of AI, there seems to be more of a tradeoff between labor efficiency for faculty and meaningful engagement for students.

In courses like algebra, precalculus, and introductory statistics, symbolic fluency and procedural automaticity are prerequisites for later coursework. Put differently, it’s like learning to drive. If you still have to consciously think about how to steer, brake, and signal, you won’t be ready for highway driving. Similarly, later math assumes the basics are automatic, but if students aren’t developing the basics because they’re using AI to circumvent the learning process, they won’t be ready for those more advanced courses. Outsourcing that early practice disrupts schema formation. If you skip practicing and just rely on AI to do the steps for you, your brain never builds those internal “mental shortcuts” (schemas) that make math feel manageable later. Moreover, the students who skip this productive struggle of learning often cannot transfer knowledge to new contexts because without struggling through the problems yourself, you don’t build flexible understanding.

Learning in foundational math courses is about putting in enough hands-on practice so your brain builds the right foundations. The problem, then, is not that students are using AI. It’s that foundational math courses may still rely on assessment structures that assume students cannot outsource cognition, and in the age of AI, that’s no longer true. What seems to be necessary as a result is that we adapt our course and assessment design rather than begin a moral panic over generative AI.

Types of AI Use

Students may use AI in a foundational math course for a variety of reasons and in myriad ways. Some uses may be detrimental to students’ learning, while others could actually benefit their learning and skills development. Different types of generative AI use may include:

  • Answer substitution (student copies full solution)
  • Solution translation (student pastes work into AI for reformatting or checking)
  • Concept explanation requests (student asks AI to explain a definition, rule, or theorem in simpler terms)
  • Error diagnosis assistance (student asks AI to identify and explain mistakes in their own work)
  • Strategic planning help (“How do I start this type of problem?”)

Out of these five uses of AI, only answer substitution completely circumvents the learning process. The others could be beneficial strategies for enhancing students’ learning.

Important Questions to Consider

As we rethink our pedagogy and assessment in a world increasingly infused with AI tools, there are some questions we should consider, particularly if we notice students are using generative AI to the detriment of their education:

  • What is the objective of the homework?
    What do students need to know by the end of this course, and what is okay to outsource (e.g., to a calculator—or AI)? What cognitive competencies are non-negotiable in foundational courses? For instance, delegation may be appropriate after mastery but not during acquisition.
  • Is this an AI problem, or a homework design (or course design) problem?
    If assignments are: purely procedural, easily solvable via AI, uniquely answer-based (no reasoning required), and/or identical to textbook problems online, then AI use is structurally incentivized. This shifts the discussion from “students are cheating” to “the assessment design is misaligned with our learning goals.”

What Pedagogical Changes, Innovations, or Tweaks Could We Try?

Focusing on AI detection and punishment puts faculty in the unwinnable position of playing the role of anti-AI enforcer. Rather than focusing on detection and punishment, we can consider how to adjust our pedagogical approaches to make cheating with AI less likely or less desirable for students. Some strategies could include the following:

Flipped classroom:

Students learn concepts before class (via short lecture videos, readings, or guided notes), and class time is devoted to working on problems, explaining reasoning, and getting immediate feedback. This reduces the incentive to outsource thinking to AI because students must demonstrate process and understanding in real time, while independent practice remains low-stakes and mastery-focused.

Non-credit homework exercises:

Assign regular problem sets framed explicitly as practice (e.g., “practice quizzes”) that do not directly impact the course grade. This lowers the incentive to use AI for answer production while reinforcing repetition and skill development. Mastery is then verified through in-class quizzes or other assessments where students must independently demonstrate understanding.

Non-traditional grading systems (e.g., labor-based or mastery grading):

Shift assessment away from points accumulation and high-stakes exams and toward mastery of defined learning outcomes. Students acquire credit by completing required practice (labor-based) and/or by meeting specified proficiencies on targeted assessments (mastery). Quizzes and exams could be retaken to demonstrate growth and to reduce the pressure that drives AI misuse while ensuring that course credit reflects verified competence.
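
As a rough sketch of the mastery-grading logic described above (the 80% threshold, outcome names, and scores here are hypothetical, not a prescribed standard), the rule "credit reflects verified competence, retakes allowed" might look like:

```python
# Hypothetical mastery-grading record: for each learning outcome, keep every
# attempt, and count the outcome as mastered once ANY attempt meets the bar.
MASTERY_THRESHOLD = 0.8  # assumed proficiency cutoff (80%)

def mastered_outcomes(attempts):
    """attempts maps outcome -> list of scores (0..1) across retakes."""
    return {o for o, scores in attempts.items() if max(scores) >= MASTERY_THRESHOLD}

def course_status(attempts, required_outcomes):
    """Credit is earned only when every required outcome has a passing attempt."""
    return "Pass" if required_outcomes <= mastered_outcomes(attempts) else "In Progress"

# A student who recovered on a retake of linear equations but hasn't yet
# demonstrated factoring remains "In Progress" rather than failing outright.
record = {"linear-equations": [0.6, 0.85], "factoring": [0.7]}
print(course_status(record, {"linear-equations", "factoring"}))  # → In Progress
```

The design choice matters: because only the best attempt counts, a retake can never lower a student’s standing, which removes much of the pressure that drives AI misuse.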

Alternative assessments:

Incorporate assessment formats that require students to demonstrate their reasoning, not just produce answers. Examples could include oral micro-assessments (“explain your thinking” check-ins), process-required submissions (annotated steps, reflection), in-class retrieval quizzes to strengthen recall, and two-stage in-class assessment (individual closed-book quiz followed by group correction activity).

AI in the classroom:

Rather than leaving students to infer what ethical AI use looks like in a math course, show them what it looks like. Define and model ethical AI use as supporting thinking rather than replacing it. Walk students through asking AI to clarify definitions, generate additional practice problems, or provide feedback on work students have already attempted independently. At the same time, explain clearly why ethical use excludes submitting AI-generated solutions as one’s own or using AI during closed assessments.

Gen-AI can also be incorporated into structured classroom activities. For example, students could solve a problem independently, prompt AI to critique their solution, and then write or discuss a brief evaluation of the feedback’s accuracy and usefulness. This reinforces the idea that AI outputs require mathematical judgment. Another option is to have students analyze AI-generated problem sets for correctness, difficulty, and alignment with learning objectives. By foregrounding appropriate uses and limitations, faculty frame AI as a tool to be examined critically rather than a shortcut to bypass learning.

Explicit AI policies:

Clearly communicate your course policies around AI use. What can it be used for? What can it not be used for? For example, students may not submit AI-generated work as their own, use AI during closed assessments, or replace required practice such as homework exercises with AI outputs. Permissible uses might include clarifying definitions, generating additional practice problems, explaining why an answer is correct/incorrect, or checking the student’s work after completion.

Beyond listing rules:

In addition to outlining your course policies, explain the pedagogical rationale behind them. Connect your policies to the development of procedural fluency, conceptual understanding, and academic integrity. Clarify the ethical implications of misrepresenting AI output as one’s own work and why independent practice is necessary for mastery in mathematics. Transparent reasoning increases compliance and frames your policies as learning-centered rather than punitive or arbitrary.

When Discrepancies Appear: Prevention, Response, and Communication

You may be thinking, “Okay, but what do I do when I already see homework–quiz/exam discrepancies?” When you see that pattern emerge, it raises practical questions about prevention, response, and communication—and about whether the structure of the course itself is contributing to the behavior.

In terms of prevention, one structural solution, as I’ve mentioned, is removing homework from the grade and framing it as practice for quizzes and exams, which are where competence will be measured. Since performance or perfection on homework assignments no longer affects their grades, the incentive to outsource the homework decreases.

If discrepancies persist, respond with structured interventions rather than penalties. For instance, you might require skill-recovery sessions, targeted reassessment tied to performance thresholds, or mandatory tutoring to ensure students build missing foundations.

When discrepancies appear, frame the conversation around learning rather than misconduct. Individual conferences can focus on the gap between homework and assessment performance and explore what study strategies the student is using, while class-wide reminders can clarify expectations about independent practice and the purpose of homework.

At the same time, generate student buy-in by explaining why independent practice matters (i.e., helps them practice for quizzes/tests, lets them identify areas they need to work on or study more, and improves their grade in the course as a result). Students often resort to inappropriate AI use out of fear of failure, to avoid discomfort with the struggle of learning, or because they overestimate their understanding after reading solutions (illusion of competence).

Addressing the root causes of student AI use may help reduce it: share illusion of competence research, model growth mindset framing, and provide transparent explanation of why fluency matters for future courses. If possible, you could share data from prior courses that demonstrates the correlation between independent practice and exam success. If students value formative work as a tool for improving their learning rather than a hurdle to clear, they’re more likely to engage with it honestly and earnestly.

Finally, consider whether students with heavy external demands are using AI as a time-management strategy. If shortcut behavior is widespread, it may signal a workload, pacing, or assessment structure that unintentionally incentivizes cognitive outsourcing.

Verifying Mastery in Fully Online Courses

It may be more challenging to address unauthorized AI use by students in fully online asynchronous courses. There’s no live observation of problem-solving, all submitted artifacts could be AI-mediated, and discussion board responses can be AI-generated. Thus, thoughtful learning and assessment design is paramount in this modality.

In a course where all cognitive work could be externally mediated, how can we design assessment systems that still measure independent competence? Luckily, faculty have numerous options to choose from.

Options for adjustments to assessment approaches include but are not limited to the following:

  • Assessment proctoring and/or lockdown browsers – This option may increase accountability or prevent copying and pasting between assessments and generative AI, but it may also raise privacy concerns, have accessibility and equity implications, heighten anxiety, and/or introduce technical barriers.
  • Frequent, low-stakes, timed quizzes with randomized question banks – Short, regularly administered quizzes with randomized question banks can reinforce recall and consistent engagement with course concepts by providing low-stakes, repeatable practice while ensuring each student receives a different version and encouraging reliance on their own preparation through time limits.
  • Mastery checkpoints – Students must pass a mastery checkpoint before progressing to the next module in the course (using release conditions and/or forced sequencing); checkpoints are timed and cumulative, and retakes require new problem variants.
  • Process documentation – Students submit evidence of their problem-solving process, such as handwritten work (scanned or photographed), a short audio or video explanation of a selected problem, and/or a brief reflection on where they struggled.
  • AI disclosure – Require students to provide a written acknowledgment describing where AI was used, how it was used, and what they learned from the interaction, promoting transparency about AI use and encouraging students to reflect on how the tool contributed to their learning.
  • AI-integrated assignments – Find ways to demonstrate ethical AI use in your course. For example: “Use AI to generate a solution to this problem. Identify at least two weaknesses or limitations in the AI-generated explanation.”
  • Non-credit homework practice – Assign regular practice problems that don’t carry course credit (or just have credit for completion not tied to performance) to encourage skill development and self-assessment without the pressure of grading or earning “points.”
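
The randomized-question-bank idea above can be sketched as parameterized problem templates seeded per student, so every learner gets a structurally identical but numerically distinct quiz. This is a minimal illustration, not any particular courseware platform’s implementation; the template and seeding scheme are assumptions.

```python
import random

def linear_equation_variant(rng):
    """One 'solve ax + b = c' item, constructed so the solution is an integer."""
    a = rng.choice([2, 3, 4, 5, 6])
    x = rng.randint(-9, 9)            # the intended solution
    b = rng.randint(1, 10)
    sign = rng.choice(["+", "-"])
    c = a * x + (b if sign == "+" else -b)
    return {"prompt": f"Solve for x: {a}x {sign} {b} = {c}", "answer": x}

def build_quiz(seed, n_items=5):
    """Seeding with a student identifier makes each version different but
    reproducible, so the instructor can regenerate any student's quiz."""
    rng = random.Random(seed)
    return [linear_equation_variant(rng) for _ in range(n_items)]

quiz = build_quiz(seed="student-42")
```

Because every parameter is drawn within a fixed range, the variants stay at a comparable difficulty level, which is what makes randomized banks fair as well as copy-resistant.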

A few things to keep in mind for online asynchronous courses are that (a) isolation increases shortcut behavior and (b) there may be more equity concerns and variations with students who typically take online asynchronous courses. To mitigate the isolation issue, make sure you work intentionally to generate a sense of community and connectedness in your online course and maintain regular and substantive instructor presence (ideally through videos and announcements as well as one-to-one, personalized communications with students).

Regarding equity concerns, our online students could be working students, caregivers, first-gen students, multilingual students, and/or students with unstable internet access. Keep in mind that requiring video submissions could disadvantage some students due to technology limitations, privacy concerns, or bandwidth constraints.

In addition, it may be a good idea to establish a nuanced AI policy that distinguishes between language support and cognitive outsourcing—for instance, multilingual learners may use AI to translate explanations or rephrase textbook language for easier comprehension, while still engaging in the core intellectual work themselves.

Instructor modeling videos are also a great strategy to ensure students understand course concepts and have less need to turn to AI. Consider recording think-aloud solution videos to demonstrate productive struggle—including how to start a problem, how to check reasonableness, and common pitfalls students encounter.

The broader questions around online asynchronous courses are institutional (and maybe even existential) considerations: Should fully asynchronous foundational math courses exist if procedural fluency is essential? Are hybrid models pedagogically superior? What minimum synchronous verification should be required, if any?

Final Thoughts

I’d like to finish off by arguing that the educational crisis brought on by generative AI doesn’t have to be a learning crisis. We can reframe it as an opportunity to rethink and innovate our course design. Gen-AI isn’t inherently detrimental to learning in foundational math courses (or any course, really).

When used intentionally, generative AI can support students in meaningful ways. AI can generate unlimited practice problems at various levels of difficulty, giving students the repetition necessary for procedural fluency. It can also provide alternative explanations for definitions, theorems, and worked examples, which can be helpful for students who need to encounter a concept from multiple angles before it “clicks.” For multilingual learners, AI can translate explanations or rephrase technical language in more accessible ways. When used as a tutor instead of a “solver,” AI could even help reduce math anxiety by providing students with low-stakes, on-demand support.

The issue, then, becomes whether our course structures assume that practice can or cannot be outsourced. In foundational math, cognitive outsourcing disrupts the processes that build competence. So, if we redesign our courses so that independent practice is expected, mastery is verified, and AI is framed as a tool (rather than a substitute), we can preserve rigor while adapting to changes in technology.

AI isn’t going to go away. Neither is the need for symbolic fluency, procedural automaticity, or flexible reasoning. The challenge before foundational math instructors isn’t to figure out how to eliminate AI from our classrooms but to design learning environments in which students develop mathematical competence with technology serving that goal rather than undermining it.

Resources

  • Awang, L. A., Yusop, F. D., & Danaee, M. (2025). Current practices and future direction of artificial intelligence in mathematics education: A systematic review. International Electronic Journal of Mathematics Education, 20(2), em0823. https://doi.org/10.29333/iejme/16006
  • Bastani, H., Bastani, O., Sungu, A., Ge, H., Kabakcı, Ö., & Mariman, R. (2025). Generative AI without guardrails can harm learning: Evidence from high school mathematics. PNAS, 122(26), e2422633122. https://doi.org/10.1073/pnas.2422633122
  • D’Agostino, S. (2022, August 15). A machine can now do college-level math. Inside Higher Ed. https://www.insidehighered.com/news/2022/08/16/machine-can-now-do-college-level-math
  • Gabriel, F., Kennedy, J., Marrone, R., & Leonard, S. (2025). Pragmatic AI in education and its role in mathematics learning and teaching. npj Science of Learning, 10(1), 26. https://doi.org/10.1038/s41539-025-00315-4
  • Gao, S., Gao, W., Malomo, O., Allagan, J. D., Su, J., Eyob, E., & Challa, C. (2024). Exploring the interplay between AI and human logic in mathematical problem-solving. Online Journal of Applied Knowledge Management, 12(1), 73–93. https://doi.org/10.36965/OJAKM.2024.12(1)73-93
  • Lin, W., & Jiang, P. (2025). Factors influencing college students’ generative artificial intelligence usage behavior in mathematics learning: A case from China. Behavioral Sciences, 15(3), 295. https://doi.org/10.3390/bs15030295
  • Richard, P. R., Vélez, M. P., & Van Vaerenbergh, S. (Eds.). (2022). Mathematics education in the age of artificial intelligence: How artificial intelligence can serve mathematical human learning (1st ed., Vol. 17). Springer International Publishing. https://doi.org/10.1007/978-3-030-86909-0
  • Taani, O., & Alabidi, S. (2025). ChatGPT in education: benefits and challenges of ChatGPT for mathematics and science teaching practices. International Journal of Mathematical Education in Science & Technology, 56(9), 1748–1777. https://doi.org/10.1080/0020739X.2024.2357341
  • Weigand, H. G., Trgalova, J., & Tabach, M. (2024). Mathematics teaching, learning, and assessment in the digital age. ZDM – Mathematics Education, 56(4), 525–541. https://doi.org/10.1007/s11858-024-01612-9