AI’s Impact on Academic Integrity: Joseph Lento of Lento Law Firm on the Legal Challenges Facing Schools and Students

The rise of generative artificial intelligence has introduced sweeping changes to higher education, challenging long-standing definitions of academic honesty. As tools like ChatGPT become commonplace in student workflows, universities are racing to update policies and enforcement mechanisms—but in doing so, they may be exposing themselves to legal and ethical scrutiny.
Institutions have increasingly turned to AI-detection tools, such as Turnitin’s AI writing checker, to flag suspected misconduct. Yet these tools are widely acknowledged to be imperfect, often returning false positives and struggling with multilingual writing or stylistic variation. Despite this, some universities have used AI flags as the sole basis for launching disciplinary proceedings against students—an approach that raises serious concerns about due process and fairness.
Joseph Lento, founder of Lento Law Firm and a veteran legal advocate for students in misconduct cases, has observed a marked uptick in academic integrity disputes linked to AI tools. He regularly represents students in disciplinary proceedings shaped by rapidly evolving campus policies.
“These detection tools are not courtroom evidence; they’re probabilistic guesses based on algorithms that are still being refined,” says Lento. “When schools treat AI flags as definitive proof of misconduct, they not only risk punishing innocent students but also open themselves up to significant legal challenges.”
One of the central problems is procedural opacity. Many students are unaware they are being evaluated by AI tools at all. In some cases, schools give students no notice of the specific evidence against them and no opportunity to challenge the output of proprietary software. This lack of transparency not only undermines students’ ability to defend themselves but may also violate institutional policies, or even federal legal standards, on fairness and accountability.
“Due process means notice, evidence, and a meaningful chance to respond, none of which is satisfied when the only ‘evidence’ is a score from a black-box algorithm,” according to Lento. “When there’s no human judgment involved, schools are making disciplinary decisions based on technology they don’t fully understand or control.”
Private institutions may be contractually bound by their own published policies, while public universities, as state actors, must also satisfy constitutional due process requirements. If disciplinary decisions rest on unverifiable AI scores, both types of schools may find themselves vulnerable to legal challenges.
At the same time, professors and administrators are under pressure to uphold academic standards in a changing technological environment. Some faculty feel that AI detection tools are their only line of defense against rampant misuse of generative technology. But critics argue that institutions must balance enforcement with care—and that overreliance on software may erode trust in both educators and disciplinary systems.
“Schools need to rethink their approach, starting with clearly worded academic integrity policies that actually address AI use,” says Lento. “They should give students a chance to respond to allegations, require human review before disciplinary action, and ensure that any findings are supported by more than just machine-generated metrics.”
The AI issue is emblematic of a broader challenge facing higher education: how to adapt enforcement frameworks without sacrificing student rights. Until institutions establish more robust and transparent guidelines, the legal risks will remain high, and students may continue to face serious academic consequences imposed by institutions that don’t fully understand the technology they’re using.
“As AI tools become embedded in daily academic life, legal and educational leaders need to prioritize transparency, procedural fairness, and ethical implementation,” says Lento. “Otherwise, we risk building an enforcement system that’s fast, cheap, and dangerously inaccurate.”
Disclaimer and Disclosure:
This article is an opinion piece for informational purposes only. Sustainable Post and its affiliates do not take responsibility for the views expressed. Readers should conduct independent research to form their own opinions.