I serve on an advisory council for a graduate cybersecurity program at a local university. During a recent meeting, one of the instructors described how the program uses tools to detect students “cheating” on their assignments with Artificial Intelligence (AI). Based on conversations with my mentees who are pursuing graduate degrees at other institutions, it is fairly common for schools to forbid students from using AI on most assignments. I am astonished that so many universities have taken that position.
At my day job, we have a plethora of corporate initiatives and programs, many of them trying urgently to figure out how to leverage AI for a competitive edge. We’re trying to use AI to automate, optimize, and innovate everything we do. The prevailing belief is that harnessing the potential of AI is necessary to stay ahead of our competitors and adversaries. The latter is particularly important as we develop products and services critical to national security. It is not uncommon for us to describe the use of AI as an arms race.
If students can use AI to satisfy their university assignments, leaving course instructors with no means to differentiate human performance, perhaps the problem is with the assignments. To me, it would make more sense to recognize that people will use large language models and other AI-enabled tools, and to develop assignments that require human intellectual capital and judgment beyond what those tools can provide. I would expect that AI can generate “C”-level work and that only a good student can make the intellectual contributions to transform that output into a higher grade. The critical skill for the workforce is the ability to leverage AI to push the envelope of human knowledge and performance.
AI models are very good at providing answers with speed and precision that exceed human capability. However, AI doesn’t know what questions to ask, which is an essential human skill. Additionally, the phenomenon of AI “hallucinations,” where plausible answers have no basis in reality or fact, highlights the necessity of human oversight. Skilled workers are essential for performing sanity checks and ensuring that responses align with reality. While AI may provide the “right” answers in many cases, it is human judgment that keeps those answers grounded, meaningful, and actionable in the real world.
Instead of focusing on detecting whether AI was used, assignments should be evaluated based on the quality, originality, and depth of the final output. This would shift the emphasis from penalizing AI usage to assessing how well students integrate and elevate their work with it. This approach rewards intellectual effort and human creativity rather than discouraging the use of tools that are becoming indispensable in the workforce. It also builds a skill that will be critical to students’ success in the real world.
Additionally, the efficacy of AI detection tools is questionable at best; their false positive and false negative rates make them unreliable. A Businessweek study tested some of the leading detection services on a random sample of 500 college application essays submitted to a university shortly before the initial release of ChatGPT. In other words, AI could not have generated those essays because the tools did not yet exist. Unfortunately, 1-2% of the essays (roughly 5 to 10 of the 500) were falsely flagged as written by AI. That rate may sound “not bad,” but the consequences could be devastating for the students falsely accused.
What bothers me the most about trying to detect and punish the perceived illicit use of AI in an academic setting is the arms race between AI generation, AI detection, and the obfuscation technology that defeats detection. At the end of the day, my concern is that students who follow the rules and avoid any and all use of AI are at a performance disadvantage compared with those who skillfully break the rules and use the technology without being detected. Because identifying rule breakers is so uncertain, universities should err on the side of caution before punishing anyone. However, the rule followers will always be disadvantaged whether or not the cheating is detected. That isn’t fair.
So where’s the tennis? Tomorrow’s post is about a USTA League rule that was changed because it was impossible to enforce. That topic is very much analogous to using AI for academic dishonesty. Tennis players who followed the rule were disadvantaged, while those willing to bend ethical boundaries enjoyed a competitive edge. Ultimately, the USTA stopped incentivizing bad behavior by removing the prohibition altogether. It was the only equitable solution, and it is a lesson with potential applicability to the modern era of tennis. At least, I think so.
However, I am writing about AI today for another reason. I frequently use AI-assisted technology in the production of this site. To be clear, AI is not generating my posts, though sometimes I would happily turn the reins over to ChatGPT if it produced quality copy. However, there are quite a few use cases where I have used AI extensively. For example, the capsule summaries in the “Tennis Books Year in Review (2024)” post last December were drafted by feeding my original reviews into ChatGPT and requesting a short summary. In essence, that is using the technology to distill my own work. I still had to edit those paragraphs heavily, but it was a heck of a head start.
Another way I have used AI is by pointing it at one of my previous posts and a recent news article (which I always cite), then asking for a draft of a new post informed by those recent events. That is an oversimplification, as those prompts tend to be more detailed about the points I want to bring out. ChatGPT’s text is never directly usable, but it is frequently a very efficient way to generate a fresh draft. Leveraging AI in this way makes me more productive. Behind the scenes, I sometimes use AI tools to generate headlines and metadata that score higher for SEO, which should theoretically help search engines surface the content on my site. Quite frankly, AI is better at those tasks than I am.
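For readers curious about automating a workflow like the capsule summaries, here is a minimal sketch using the OpenAI Python SDK. To be clear, this is an illustration, not my actual process; I work interactively in ChatGPT, and the model name, prompt wording, and helper function below are my own assumptions:

```python
# Illustrative sketch only: distilling a long book review into a capsule blurb.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name and prompts are placeholder choices.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def capsule_summary(review_text: str, max_words: int = 60) -> str:
    """Return a short capsule summary drafted from a full-length review."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model works
        messages=[
            {"role": "system",
             "content": "You distill book reviews into short capsule summaries."},
            {"role": "user",
             "content": f"Summarize this review in under {max_words} words:\n\n{review_text}"},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(capsule_summary("My full review text would go here..."))
```

Even scripted, the output would still need the same heavy editing I described above; the value is the head start, not the finished copy.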
Ultimately, my use of AI on this site reflects what I believe to be an ethical application of technology. It serves as a tool to enhance my productivity and improve the quality of my writing rather than a substitute for the creative or intellectual effort that drives it. To me, this is consistent with how AI should be integrated into the workplace and educational institutions as well.