I recently drafted the following thoughts to share with the Academic Dean of my daughter’s school. They grew out of a thread I recently started around these ideas on Micro.blog. I wanted to post them here for archival purposes and to share them with my broader audience for further thinking and discussion.
A few days ago, I didn’t know the right settings for cooking white beans in my Instant Pot. So, I Googled it. The top result was Google’s AI telling me exactly what I needed to know. I did not need to click further. I now know the right method going forward.
Did I use AI for the answer? Did I learn?
Yes.
The point being that AI is everywhere now. If I search for something on Google, the top result is increasingly often Google’s AI-driven answer with the information I need. If I type in my word processor or email program and it makes word suggestions or offers sentence completion, AI is driving that. In fact, everything students and teachers write or post online (including everything in Google Classroom) is being used to train the very LLMs that provide those answers.
The truth is, AI is a tool. A tool that will very soon (1–3 years at most, by my guess) be everywhere and in everything, and the answers/solutions it produces will be so accurate that they will be indistinguishable from actual learning and, in fact, teaching.
AI is quickly becoming the first and final step of learning something new on the internet. My Instant Pot story is an example of this. I did not ask for AI, but its answer was presented first, it was correct, and I learned from it. So, I “used AI” for the result.
Therefore, if a student uses AI to learn a better method for doing stoichiometry in Chemistry class because the one being taught is not making sense, or if they learn the same thing from their learning coach, what’s the difference? Especially if they can now use that method going forward to get the right results? And if the student has learned something, what makes either form of instruction superior to the other? Now, what if the learning coach learned that new method from AI?
These are the sorts of questions, among many, that I believe schools at all levels will need to think about and should be having conversations about right now.
The bottom line is that without a specific AI policy that addresses what is and is not appropriate use, I would argue that the school has neither the standards, the understanding, nor the clarity to make any objective accusations about when/how/why/how much it was used. And if the school does not have a clear, communicable policy regarding this, it can’t possibly expect students to know the difference between proper use of a tool to learn and intellectual dishonesty.
This is especially true without a proper policy built from thinking about what AI is, the many ways it might manifest, how one might use it for valid reasons and learning, or even how one might not realize they are using it at all (the email typing example above).
Without a clear, specific AI policy, teachers are left with no objective way to test for or defend against its use, which leads to subjective suspicion becoming the basis for accusation.
If the school is going to remain all-in on Google Classroom, then it must at least acknowledge the potential conflict of interest this reveals: the students and teachers themselves are likely training the very LLMs Google uses for its AI results, using the very coursework we are asking the students to do. What happens, then, when the AI result is something another student wrote, or even something the student using the AI wrote previously? Is it possible to plagiarize yourself? I suspect we’re about to find out.
I suspect that within the next 2–3 years, any school policy broadly and unilaterally against AI (or one that mentions it only within a broad cheating policy) will seem silly, as AI will be nearly ubiquitous and impossible to truly avoid using. It’ll be like being broadly against eyeglasses. The smart schools will realize this now and start to draft policy around how to use it ethically as a tool for learning, rather than simply trying to ban its use with no clear guidance.
For what it’s worth, I remember similar arguments about pocket calculators when I was in grade school in the 1970s. I recall a unit in a mathematics class where we used a calculator to get a result, with a clever twist: the answer requested was the word the digits appeared to spell when you turned the calculator upside down. A very creative way to incorporate a new technology seen as a potential threat! The more things change…
To summarize, AI is here. It is everywhere. It is only going to become more so. It is even being used right now to write this. We can’t avoid it. We have to learn how to live with it and use it properly, or go back to pencils and blue books, proctored exams, and a heavier reliance on live (unassisted) discussion and presentation.