Reconsidering School Now That Generative AI Isn’t Going Away

The availability of generative AI has answered a question that has been debated among the educators with whom I have worked for the last four decades. It has been asked in various ways, but it comes down to “What matters in learning?” The debate centers on two commonly given answers: either learning is an achievement, something we can see clearly demonstrated in a product, or it is a process. The “achievement” folks posit that students must demonstrate learning and that the demonstration must “meet the standard,” while the “process” folks believe it is what is done along the way that matters, and that one may have learned even when the product does not “meet the standard.”

One’s preferred answer comes down to the psychological theories upon which one grounds one’s practice. Behaviorists posit that we can only know what is happening within learners when we observe (and measure, or perhaps compare against a rubric) what they do or produce. Cognitive psychologists posit that observation gives us an indication of what is happening in learners, but that learning is what happens within brains and bodies when they interact with the world.

Generative AI is good at many things, including producing artifacts that resemble the products behaviorists accept as proof of learning. Students figured that out almost immediately and have been using it to create those products ever since.

The difficulty for educators is that generative AI is really good at mimicking. Humans also mimic, and mimicry is often a first step in learning. Notice my word choice: mimicry is a first step. Understanding comes long after one begins mimicking.

If we confuse mimicry with meaningful learning, we end up with less than desired results. This recalls the cargo cults whose members crafted landing strips out of local materials on isolated Pacific islands after World War II. They mimicked the structures and actions that had brought planes filled with supplies. (I first encountered this idea in Richard Feynman’s Surely You’re Joking, Mr. Feynman!, a book I highly recommend.)

We move beyond mimicry when we learn the reasons we do the steps and the details that are relevant. Cargo cultists connect their “headphones” to their “radios,” but learners realize there are details that matter: the materials used to construct those pieces and the other systems to which they connect, which together allow the effective communication that summons supply-filled planes.

Intention becomes important in the design and use of systems. When we have truly learned, we can identify our own intentions, the designed intentions of our tools, and the intentions we find through bricolage. This requires a more sophisticated and deeper capacity to know than mimicry does, and it allows for critical understanding (is the problem solved?) and for creativity (the solving of new problems).

Intentions and goals have long been recognized as part of human cognition, but these aspects of cognition have taken on increased importance as generative AI has caused us to reevaluate learning. (I recognize this is just the latest attempt by humans to figure out what makes us “special.” The more we seek to answer that question, the more we discover our abilities are part of the continuum of abilities life has evolved. Humans’ specialness fades away as we look closer.)

In their 2024 paper “ChatGPT is Bullshit,” Hicks, Humphries, and Slater make the point that “even if the chatbot can be described as having intentions, it is indifferent to whether its utterances are true. It does not and cannot care about the truth of its output.” This is a result of how large language models are constructed. They are built on probabilities, so we can conclude that the text generated by LLMs will (quite literally) probably resemble sensical human creations.
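To make that concrete, here is a minimal, hypothetical sketch of the sampling loop at the heart of text generation. The vocabulary, probabilities, and prompt below are invented for illustration (real models learn distributions over tens of thousands of tokens); the point is that each word is chosen because it is probable given what came before, not because it is true.

```python
import random

# Toy next-word probabilities, invented for illustration.
# A real LLM learns distributions like these over a huge vocabulary;
# nothing in this table encodes whether a continuation is *true*.
NEXT_WORD = {
    "the": {"sky": 0.5, "moon": 0.3, "answer": 0.2},
    "sky": {"is": 0.9, "was": 0.1},
    "is":  {"blue": 0.6, "green": 0.3, "falling": 0.1},
}

def generate(prompt: str, length: int = 3) -> str:
    words = prompt.split()
    for _ in range(length):
        options = NEXT_WORD.get(words[-1])
        if options is None:
            break
        # Sample in proportion to probability -- plausibility, not truth.
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

Run it a few times and it will sometimes print “the sky is blue” and sometimes “the sky is green”; the loop is equally content with either, because truth never enters the calculation.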

When I was a student, I had classmates who had not done the reading or the homework; I was sometimes one of them. When asked to answer questions or to join discussions, those students would make utterances that might or might not be true; they did not care about the truth of their output. We were bullshitting, and we knew it. All we cared about was that what we said appeared to resemble sensical creations. We were doing ChatGPT without the large language models.

So, what does this tell us about learning? Does it support either of the views of learning with which I opened this post?

As I reflect on a couple of years of navigating teaching with AI, I have become firmly convinced we need to recognize that learning is a process. We need to focus classrooms on activities in which our students engage with our curriculum. Before that, however, we need to realize, and help our students realize, that the work of learning is what matters. The products are fine, but if they do not result from students’ own efforts, then they have no value. I would argue the bullshitting my classmates and I engaged in decades ago (and that I observed in my students for generations) was effortful; we were learning how to game the system, at least for those parts about which we did not care.

Now that we can offload that part of schooling to technology, we (both educators and students) are probably better served by rethinking several aspects of schooling:

Motivation matters. If students are curious about problems, they are likely to engage with the lesson. Educators have focused on learning outcomes for a generation, and this is misguided. Our lessons must be focused on interesting and relevant questions or problems that cannot be easily answered or solved.

Learning requires effort. We have focused for a generation on making learning easy, and this is misguided. Too much struggle is a barrier to learning, but effort is needed to change the brain in a meaningful way. Good questions can motivate effort in a way learning outcomes don’t.

Standardized curriculum isn’t sustainable. In the 21st century, standards have been guiding schooling. Originally promoted as “high-quality instruction” for all, standards-based curriculum has become “the same” for all, and for many it is the least interesting curriculum. I do not advocate for tracking (the schooling I experienced), and I recognize the biases that can adversely affect marginalized populations when different types of curriculum are available, but students should be able to explore their interests to a greater degree than we let them. Students pursuing their interests provides a motivation we cannot impose.

The arrival of generative AI in our students’ technology toolboxes is going to change what we do as teachers and how we do it. It is easier than ever to mimic having learned in the ways schools have traditionally accepted as proof. The question for educators is “How do we rediscover learning as a worthwhile process, help our students understand that, and ensure our school structures support it in all learners?”

References

Feynman, R. (1985). Surely you’re joking, Mr. Feynman!: Adventures of a curious character. W. W. Norton & Company.

Hicks, M. T., Humphries, J., & Slater, J. (2024). ChatGPT is bullshit. Ethics and Information Technology, 26, 38. https://doi.org/10.1007/s10676-024-09775-5