AI is all the rage right now. Students use it. Faculty complain about it. Bosses are trying to figure out which jobs can be replaced by it. We are all trying to figure out just how this technology will affect us. It is a challenging time, and responsible educators are actively participating in our collective negotiation of its role in our work.
In the middle of all of this, I am reading Brian Christian’s The Alignment Problem. I am a fan of Christian’s books, and this is my second time through this one after listening to it last winter. In the chapter titled “Transparency,” Christian addresses the “black box” that AI can be. “Black boxes” are a familiar concept: we see the inputs and we see the outputs, but the logic and operations within the box that transform one into the other cannot be determined. In some cases, that logic may be unknown to users but known to the designers; but with many machine learning tools, the designers (or, more accurately, the deployers) do not know the logic of the black box either. In these cases, we might know the recommendation, but we don’t know why it was made.
There are many reasons to be concerned when relying on such technologies. In this post, I am addressing only technologies that are black boxes, though many other decision-making processes, including those that rely exclusively on humans, are black boxes as well. If we cannot articulate a rationale and logic for converting inputs to outputs, then we have a black box.
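To make the idea concrete, here is a minimal, purely hypothetical sketch (in Python, not drawn from Christian’s book): the function named black_box stands in for any system whose internal rule is hidden, and the observer can only tabulate input/output pairs.

```python
# Illustrative sketch of a black box: to an outside observer, only the
# input/output pairs are visible, never the rule that connects them.

def black_box(x: float) -> float:
    # Hidden logic (hypothetical): the observer never sees this line,
    # only the values it returns.
    return 3 * x + 1 if x < 10 else 2 * x

# Probing the box: we can record input/output pairs and notice patterns
# (correlations), but the table alone does not tell us *why* each output
# was produced.
observations = {x: black_box(x) for x in range(0, 20, 2)}
for x, y in observations.items():
    print(f"input={x:2d} -> output={y}")
```

The point of the sketch is only that probing gives us a table of observations, not an explanation, which is where the next difficulty arises.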
One of the difficulties for those of us who seek to comprehend how inputs become outputs is the difference between prediction and understanding. When faced with a black box, we may be able to predict which outputs will result from which inputs (we can find correlations), but we do not really understand how those outputs were caused. It is only with that understanding that we can resolve problems with the black box’s predictions or extend and enhance what might be done with it.
Of the many tasks that can be accomplished with machine learning, generating answers to questions is one that educators find especially important. The work of teaching and the work of learning are about answering questions; they may be disguised as other prompts, but they are questions nonetheless. Here lies one of the problems with educating in a world where machine learning is in everyone’s web browser: machine learning is very adept at answering the types of questions we ask in many school settings.
The black box becomes a metaphor for how students interact with the answers that arise from machine learning. Students can produce answers to questions from the black box of AI: they create questions (or perhaps copy and paste questions from their instructors) and provide them as input to the machine learning tool. The output is a collection of words that the tool predicted would be an answer. We observe the data of learning (chosen answers, provided answers, essays, and so forth), but we do not know the logic or operations that produced them.
If you are a student, your job is to take the input and use your own logic and operations to produce the output that will be graded by your teacher. Adopting this approach is less efficient than relying on the black box, and it will interfere with the production of data in ways that may be difficult for your instructors to accept. Developing the logic and the operations that become your educational output is hard work, but it is the work of being a student.