Facial Recognition Technology and Schools

I was recently asked about the potential of using facial recognition technology on campus. That sent me back to the folder of “old stuff” I maintain, and to a blog post I had abandoned almost five years ago, when the pandemic pushed the issue aside for the people who had me thinking about it at the time.

Facial recognition technology (FRT) is rapidly making its way into schools, promising enhanced security, automated attendance, and even insights into student engagement. While lauded for its efficiency and potential safety benefits, its widespread adoption raises profound ethical questions, challenging the very nature and purpose of education. We must critically examine its place in institutions meant for learning and development.

Dehumanization and Systemic Bias

At its core, FRT reduces students to “statistical images,” computationally extracting facial features without discerning the full spectrum of human emotion—a fundamentally dehumanizing process. This “mechanistic gaze” can even pressure students to contort their expressions to be “readable” by the system. Moreover, FRT foregrounds fixed attributions of race and gender, and it has a history of misrecognizing non-white faces because of biased training datasets. Even with improved accuracy, classifying students into racialized or gendered categories can exacerbate existing discrimination, reviving debunked “race science” and promoting racism within schools.

Erosion of Autonomy and the Right to Obscurity

Facial data is inescapable: because students are always connected to their faces, they are subject to constant, permanent surveillance. This undermines genuine consent; “opt-out” mechanisms often require an initial scan, rendering them ineffective and raising coercion concerns in school settings. Such pervasive monitoring eliminates “practical obscurity,” removing a student’s ability to blend in or operate “under the radar”—a legitimate coping strategy for some children and young people developing their social identity. The inherently coercive nature of schools, coupled with dress codes, makes obscuring one’s face difficult, leading to fears of increased authoritarianism and “mission creep” that can impede human flourishing.

Cascading Automation and the Oppression of Marginalized Groups

FRT initiates a “cascading process of automation,” where passive data capture leads to automated decision-making, potentially displacing human judgment in critical areas. This creates new “actionable knowledge” about individuals, categorizing them based on biometric and psychographic data. It poses a significant ethical threat to marginalized groups, including racial minorities and queer/trans students. Data-driven systems, built on norms and discrete categories, can misrepresent or disadvantage those whose lives do not fit neatly into them, reproducing existing social hierarchies. Technical fixes, such as training FRT systems on more diverse datasets, only make discriminatory tracking more accurate, thus increasing harm.

Conclusion

Ultimately, the “enormous risks” of facial recognition in schools likely outweigh its “meagre gains”. Addressing technical flaws does not resolve the core ethical issues of othering, oppression, and coercive control. Schools should not become sites where local communities are desensitized to automatic identification, profiling, and potential discrimination. We must fundamentally question if this “societally dangerous” technology has any justifiable place in education at all, rather than accepting its “unhindered passage into school.”

This post was largely informed by: Andrejevic, M., & Selwyn, N. (2020). Facial recognition technology in schools: Critical questions and concerns. Learning, Media and Technology, 45(2), 115-128. DOI: 10.1080/17439884.2020.1686014