Schools Are Using AI to Track What Students Write On Their Computers

More than 50 million K-12 students will head back to school in the United States this fall, and the vast majority of them will be tracked every time they type a word on a school computer. Under the Children's Internet Protection Act (CIPA), any United States school that receives certain federal funding must have an internet-safety policy in place.

Such policies vary widely, from simply blocking sites with potentially inappropriate content to much more comprehensive initiatives that rely on software companies such as Securly, Gaggle, and GoGuardian to bring troubling student communications to the attention of school leadership.

These companies offer Safety Management Platforms (SMPs) that use natural-language processing to sift through the millions upon millions of words students type on school computers. Each company has published case studies linking its SMP to intercepted messages that helped head off a potential school tragedy before it started. With school shootings and student suicide rates on the rise, such interception has real value. But it also blurs the line between student safety and student privacy.
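None of these vendors publish the details of their models, and their real systems are far more sophisticated than simple keyword matching. Purely as an illustration of the basic idea, a naive text-flagging routine might look like the sketch below; the category names and phrases are entirely hypothetical and not drawn from any vendor's product.

```python
# A minimal, hypothetical sketch of text flagging.
# Real SMPs use large lexicons, context, and trained ML models;
# this toy version only does phrase matching.
RISK_TERMS = {
    "self_harm": ["hurt myself", "end it all"],
    "violence": ["bring a gun", "shoot up"],
}

def flag_message(text: str) -> list[str]:
    """Return the risk categories whose phrases appear in the text."""
    lowered = text.lower()
    hits = []
    for category, phrases in RISK_TERMS.items():
        if any(phrase in lowered for phrase in phrases):
            hits.append(category)
    return hits

print(flag_message("I want to end it all"))  # -> ['self_harm']
print(flag_message("see you at lunch"))     # -> []
```

Even this toy version hints at the privacy trade-off: every message a student types must pass through the filter for the flagging to work at all.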

What do SMPs really do?

Girard Kelly, director of privacy review for Common Sense Media, believes that the advent and implementation of Safety Management Platforms normalize the idea of surveillance from a young age. His worry is that such normalization leaves kids ill-prepared to learn the importance of protecting their online data and privacy, a vulnerability that data-mining companies have exploited before, most notoriously in Facebook's Cambridge Analytica scandal.

Balancing student safety against student privacy is a high-wire act that has yet to produce a compromise satisfying critics on either side. With school violence, bullying, and diagnosed mental illness on the rise among students, it is more essential than ever that schools find ways to protect their students from potential tragedies. But administrations and the companies behind initiatives like Safety Management Platforms must strike a balance that respects student freedoms while still supervising ethically and identifying potential problems.

Informed consent and its effects

Should students know that they're being monitored when they use technology on campus? Proponents believe such transparency is only fair to student privacy and freedoms, but critics counter that the trade-off would be self-censorship and a loss of genuinely useful data. Companies such as Securly and Gaggle stand on the prevention side of the argument, believing that surveillance without informed consent saves lives.

They argue that the intentions of troubled students (and, in some instances, troubled teachers) won't surface with the clarity required for preventative action if those individuals know they're being watched. Potential tragedies would instead be kept off school communication systems entirely, leaving institutions in the dark when a problem arises. Even so, a balance must be struck, since such surveillance can go overboard and strip students of safe digital spaces where they can be themselves without fear of reprisal.
