Nolan heard voices that told him to hurt people. But you wouldn’t know it to look at him. We meet at Cincinnati Children’s Hospital Medical Center (CCHMC) in Avondale. Nolan’s psychiatrist, Drew Barzman, M.D., director of the Child and Adolescent Forensic Psychiatry Service at CCHMC, introduces us. In Barzman’s office, Nolan sits across from his grandmother and legal guardian, Angie. They ask that we use only their first names for privacy purposes.
Nolan is wearing a Star Wars shirt and blue jeans. At 17 and a high school senior, he’s baby-faced and pleasant; he also struggles with serious mental health issues. “Two and a half or three years ago, I started hearing voices in my head, seeing hallucinations, things like that,” he says. “I kept it a secret because I wasn’t sure how people would react, but eventually it got really bad, so I started telling people.”
He told his school counselor, who contacted Barzman. The voices told Nolan to harm himself and others. I ask him what it was like. Most of us can recall certain voices and hear them inside our heads: distinctive ones, like our mother’s, or a prominent actor’s, like Morgan Freeman’s. But it wasn’t like that for Nolan.
The voices sounded like people standing in the room with him, sometimes men and sometimes women, but no one’s voice in particular. “It’s like someone is standing there talking to me face to face,” he says. “In the beginning I was a little confused as to what it was, but then I realized something’s probably wrong.”
That was more than a year ago, and under Barzman’s care Nolan took part in a special pilot program at CCHMC. A fast-track interview with Nolan was annotated by Barzman and his colleagues and fed into a natural language artificial intelligence (AI) system designed to quickly analyze a teenage student’s responses to determine his or her level of risk for school violence.
Barzman says the goal of using the AI is to move young people with violent impulses into psychiatric care faster. He says he wants to improve the objectivity, accuracy, and efficiency of school violence risk assessments and describes the AI as a sort of digital assistant, helping clinicians speed up their analyses of psychiatric assessments. “The technology will improve the scalability of such assessments,” Barzman says. “It will help teens stay in school by identifying ways to help them, and it will help parents and schools understand how to help students stay in school and avoid legal problems.”
Barzman says the system will need to be tested on 336 interview subjects before it’s ready for general use, which he expects to happen by the middle of 2020 at the latest. More than 200 local subjects have taken part in the CCHMC study so far. Participating schools refer students to the study when a student’s behavior becomes a concern.
There are two parts to the risk assessment system that Barzman has built with his project partner Yizhao Ni, an assistant professor at CCHMC’s Department of Biomedical Informatics. The first part is the interview, which has been honed by Barzman to be quick and incisive, between 15 minutes and an hour. The questions focus on students’ experiences that are statistically relevant to their risk of school violence. The content of the interview is transcribed, and the text is fed into an AI program that looks for linguistic descriptors—tells—in the child’s wording and phrases.
Programming the AI is at the core of Ni’s work. His field, informatics, deals with looking for patterns in medical information and using those patterns to make predictions. “Those questions have been predefined so we have a starting protocol,” Ni says. “Then, when we ask these questions, our computerized algorithm analyzes the answers. We record the interview and the interview is transcribed into text. The computerized algorithm analyzes the text file to identify clues in the text file.”
The student, he explains, might say, “I want to burn down my house.” The computerized algorithm can identify those key clues and phrases and “aggregate the interview evidence in order to predict whether this student is at a high likelihood for school violence.”
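CCHMC’s actual model is proprietary and unpublished, but the general approach Ni describes, spotting weighted “clue” phrases in a transcript and aggregating them into a risk signal that a clinician then reviews, can be sketched in a few lines. Everything here is invented for illustration: the phrase list, the weights, and the threshold are hypothetical, not the study’s real features.

```python
# A minimal, hypothetical sketch of phrase-clue aggregation.
# The phrases, weights, and threshold below are invented for
# illustration; CCHMC's real features and model are not public.

RISK_PHRASES = {
    "burn down": 3.0,
    "hurt": 2.0,
    "weapon": 2.5,
    "hate school": 1.5,
}

def score_transcript(text: str) -> float:
    """Sum weighted phrase matches found in an interview transcript."""
    lowered = text.lower()
    return sum(weight * lowered.count(phrase)
               for phrase, weight in RISK_PHRASES.items())

def flag_for_review(text: str, threshold: float = 3.0) -> bool:
    """Flag a transcript for clinician review; the final call stays human."""
    return score_transcript(text) >= threshold

transcript = "I want to burn down my house. I hate school."
print(score_transcript(transcript))  # 4.5
print(flag_for_review(transcript))   # True
```

A production system would learn its phrases and weights from annotated interviews rather than hard-coding them, which is what lets the model improve as more subjects are added.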
Ni describes the AI slowly, and he carefully considers the words he uses. He’s selective in his speech, too, I suspect, because that’s his business. In computer programming, the difference between the right words and the almost right words is the difference between the algorithm enlightening the team and a computer just lighting up.
Ni’s AI is narrowly focused on a particular job, more like Google Maps than HAL from 2001: A Space Odyssey. Google Maps works by using baseline data like maps and speed limits as well as the feedback it receives from our phones. In real time, it asks, “How fast is Dave driving?” and “What routes are the other cars taking to arrive at their destinations?” In this way, the algorithm learns and narrows its data set to indicate what’s likely the fastest route.
In a similar way, the more it’s used, the better Ni’s AI gets at identifying phrases and words that correlate with a student’s propensity for school violence. When you consider that a student has already been flagged as a concern by his or her school administration and might mention arson, as in Ni’s example, it sounds plain on its face. But it’s not that simple.
As of now, Barzman and Ni’s system is a pilot study in its infancy. Barzman says they need to process more interviews in order to arrive at a proof of concept. But, so far, the AI risk assessment has done much better than other systems that have been used in this way. Ni cites an American Psychological Association study indicating that previous attempts to systematically predict violence in children have been failures, accurately predicting school violence less than 50 percent of the time. That coin flip contrasts sharply with Barzman and Ni’s system, which currently has a 93 percent success rate. That is to say, the results of the fast-track interview, when processed by their AI, have agreed with lengthy, clinical interviews 93 percent of the time.
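The 93 percent figure is an agreement rate: the fraction of cases where the fast-track AI assessment matched the forensic team’s judgment from a full clinical interview. The metric itself is simple to state; the labels below are invented, since the real study data is confidential.

```python
# Hypothetical illustration of the agreement metric: the share of
# cases where the AI's flag matched the clinicians' judgment.
# The label lists are invented; real study data is confidential.

def agreement_rate(ai_flags, clinical_flags):
    """Fraction of cases where the two assessments agree."""
    matches = sum(a == c for a, c in zip(ai_flags, clinical_flags))
    return matches / len(ai_flags)

ai     = [True, False, True, True,  False, False, True, False, True, False]
clinic = [True, False, True, False, False, False, True, False, True, False]
print(agreement_rate(ai, clinic))  # 0.9
```

Note that raw agreement says nothing by itself about false positives versus false negatives, which is one reason the study keeps a clinician in the loop.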
“The 93 percent correlation would be from the forensic team who collect all the collateral, or personal, data about the student and from the parent or the family member who is the guardian,” says Barzman. “We then fine-tune the questions we’re going to ask the student. The AI will simply analyze the recorded student interview transcript, and then we’ll see how well it matched the team. We’ve done other analyses since [arriving at the] 93 percent, and it seems to be improving with each set.”
The AI’s big advantage, then, is the speed at which it can arrive at results. Ni says school violence rates have risen over the past decade and cites a 2016 Centers for Disease Control and Prevention study finding that 20 percent of students report being bullied at school, a predictor of school violence. The idea is to address what Ni describes as a critical need for rapid analyses of student interviews to get students into therapeutic treatment before there is an episode of violence—to stop everything from fist fights to school shootings before they occur.
The AI prediction isn’t a simple yes or no answer, says Barzman, but more of a flag for the young person’s potential for violent behavior. “The primary purpose of the AI algorithm is to present this evidence to mental health professionals so that they can make the correct decision efficiently,” he says. “That being said, the final decision is still a clinical judgment rather than an AI prediction.”
Since development of the AI began in 2016, the CCHMC program has been continually improved and a variety of applications have been discussed, including presenting the interview as an iPhone app that might allow school personnel to administer it. That idea was scrapped, Barzman says, as he believes the interview is best conducted by psychiatric professionals in order to maintain the integrity of the results.
“We’re not thinking that the AI is going to take over for the human interviewer,” Barzman says. “We’re thinking the AI will simply augment and supplement what the interviewer is doing and help the interviewer in real time as a medical tool.”
The risk assessment interview questions can be divided into three parts: a generalized section on aggressive tendencies, a section that focuses on a subject’s school relationships and attitudes, and a customized set of questions written by Barzman’s research team after the initial referral, using the biographical information the team is given.
The actual interview questions are the intellectual property of CCHMC and can’t be published or even reviewed by the media. Instead, I ask to participate as an interview subject so I can understand the kinds of questions being posed and how the test achieves its reported level of insight. After some internal discussions, I’m permitted to sit with Barzman for one of the interview’s three sections.
I expected the interview questions to have a certain immediacy, more like the stereotypical cinematic trope of the psychiatrist asking about dreams and my mother while flipping through Rorschach prints. But the questions are actually arranged to compel you to talk about high school peer relationships, your family dynamic during adolescence, and your run-ins with high school administration. I don’t know that I would have asked to do this had I known my past three decades of personal growth would count for so little in the context of the assessment. But it’s geared to get high school kids to talk, and that’s what it does, even if your high school self is somewhere beneath gray hairs.
Without revealing the questions, I can say that it gets very personal very quickly. I’m surprised at the volume of biographical information I yield in the space of 40 minutes, as well as how much introspection the interview inspires. Looking into the abyss of high school means getting to know yourself much better. There’s nothing magical or tricky about the questions used, but between a carefully and concisely worded roster of questions and a non-judgmental interviewer such as Barzman coaxing the answers, it’s clear why so much detail can be gathered so rapidly.
Nolan has been diagnosed with schizophrenia and is now being treated with medication as a result of his participation in the CCHMC study. He still hears the voices every couple of months but, according to Barzman, has significantly improved. Nolan and his grandmother say that Nolan’s school and his peer group have shown a great deal of compassion for him and he hasn’t experienced negative reactions as a result of his diagnosis.
It’s worth noting that Nolan decided to let the school and his friends know about his diagnosis. The results of Barzman’s interviews are kept private, even from school faculty, though Nolan says his choice to go public with his diagnosis has been met with offers of help and understanding by nearly everyone at his school, which is not being identified for privacy reasons.
“They don’t treat me any different than they used to,” Nolan says. “They still know that I’m me.” Angie agrees: “When he says, ‘I need to leave,’ they understood. So he didn’t have to get up and say, ‘I’m hearing voices. I gotta get out of here.’ He could just go to his counselor and they would call me and I can pick him up.”
It’s important to remember that Barzman and Ni’s cooperation with the schools works in both directions. Dialogue between the CCHMC staff and the schools is the project’s backbone. The schools flag students as candidates for the study, and without their participation the study would be unfocused—combing through a random sampling of students. Likewise, the value the study provides is that it takes a young person whose behavior is of concern and zooms in on exactly what’s going on, sorting kids who might have had a bad day and uttered a threat they later regretted from those who need immediate psychiatric help.
If Nolan had been judged to be at risk for imminent violence, CCHMC would have advised the school of its concerns. “Most parents sign a release for us to talk with the school,” Barzman says. “If there is a specific threat, we have a duty to warn and protect, and that duty doesn’t violate privacy laws.”
Barzman says he wants to be sensitive to both a student’s privacy and the possibility that the AI could be used inappropriately for profiling. That’s one of the reasons the iPhone version of the study was halted—the hospital felt the questions were best administered by psychiatric professionals.
Clinical supervision is also needed, Barzman says, because the AI’s 93 percent accuracy rate means that 7 percent of the cases looked at could be false positives or false negatives. Administration and follow-up by a psychiatrist are the check on this gap in accuracy. And Barzman says the accuracy is improving. “One advantage of AI is that it is able to learn and improve its prediction over time,” he says. “We improved the accuracy from 91 percent to approximately 93 percent by adding 30 new subjects.”
There are procedures built into the study to avoid a school prejudging a student and taking disciplinary action against him or her on the basis of the AI’s results. “We protect students by disclosing risk factors and protective factors rather than risk predictions or risk levels,” says Barzman, explaining that they share information about the student’s specific issues, such as severe impulsiveness resulting from ADHD, and how the school can help him or her deal with those issues.
In the end, Ni and Barzman hope to use the AI to halt violence before it happens. Quick psychiatric interventions are key to making this theory a reality. “We expect to see fewer students being sent to emergency rooms for evaluations,” Barzman says. “Ultimately we expect to see a significant reduction in bullying and school violence and fewer expulsions and suspensions.”