Last Words: Suicide, Language, and the Data of Despair

When does ordinary misery turn to self-destruction? The language of suicide notes is helping professionals decide.

If there’s a heaven, there must be a special place in it for those who have lived through hell and who manage to salvage something from their own purgatorial nightmare to help others.

Consider, for example, Jean Baker. Spurred on by the suicide of her 21-year-old son, Brian Case, in 1999, Baker, who lives in Houston, became a teacher for special needs children. She has also written a book—Schizophrenia: Evolving from My Son’s Suicide to the Classroom—and works as an outspoken advocate for kids with mental illness. Still, she’s always looking for more ways to translate her son’s tormented life and death into something of permanence. “There had not been any accounting of his passing,” she says. “They didn’t even put it in the newspaper in the town where he died.”

Last spring, she found another way to make Brian’s death count for something. Tuning in to Talk of the Nation on National Public Radio, she caught an interview with John Pestian, a clinical scientist at Cincinnati Children’s Hospital Medical Center who is leading a project that is attempting to use computer technology to identify people who are suicidal. Pestian’s area of expertise is computational medicine—that is, applying technology to help physicians and clinicians make decisions about treatment. The work that he discussed on NPR was focused on using information gleaned from suicide notes.

Baker had few artifacts from the last months of her son’s life, when schizophrenia had rendered his world chaotic. But she did have the waterlogged message that police found in Brian’s pocket when his body was recovered from the Texas lake where he died. It began painfully: “Maybe my dying is for the best,” he wrote. It then devolved into the confused rambling characteristic of his illness before concluding with two clear and desperate questions: “Where does it end? How does it end?” He answered by drowning himself.

Pestian’s project was so compelling to Baker that, she says, “I tracked him down.” Now her son’s last words are part of an extraordinary collection of data: the language of approximately 1,400 people who have ended their own lives.

A computer databank may sound like a soulless repository for something so personal. But Baker sees it as a place where important remnants of Brian’s life—his words—have a place. Working with a research team at CCHMC, Pestian is looking for clues in language that can help reveal when a person is bound for self-destruction. In the complex, confounding mystery that is suicide, an early detection system like that could be revelatory. “This means the world to me,” says Baker.


More people in the United States now commit suicide than die in car accidents—about one every 14 minutes or so. In Hamilton County in 2012, there were more suicides than homicides—almost one every three days.

If you spotted a drunk driver or a guy in a ski mask with a shotgun heading into a 7-Eleven, you’d call for help. But short of seeing someone clamber over a bridge railing, how can you tell when a person is liable to harm himself? How can you tell when another human has moved beyond ordinary despair or sadness or frustration or confusion, and in defiance of his body’s biological imperative to survive, is at risk of ending his life? That’s the puzzle that has driven John Pestian for the past eight years. Pestian is director of the hospital’s Computational Medicine Center—a collaboration with the University of Cincinnati and Ohio’s Third Frontier (the state’s technology-based economic development initiative) that looks for ways to use technology to improve clinical practice.

“We don’t make decisions,” Pestian explains. “We bring together data so that the clinician can make a better decision.” But on the computer in his small office there’s a prototype of a tool that may, in the not-too-distant future, be in the hands of mental health workers. And it seems like science fiction.

On the screen is a Sim character that may very well be the most intuitive therapist this side of Good Will Hunting. She’s slim and young and—in this iteration—white. But she could be older, or African-American, or a man dressed in camo speaking with an Appalachian accent. “There’s a whole lot of research about how to change the virtual human for who you’re dealing with,” Pestian says. “It could be that kids want someone to look like a nurse, or [soldiers with PTSD] want it to look like a soldier.”

The goal of the Sim prototype isn’t really to create the ideal mental health worker; it is to create one that a patient will talk to. Because it’s a patient’s response—the turns of phrase, the vocal inflection, each time shoulders droop or nostrils flare—that Pestian and his team are mining for information, and that may someday help a nurse or doctor or therapist make a decision: Is this person a suicide risk?

Pestian says that the research started with a colleague’s frustration. “A friend—a psychiatrist—came in to my office,” he recalls. “He said, ‘We just don’t know what to do. We have 40 kids a week coming into our emergency room who are suicidal, and it’s just such a challenge to analyze and decide what to do with them.’ ” Some would be admitted for psychiatric care; others would be assessed, and because they didn’t seem apt to harm themselves, sent home. But a good number of the ostensibly low-risk kids would be back in the ER before long.

“That was the genesis of it,” Pestian says. “He said, ‘Can you help?’ ”

Pestian’s idea was to use linguistics-based data to see if it was possible to teach a computer to distinguish between someone who is genuinely driven to take his own life and someone who is not. And the one way to get the indisputable words of the former was clear: “We said to ourselves, ‘OK, let’s get some suicide notes, and see what we can do with that.’ ”

He turned to Edwin Shneidman and his colleagues at the University of California, Los Angeles. Shneidman, who died in 2009 at the age of 91, was a pioneering psychologist. Among other things, he introduced the idea of suicide prevention. “He’s the father of suicide research,” Pestian says, “a wonderful guy who, at 90, would go around in his nightshirt.” Shneidman’s work not only launched a thousand crisis hotlines, it shaped much of our current thinking about self-destruction. And his archives included an unusual collection: scores of suicide notes he had amassed over the course of his career, some dating back to the 1950s. As luck would have it, he also had a group of fake notes. Back in the 1970s, as part of his own research, Shneidman asked a group of men at a union hall, “If you were going to commit suicide, what would you write?”

The union hall experiment was, by contemporary research standards, ethically ambiguous at best. “You couldn’t do that today,” Pestian says. But the notes turned out to be important. “We took the real notes and the pseudo-notes and we said, ‘We’ll see if we can tell the difference.’ ”

That meant creating software for sentiment analysis—a computer program that scrutinized the words and phrases of half the real suicide notes and learned how to recognize the emotion-laden language. They tested it by asking the computer to pick out the remaining real notes from the simulated ones. Then they had 40 mental health professionals—psychiatrists, social workers, psychologists—do the same. According to Pestian, the professionals were right about half the time; the computer was correct in 80 percent of the cases.
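The article doesn’t spell out the algorithm, but the shape of the experiment is easy to sketch: train a text classifier on half the genuine notes plus the simulated ones, then score the held-out notes the same way the 40 professionals were tested. The snippet below is a minimal illustration of that idea, assuming scikit-learn, bag-of-words features, and placeholder text; it is not Pestian’s actual system.

```python
# Minimal sketch of a real-vs.-simulated note classifier.
# Assumes scikit-learn and placeholder text; not the actual research code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training texts: 1 = genuine note, 0 = simulated note.
train_texts = [
    "please look after mom. i'm sorry i have taken such a cowardly way out.",
    "i do not see myself getting well ever again. life is too great a struggle.",
    "if i were going to do it, i suppose i'd say goodbye to everyone first.",
    "i guess i would just tell my family i loved them and leave it at that.",
]
train_labels = [1, 1, 0, 0]

# Words and two-word phrases feed a simple linear model.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)

# A held-out note is scored the same way the professionals were tested:
# real or simulated?
held_out = ["please forgive what i am doing. my car is at the service station."]
print(model.predict(held_out), model.predict_proba(held_out))
```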

“So we said, ‘OK, we can figure this out,’ ” he recalls. “If the computer is taught how to listen, it will be able to listen to this database and say, ‘This sounds like it’s suicidal.’ Because there are patterns in the language that are the language of suicide.” Even if those patterns are not always apparent to a trained professional, the real note/fake note test held out the promise that a computer could learn to spot them.

“They’re there,” Pestian says.


Big research calls for big data, and Pestian has amassed it. Shneidman’s collection has been expanded with notes from all over, sent to Pestian by professionals who’ve read about his research in medical journals and folks like Jean Baker who heard about it via news stories. He even received 200 from an agency in Poland (“And I don’t speak Polish,” he says).

As notes arrived, they were scanned, uploaded, and transcribed, with names, dates, and anything else that might identify the writer stripped out. Then his team built machine-learning software—what most people know as “artificial intelligence”—to look for patterns in suicidal language.
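As a rough picture of that intake step, here is a small sketch that strips obvious identifiers from a transcribed note before it would enter the databank. The date patterns and name list are placeholders, not the project’s actual de-identification tooling.

```python
# Toy de-identification pass over a transcribed note.
# The name list and patterns are hypothetical placeholders.
import re

KNOWN_NAMES = {"alex", "jordan"}  # would come from the case record, not a fixed list

DATE_PATTERN = re.compile(
    r"\b(\d{1,2}[/-]\d{1,2}[/-]\d{2,4}"
    r"|(january|february|march|april|may|june|july|august"
    r"|september|october|november|december)\s+\d{1,2}(,\s*\d{4})?)\b",
    re.IGNORECASE,
)

def deidentify(transcript: str) -> str:
    """Replace dates and known names with neutral tags."""
    text = DATE_PATTERN.sub("[DATE]", transcript)
    for name in KNOWN_NAMES:
        text = re.sub(rf"\b{name}\b", "[NAME]", text, flags=re.IGNORECASE)
    return text

print(deidentify("Alex, the check I mailed on March 3, 1999 should cover everything."))
```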

On a recent morning, braced by coffee from the hospital cafeteria, Pestian shows me how it works. Choosing a numbered document from the list on his screen, he opens it: I hope you will look after Mom, instructs the writer. I’m sorry I have taken such a cowardly way out. I seem to be a beautiful misfit and I always will be.

“Aw,” he says. “Beautiful misfit.”

Then he begins to annotate what has been written, highlighting phrases—Please look after Mom; I’m sorry; beautiful misfit—and logging each one in a classification that appears in a pull-down menu. Is the writer feeling sorrow? Giving instructions to a family member? Those two are easily classified. But what about the beautiful misfit remark? Should the computer learn to recognize that as sorrow? Pride? Hopelessness? In an actual annotation process, three readers would go through each note, and their responses would be weighted.

That’s one of the things that separates this research from ordinary number-crunching. Pestian needed people—a lot of them—to read the notes and assign emotions to the words so that they could be digitally catalogued like shoe styles on Zappos.com. Over time, the computer would learn through the growing corpus of data that certain words and phrases signify “hopelessness,” for instance—or that a writer who says “Be sure to pay the gas bill” and another who instructs “Please look after mom” have the same mindset and share an emotional state with many others who have killed themselves.
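To picture how those human judgments might turn into training data, consider a small sketch in which three readers’ labels for a phrase are combined by a weighted vote. The label names, weights, and code here are illustrative assumptions, not the project’s actual annotation software.

```python
# Toy aggregation of three annotators' emotion labels for each phrase.
# Weights and label names are assumptions for illustration only.
from collections import defaultdict

# Hypothetical per-annotator weights (e.g., reflecting past agreement rates).
ANNOTATOR_WEIGHTS = {"reader_1": 1.0, "reader_2": 1.0, "reader_3": 1.5}

def combine_labels(votes):
    """votes: annotator -> emotion label for one highlighted phrase."""
    scores = defaultdict(float)
    for annotator, label in votes.items():
        scores[label] += ANNOTATOR_WEIGHTS.get(annotator, 1.0)
    # The label with the most weight becomes the phrase's training tag.
    return max(scores, key=scores.get)

phrase_votes = {
    "please look after mom": {
        "reader_1": "instructions", "reader_2": "instructions", "reader_3": "love"},
    "i seem to be a beautiful misfit": {
        "reader_1": "hopelessness", "reader_2": "sorrow", "reader_3": "hopelessness"},
}

training_pairs = [(phrase, combine_labels(v)) for phrase, v in phrase_votes.items()]
print(training_pairs)  # phrase/label pairs a classifier could later learn from
```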

He had an idea where he might find willing hands for the work. “The literature will tell you that, on average, six people are profoundly affected by one suicide,” he says. “So we called on those folks.” Pestian enlisted people who’d lost a loved one—165 volunteers—to help get through the initial body of material. Subsequent notes, like the three he received the week before our meeting, are continuously added. (And yes, he could use someone who knows Polish.)

The project took years to develop—not just because creating the software was challenging, but because “it’s a topic that’s a challenge to get used to.

“I’ve read every one of those notes,” Pestian adds. “If you let it, it’ll eat at you. So you have to put [them] down for a while.”


If you think that reading 1,400 suicide notes would be disheartening, you’d be right. Scan just a few and you can’t help but feel a rising tide of hopelessness, not to mention sorrow, guilt, and affection.

I do not see myself getting well ever again.

Life is too great a struggle.

Please forgive what I am doing.

I love you. I love you. I love you.

Sometimes there’s anger, too, as well as regret and recrimination. And often there are instructions to survivors about taking care of a decedent’s affairs. My car is at the service station or This check is to cover cost of burial—housekeeping details from beyond the grave.

But if you think that reading 1,400 suicide notes would be a profoundly illuminating journey into the human psyche, you would be wrong.

“Some of the people who annotated the notes were disappointed that they didn’t reveal more,” says Michelle Linn-Gust, past president of the American Association of Suicidology, who worked with Pestian to recruit volunteers for the project. “People think these notes are going to be confessional. Most are not. It’s ‘My jewelry goes to so-and-so.’ They’ve already checked out.”

Still, Linn-Gust’s volunteers were dedicated; despite the grimness of the task, few dropped out of the effort, she says, and some even asked for more work. Researchers haven’t enlisted survivors—people who have lost someone to suicide—very often. “I knew we wouldn’t have a problem getting people to do it,” she says. “We want to help.”

Linn-Gust, who lost her own sister when she was in college, has a PhD in family studies; she writes and speaks about coping and bereavement. She wasn’t surprised that the material in the study was not—on the surface—endlessly revealing. “My sister’s note said, ‘I know you love me; I just don’t love myself,’ ” she says. “It didn’t tell us anything we didn’t already know.”

What is surprising to her is the ingenious way Pestian’s team is using the notes. There has been a great deal of research on suicide risk factors such as depression, substance abuse, and family history. And today there are more ways to address these issues—therapy, family intervention, medication—than there were back in the 1950s, when Edwin Shneidman had to convince his contemporaries that someone with suicidal thoughts was not inherently insane. But, Linn-Gust says, “the field has reached a plateau. He’s working in a totally new direction.”

And Cincinnati, it seems, is an apt place to do it. Not because we have more of such deaths than usual (Hamilton County’s suicide rate—10.63 per 100,000—is slightly less than the state average), but because CCHMC has an Emergency Department that is tied in to the issues. Jacqueline Grupp-Phelan, the hospital’s director of research for the division of emergency medicine, studied children who came into the ER with suicidal ideation—that is, kids who admitted they were “having thoughts” about hurting themselves. Anywhere from a third to half of the patients she tracked in a given year concerned the medical staff enough to be admitted. Of those who were evaluated and sent home, 20 percent were back in the ER within two months. “A very high risk group of kids,” Grupp-Phelan says. Though initially they didn’t seem that way, even to trained professionals. And that was just a survey of kids who admitted to having dark thoughts. Boys in crisis aren’t usually forthcoming, she says. “They just kill themselves.”

Whether it’s being applied to adults or children, Grupp-Phelan says, what Pestian and his team are doing is trying to create “another way of getting at risk.” They’re trying to deliver insights that get past a child’s reluctance to talk; that won’t be compromised by an adult’s bravado; that can’t be distorted by a staff member’s poor night’s sleep the evening before.

In layman’s terms, the goal is for a clinician to be able to take a person in crisis and read him like a book. Even when they have to read between the lines.


Remember the Sim on Pestian’s computer? To me, the virtual human looks a whole lot like Lesley Rohlfs, the project coordinator for his lab. Maybe that’s because she’s sitting squarely in front of me, just like the Sim on the screen. The major difference being that she is very human and I’m looking at her framed by two slender silver cameras—one facing me, one facing her—perched on tripods.

Rohlfs has a degree in counseling and human behavior and a background in pediatric psychiatry; for several years she was on the staff of CCHMC’s inpatient facility in College Hill. “I worked with what we always called ‘the babies,’ ” she says, “patients as young as 2 who were admitted for all types of mental health issues. Things like depression, aggressive behavior, and—unfortunately—suicidal behavior.

“The burnout rate working directly with patients is very high,” she says by way of explaining how she came to apply for a position on a team where her colleagues would be database developers and software gurus. Going into the interview, she didn’t know much about the research. “When I found out, I had to have this job!” she says.

Several times a week, she takes her cameras to record interviews with a variety of patients who show up in the emergency room at CCHMC. She begins with a couple of standard mental health questionnaires. But what she tapes are the responses to five pretty simple queries at the end of the session:

Do you have hope?

Do you have any fears?

Do you have secrets?

Are you angry?

Does it hurt emotionally?

“We call them ubiquitous questions,” she says. “They’re not specific to suicide. You can ask them of someone who comes in with a broken arm or a sore throat.” The cameras collect the words she and the patients use, the pauses, the timbre of their voices, their body movement, the way emotions flicker across their faces. The recordings of Rohlfs will teach the Sim to be more lifelike; the recordings of patients will be used to identify “thought markers”—to distinguish the sentences, silences, gestures, and expressions of a suicidal state versus a non-suicidal one.
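What a “thought marker” might look like to the software is easier to picture as data: a handful of measurements from each modality, rolled into one vector a model could score. The specific features below are assumptions for illustration, not the study’s published marker set.

```python
# Hypothetical per-interview feature vector combining language, audio, and video cues.
# Feature names are illustrative, not the study's actual markers.
from dataclasses import dataclass

@dataclass
class InterviewFeatures:
    hopeless_terms: int          # language: emotion-laden words per response
    first_person_pronouns: int   # language: degree of self-focus
    mean_pause_seconds: float    # audio: silences between question and answer
    pitch_variability: float     # audio: flatness or timbre of the voice
    gaze_aversion_ratio: float   # video: how often the patient looks away
    smile_count: int             # video: facial expression events

    def to_vector(self):
        """Flatten the features for a downstream classifier."""
        return [
            self.hopeless_terms, self.first_person_pronouns,
            self.mean_pause_seconds, self.pitch_variability,
            self.gaze_aversion_ratio, self.smile_count,
        ]

sample = InterviewFeatures(3, 12, 2.4, 0.18, 0.6, 1)
print(sample.to_vector())
```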

Rohlfs’s CCHMC interviews will join those being done at three other locations: University of Cincinnati Medical Center (she’ll begin adult interviews there this winter), a hospital in West Virginia, and one in Canada. Ultimately, the Thought Marker project will contain data from 500 people, “the largest study ever for something like this,” Pestian says. And eventually the findings will be used in concert with data from another investigation—an exploration of the relationship between a suicidal person’s emotional state and their genetic traits.

It’s a far sight more complicated than teaching a computer to tell the difference between a real suicide note and a phony, but Pestian thinks the principles are the same. “We know we can tell from language who’s suicidal and who’s not,” he says. “When we combine video with audio, with language and genetics, does the suicidal person look differently than the non-suicidal person? Can we create a multi-modal test that looks at a person’s state and at a person’s traits and identifies suicidal people? That’s why this is so powerful.”
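As a toy illustration of that multi-modal idea, separate risk scores from each modality could be blended into a single number for the clinician. The weights and modalities below are assumptions; the article describes the goal, not the math.

```python
# Toy late-fusion of per-modality risk scores into one estimate.
# Weights are arbitrary placeholders, not values from the research.
MODALITY_WEIGHTS = {"language": 0.4, "audio": 0.2, "video": 0.2, "genetics": 0.2}

def fuse(scores):
    """scores: modality -> probability-like risk estimate in [0, 1]."""
    total = sum(MODALITY_WEIGHTS[m] for m in scores)
    return sum(MODALITY_WEIGHTS[m] * s for m, s in scores.items()) / total

print(fuse({"language": 0.8, "audio": 0.6, "video": 0.5, "genetics": 0.3}))
```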

Interacting with a Sim on an iPad programmed to calculate the odds of your self-annihilation does sound pretty Minority Report-ish. But Pestian and Rohlfs emphasize that it will be just one more way to help a doctor or social worker as they assist a patient—a collection of data that can be part of their decision-making. It will be a tool, and Rohlfs imagines it being used in a busy waiting room, while a patient is waiting for a face-to-face evaluation. “They’d be able to take that data in real time,” she says—data that offers a human caregiver an opinion about whether this man or woman, child or teen, is likely to harm himself at this point in time and whether they need to be protected from themselves.

Of course, having a tool that says this person is suicidal still doesn’t answer the most fundamental question: Why? The solution to that puzzle lies outside the scope of Pestian’s research, but it was never far from his mind as he went through the 1,400 notes.

“Have you ever been with someone who is dying?” he says. “We have a natural will to live; we kick ass to live. And these folks chose death. Why?”

He has his theories—about genetic variance, about bio-chemical changes and emotional stressors. But for now he’s teaching computers how to listen to voices from the past and the present, hoping that the work makes a difference in the future.

Originally published in the January 2014 issue.
