Virtual Burial Plots: A Conversation between Kelly Christian and Jed Brubaker

Jed R. Brubaker is an expert in postmortem data. He wears many hats: he’s an Assistant Professor and founding member of Information Science at the University of Colorado at Boulder, he runs the Identity Lab, which studies how “identity is designed, represented, and experienced through technology,” and he is an academic partner at Facebook. In that role, he helped design and develop the Legacy Contact feature, which allows Facebook users to identify a close friend or family member to care for their profile when the account becomes memorialized after the user dies. Jed also gave a very enlightening TEDx talk, enticingly titled “You’re going to die. What will happen to your online life?”

In my own work as a writer, I focus on the intersection of technology and systems of power in relation to death and mourning in American culture. While I have written extensively on these subjects as they relate to the 19th and 20th centuries, I find myself returning to discussions that explore the impact that mourning and grief have on our contemporary digital lives. The internet has created an entirely new landscape onto which we map our lives and our deaths. This subject has been especially present in my life lately: in the past six months, I have unexpectedly lost several friends and my father. I have grappled with these deaths differently online, especially in relation to my dad, who had no online presence at all. My inquiry into grief online is driven by my own personal experiences and informed by my historical research.

For this conversation, Jed and I started out talking about what projects were on the horizon, if social media will ever know that we’re dead, and how the language of grief is so unique that even artificial intelligence can be taught to classify it.

This interview has been condensed and lightly edited for clarity.

 

Jed: We have this new paper that’s coming out this fall that might be the sexiest thing I can tell you. This doesn’t have anything to do with Facebook, but it’s a problem that I’ve had forever. As a researcher, I might want to design something to be as sensitive and as thoughtful and as appropriate as possible, but how do I know if you’re dead? This was an interest of mine from when I first started doing this work, and I’ve just never been able to do it. Over this last year, a graduate student and I did a massive, large-scale linguistic analysis from a computational perspective, rather than from a human coding perspective: what do memorial spaces look like? We needed to build a mortality classifier.

My job is just to do everything I possibly can in this space so that we know as much as possible. I don’t always specifically know what is going to be relevant to Facebook or not, and I certainly try to optimize those connections. And to be fair, Facebook, or any tech company, is thinking about what they should be doing right now, and they’re going to be thinking about the next five to ten years too. My job is to think about the next twenty to fifty.

Kelly: Totally, and within the context of shifting cultural norms too, I can’t imagine that some technology conglomerate is going to automatically respond to all of the nuances when it doesn’t have the foresight or ability to look even a year ahead, whether that’s due to funding, shifts in scope, and so on. Looking at the internet’s role in the way that we explore mourning and grief, it’s the same: all of it is being forged as we go along. I think that’s kind of exciting, and it’s the way your work is exploring it, though there are obviously going to be some messy missteps, like the issue at MySpace you outlined in your paper, where you said they just deleted people. With this in mind, I’m wondering if you can give a very basic introduction to what you meant by a mortality classifier: what it looks like and how you imagine it working in action.

Jed: Yes, so what we built is called a machine learning classifier. Machine learning is a form of artificial intelligence, and it looks nothing like SkyNet (laughs). Basically, you show it a bunch of things and you say: “This is a strawberry,” “This is a strawberry,” “This is also a strawberry,” “This is a strawberry.” You then show it a new thing and ask: do you think this is a strawberry? And sometimes it’s right. That’s a classifier. That’s the basic premise of this kind of artificial intelligence, which is where most of the interest in the field is focused right now. That’s why I chuckle when everyone talks about how artificial intelligence is going to do all of this magic, weird stuff or put us out of our jobs.
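To make the “strawberry” analogy concrete, here is a minimal sketch of that show-and-label loop in Python using scikit-learn; the fruit features, labels, and model choice are invented for illustration and are not from Jed’s work.

```python
# Minimal supervised-classification sketch of the "strawberry" analogy.
# The features (redness, diameter) and labels are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each example is [redness, diameter_cm]; label 1 = strawberry, 0 = not.
examples = [[0.90, 2.5], [0.80, 3.0], [0.95, 2.0], [0.20, 7.0], [0.30, 8.5]]
labels = [1, 1, 1, 0, 0]

model = DecisionTreeClassifier()
model.fit(examples, labels)            # "This is a strawberry", "this is not"

print(model.predict([[0.85, 2.8]]))    # ask: do you think this is a strawberry?
```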

I’ve often joked that you get to set up your Facebook account, say what your name is, what your gender is, where you went to high school, and you get to say all of these things, but for perhaps the most important attribute, whether you’re alive or not, there’s no place to say that. And after all, if there were an “I’m dead” button, when would you click it?

And so this is some of the more theoretical work that I’ve been doing, which is more of a straight-up critique of human-computer interaction. I have a paper that’s a postmodern take on human-computer interaction called “Post-userism.” It argues that we’ve basically been approaching people with this very modernist construct, and it really constrains how we do this work in our field. And death is certainly one of the kinds of research that gets brought into that theoretical argument. We have that problem, and so when I first started encountering these problems on MySpace, I was just struck with this sense that, well, it’s strange. We should do something about it. With a background as an engineer, I didn’t necessarily know what we should do about it. But I started thinking about how you would go about doing something, and quickly realized that you couldn’t. If you were writing a database query, you’d need some attribute that lets you know whether someone is alive or not if you’re going to exclude them, right? The 2009 “Reconnect” disaster that I talked about in that TEDx talk [in which Facebook asked users if they would like to reconnect with former contacts, including in some instances people who had died] was a situation I could see from the computer science side of my head: yes, it was nasty, and yes, it was bad, but there’s absolutely no way they could have prevented it. There’s no database query that any engineer could have written that would have solved that problem.
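As a hypothetical illustration of that database point, the sketch below assumes an invented user table with no field recording whether someone has died; the filter an engineer would want to write for a “reconnect”-style feature has nothing to act on.

```python
# Hypothetical sketch: the user records below are invented, and the point is
# that there is no is_deceased attribute to filter on.
users = [
    {"id": 1, "name": "A.", "last_active_days": 400},
    {"id": 2, "name": "B.", "last_active_days": 3},
]

# The query an engineer would want to write for a reconnect suggestion...
suggestions = [u for u in users if not u.get("is_deceased", False)]

# ...quietly includes everyone, because that attribute was never collected.
print(suggestions)
```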

So, if the problem is that I can’t say I’m dead, then you need some other way of getting that data point into your data ontology. And so in our classifier, what we did is we turned to expressions of grief. I had done some work in 2012 and 2013 looking at how severe emotional distress was linguistically different from the kind of perennial expressions of grief you might find in these spaces. There’s a category that is different than the rest: they’re extremely distraught. But one of the things I always wanted to do after that study was actually compare post-mortem messages to non-memorialized text [everyday text not related to grief], to see if we could get a computer to learn the difference between them. It turned out to be a pretty straightforward thing to do, and it’s helped us get a better understanding of how the messages people write on social media compare to the kind of language we use in obituaries. We have some comparisons in there too. The whole point here is that we started off with a bunch of questions: could we take a memorialized profile, take the messages, and figure out if that profile belonged to someone who is dead? The answer was yes, and with high reliability. In this literature, you’re trying to get something between 65 and 90% accuracy, and we are well above 85%.
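For readers who want a picture of what such a text classifier might look like, here is a toy sketch in Python; the example messages, the TF-IDF features, and the logistic regression model are stand-ins for illustration, not the actual data or methods used in the study.

```python
# Toy sketch of a post-mortem vs. everyday text classifier.
# The example messages and the model choice are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "rest in peace, we will miss you so much",
    "thinking of you today, gone too soon",
    "lol see you at the game tonight",
    "just got coffee, running late again",
]
labels = [1, 1, 0, 0]  # 1 = memorial / post-mortem, 0 = everyday

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(messages, labels)

print(clf.predict(["we miss you every day"]))  # expect the memorial class
```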

Kelly: Wow, that’s impressive.

Jed: It turns out that grief language, bereaved language, funerary language as it’s sometimes called, is just very, very distinctive. We then were interested in how quickly we could figure out whether a profile belonged to someone who was deceased, because the news gets out really quick. You’d want to respond sensitively not after a week or a year, but within a couple of hours. We found out that in most cases we could do it with just a couple of messages, which meant that we could do it really quickly. Then we decided to see if we could classify the messages themselves instead of the profile, because some people write things like “You know, I’m thinking of my grandmother today” that aren’t exactly on a memorial profile, and we found out that yes, we can do that too. It was maybe the most successful project, and in part it’s just because of the way we speak. You know, in previous work I’ve talked about the difference between funerary language and social media language, and that’s certainly true, but it turns out that funerary language is just so unique in English usage that it really stands out. The way these classifiers work, particularly when they’re looking at language, is that they’re looking for unique words, certain words. There are words that we use around funerals that just don’t exist otherwise.

Kelly: From the way that I imagine my own projects, I always wonder about the larger shifts, such as how we’ve arrived at the language that we use now. So many of our mourning norms come from the last 200 years. I wonder about the evolution of language and what those shifts might look like in the future as these things change, and whether technology itself is pushing those linguistic shifts or whether they’re coming directly from the kinds of words that social media brings us.

Jed: Yeah, and you know, what I’m seeing is that to some extent this analysis is not fantastic at answering that question specifically, in part because the technique we use focuses a lot on novelty: if you use a word that is really rare, even if you used it only once, it will tap into the rareness of it.

So for example, there’s a bag-of-words model, which is, well, let me explain what that means: you just count how often each word shows up in the messages, ignoring the order. Before someone dies, the most common words used are “love,” “hate,” “lol,” “just,” and “I’m.” And after they die, it’s “love,” “miss,” “no,” “just,” and “like.” That doesn’t give you much, but when you get into something called a bigram, which is when you look at two words adjacent to each other, the most common pre-mortem bigram is “love love,” but then you get [the most common postmortem bigram] “love miss.” So “miss” actually is a word, like “love,” that is present both pre- and postmortem, but “miss” is a really powerful key term for postmortem language, which is interesting because you can imagine all kinds of situations in which you might say “miss.” It turns out it’s a strong indicator. I think that’s maybe because on social media, we don’t miss people; they’re always there.
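A small sketch of what those unigram and bigram counts look like in practice; the example posts are invented and only meant to echo the “love love” versus “love miss” pattern Jed describes.

```python
# Count unigrams (bag of words) and bigrams over a few invented wall posts.
from collections import Counter

pre_mortem = ["love love you so much", "lol just got here", "love you"]
post_mortem = ["love and miss you", "miss you so much", "we miss you"]

def unigrams_and_bigrams(posts):
    unigrams, bigrams = Counter(), Counter()
    for post in posts:
        words = post.split()
        unigrams.update(words)                 # bag of words: order ignored
        bigrams.update(zip(words, words[1:]))  # adjacent word pairs
    return unigrams, bigrams

print(unigrams_and_bigrams(pre_mortem)[1].most_common(2))   # e.g. ("love", "love")
print(unigrams_and_bigrams(post_mortem)[1].most_common(2))  # e.g. ("miss", "you")
```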

I have some other work that looks at this too. There’s a way of looking at what’s called “linguistic style,” and then syntax, so I look at those as well. For example, second-person pronouns are more prevalent postmortem. Messages are longer postmortem, so the length of the message we write on someone’s wall ends up being a big cue as well; we have more to say. Tense is an interesting thing too. I know the results from the emotional distress analysis better, but people who are extremely distraught don’t use past tense the same way as people who aren’t. They’re still working through it, and so in the way they use language there’s a lot more reminiscence. When there’s less emotional distress, the word “we” comes up a lot: “We as a community are grieving your loss.” If you’re emotionally distraught, you’re not there yet, so it’s a lot of “you and I.”
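As a rough illustration of the stylistic cues mentioned here, the sketch below computes message length, pronoun counts, and a crude past-tense proxy; the word lists and the “-ed” heuristic are simplifications for illustration, not the measures used in the research.

```python
# Simple stylistic features: length, pronoun use, and a crude past-tense proxy.
def style_features(message: str) -> dict:
    words = message.lower().split()
    return {
        "length": len(words),
        "second_person": sum(w in {"you", "your", "yours"} for w in words),
        "first_person_plural": sum(w in {"we", "us", "our"} for w in words),
        "past_tense_ish": sum(w.endswith("ed") for w in words),  # rough proxy
    }

print(style_features("We as a community are grieving your loss"))
print(style_features("I talked to you yesterday and I still miss you"))
```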

Kelly: It’s super interesting for me because it’s pretty far outside of the work that I do. For me, it definitely pushes on some of the boundaries I imagine around these large-scale social shifts, whether through a historical or even a future-looking lens. I think it’s interesting to reflect on the fact that, for a long time, there was a kind of institutionalized grief, or set of expectations for how we ought to grieve. There was a way of bringing grief into all parts of our lives that shared our pain with the world, even if it was based on our identities. This was especially rigid for women, and the expectation of how women should and ought to mourn is one particular example of how mourning has changed: it’s less prescribed. And with access to multiple interfaces to technology, whether it’s the internet or our phones, that’s how we have access to the people in our lives now. I think this is kind of the next frontier of how we manage grief.

Jed: People say, “It would be so great if technology could do x, y, and z.” X was creating a collection of photos, and Y was getting a bunch of photos together and putting them in a collection, and Z was, “it’d be so awesome if we had the photos, it could be like a scrapbook.” The lack of creativity just baffled me; people were just so limited. It was about that time that I started to realize that today’s big data is tomorrow’s big postmortem data, and started to question how it would be used. We’re about to be on this big frontier of postmortem data. We don’t know what to do with it. But some people keep creating things that are so Black Mirror-esque it’s disgusting.

It was so bizarre, and I have enough of a background in distributed cognition and theory of mind to know that these kinds of AI embodiment things are not going to happen in the way that people think they’re going to happen. It’s not how our brains work. But then I read this piece about a company called Luka. There’s a really lovely longform Verge article called “Speak, Memory” that tells the story of a woman, Eugenia Kuyda, who sought to basically create a chatbot that would let her continue to chat with her dead friend. And so I used to always present this Black Mirror stuff as if it were creepy, and everyone would tell me it was creepy. I used to always let students off the hook and say, “Hahaha, but you know, actually none of this stuff actually works.” Then, about a week before I was going to give the “Hahaha” thing again, I saw the article, and the truth is that now it does work. So we now need to start having a conversation about how we want this to work.