The Facebook Suicide Algorithm or: Getting Closer to Getting Further Away

Recently, Facebook announced a new algorithm that's supposed to spot suicidal behavior. What I'm about to present isn't a claim for or against this, and it doesn't have much to do with my philosophy of suicide. Rather, I'll analyze the technology based on the McLuhan-ian view of technology as extensions of man. My purpose is to present this analysis and let people decide whether this technology is worthwhile. Spoiler alert: I think the conclusion is that it's bad.

First off, here's McLuhan's basic theory. When McLuhan talks about 'media', he means any technology. Any technology is an extension of a function of ours. A 'weapon' isn't something that sprang out of nowhere; every weapon is an extension of our ability to hurt other people. Another integral point is that every extension is more efficient at achieving its end, but entails less involvement.

A hammer is an extension of our ability to hit things. What the hammer does and what the hand does when they strike the nail isn't any different. The difference is in efficiency and involvement. The hammer is better at driving the nail; it can sink it into the surface more quickly. Once we use the hammer, we're also less involved in the process. This is vaguer, but what it means is that our experience is limited: when we drive the nail with the hammer, we don't feel the nail.

To use the weapon example, think of the atom bomb. It is just an extension of our ability to cause destruction, only far more destructive than a fist hitting a board. When you hit something with your fist in order to destroy it, you're deeply involved in the process: you feel the surface of the object being destroyed, and the object has to be close to you for your fist to reach it. The atom bomb makes us less involved, since we don't feel the surface of the buildings being destroyed. We don't even see the victims, since we have to drop the bomb from far away. This explains why technology leads to far deadlier wars: people are less involved in the act of killing.

Of course, it’s possible this is not exactly what McLuhan meant. His writing can be cryptic, but this is the framework I’m working with here.

Now, for the algorithm. People have the ability to reach out to people they consider in need of help. In our case, being suicidal means needing help; life's positive value is an axiom for many. Currently, users can report posts they consider problematic – by that, I mean posts containing signals of 'self-harm' or suicide. I'm not sure this can be called an extension of our ability to reach out, since it is already embedded in a technology – Facebook, which is an extension of our social circle/neighbourhood. What the algorithm does is search for these signals of 'self-harm' and report them, instead of users doing it.
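To make the mechanism concrete, here is a deliberately crude sketch of what 'search for signals and report them' could look like. This is not Facebook's system – theirs is a proprietary machine-learning classifier whose details are unpublished – and the phrase list and function names below are invented purely for illustration:

```python
# Hypothetical sketch of automated flagging. The real Facebook system is a
# proprietary ML classifier; this phrase list and logic are invented.

DISTRESS_SIGNALS = [
    "want to die",
    "end it all",
    "no reason to live",
    "self-harm",
]

def flag_post(text: str) -> bool:
    """Return True if the post contains any known distress phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in DISTRESS_SIGNALS)

posts = [
    "had a great day at the beach",
    "I just want to die, nothing matters anymore",
]

# The machine, not a person, decides which posts get reported.
flagged = [post for post in posts if flag_post(post)]
```

The point of the sketch is structural: whatever the real model looks like, the reporting decision is made by a function scanning text, not by a person reading it.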

Our ability to offer help is extended via this algorithm. It serves the same function, yet unlike a single person it scans thousands or millions of posts a day. This alone makes it more efficient, since no post will go unnoticed and every distressing signal will be reported. In general, people will report a distressing post only if it is explicit. A show of hands: how many of you have had people reach out to you because you expressed something sad? By 'reaching out', I don't mean commenting but engaging in conversation. If our current methods were efficient, we wouldn't create an algorithm to do this. We wouldn't feel the need to extend this ability if we did it right, just as we don't have a machine to extend our ability to chew, because our teeth work.

Now comes the bad side. Extensions of ourselves make us less involved, which is good when the experience wasn't worth much; no one is going to miss the pain of hitting a nail with a bare hand. In this case, the algorithm makes us less involved because we're no longer reaching out as a person. Many in Sanctioned Suicide mocked this. We're less involved since we're no longer giving personal feedback, seeing the distressing signals with our own eyes, and containing them. We don't contact the person, hear what they have to say, or hear their response to our attempts at help. Although this algorithm will be more efficient at finding distressing signals, we will be less involved in the experience of reaching out.

The question is, is this bad? My answer is, yes.

Involvement is critical when it comes to personal issues; otherwise, we'd all confess our sins to Cleverbot. A common complaint against psychotherapy is that the therapist isn't actually involved and doesn't really care: it's a profession for them, and they ask questions for the salary. The whole idea of caring demands involvement. For someone to care for us, for our troubles to mean something to them, they need to be involved in our life. They need to find our troubles affecting and consider them important. Try reading about a serial killer and then watching an interview with him. In the second instance, you're more involved with this person: you see him and hear his voice. Empathy demands involvement, since we can't be empathetic unless we imagine ourselves in the position of the person suffering.

The algorithm, by making us less involved in the process of reaching out to people, undermines itself. By removing ourselves, we remove the most crucial thing. The basis of reaching out is that someone actually cares about your troubles and wants to be involved in getting you through them. Remove the person who cares, and there is no 'caring'. An algorithm cannot care; it is not a person.

The main message this algorithm sends is not that someone is so caring they'll invent this technology, but the opposite: someone is so uncaring that they'll invent a technology to do the caring for them. You can lead a horse to water, but a bunch of professionals showing up at a person's house doesn't send the message that you care – it sends the message that you want control. The reason communities like Sanctioned Suicide work, compared to r/SuicideWatch, is that the people in SS are deeply involved with one another: they communicate and exchange ideas, and they don't aim for a specific result but are simply there with a person.

Let's assume we take the position that suicide is bad. This algorithm is another symptom of our pathetic attempts at controlling people rather than helping them. If suicidal people are really in a bad situation and in need of help, how can we help them by patronizing them, caging them, and trying to control them rather than reaching out to them? We can't complain about being mystified by suicide when we don't even try to understand it. Technology now extends our ability to reach out to others, to let them know we hear their troubles, in a way that actually tells them we don't care.

If we really did care, we wouldn’t need to invent a technology to do it for us.