Monday, April 22, 2013

Internet Vigilantism

Perhaps you heard about the bombings at the Boston Marathon last week. The FBI got involved. And so did the Internet.

Pictured: Some poor saps with backpacks. Unheard of in a college-dense city like Boston.

Reddit, 4chan, and some other Internet communities decided to comb through public footage of the Boston Marathon. Their aim was to find the bombers themselves and to piece together the narrative behind the event. After all, they had all the same resources as the FBI, minus the criminal database, the forensic evidence, witness testimony, and all of those other "extra" things.

These internet detectives assembled an extensive gallery of "suspicious" people. Tabloids found these pictures, printed them, and circulated them. When the FBI released blurry photos of the two suspects, the Internet communities doubled down on their efforts. They reduced their list of suspicious people to a middle-eastern-looking guy named Sunil Tripathi, who had gone missing prior to the bombing. People ate up the possibility that the person responsible for the bombings had been identified.

And then it turned out that the Internet got it completely wrong. It wasn't Sunil Tripathi. It was two Chechen brothers (which must have been a relief for Middle Eastern people already dealing with unwarranted backlash over this). Of course, that didn't stop people from going on a completely misinformed witch hunt for the first guy before the debunking. And it isn't stopping crackpots from trying to continue that witch hunt. Oops.

This is internet vigilantism - and in this case, it misfired. These events demonstrate the power of the online sphere in shaping public opinion. The Internet can also take a stake in the action itself, with users getting directly involved in pursuing criminals and other targets. This phenomenon has happened before, and it will probably happen again. What is the anatomy of online vigilante justice, and what can be learned from its failure here?

Internet vigilantism tends to follow a common story arc. It starts with some trigger of outrage - news of a crime, an injustice, or some other objectionable narrative. Word of the trigger spreads, getting the attention of many people online. Given the size of the Internet, it's a statistical inevitability that someone will decide to do something about it.

Online vigilantism has lots of overlap with online social justice. Both capitalize on the Internet's ability to deliver pertinent information to the viewer. Both also capitalize on the Internet's ability to equalize the weight of your voice against thousands of others. However, social justice tends to be driven by ideology and notions of progressiveness. Vigilantism is reactionary - any given cause exists only for its initial trigger. As a result, social justice tends to be farther-reaching than vigilantism, but online vigilantism usually has a role in online social justice. After all, part of what social justice activists do involves reacting to events that showcase social inequality. Plus, it isn't much of a stretch to interpret call-out blogs as a form of vigilantism.

Well, if Batman does it...

So, how often do we see vigilante movements emerge on the Internet? Kind of often, actually. It happens with varying degrees of notoriety and success.

Perhaps the finest case of successful internet vigilantism was the arrest of Chris Forcand. Forcand was a Canadian and devout churchgoer who also liked to go on MSN chats and sexually proposition young girls. In October 2007, he started talking to a 13-year-old girl online, offering sexually charged words and inappropriate webcam footage. Fortunately, that "13-year-old girl" turned out not to be a 13-year-old girl at all - it was instead an anonymous member (or members?) of 4chan, out to nail any pedophiles trawling around the internet. Archives of the exchange were put up online (which you can find by following this very not-safe-for-work link here), and then sent to the police. Forcand was arrested in December of that year.

There are also other successful examples of this phenomenon. Scam-baiting is when a person poses as a potential victim of a scam and deliberately wastes the scammer's time while collecting incriminating information. In the wake of Rehtaeh Parsons' suicide, people online took up the cause of hunting down her rapists. Unrelated to that, the efforts of the online crowd have also gone towards dissuading a 15-year-old girl from committing suicide. This isn't an exclusively Western phenomenon, either - China's human flesh search engine has been used as a tool for online vigilante movements in the East as well.


Online vigilantism is a powerful thing, and has taken on some very interesting causes. But what about when it misfires?

This is where it gets tricky. Failed attempts at online vigilantism aren't usually referred to as 'online vigilantism' - they're called cyberbullying instead. Take, for example, the unwarranted harassment of an 11-year-old that erupted when some tween drama leaked to seedier parts of the Internet. Or the harassment of a 17-year-old who was falsely accused of bothering his neighbors with loud drumming.

How do we draw the line between 'bad online vigilantism' and 'cyberbullying'? We look at intent. Portions of the online crowd pay very special attention to intent - they even coined a meme whose entire point was to explain that they won't just play personal army for people's drama. Lots of online people recognize that harassing a kid for snubbing another kid at a birthday party is a stupid rallying call. Despite this distinction, some online people still can't tell the difference very well.

We are defining 'bad online vigilantism' as 'well-intentioned online reactionary movements that lead to the wrong outcome'. Unfortunately, it's harder to hear about instances of bad online vigilantism, since participants can easily avoid accountability when a vigilante movement goes awry. Such failures register on our radar only if someone privy to the failed movement records it - the false accusation of fraud levied against a cancer activist is one example. The only other way we hear about a bad vigilante movement is if it was based around an especially notorious event.

The Boston bombing situation - described earlier - is one such example, fulfilling the criteria of having good intentions, having much notoriety, and ultimately arriving at the wrong result. The same can be said for the aftermath of the Sandy Hook shooting. The wrong Lanza was targeted by the internet for a period of time before information broke about the true identity of the shooter. By then, of course, it was too late - Ryan Lanza had been harassed extensively and his name briefly tarnished, all while he was coming to terms with the deaths in his family. Again, oops.

But, hey, we learned our lesson there! Oh, wait...

The really undesirable part of bad internet vigilantism is the targeting of innocents by the online collective. So, why does bad internet vigilantism happen? And why do the successful instances of internet vigilantism succeed?

Notice that our examples of online successes involve relatively straightforward problems. What does it take to catch pedophiles? Simple - you need to know how they approach their crime. If you know that, then you know what kind of bait to set and where to set it.

What about scam baiting? That's pretty simple as well - you need to first determine if your target really is a scammer (which is usually not difficult to do). Then you can get going.

But then you have something like the Boston investigation, which was complex and open-ended. It wasn't enough to compare people's bags and accessories to the torn-up bomb bags. Your findings needed to stand up against alternative explanations. They needed to be reinforced by multiple information sources. It doesn't matter how many people you have working on a problem - if your information is insufficient, then your results are contaminated from the start.

Such an interpretation is consistent with the assertions made in James Surowiecki's The Wisdom of Crowds.

Now with 20% more pop-sociology!

The book explores the concept of crowd wisdom, something that should be fairly native to larger online movements. Individuals often fall prey to personal biases and error. However, the collective average of a decentralized group's opinions will often paint a shockingly accurate assessment of a problem. One cited example was a 1906 fair where statistician Francis Galton observed 800 people participating in a contest to guess the weight of a slaughtered and dressed ox. Although individual guesses varied widely and were generally uninformed, the aggregate of the guesses was accurate to within 1% of the ox's actual weight (1207 guessed, vs. 1198 actual), and more accurate than the majority of individual guesses.
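The averaging effect behind Galton's observation is easy to demonstrate numerically. The sketch below is a hypothetical simulation - randomly generated guesses around the true weight, not Galton's actual data - assuming individual guesses are noisy but unbiased:

```python
import random

random.seed(42)  # reproducible demo

TRUE_WEIGHT = 1198   # the ox's actual dressed weight, in pounds
N_GUESSERS = 800     # roughly the size of Galton's crowd

# Hypothetical guesses: noisy but unbiased estimates of the true weight.
guesses = [random.gauss(TRUE_WEIGHT, 150) for _ in range(N_GUESSERS)]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_WEIGHT)

# How many individual guesses does the crowd's average beat?
beaten = sum(1 for g in guesses if abs(g - TRUE_WEIGHT) > crowd_error)

print(f"crowd estimate: {crowd_estimate:.0f} lb (error {crowd_error:.1f} lb)")
print(f"the average beats {beaten / N_GUESSERS:.0%} of individual guesses")
```

With unbiased noise, the error of the average shrinks roughly with the square root of the crowd's size, which is why the mean lands within a percent of the truth while most individuals miss by far more. The catch is that this only works when the errors are independent and centered on the truth - feed the crowd biased or incomplete information, as in the Boston case, and averaging more opinions just converges on the wrong answer.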

The relevant caveat presented in the book addresses the availability of information:
Decentralization's great strength is that it encourages independence and specialization on the one hand while still allowing people to coordinate their activities and solve difficult problems on the other. Decentralization's great weakness is that there's no guarantee that valuable information which is uncovered in one part of the system will find its way through the rest of the system. - Surowiecki, p. 71
Here we see a description that aligns with the potency of online vigilantism while defining an important prerequisite: in addition to having an appropriate trigger, your movement must be able to become well-informed.

Scam-baiting is relatively simple because the prerequisite information is prevalent. Most people online get spam or witness a scam firsthand. We're immediately guaranteed a diverse, decentralized pool of contributors who can describe scams. With so much information to tap into, scam-baiting can manage to be an effective self-directing vigilante movement. The same goes for suicide prevention - many people have different experiences with the subject and can individually contribute something useful. In unison, you get a strong anti-suicide message.

A complex problem, like the Boston bombing, has incomplete public information and a poorly-defined methodology for finding a solution. The efforts fell short because of this genuine lack of information, despite the diverse crowd that the Internet can attract, and despite the clear decentralization of that crowd. The criteria for a sound crowdsourced solution were not met, but that didn't prevent the internet from attempting solutions anyway.

These solutions ended up being wrong, and it would have ended there if the case had less notoriety. But among the people examining the case out of idle curiosity were people who never thought to check their solution's shortcomings. Word of the online investigation spread. People were already looking for someone to blame, were satisfied to be given a face to hate, and latched onto the wrong person.

Sorry about that, guy.

The greatest lesson to be learned from all this is that the fate of an internet vigilantism campaign can be traced back to its very first moments - specifically, the moment the rallying call that triggers the masses is presented. If the cause isn't good enough, your campaign may go nowhere, or devolve into another case of cyberbullying. But, more pressingly, if your cause isn't well-defined or well-informed enough, you risk missing your target by miles.

Internet detectives and activists should take special care to recognize the limits of their information sources. Their efforts have been shown to work very well in circumstances where all the necessary information is accessible to them. But when speculating about more open-ended problems, they should keep a modicum of self-awareness. And when speculating about problems with a lot of gravity, they should be aware of the waves their words can make.
