Facebook Jail: How Facebook Empowers Trolls

By attempting to stop trolls, Facebook ineptly empowered them even further—and the reason has to do with Facebook’s own laziness and greed. You may have noticed that a lot of people you know have been put in “Facebook Jail” for seemingly petty offenses. Why is that happening?

I have a page on Facebook that I use to post about politics. I had already been through Facebook’s extortion wringer—they allow you to accrue followers on your page and gain some momentum, and then they kneecap you: they severely limit your ability to reach the followers you have gathered, while at the same time hitting you with incessant demands that you pay them to give some of that reach back. I certainly didn’t care enough to pay them a penny, so I just accepted that my page would be muted by its own platform. If you want to be a high-traffic site on Facebook, you have to pay.

But then, at one point a few years ago, I noticed something happening on my page: no one was getting notifications when I put up a post. The only reactions I got were from the few people who actively went to my page to see what I had posted. My reach dropped to zero. What the hell. I could still use Facebook in every other way, but suddenly my voice had been silenced.

I made complaints to Facebook about it, and, predictably, they did nothing. “Customer support” on the service is a fiction; go through the steps to ask them to fix something and all you’ll get are automated responses and no help at all.

After one week, the mysterious ban disappeared. Then, soon after, I got hit with another one. It also lasted exactly one week.

And then again, a third time, shortly after that. Same thing. One week, then back to normal. After that, it stopped.

It took me a bit to figure it out: I had been attacked by trolls.

Facebook has a problem with trolls. Publicly, they claim to want to stop them, but they can’t. Trolls tend to have a great deal of time to attack a problem, and tend to work in packs, helping and supporting each other. Facebook, on the other hand, doesn’t want to spare much in the way of money or manpower to truly address the problem.

So they automate.

They create systems which are not announced publicly. If so many complaints are made in such and such a pattern, then a certain penalty is applied. They never announce it because they know that if they do, it can and will be abused.

The trolls are aware of this, and they actively work the system to figure out the latest rules. They share that information amongst themselves, and then use it as a weapon. That’s what happened to my page: trolls figured out that if they report abuse or make some other complaint in a specific way, they could get my page shut down. They do so, and then watch the page and laugh their asses off as the target posts about how they’ve been mysteriously silenced. They take that back to their off-service forums and share it as an accomplishment, as a way to accrue respect and currency amongst their peers. They milk these things as far as they can take them.

So, why does Facebook allow them to get away with this?

The answer is pretty simple: Facebook wants to appear to be dealing with abuse, but they don’t want to actually pay for it. So they automate.

The problem is, a basic truism of conflict strategy is that if you always react in exactly the same way every single time, your opponent can easily use that against you.

So Facebook lays out automated systems that rely on user-reported abuse by trolls, but trolls quickly work out what the system is and then use it in force to abuse their targets even more. In essence, Facebook is only aggravating the problem, not solving it.

Making the problem worse is that the people being attacked are never made aware of it until they are punished; there is no recourse. I was never informed of anyone making complaints, and my “Page Quality” console never showed a single violation. I have no clear idea how exactly they did it, I simply know that they did it.

I am guessing that this is how it worked: Facebook sets up a parameter where if a certain number of complaints of abuse are made by a certain number of different users within a certain period of time, then the page gets muted for a week. This would be allowed to happen three times before any human interaction from Facebook is called for. This means that a site can issue abuse—or be abused—for about a month. Upon the start of a fourth cycle of complaints, an alarm goes off, and a Facebook staffer spends maybe half a minute looking at the situation. If they can see no obvious signs of actual abuse, they then shut off the automated muting mechanism.
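To make that guess concrete, here is a minimal sketch of the kind of threshold system described above. Every number and name in it—the report threshold, the one-day window, the three-strikes-before-review limit—is my assumption for illustration, not anything Facebook has disclosed:

```python
from datetime import datetime, timedelta

# Hypothetical parameters -- guesses, not Facebook's real values.
COMPLAINTS_NEEDED = 10          # distinct reporters required to trigger a mute
WINDOW = timedelta(days=1)      # reports must fall within this sliding window
MUTE_DURATION = timedelta(days=7)
AUTO_MUTES_BEFORE_REVIEW = 3    # the fourth trigger escalates to a human

class PageModerationState:
    """Tracks abuse reports against one page and applies automated mutes."""

    def __init__(self):
        self.reports = []            # list of (timestamp, reporter_id)
        self.mute_count = 0          # how many automated mutes have fired
        self.muted_until = None
        self.needs_human_review = False

    def report_abuse(self, reporter_id, now):
        self.reports.append((now, reporter_id))
        # Count distinct reporters inside the sliding window.
        recent = {r for t, r in self.reports if now - t <= WINDOW}
        if len(recent) >= COMPLAINTS_NEEDED:
            self.reports.clear()
            if self.mute_count >= AUTO_MUTES_BEFORE_REVIEW:
                # Fourth cycle: stop auto-muting, flag for staff attention.
                self.needs_human_review = True
            else:
                self.mute_count += 1
                self.muted_until = now + MUTE_DURATION

    def is_muted(self, now):
        return self.muted_until is not None and now < self.muted_until
```

The key property—and the weakness trolls exploit—is that the trigger depends only on report volume and timing, never on whether the reported content actually violates anything. Any coordinated group that knows the threshold can fire the mute at will.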

There is likely an exception made for high-traffic sites (which, in order to be high-traffic, are paying a fair amount of money to Facebook), where the parameters are set higher and staffer attention is brought in faster. Or maybe not—big spenders may just get the reverse treatment, where they can get reported any number of times and never receive punishment unless they become so abusive that it starts garnering media attention.

However, the people who don’t “boost” their pages—you and me and most Facebook users—get the punishment, while the trolls get a hearty laugh.

This is similar to the problem you see in copy protection or “DRM” schemes: once a single copy of a movie or song gets out in the wild, it gets shared endlessly. No copy protection scheme ever works, because somewhere there is a media pirate who finds a way past it. As a result, the DRM scheme is completely ineffective—and, in fact, punishes the people who actually follow the rules by putting restrictions on them that make using the media harder.

Facebook avoids paying a price for this because (a) they hide the workings of it, and (b) they have a virtual monopoly, and know that the users they abuse won’t leave in enough numbers to cost them anything. In fact, all the fuss probably increases traffic and ad views for them.

And so we come to the latest set of rules in Facebook, which seem to be the cause for the now-famous seven- or thirty-day bans referred to as “Facebook Jail.”

You can bet that it is a new set of parameters Facebook set up for dealing with abuse.

You can bet that it is 99% automated.

You can bet that trolls figured it out very quickly and started using it.

And you can bet that the people you know who are disappearing for a week or a month at a time are being sent to Facebook Jail by the very trolls that are supposed to be the ones sent there.
