Everything with moderation

I wrote about the PostSecret app as a place to build empathy by seeing enough anecdotes to form patterns. What I didn’t mention is that, beyond the repetition and non-secrets, there were a few people who outright abused the app. It is, after all, an anonymous photo-sharing service, and while I will not feed the trolls, rest assured there are plenty. Correction: were plenty. The app has been removed from the App Store, and uploading new secrets has been disabled. After four months, the PostSecret app is dead, a victim of what Frank Warren calls the “1% of … content that was not just pornographic but also gruesome and at times threatening”.

The app did have a system of moderation. Users could flag any secret they deemed not to conform to the app’s submission rules, which would (apparently) send that secret for review and possible removal by a human moderator. Inappropriate secrets would still be submitted and seen by others before being removed. As a last-ditch attempt to save the app, the mods “even tried prescreening 30,000 secrets a day,” which was just too many.

Before we dismiss online communities as untamable, let’s look at a success story: Wikipedia. One veteran explains the balance of power to an upstart vandal as follows:

Experience indicates that in a fight between editors and admins, the admins hold all the cards. All of them. We can block you indefinitely, and we can block your IP address, and we can lock the articles, and we can prevent you editing your talk page, and we can moderate you off the mailing list.[1]

Admins also can “rollback” edits in a single click, see recent changes made by only anonymous users, write bots (more properly algorithms) to revert clearly malicious edits, and install filters to prevent the most egregious vandalism from being submitted in the first place. This is the sort of control that PostSecret moderators could only dream of. The difference in moderator tools exists for a simple reason: computers can analyze text far better than images. Wikipedia vandalism is usually text; PostSecret abuse was normally images. Distinguishing between appropriate and inappropriate images algorithmically is almost impossible; unlike Supreme Court justices, a computer doesn’t know it when it sees it.

The PostSecret app made it easy to upload an image. The option was available from multiple screens, and one could use the device’s built-in camera. Wikipedia, on the other hand, doesn’t advertise the upload link, prevents anonymous users from uploading images, requires factual and legal information about every image, and deletes images with an unacceptable copyright status. Vandals could still insert existing graphic images into articles, but which article? Unlike PostSecret’s unified stream of recent secrets, traffic on Wikipedia is diffused across literally millions of articles. The most visible targets are often protected from editing.

Meanwhile, virtually all of Wikipedia’s automated defense systems lock on to their targets using regular expressions (patterns in text). PostSecret allowed users to overlay their image with text, but despite the emergence of character patterns that should have been restricted, it evidently never gave its moderators any sort of automated tools. This is apparent from incidents in which the same hateful secret was posted dozens of times. Almost as effective would have been limiting the number of secrets that could be uploaded per hour, a technique also proposed for cutting spam email.
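To make the idea concrete, here is a minimal sketch of the two defenses PostSecret lacked: a regex blocklist over a secret’s overlay text, and a per-user hourly rate limit. Everything here is hypothetical — the function name, the example patterns, and the five-per-hour cap are all assumptions for illustration, not anything PostSecret or Wikipedia actually ran.

```python
import re
import time
from collections import defaultdict, deque

# Hypothetical patterns a moderator might add after seeing repeated abuse.
BLOCKED_PATTERNS = [re.compile(p, re.IGNORECASE) for p in (
    r"buy\s+followers",      # example spam phrase
    r"free\s+gift\s+card",   # example scam phrase
)]

MAX_UPLOADS_PER_HOUR = 5                  # assumed cap, for illustration
_recent_uploads = defaultdict(deque)      # user_id -> timestamps of recent uploads

def allow_upload(user_id, overlay_text, now=None):
    """Return True if the secret may be posted, False if it should be held."""
    now = time.time() if now is None else now

    # 1. Reject overlay text matching any blocked pattern.
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(overlay_text):
            return False

    # 2. Rate limit: drop timestamps older than an hour, then count the rest.
    window = _recent_uploads[user_id]
    while window and now - window[0] > 3600:
        window.popleft()
    if len(window) >= MAX_UPLOADS_PER_HOUR:
        return False

    window.append(now)
    return True
```

Even this crude filter would have stopped the “same hateful secret posted dozens of times” failure mode: the first few copies get through, then the rate limit (or, once a moderator adds a pattern, the blocklist) holds the rest for review.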

It’s also worth mentioning that Wikipedia’s admins are chosen by the community, crowdsourcing the problem and giving tools to those who need and use them. As far as I know, PostSecret has been pretty opaque about who their mods are. Probably overworked interns.

The diatribe continues:

The really “stupid” thing is that you are probably right about the edits, but your ludicrous flying off the handle about a pretty much automatic action per WP:3RR, a policy which makes very clear the fact that it applies “even if you are right” and is designed to stop the massive disruption of the project which edit wars cause, has resulted in your attracting the unwelcome attention of multiple admins and pretty much preventing any chance of you actually achieving what you want to achieve. … All you have to do is engage in civil debate on the Talk page. If there is a POV [Point Of View, i.e. biased] edit, there’s a good chance that there will be plenty of people happy to agree it should not be there. Instead you go into edit-war mode and fling invective around, with the result that nobody takes you in the least bit seriously and you end up blocked for an ever-extending period. Just how smart is that, exactly? [ibid]

Even though Wikipedians can be anonymous, they have a persistent identity and can build (and destroy) a reputation. As mentioned above, user accounts, IP addresses, and even ranges of IP addresses are subject to blocks of potentially indefinite duration. On PostSecret, there was no way for users to see all secrets submitted by a particular person. Furthermore, there was (apparently) no way for moderators to block a disruptive user, and inappropriate secrets had to be removed individually.

Once a secret had been posted, it could not be modified. Users could reply to secrets, and many back-and-forth conversations emerged, with participants identifying themselves by their secret’s background. This mechanism was poorly suited to discussion about the app and to user-to-user interaction. By contrast, every Wikipedia page has an associated discussion page, a backstage area where editors can discuss any issues that arise without impacting content shown to readers. Such pages are also used to leave users messages. All posts are signed, creating a thread of conversation. Ironically, this makes it easier to make lasting friendships on Wikipedia than on an app designed to promote human connection.

Wikipedia articles persist much longer than secrets did. They also take longer to write (if done well). Secrets were transient, disappearing from the “recents” feed within minutes. Wikipedia editing is still done on full-sized computers; the PostSecret app was, well, a mobile app, and one could dash off a secret anytime, anywhere. PostSecret’s images were much more approachable than encyclopedic text. Wikipedia articles are categorized, and link to related articles; secrets stood alone. Together, these factors give Wikipedia editing a much more weighty and important feel, while uploading a secret was cheap and quick. That was by design, of course, but I can’t help wondering if a higher barrier to entry could have discouraged abusers of the system. Mailing a physical postcard, perhaps. Oh wait.

Finally, we all have benefited from Wikipedia, and we don’t like biting the hand that feeds us. Vandalism has shifted from simple disruption to trying to subtly insert bias for personal gain. PostSecret, on the other hand, is an art project. Vandalizing Wikipedia is difficult, does minimal damage, and hurts a positive force on the internet. But the PostSecret app was, for a small handful, a place to vent, solicit, and shock, and no one could do a thing about it.

