As most web publishers and developers know, there is a terrible underbelly to online content. When you give people a way to publish their own blogs, photos, videos — or in the case today, a poll — some of them will publish disturbing things.
This past Sunday, a Facebook user created a poll using a third-party application called Polls to pose the question “should [President] Obama be killed?” The incident has become national news.
Obviously this poll should have been taken down: it is illegal, and ethically wrong, to advocate anyone's death, especially the president's. And in fact the developer of the Polls application, Jesse Farmer, worked on Obama's campaign, so he certainly feels that way.
But Farmer wasn’t online when a blogger named GottaLaff saw the poll and wrote a scathing blog post about it. The story quickly spread among political blogs, then got picked up by the mainstream political reporters who read them. By the time Farmer was awake the next morning, the swelling press coverage had triggered his automated system for catching offensive content, but Facebook had already stepped in, apparently removing the poll before it was even contacted by the president's Secret Service security detail today. The official comment from Facebook’s Barry Schnitt, via TPM:
The application that enabled a user to create the offensive poll was brought to our attention this morning and was disabled. We’re following up [with] the developer to ensure the offending content has been removed and that they have better procedures in place going forward to monitor their user-generated content.
How does one police offensive content?
The real question here is how user-generated content should be regulated online; it's a question that has loomed since the web made it easy for anyone to say anything. Should Farmer be responsible for somehow blocking anyone who tries to create a poll about anything offensive? Should anyone be responsible for pre-screening all user-generated content?
The alternative is to remove clearly offensive or illegal content as soon as it appears. Farmer’s application sees thousands of polls created every hour — it has some 3.46 million monthly active users as of today. Facebook already includes a feature where users can contact an application creator to report offensive content from other users. Farmer receives millions of these requests — usually for things like hate groups, which are also wrong but don’t involve the Secret Service. His application, like many other web applications of this sort, flags a poll once complaints against it cross a certain threshold; at that point he receives a special message, investigates, and deletes the offending poll.
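The complaint-threshold mechanism described above can be sketched in a few lines. This is purely illustrative, not Farmer's actual code; the threshold value and the choice to count distinct reporting users (so one angry user can't trigger a review alone) are assumptions.

```python
from collections import defaultdict

# Assumed cutoff of distinct complainers before a human review is triggered.
COMPLAINT_THRESHOLD = 25

class ComplaintTracker:
    """Hypothetical sketch of a threshold-based review queue for
    user-reported polls."""

    def __init__(self, threshold=COMPLAINT_THRESHOLD):
        self.threshold = threshold
        # poll_id -> set of user IDs who have complained about it
        self.complaints = defaultdict(set)

    def report(self, poll_id, user_id):
        """Record a complaint; return True exactly when the poll first
        crosses the threshold of distinct complaining users."""
        self.complaints[poll_id].add(user_id)
        return len(self.complaints[poll_id]) == self.threshold
```

Counting distinct users rather than raw reports is one design choice that keeps a single determined reporter from flooding the review queue, but as the story shows, it also means a poll seen by few people can fly under the threshold entirely.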
The problem, in this case, is that the poll was discovered before it was even popular enough to register. In fact, the reason it registered with his system was due to the waves of press coverage it received last night and today. Farmer tells me that he’s working to detect this problem earlier — he’ll have to before Facebook lets the Polls app go live again.
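One plausible way to catch this kind of poll before it gathers any complaints is to screen the question text at creation time and hold suspicious polls for human review. The word list and function below are hypothetical placeholders, not a description of what Farmer is actually building; a real screen would need a far broader list and smarter matching.

```python
import re

# Illustrative patterns only; a production list would be much larger and
# maintained over time.
BLOCKED_PATTERNS = [
    re.compile(r"\bkill(ed|ing)?\b", re.IGNORECASE),
    re.compile(r"\bassassinat\w*\b", re.IGNORECASE),
]

def needs_human_review(question: str) -> bool:
    """Return True if a newly created poll question should be held for
    review before it is published."""
    return any(p.search(question) for p in BLOCKED_PATTERNS)
```

The trade-off is the usual one with keyword filters: they catch the obvious cases instantly, but over-block innocuous questions and under-block creative phrasing, which is why most services pair them with complaint-driven review rather than relying on either alone.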
Most web services rely on both software and teams of support staff to find and remove offensive content as soon as possible. But that’s very difficult when you’re a small operation like Farmer’s, trying to watch over millions of users. The conclusion, perhaps, is that if you’re a successful small developer, take the burden of policing user-generated content seriously. And, despite your best efforts, you might still find yourself in the middle of the news cycle.