FILE - In this Wednesday, May 16, 2012, file photo, a Facebook logo is displayed on the screen of an iPad, in New York. A Facebook video of an Ohio man shooting a 74-year-old retiree in Cleveland was up for three hours Sunday, April 16, 2017, after it was posted, raising questions about the social-media network's process for taking down objectionable content posted by its users. The company said it does not allow such "horrific crime" on Facebook. It did not immediately respond to further questions about the incident. (AP Photo/James H. Collins, File)

Cleveland shooting the latest crime video on Facebook


A Facebook video of an Ohio man shooting a 74-year-old retiree in Cleveland was up for three hours Sunday after it was posted, raising questions about the social-media network’s process for taking down objectionable content posted by its users.

Facebook removed the video, along with the shooter’s profile page. The company will deactivate accounts if it determines they pose safety threats.

FACEBOOK POLICY

The company said it does not allow such “horrific crime” on Facebook.

It has algorithms that attempt to automatically filter out some content, such as pornography. But deciding whether to take down videos of violent acts is more complicated.

A team monitors videos in dozens of languages and decides what to remove. Graphic videos and images that document human rights abuses and are intended to raise awareness of them, for example, could be permitted. Videos that violate the company’s standards, including those that “glorify” violence, that are “shared for sadistic pleasure,” or that mock victims, would be removed.

Facebook is not alone in this approach. Google says “violent” or “gory” videos intended to be “shocking, sensational or disrespectful” shouldn’t be on YouTube. But, like Facebook, it allows videos of some violent episodes to remain, such as footage of protesters being beaten, provided information about the date, location and additional “context” is added. Users can report graphic or violent videos on both services.

SOCIAL BENEFIT

Live video of Philando Castile, a black man shot by police during a traffic stop in Minnesota in July 2016, drew millions of views and helped focus attention on police shootings of black men. The video came down briefly, which the company blamed on a glitch, but it was then restored with a warning about graphic content attached.

REMOVAL PROCESS

Facebook relies on users to report videos that glorify violence or disrespect victims. People have to know how to report videos and take the step to do it. Facebook reviews most videos within hours, though some take longer. One exception: Live videos that attract a sufficiently large audience are reviewed by the company and can be interrupted without any reports from users. The company would not specify what that viewer threshold is.

Copyright © The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
