Facebook toughened its livestreaming policies Wednesday as it prepared to huddle with world leaders and other tech CEOs in Paris to find ways to keep social media from being used to spread hate, organize extremist groups and broadcast terror attacks.

Facebook’s move came hours before its executives would face the prime minister of New Zealand, where an attacker killed 51 people in March — and livestreamed parts of it on Facebook.

The CEOs and world leaders will try to agree on guidelines they are calling the “Christchurch Call,” named after the New Zealand city where the attacks on two mosques took place.

Facebook said it’s tightening the rules for its livestreaming service with a “one strike” policy that applies to a broader range of offenses. Any activity on the social network that violates its policies, such as sharing a terrorist group’s statement without providing context, will now result in an immediate temporary block. The most serious offenses will result in a permanent ban.

Previously, the company took down posts that breached its community standards but only blocked users after repeated offenses.

The tougher restrictions will be gradually extended to other areas of the platform, starting with preventing users from creating Facebook ads.

Facebook said it’s also investing $7.5 million in new research partnerships to improve image and video analysis technology aimed at finding content manipulated through editing to avoid detection by its automated systems — a problem the company encountered following the Christchurch shooting.

“Tackling these threats also requires technical innovation to stay ahead of the type of adversarial media manipulation we saw after Christchurch,” Facebook’s vice president of integrity, Guy Rosen, said in a blog post.

New Zealand Prime Minister Jacinda Ardern welcomed Facebook’s pledge. She said she inadvertently saw the Christchurch attacker’s video when it played automatically in her Facebook feed.

“There is a lot more work to do, but I am pleased Facebook has taken additional steps today… and look forward to a long-term collaboration to make social media safer,” she said in a statement.

Ardern is playing a central role in the Paris meetings, which she called a significant “starting point” for changes in government and tech industry policy.

Twitter, Google, Microsoft and several other companies are also taking part, along with the leaders of Britain, France, Canada, Ireland, Senegal, Indonesia, Jordan and the European Union.

Officials at Facebook said they support the idea of the Christchurch Call, but that details acceptable to all parties still need to be worked out. Free speech advocates and some in the tech industry bristle at new restrictions, arguing that violent extremism is a societal problem the tech world can’t solve.

Ardern and the meeting’s host, French President Emmanuel Macron, insist that any solution must involve joint efforts by governments and tech giants. France has been hit by repeated Islamic extremist attacks carried out by groups that recruited members and shared violent images on social networks.

Speaking to reporters ahead of the meetings, Ardern said, “There will be of course those who will be pushing to make sure that they maintain the commercial sensitivity. We don’t need to know their trade secrets, but we do need to know what the impacts might be on our societies around algorithm use.”

She stressed the importance of tackling “coded language” that extremists use to avoid detection.

Before the Christchurch attack, she said, governments took a “traditional approach to terrorism that would not necessarily have picked up the form of terrorism that New Zealand experienced on the 15th of March, and that was white supremacy.”
