“Terrorism and violent extremism are complex societal problems that require an all-of-society response,” said a joint statement from Facebook, Google, Twitter, Amazon and Microsoft.
“For our part, the commitments we are making today will further strengthen the partnership that governments, society and the technology industry must have to address this threat.”
Under the agreement, all the tech giants have agreed to identify “appropriate checks on livestreaming, aimed at reducing the risk of disseminating terrorist and violent extremist content online”.
This could include additional vetting measures and the moderation of specific events.
The competing companies said they would develop new technology and collaborate with governments around the world, including by sharing data, in an effort to improve machine learning and artificial intelligence, as well as develop open-source and shared digital tools.
A “crisis protocol” would also be put in place to respond to urgent new events, with information to be shared among the companies, governments and non-government organisations. Each company has agreed to create an incident management team to coordinate and share information.
After the Christchurch massacre was livestreamed, the tech giants struggled to keep the video offline as different versions were uploaded millions of times across platforms including Facebook, Twitter and Google’s YouTube.
New Zealand Prime Minister Jacinda Ardern, as part of a “Christchurch Call” pledge supported by a swathe of countries, has asked the social media giants to take a closer look at any software directing people to violent content and has pushed for examination of their algorithms. British Prime Minister Theresa May has also called for action from the social media giants.
The Australian government pushed through tough legislation in the wake of the attacks under which tech companies could face billions of dollars in fines, and their executives jail time, if they do not quickly remove objectionable content.
On Wednesday, Facebook also independently introduced a new “one strike” policy on livestreaming for its 2.3 billion users, after widespread calls for limits on the technology.
Facebook vice president of integrity Guy Rosen said, in a post uploaded to the social media company’s blog on Wednesday afternoon (AEST), that those who broke the social network’s “most serious policies” on one occasion would now be blocked from using its livestreaming technology for 30 days.
This includes a zero-tolerance approach to those who link to, or share, terrorist or violent content.
These restrictions will soon be extended to stop the same users from creating advertisements.
In the past, content that broke Facebook’s rules was removed by moderators, and users who repeatedly broke them were blocked for a period of time or, in extreme cases, banned.
“Following the horrific terrorist attacks in New Zealand, we’ve been reviewing what more we can do to limit our services from being used to cause harm or spread hate,” he said.
Jennifer Duke is a media and telecommunications journalist for The Sydney Morning Herald and The Age.