Social media groups accused of terror fight failings

25 August 2016

Social media companies such as Facebook, Twitter and YouTube are “consciously failing” to prevent their sites being used for terrorist propaganda and must take more responsibility for controlling extremist content, MPs have said.

In a new report, parliament’s home affairs committee identifies the use of the internet to promote terrorism as “one of the greatest threats” to the UK. The MPs say it is therefore “alarming” that the US-based tech companies each have only a few hundred employees monitoring networks with billions of accounts, and accuse them of using their supranational legal status to “pass the parcel of responsibility”.

Following a spate of terrorist attacks in France and Germany this summer, British police are on high alert for a similar incident in the UK. Earlier this month, Scotland Yard deployed an extra 600 armed police across London. UK terrorism-related arrests were a third higher in 2015 than five years previously.

The problem of how social media groups respond to terrorist content was highlighted two years ago when the intelligence and security committee criticised a US internet company — now known to be Facebook — for failing to pass on information that could have helped prevent the murder of British soldier Lee Rigby by Islamist terrorists.

Wednesday’s report claims to reveal more weaknesses in monitoring. While Facebook and Google told MPs that they notify law enforcement agencies about terrorist material that represents a threat to life, Twitter admitted that it does not, on the grounds that its content is already public.

The committee said that all tech organisations should do more to balance the “hundreds of millions in revenues” generated from their billions of users with a “greater sense of responsibility and ownership for the impact that extremist material on their sites is having”.

Mark Rowley, Scotland Yard’s head of counter-terrorism, was more strident in his criticisms. He is quoted in the report accusing tech groups of deliberately “undermining” counter-terrorism investigations by refusing to hand over potential evidence or threatening to tip off suspects.

Facebook: the killing of Lee Rigby

In May 2013, Fusilier Lee Rigby was the victim of a jihadi-inspired killing outside his London army barracks. An investigation by parliament’s intelligence and security committee found that one of the two assailants, Michael Adebowale, had discussed his plans to murder a British soldier in a “substantial online exchange” five months before the attack.

The US internet company concerned, now known to be Facebook, had already suspended seven accounts belonging to Adebowale because of their extremist content but had not passed on any information about the user to security services.

Sir Malcolm Rifkind, chairman of the committee, said that with proper intervention from the company there was a “significant possibility” that MI5 would have been able to prevent the killing.

Facebook, Twitter and Google all felt the criticisms in the report were unfair. “We take our role in combating the spread of extremist material very seriously. We remove content that incites violence, terminate accounts run by terrorist organisations and respond to legal requests to remove content that breaks UK law,” a YouTube spokesperson said.

While all the main social networks rely heavily on their massive international communities, rather than their own employees, to flag and report extremist content, Facebook and Google-owned YouTube do remove accounts associated with known terrorists.

Twitter pointed out that it had suspended a total of 350,000 terror-inciting accounts since mid-2015, with daily suspensions up 80 per cent in the past year and spikes after the terrorist attacks in Nice and Brussels. At least 235,000 of those accounts have been suspended in the six months since February alone, it said.

To facilitate quicker removal of dangerous content, the committee called on social media companies to deploy staff to a Scotland Yard unit that tracks online extremist material. This unit secured the removal of more than 120,000 pieces of terrorist-related content between 2010 and 2016, but MPs suggested it should be expanded and upgraded into a 24-hour operation run by staff from the Home Office, the security services, the police, internet companies and others.

Twitter and YouTube: radical preacher Anjem Choudary

The radical Muslim preacher Anjem Choudary, who was convicted last week of inviting support for Isis, used social media to influence followers. But during his trial at the Old Bailey, the court heard that police had repeatedly tried and failed to have his Twitter posts and YouTube videos removed.

The evidence that secured his conviction was a YouTube video in which Choudary swears an oath of allegiance to Isis. An officer at the trial said that authorities had no power to force corporations to remove material from the internet even if it was believed to break UK anti-terror laws.

Police also made several requests to close Choudary’s Twitter account, which had more than 32,000 followers. The feed showed support for Isis contrary to section 12 of the Terrorism Act 2000 and breached Twitter’s own rules on “threatening or promoting terrorism”. The account was suspended last week, a year after Choudary was first charged.

Keith Vaz, the committee’s chair, described the internet as the “modern front line” in the fight against terror. “Forums, message boards and social media platforms are the lifeblood of [Isis] and other terrorist groups for their recruitment and financing and the spread of ideology,” he said. “The companies’ failure to tackle this threat has left some parts of the internet ungoverned, unregulated and lawless.”

Mr Vaz added that the government must develop a counter-narrative to the “slick and effective” propaganda machine run by Isis. “We should utilise the brightest talent of the world’s creative industries to counter terrorist propaganda with even more sophisticated anti-radicalising material,” he said.

Social networks are increasingly turning to anti-extremist “counter-speech”, rather than removing content, as a way to combat extremism. YouTube said it has run eight counter-speech events in Europe in the past year, while Twitter and Facebook said they provide free advertising credits to non-profits promoting peaceful messages.

Responding to the report, Ben Wallace, the security minister, said the government was working closely with internet companies and wanted to see a “swifter, more automated approach to the identification and removal of content from social media sites, not just in the UK but across the world”.

Financial Times