California, United States (Reuters): Alphabet Inc's Google will implement more measures to identify and remove terrorist or violent extremist content on its video-sharing platform YouTube, the company said in a blog post on Sunday.
Google said it would take a tougher position on videos containing supremacist or inflammatory religious content, placing a warning on them and withholding monetization, recommendations, and user endorsements, even if they do not clearly violate its policies.
The company will also devote more engineering resources to the technology it uses to identify extremist videos, and will train new content classifiers to find and remove such content more quickly.
Google's general counsel Kent Walker said, "While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now."
Google will expand its collaboration with counter-extremist groups to identify content that may be used to radicalize and recruit extremists, it said.
The company will also reach potential Islamic State recruits through targeted online advertising and redirect them towards anti-terrorist videos in a bid to change their minds about joining.
Germany, France and Britain, where civilians have been killed and wounded in bombings and shootings by Islamist militants in recent years, have pressed Facebook, Google, Twitter and other social media companies to do more to remove militant content and hate speech.
Facebook on Thursday offered additional insight into its efforts to remove terrorist content, a response to political pressure in Europe over militant groups' use of the social network for propaganda and recruiting.
Facebook has ramped up its use of artificial intelligence, including image matching and language understanding, to identify and remove such content quickly, the company said in a blog post.