The British government has unveiled a tool that can accurately detect extremist content and block it from being viewed, the media reported. Home Secretary Amber Rudd told the BBC that she would not rule out forcing technology companies to use it by law.

Rudd visited the US to meet tech companies to discuss the idea, as well as other efforts to tackle extremism. The tool was commissioned to demonstrate that the government's demand for a clampdown on extremist activity was not unreasonable.
Rudd called it "a very convincing example of the fact that you can have the information you need to make sure this material doesn't go online in the first place".

Thousands of hours of content posted by the Islamic State (IS) terror group were fed to the tool in order to 'train' it to spot extremist material automatically.

The government contributed 600,000 pounds ($832,000) of public funds towards the creation of the tool by ASI Data Science, an artificial intelligence company based in London.

According to ASI Data Science, the software can detect 94 per cent of IS's online activity with an accuracy of 99.995 per cent.

The Global Internet Forum to Counter Terrorism, launched last year, brings together several governments, including the US and the UK, and major internet firms such as Facebook, Google and Twitter. However, the bigger challenge is predicting which parts of the internet terrorists will use next.

The Home Office estimates that between July and the end of 2017, extremist material appeared on almost 150 web services that had not previously been used for such propaganda.
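To put the reported figures in perspective, the following is a rough back-of-the-envelope sketch, not anything from the report itself. It assumes "94 per cent" is the share of IS material the tool flags and "99.995 per cent accuracy" means 0.005 per cent of benign videos are wrongly flagged; the upload volume and share of extremist content are purely hypothetical.

```python
# Illustrative reading of the reported figures (assumptions, not from the report):
#   - 94 per cent is treated as the detection (recall) rate on IS material.
#   - 99.995 per cent accuracy is treated as a 0.005 per cent false-positive
#     rate on benign uploads.

DETECTION_RATE = 0.94           # reported: share of IS material flagged
FALSE_POSITIVE_RATE = 0.00005   # reported: 100% - 99.995%

videos_per_day = 1_000_000      # hypothetical platform upload volume
extremist_share = 0.0001        # hypothetical: 0.01% of uploads are IS material

extremist = videos_per_day * extremist_share   # ~100 extremist videos
benign = videos_per_day - extremist            # ~999,900 benign videos

caught = extremist * DETECTION_RATE            # true positives
missed = extremist - caught                    # false negatives that slip through
wrongly_flagged = benign * FALSE_POSITIVE_RATE # benign videos sent to review

print(f"Caught: {caught:.0f}, missed: {missed:.0f}, "
      f"wrongly flagged for review: {wrongly_flagged:.0f}")
```

Even at a 0.005 per cent error rate, a platform of this hypothetical size would still send roughly 50 benign videos a day to human reviewers, which is why the major firms pair such tools with manual moderation.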