Does the stuff you post on the internet make you look like a terrorist? Is the rhythm of your typing sending the wrong signals? The government wants sites such as Google and Facebook to scan their users more closely. But if everything we do online is monitored by machines, how well does the system work? With ever more sophisticated equipment, the security services can be very specific about whom they track. But that still leaves room for error.
Should our future robot overlords decide to write a history of how they overcame their human masters, late 2014 will be a key date in the timeline. Last week, an official report from the parliamentary intelligence and security committee handed over responsibility for the UK’s fight against terrorism, or at least part of it, to Facebook’s algorithms – the automated scripts that (among other things) look at your posts and your networks to suggest content you will like, people you might know and things you might buy.