Artificial intelligence (AI) is in the headlines every week now, from driverless cars to high-frequency stock trading algorithms that outperform any human. Computers now beat the best human players at chess and almost any other game, and have even begun to interpret emotions from facial expressions and create works of art.
AI has also outperformed human doctors at diagnosing difficult-to-spot eye conditions on scans. Yet as much as these advances may make our lives easier, it is this last area, visual recognition, that corporations and governments around the world are now using to build an unprecedented surveillance state.
There are between 5 and 5.9 million surveillance cameras in the United Kingdom and over 176 million now in China. But for this vast network of recording devices to be truly effective, it would need an equally vast number of humans watching the feeds in real time, around the clock.
Monitoring every camera in real time would be impossible: experiments have shown that human attention drops significantly after only 20 minutes of watching video footage, and dramatically so when viewers attempt to watch more than one screen. Artificial intelligence trained to detect human activity, recognize faces, identify silhouettes and track movement in shadows has no such limitation.
Not only will artificial intelligence monitor tens of millions of video feeds in real time, but it will also feed the data into predictive models.
When computer vision applied to video footage is combined with the vast pattern-matching power of artificially intelligent algorithms, the system begins not only to detect crimes in progress but to predict them before they occur, dispatching law enforcement or security resources ahead of time.
Though this idea may sound far-fetched, a location-based variant is already in use in Milan, Italy, and Los Angeles, California.
By plotting very specific crime statistics (time of day, type of crime, location) onto a map and building a complex pattern of associations within the data, the algorithm is able to predict the probable time and location of the next crime. While the output remains only a general prediction, this crude system allows law enforcement to deploy their resources more effectively.
The system employed in Milan is known as KeyCrime and has been predicting robberies for more than ten years. The American version, PredPol, is now hard at work in 40% of the nation’s largest police departments, again providing estimates of the time and location for future crimes. But the systems now coming online in China and elsewhere go much further.
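The location-based approach described above can be sketched in a few lines of code. To be clear, this is a toy illustration and not KeyCrime's or PredPol's actual algorithm; the crime records, grid cells and scoring method below are all invented for the example.

```python
from collections import Counter

# Toy historical crime records: (hour_of_day, grid_cell, crime_type).
# These values are invented purely for illustration.
records = [
    (22, "C4", "robbery"), (22, "C4", "robbery"), (22, "C4", "robbery"),
    (14, "A1", "theft"),   (22, "B2", "robbery"), (23, "C4", "robbery"),
]

def predict_hotspot(records, crime_type):
    """Return the (hour, grid_cell) pair where this crime type has most
    often occurred -- a crude stand-in for real predictive-policing models."""
    counts = Counter(
        (hour, cell) for hour, cell, kind in records if kind == crime_type
    )
    return counts.most_common(1)[0][0] if counts else None

hour, cell = predict_hotspot(records, "robbery")
print(f"Deploy patrols to cell {cell} around {hour}:00")  # cell C4, 22:00
```

Real systems weigh many more variables and decay old data over time, but the core idea is the same: past incidents are aggregated into a map of where and when the next one is most likely.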
With the addition of video-linked artificial intelligence, the system grows exponentially more powerful and can operate in real time. By linking security camera footage to individuals and feeding in thousands of data points about each subject, the goal becomes not simply to predict when and where a crime might occur, but who will be responsible.
The artificial intelligence watches you purchase a kitchen knife; it files the information away and does not act. But if, a week later, you buy a roll of tape and a set of gloves, the system may weigh those purchases alongside a variation in your daily travel patterns and flag your behavior for analysis.
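A crude version of that kind of flagging rule can be sketched as follows. The event names, dates, look-back window and threshold are all invented for illustration; a real system would score far more signals than a simple count.

```python
from datetime import date, timedelta

# Hypothetical event log for one subject; names and dates are invented.
events = [
    ("buys_kitchen_knife", date(2017, 9, 1)),
    ("buys_tape",          date(2017, 9, 8)),
    ("buys_gloves",        date(2017, 9, 8)),
    ("route_deviation",    date(2017, 9, 8)),
]

SUSPICIOUS = {"buys_kitchen_knife", "buys_tape", "buys_gloves", "route_deviation"}
WINDOW = timedelta(days=7)   # look-back window for correlating events
THRESHOLD = 3                # suspicious events needed before flagging

def should_flag(events, today):
    """Flag the subject only when enough suspicious events cluster inside
    the window -- a single knife purchase alone never crosses the threshold."""
    recent = [kind for kind, when in events
              if kind in SUSPICIOUS and today - when <= WINDOW]
    return len(recent) >= THRESHOLD

print(should_flag(events, date(2017, 9, 8)))  # all four events in window: True
```

The unsettling part is not any single rule but the scale: applied across millions of people and thousands of data streams, even crude thresholds like this generate a constant supply of "suspects."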
The Chinese company Uniview is tasked with identifying Chinese citizens who travel to specific countries, such as Vietnam and Myanmar, and then marking them for more intensive data collection. In July of this year, the Chinese government announced that it plans to spend $150 billion to construct an artificial intelligence industry by the end of 2030.
With the capacity to design new products, build ever more advanced AI systems, and surveil its citizens, China's use of AI in policing is set to become pervasive.
Li Meng, China's vice-minister of science and technology, pointed to crime prediction as a task for AI within the government: "If we use our smart systems and smart facilities, we can know beforehand... who might be a terrorist, who might do something bad," the vice-minister said. Bad, such as opposing the authoritarian rule of the Communist Party.
In Russia, VisionLabs is providing advanced, real-time facial recognition software to the government and in the United States, the software and hardware company Nvidia is doing so as well. Marc Rotenberg is the President of the Electronic Privacy Information Center.
He has been active in pointing out how the accelerated growth of these technologies raises privacy risks. "Some of these techniques can be helpful, but there are huge privacy issues when systems are designed to capture identity and make a determination based on personal data,” Rotenberg believes. "That's where issues of secret profiling, bias and accuracy enter the picture."
With traditional computer programming models, the code may be complex but is still decipherable to human programmers. With trainable, self-modifying artificial intelligence systems, the code is becoming a black box, because it is shaped not by the programmers but by the system's own interaction with its environment.
With systems designed to match patterns across vast seas of complex data, there is no practical way to understand how they reach their conclusions, and so controlling or policing such a system becomes all but impossible for humans.
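The contrast can be made concrete with a toy example. A hand-written rule is auditable line by line; a trained model encodes the same behavior only as numbers tuned by data. The tiny perceptron-style learner and dataset below are invented purely to illustrate the point.

```python
# Hand-written rule: the logic is visible in the source and can be audited.
def rule_based(x):
    return x > 0.5

# Trained alternative: the same behavior learned from labeled examples.
# Toy dataset, invented for illustration: inputs in tenths, labeled by the rule.
data = [(x / 10, x / 10 > 0.5) for x in range(10)]

w, b = 0.0, 0.0
for _ in range(500):                      # repeated perceptron-style passes
    for x, label in data:
        if (w * x + b > 0) != label:      # update parameters only on mistakes
            sign = 1 if label else -1
            w += sign * x * 0.1
            b += sign * 0.1

# The learned behavior now lives only in two opaque numbers.
print(f"w={w:.2f}, b={b:.2f}")
```

Here a human could still recover the rule by inspecting two parameters; in a production system with millions of learned parameters, no such inspection is possible, which is exactly the black-box problem described above.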
It is entirely possible that we are building the perfect police state and its level of control will be both absolute and outside of the reach of its human masters.