Nowadays, when most people hear about AI and its incredible applications, some start worrying about the danger it might represent for humanity. Recently, a research paper from Stanford and Google reported some outcomes that could serve as a good argument for Artificial Intelligence pessimists. A machine learning algorithm intended to transform aerial images into street maps was found to be cheating by imperceptibly hiding information that it would need later to produce its output.
The real lesson of this interesting finding is definitely not that AI is smarter than humans or starting to take over humanity. In fact, it illustrates a problem computers have had since the first one ever made: computers do exactly what they are told to do.
Context
The researchers intended to accelerate and improve the process of transforming satellite images into accurate Google Maps. To achieve this, they worked with CycleGAN, a machine learning algorithm that uses neural networks to learn to generate Google Maps-style images from satellite images. When the development team used the algorithm to reconstruct aerial images, they noticed something suspicious after comparison: the results were good too early in development, which led the team to conclude that the algorithm was cheating. Although it is difficult to monitor what happens inside neural networks, after some experimentation the team managed to find out how CycleGAN produced such fast results.
The intended use of the GANs was to learn the features of the two types of images and match each feature to the corresponding feature in the other type.
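To make the training pressure concrete, CycleGAN optimizes a cycle-consistency loss: translating an aerial image to a map and back again should reproduce the original. Below is a minimal sketch of that loss, where the two toy functions `G` and `F` are hypothetical stand-ins for the paper's generator networks, not the actual models:

```python
import numpy as np

# Hypothetical stand-ins for CycleGAN's two generators (assumptions,
# not the paper's trained networks): G maps "aerial" images to
# "street maps", F maps street maps back to aerial images.
def G(aerial):
    return aerial * 0.5 + 0.1      # toy aerial -> map transform

def F(street_map):
    return (street_map - 0.1) / 0.5  # toy map -> aerial transform

def cycle_consistency_loss(x, G, F):
    """Mean L1 distance between an image and its round-trip F(G(x)).

    CycleGAN trains G and F so that F(G(x)) is close to x, which is
    exactly the pressure that rewards smuggling aerial detail into
    the street-map output.
    """
    return np.mean(np.abs(F(G(x)) - x))

aerial = np.random.default_rng(0).random((64, 64, 3))
loss = cycle_consistency_loss(aerial, G, F)
print(loss)  # near zero: these toy maps invert each other almost exactly
```

Because the generator is rewarded only for a faithful round trip, any channel that carries the aerial detail through the street map, visible or not, drives this loss down.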
So how did the algorithm cheat?
The algorithm was graded on how close the reconstructed aerial map was to the original, but instead of learning how to make a map from scratch, it learned to subtly encode features of the first image into a noise pattern in the other picture. The details of the aerial map were secretly written into the generated image as noise that the human eye cannot interpret, but that the computer can easily read.

More impressively, the computer became very good at hiding these details inside street maps. It only had to learn how to encode any aerial map into a street map, without bothering to learn whether those details existed in the real street map. All the data the algorithm needed in order to reconstruct the aerial image was elegantly overlaid on a completely different street map.