New technique protects privacy from snoopers who hack smart devices

Researchers at Texas A&M University and Adobe Research are collaborating on new ways to protect user privacy from video-enabled in-home devices.

Smart devices are integrating into our workspaces and homes, providing convenient features such as security cameras, temperature control, motion-sensing games and more. These devices often have cameras that transmit data to the cloud, where analytics recognize different actions and send results back to the device. Recently, smart cameras have come under public scrutiny because they create hacking opportunities that can compromise the privacy of the home.

“We must reach a balance that allows people to use cloud-based services without exposing personally identifiable information,” said Zhangyang Wang, assistant professor in the Department of Computer Science and Engineering in Texas A&M’s College of Engineering.

Wang and two doctoral students, Zhenyu Wu and Haotao Wang, are partnering with Adobe Research scientists Zhaowen Wang and Hailin Jin. The collaboration began in 2017.

Their work builds on adversarial machine learning, a research field that lies at the intersection of machine learning and cybersecurity.  

“Traditional machine learning tries to preserve and extract information — to maximize it,” Adobe Research’s Zhaowen Wang said. “Our approach is different. For example, adversarial learning can help us minimize recognizing a person’s identity while still seeing and understanding their actions.”
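
One way to make that trade-off concrete is as a single training objective that rewards accurate action recognition while penalizing any remaining identity signal. The sketch below is an illustrative PyTorch version of such a loss, not the team’s published code; the function and argument names are hypothetical.

```python
import torch.nn.functional as F

def privacy_tradeoff_loss(action_logits, action_labels,
                          identity_logits, identity_labels, lam=1.0):
    """Illustrative objective: keep action recognition accurate while
    making identity recognition as hard as possible (hypothetical names)."""
    action_loss = F.cross_entropy(action_logits, action_labels)
    identity_loss = F.cross_entropy(identity_logits, identity_labels)
    # Minimizing this total *maximizes* the identity error, so the model
    # is pushed to erase identity cues while preserving action cues.
    return action_loss - lam * identity_loss
```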

The team formulated a unique adversarial training framework to improve privacy-preserving visual recognition. Adversarial machine learning pits two models against each other: one tries to protect the information, while the other tries to steal it. The models learn by competing and advance their techniques in tandem.
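
In code, that competition typically takes the form of an alternating loop: the attacker updates its weights to recover identities from the filtered video, then the protector (the filter plus an action recognizer) updates its weights to defeat the attacker. The sketch below is a generic adversarial training step under those assumptions, not the team’s actual implementation; every module and variable name is illustrative.

```python
import torch
import torch.nn.functional as F

def train_step(filter_net, action_net, identity_attacker,
               frames, action_labels, identity_labels,
               opt_protector, opt_attacker, lam=1.0):
    """One round of the two-player game (illustrative, not the published code)."""
    # Attacker turn: learn to steal identities from filtered frames.
    filtered = filter_net(frames).detach()  # freeze the filter for this turn
    attacker_loss = F.cross_entropy(identity_attacker(filtered), identity_labels)
    opt_attacker.zero_grad()
    attacker_loss.backward()
    opt_attacker.step()

    # Protector turn: keep actions recognizable while defeating the attacker
    # (the same trade-off loss sketched above).
    filtered = filter_net(frames)
    action_loss = F.cross_entropy(action_net(filtered), action_labels)
    identity_loss = F.cross_entropy(identity_attacker(filtered), identity_labels)
    protector_loss = action_loss - lam * identity_loss
    opt_protector.zero_grad()
    protector_loss.backward()
    opt_protector.step()
```

Repeating this step over many batches forces the filter to stay ahead of a continually improving attacker, which is what drives both sides to advance.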

Their adversarial training framework learns a smart “filtering” mechanism that automatically converts a raw image into a privacy-preserving version. The learned filter can be embedded in the camera front end, so privacy information is removed from captured images at the very beginning, before any transmission, storage or analytics. In the team’s experiments, the filtered output withstood other machine learning attacker models: only video content free of privacy information made it past the filter.
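
On the device side, the idea reduces to running the frozen, trained filter on each frame before anything leaves the camera. Here is a minimal sketch of that front-end step, assuming a PyTorch filter exported for inference; the file name, tensor shapes and `upload_to_cloud` function are all hypothetical.

```python
import torch

# Hypothetical: load the trained privacy filter for on-camera inference.
filter_net = torch.jit.load("privacy_filter.pt")
filter_net.eval()

def upload_to_cloud(frame: torch.Tensor) -> None:
    # Placeholder: a real device would encode and transmit the frame here.
    pass

def capture_and_send(frame: torch.Tensor) -> None:
    """Strip identity information from a frame *before* it is transmitted,
    stored or analyzed (illustrative pipeline, not a real camera API)."""
    with torch.no_grad():
        safe_frame = filter_net(frame.unsqueeze(0)).squeeze(0)
    upload_to_cloud(safe_frame)
```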