Fighting deepfakes with smartwatches
Voice spoofing, or imitating someone's voice, is a major concern in the age of artificial intelligence (AI). Deepfakes are a prominent form of voice spoofing enabled by deep learning.

In manufacturing settings, verbal commands are widely used to control machinery, making voice spoofing a serious threat: criminals can engineer voices to manipulate or disrupt machinery, or to steal data.

With sensitive data increasingly at risk, manufacturing personnel need a way to verify that verbal commands are legitimate. Existing protections help but cannot single-handedly stop sophisticated attacks like deepfakes.

Dr. Nitesh Saxena of the College of Engineering at Texas A&M University and Yingying “Jennifer” Chen of the School of Engineering at Rutgers University are working to protect voice control technology against manipulation, and they have found smartwatches to be part of their solution.

Smartwatches are equipped with microphones and accelerometers, which measure vibrations. Saxena and Chen plan to develop software that can be installed on smartwatches or other wearable devices to help authenticate commands in manufacturing settings.

They hope to enhance security by requiring verification of both the acoustic properties and the vibration signals of voice samples, which are harder to fake simultaneously.
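As a rough illustration of this two-channel idea, the sketch below compares the loudness envelope of a microphone recording against the vibration envelope from a wrist accelerometer: when the wearer is genuinely speaking, both channels rise and fall together, while a replayed or synthesized voice excites the microphone but not the wrist. This is a minimal sketch of the general principle only; the function names, window size, and correlation threshold are illustrative assumptions, not details of the researchers' actual system.

```python
import math

def envelope(signal, win=16):
    """Short-window RMS energy envelope of a sampled signal."""
    return [math.sqrt(sum(x * x for x in signal[i:i + win]) / win)
            for i in range(0, len(signal) - win + 1, win)]

def correlation(a, b):
    """Pearson correlation between two equal-length envelopes."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    if sa == 0 or sb == 0:  # a flat envelope carries no usable information
        return 0.0
    return cov / (sa * sb)

def is_live_voice(mic, accel, threshold=0.5):
    """Accept a command only if the acoustic and vibration envelopes
    rise and fall together (threshold is an illustrative value)."""
    return correlation(envelope(mic), envelope(accel)) >= threshold
```

In use, synchronized microphone and accelerometer samples from the watch would be fed to `is_live_voice` before a verbal command is accepted; an attacker would have to fake both channels simultaneously, which is the point of the dual requirement.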

“Improving the security of voice authentication is critical in many applications, especially in manufacturing domains. Thanks to the support from our sponsor, this project will allow us to explore this important line of research,” Saxena says. The research is funded by MxD (Manufacturing x Digital), an organization that partners with the Department of Defense to strengthen U.S. manufacturing.