Monday, April 29, 2024

Research Discovers Acoustic Attack that Steals Data from Keystrokes with 95% Accuracy

The technique clarifies how deep learning might be used to create fresh forms of malware to steal data such as credit card numbers, messages, conversations, and other private information.

By listening to what you type on the keyboard, a deep learning model can harvest private information including usernames, passwords, and messages. According to the paper, the sound-recognition system, trained by a group of academics from British universities, can identify keystrokes captured by a nearby microphone with 95% accuracy.

When the model was evaluated against the popular video-conferencing services Zoom and Skype, accuracy fell to 93% and 91.7%, respectively. The technique demonstrates how deep learning could be used to create new forms of malware that listen to keyboard input and steal data such as credit card numbers, messages, conversations, and other private information.

Sound-based attacks are more viable than other side-channel strategies thanks to recent advances in machine learning and the availability of inexpensive, high-quality microphones. Other strategies are frequently constrained by variables such as data transfer speed and distance.


To gather training data for the sound-recognition system, the researchers pressed each of 36 keys on a MacBook Pro 25 times and recorded the resulting sound. The audio was captured with an iPhone 13 mini placed 17 centimeters from the laptop.

The recordings were converted into waveforms and spectrograms that visualize the identifying features of each key. An image classifier called CoAtNet was then trained on these distinctive signatures to determine which key on the keyboard was pressed.
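The keystroke-to-spectrogram step described above can be sketched in a few lines. This is a minimal illustration, not the researchers' actual pipeline: it assumes SciPy's `signal.spectrogram` as the transform, made-up window parameters, and a synthetic noise burst standing in for a real keystroke recording.

```python
import numpy as np
from scipy import signal

SAMPLE_RATE = 44_100  # Hz; assumed recording rate

def keystroke_spectrogram(audio: np.ndarray, sr: int = SAMPLE_RATE) -> np.ndarray:
    """Convert one isolated keystroke recording into a 2-D log-power
    spectrogram, the kind of image an image classifier (e.g. CoAtNet)
    can be trained on."""
    freqs, times, spec = signal.spectrogram(
        audio, fs=sr, nperseg=1024, noverlap=512
    )
    # Log scale compresses the dynamic range, as is standard for audio features.
    return 10 * np.log10(spec + 1e-10)

# Demo with a synthetic "keystroke": 0.25 s of random noise.
rng = np.random.default_rng(0)
click = rng.normal(size=SAMPLE_RATE // 4)
spec = keystroke_spectrogram(click)
print(spec.shape)  # (frequency bins, time frames)
```

Each key's spectrogram image would then be labeled with the key it came from, turning the attack into an ordinary image-classification problem.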

The attack does not strictly require access to the target device's microphone, though. Threat actors can also join a Zoom session as a participant and record the keystrokes audible in the audio stream. According to the research article, users can protect themselves against such attacks by altering their typing habits or employing complex, randomized passwords. The model can also be made less accurate by playing white noise or using software that simulates keyboard sounds.

Switching to silent switches on a mechanical keyboard, or moving entirely to a membrane keyboard, is unlikely to help: the model remained highly accurate on the keyboards Apple has shipped in its laptops over the past two years, which are already relatively quiet. For now, biometric authentication methods such as a fingerprint scanner, facial recognition, or an iris scanner are the best defense against such sound-based attacks.


Sahil Pawar
I am a graduate with a bachelor's degree in statistics, mathematics, and physics. I have been working as a content writer for almost 3 years and have written for a plethora of domains. Besides, I have a vested interest in fashion and music.
