Neha Giri, Aakanksha, Kashish Jain and Anamika Gupta
Adv. Know. Base. Syst. Data Sci. Cyber., 2 (1):197-214
Neha Giri: Shaheed Rajguru College of Applied Sciences for Women, University of Delhi, Delhi, India
Aakanksha: Shaheed Rajguru College of Applied Sciences for Women, University of Delhi, Delhi, India
Kashish Jain: Shaheed Rajguru College of Applied Sciences for Women, University of Delhi, Delhi, India
Anamika Gupta: Shaheed Sukhdev College of Business Studies, University of Delhi, India
DOI: https://dx.doi.org/10.54364/cybersecurityjournal.2025.1110
Article History: Received on: 30-Mar-25, Accepted on: 22-Apr-25, Published on: 30-Apr-25
Corresponding Author: Neha Giri
Email: neha.giri@rajguru.du.ac.in
Citation: Neha Giri (2025). Assessing Vulnerabilities in Voice Assistants: Comparative Analysis of Google Assistant, Siri, and Alexa. Adv. Know. Base. Syst. Data Sci. Cyber., 2 (1):197-214
Voice assistants (VAs) such as Google Assistant, Siri, and Alexa have changed the way people use technology by making information and services easily available. VAs help with web searches, smart home control, and scheduling, making daily tasks more convenient. As their use has grown, however, concerns about their security, privacy, and overall performance have also increased. Most research concentrates on their usability and functionality, and comparatively little is known about their security vulnerabilities, which include threats to data privacy, background noise interference, and unauthorized voice access. This study assesses the security mechanisms, response accuracy, and authentication methods of these VAs and evaluates the potential risks of data leakage and eavesdropping; awareness of these risks is a prerequisite for improving the security of voice interactions. To assess security and performance, tests were conducted to determine how well each assistant authenticates users, limits access to sensitive data, and answers user inquiries, covering scenarios such as unauthorized voice attempts, background noise interference, and voice history access. A comparative study was then carried out on the results of these test cases. The findings show that although all three assistants can successfully execute complicated requests, their security mechanisms vary. Google Assistant is more vulnerable to background noise activation, unauthorized access, and voice history exposure. Siri offers the strongest security by restricting access to stored data and preventing unauthorized activations. Alexa strikes a balance between security and dependability, providing both privacy and correct responses. These findings stress how crucial it is to protect user data through better encryption practices, stronger access controls, and improved voice authentication systems. As VAs continue to develop and become an integral part of daily life, these problems must be resolved to make them safer and more reliable.
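To illustrate how the results of such test cases could be tabulated for the comparative study, the short Python sketch below counts how many security scenarios each assistant passes. It is not taken from the paper: the scenario names, pass/fail values, and the summarize helper are hypothetical placeholders that only loosely mirror the abstract's findings.

# Hypothetical sketch: tally pass/fail outcomes of security test scenarios
# per assistant. Scenario names and outcomes are illustrative placeholders,
# not the paper's actual data.
from collections import defaultdict

ILLUSTRATIVE_RESULTS = [
    # (scenario, assistant, passed the security check?)
    ("unauthorized_voice_attempt", "Google Assistant", False),
    ("unauthorized_voice_attempt", "Siri", True),
    ("unauthorized_voice_attempt", "Alexa", True),
    ("background_noise_activation", "Google Assistant", False),
    ("background_noise_activation", "Siri", True),
    ("background_noise_activation", "Alexa", True),
    ("voice_history_access", "Google Assistant", False),
    ("voice_history_access", "Siri", True),
    ("voice_history_access", "Alexa", True),
]

def summarize(results):
    """Return, per assistant, how many security scenarios it passed."""
    passed, total = defaultdict(int), defaultdict(int)
    for _scenario, assistant, ok in results:
        total[assistant] += 1
        passed[assistant] += int(ok)
    return {a: f"{passed[a]}/{total[a]} scenarios passed" for a in total}

for assistant, score in summarize(ILLUSTRATIVE_RESULTS).items():
    print(f"{assistant}: {score}")

In an actual evaluation, each pass/fail entry would be replaced with the observed behaviour of the assistant under the corresponding test condition before the comparison is drawn.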