Research Article | Peer-Reviewed

Securing Well-Being: Exploring Security Protocols and Mitigating Risks in AI-Driven Mental Health Chatbots for Employees

Received: 18 December 2023    Accepted: 29 December 2023    Published: 11 January 2024
Abstract

In today's workplace, mental health is gaining importance, and AI-powered mental health chatbots have emerged as first-aid solutions to support employees. However, privacy and security risks such as spoofing, tampering, and information disclosure must be addressed before these tools can be deployed. The objective of this study is to explore and establish privacy protocols and risk-mitigation strategies designed specifically for AI-driven mental health chatbots in corporate environments, with the aim of ensuring their ethical use. To this end, the research analyses key aspects of security, including authentication, authorisation, end-to-end encryption (E2EE), and compliance with regulations such as the GDPR (General Data Protection Regulation), the Digital Services Act (DSA), and the Data Governance Act (DGA). The analysis combines technical evaluation with policy review to provide comprehensive insights. The findings highlight strategies that can strengthen the security and privacy of chatbot interactions: organisations are adopting Two-Factor Authentication (2FA) and Multi-Factor Authentication (MFA), integrating end-to-end encryption, and employing self-destructing messages. Together with regulatory compliance, these measures contribute to a robust security framework. The study underscores the importance of balancing innovation in AI-driven mental health chatbots against the stringent safeguarding of user data. It concludes that comprehensive privacy protocols are essential for the successful integration of these chatbots into workplace environments: while they offer significant avenues for mental health support, privacy and security concerns must be handled effectively to ensure ethical usage and efficacy.
Future research directions include advancing privacy protection measures, conducting longitudinal impact studies to assess long-term effects, optimising user experience and interface, expanding multilingual and cultural capabilities, and integrating these tools with other wellness programs. Additionally, continual updates to ethical guidelines and compliance with regulatory standards are imperative. Research into leveraging AI advancements for personalised support and understanding the impact on organisational culture will further enhance the effectiveness and acceptance of these mental health solutions in the corporate sector.

Published in American Journal of Computer Science and Technology (Volume 7, Issue 1)
DOI 10.11648/j.ajcst.20240701.11
Page(s) 1-8
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2024. Published by Science Publishing Group

Keywords

AI-Driven Mental Health Chatbots, Privacy Protocols, Security Threats, GDPR Compliance, Corporate Mental Health, Risk Mitigation, Data Security

Cite This Article
  • APA Style

    Banerjee, S., Agarwal, A., & Bar, A. K. (2024). Securing Well-Being: Exploring Security Protocols and Mitigating Risks in AI-Driven Mental Health Chatbots for Employees. American Journal of Computer Science and Technology, 7(1), 1-8. https://doi.org/10.11648/j.ajcst.20240701.11

Author Information
  • Sourav Banerjee, Datalabs, United We Care, Gurgaon, India

  • Ayushi Agarwal, Datalabs, United We Care, Gurgaon, India

  • Ayush Kumar Bar, Datalabs, United We Care, Gurgaon, India