ISACA Survey Reveals Cybersecurity Teams Are Often Excluded from AI Policy Development

The State of AI in Cybersecurity: A Call for Integration and Governance in India

In the rapidly evolving landscape of technology, artificial intelligence (AI) has emerged as a double-edged sword, particularly in the realm of cybersecurity. A recent survey by ISACA, a global professional association dedicated to advancing trust in technology, reveals a concerning trend among cybersecurity professionals in India. Only 27 percent of these professionals are involved in the development of policies governing the use of AI technology within their enterprises. Alarmingly, half of the respondents reported no involvement in the development, onboarding, or implementation of AI solutions at all. This gap highlights a critical need for organizations to rethink their approach to integrating cybersecurity expertise into AI governance.

The Current Landscape of AI Utilization in Cybersecurity

According to the 2024 State of Cybersecurity survey, which gathered insights from over 1,800 global cybersecurity professionals, Indian security teams are primarily leveraging AI for specific applications. The breakdown of AI utilization is as follows:

  • Endpoint Security: 31%
  • Automating Threat Detection/Response: 29%
  • Automating Routine Security Tasks: 27%
  • Fraud Detection: 17%

These statistics underscore the growing reliance on AI to enhance security measures and streamline operations. However, the lack of involvement from cybersecurity teams in the policy-making process raises questions about the effectiveness and security of these AI implementations.

The Importance of Cybersecurity Involvement in AI Governance

Jon Brandt, ISACA Director of Professional Practices and Innovation, emphasizes the necessity of integrating cybersecurity leaders into the AI governance process. “In light of cybersecurity staffing issues and increased stress among professionals in the face of a complex threat landscape, AI’s potential to automate and streamline certain tasks and lighten workloads is certainly worth exploring,” he states. However, he warns against a singular focus on AI’s operational role, advocating for a collaborative approach that includes cybersecurity teams in the development and implementation of AI solutions.

RV Raghu, director of Versatilist Consulting India Pvt Ltd and ISACA India Ambassador, echoes this sentiment. He points out that the current involvement of only 27 percent of cybersecurity teams in AI policy-making is a missed opportunity. “There is an urgent need for organizations to rethink how they integrate cybersecurity professionals in AI decision-making,” he asserts. The strategic importance of collaboration between AI and cybersecurity experts cannot be overstated, especially as organizations navigate the complexities of AI technology.

Exploring the Latest AI Developments

ISACA is actively working to provide resources that help cybersecurity professionals understand and navigate the implications of AI technology. Some notable initiatives include:

1. EU AI Act White Paper

With the impending EU AI Act set to impose requirements on certain AI systems used within the European Union, ISACA has released a white paper titled "Understanding the EU AI Act: Requirements and Next Steps." This document outlines essential steps for enterprises, including instituting audits, adapting existing cybersecurity policies, and designating an AI lead to oversee AI tools and strategies.

2. Authentication in the Deepfake Era

As AI technologies evolve, so do the challenges they present. ISACA’s resource, "Examining Authentication in the Deepfake Era," highlights the dual nature of AI in security. While AI can enhance adaptive authentication systems, making it more difficult for attackers to gain access, it also poses risks such as adversarial attacks and ethical concerns. Cybersecurity professionals must remain vigilant and informed about these developments.

3. AI Policy Considerations

Organizations looking to adopt generative AI policies can benefit from ISACA’s guidelines, which encourage them to ask critical questions about policy scope, acceptable behavior, and compliance with legal requirements. This proactive approach can help ensure that AI is implemented responsibly and securely.

Advancing AI Knowledge and Skills

Recognizing the need for continuous education in the face of evolving technologies, ISACA has expanded its credentialing options to equip professionals with the necessary skills to navigate the changing landscape. Key offerings include:

1. Machine Learning Courses

ISACA has introduced on-demand AI courses covering topics such as neural networks, deep learning, and large language models. These courses are designed to provide professionals with the knowledge needed to leverage AI effectively in their organizations.

2. Certified Cybersecurity Operations Analyst

Launching in Q1 2025, ISACA’s Certified Cybersecurity Operations Analyst certification will focus on the technical skills required to evaluate threats, identify vulnerabilities, and recommend countermeasures. This certification aims to prepare professionals for the challenges posed by automated systems and AI technologies.

Conclusion

The integration of AI into cybersecurity operations presents both opportunities and challenges. As organizations in India and beyond increasingly adopt AI technologies, it is crucial that cybersecurity professionals are actively involved in the governance and policy-making processes surrounding these tools. By fostering collaboration between AI and cybersecurity experts, organizations can implement AI securely and responsibly, strengthening their overall security posture in an increasingly complex threat landscape. The time for action is now: organizations must bring cybersecurity teams into AI decision-making to harness the full potential of this transformative technology.
