ISACA Survey Reveals Cybersecurity Teams Are Mostly Excluded from AI Policy Development

The Intersection of AI and Cybersecurity: Insights from ISACA’s 2024 Survey

In an era where artificial intelligence (AI) is revolutionizing industries, its integration into cybersecurity practices is both promising and challenging. According to the recently released 2024 State of Cybersecurity survey report from ISACA, a global professional association dedicated to advancing trust in technology, only 27 percent of cybersecurity professionals in India are involved in developing policies governing AI technology within their enterprises. Alarmingly, 50 percent report no involvement in the development, onboarding, or implementation of AI solutions. This raises critical questions about the role of cybersecurity teams in shaping the future of AI in their organizations.

The Current Landscape of AI in Cybersecurity

The survey, which gathered insights from more than 1,800 cybersecurity professionals worldwide, highlights the primary applications of AI among Indian security teams. The findings reveal that:

  • 31 percent utilize AI for endpoint security.
  • 29 percent employ it for automating threat detection and response.
  • 27 percent use AI to automate routine security tasks.
  • 17 percent leverage AI for fraud detection.

These statistics underscore the growing reliance on AI to enhance security measures, particularly in a landscape fraught with complex threats and staffing challenges.

The Need for Cybersecurity Involvement in AI Governance

Jon Brandt, ISACA Director of Professional Practices and Innovation, emphasizes the importance of integrating cybersecurity teams into the AI governance process. He states, “In light of cybersecurity staffing issues and increased stress among professionals in the face of a complex threat landscape, AI’s potential to automate and streamline certain tasks and lighten workloads is certainly worth exploring.” However, he cautions against a narrow focus on AI’s operational role, advocating for cybersecurity professionals to be actively involved in the development and implementation of AI solutions.

RV Raghu, director of Versatilist Consulting India Pvt Ltd and ISACA India Ambassador, echoes this sentiment, pointing out that the limited involvement of cybersecurity teams in AI policy-making represents a missed opportunity. He stresses the urgent need for organizations to rethink how they integrate cybersecurity professionals into AI decision-making processes, highlighting the strategic importance of collaboration between AI and cybersecurity experts.

Exploring the Latest AI Developments

ISACA’s survey findings are complemented by the organization’s efforts to provide resources that help cybersecurity professionals navigate the evolving AI landscape. Some key developments include:

1. EU AI Act White Paper

The EU AI Act introduces regulations for certain AI systems used within the European Union, with compliance requirements set to begin on August 2, 2026. ISACA’s white paper, Understanding the EU AI Act: Requirements and Next Steps, outlines essential actions for enterprises, such as instituting audits, adapting existing cybersecurity policies, and designating an AI lead to oversee AI tools and strategies.

2. Authentication in the Deepfake Era

ISACA’s resource, Examining Authentication in the Deepfake Era, discusses the dual nature of AI in security. While AI can enhance adaptive authentication systems, making it harder for attackers to gain access, it also poses risks such as adversarial attacks and algorithmic bias. The paper encourages cybersecurity professionals to remain vigilant about these challenges and monitor developments in AI and quantum computing that could impact authentication processes.
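To make the idea of adaptive authentication concrete, the following is a minimal, hypothetical sketch of risk-based step-up authentication of the kind such systems automate. The signal names, weights, and thresholds are illustrative assumptions for this article, not details taken from ISACA's paper.

```python
# Minimal, hypothetical sketch of risk-based (adaptive) authentication.
# Signal names, weights, and thresholds are illustrative assumptions,
# not drawn from ISACA's guidance.
from dataclasses import dataclass

@dataclass
class LoginContext:
    new_device: bool        # device not previously seen for this account
    unusual_location: bool  # geolocation far from the user's usual pattern
    odd_hour: bool          # login time outside the user's normal window
    failed_attempts: int    # recent failed password attempts

def risk_score(ctx: LoginContext) -> float:
    """Combine contextual signals into a single risk score between 0.0 and 1.0."""
    score = 0.0
    score += 0.4 if ctx.new_device else 0.0
    score += 0.3 if ctx.unusual_location else 0.0
    score += 0.1 if ctx.odd_hour else 0.0
    score += min(ctx.failed_attempts * 0.1, 0.3)
    return min(score, 1.0)

def authentication_decision(ctx: LoginContext) -> str:
    """Decide whether to allow the login, require step-up MFA, or block it."""
    score = risk_score(ctx)
    if score >= 0.8:
        return "block"
    if score >= 0.4:
        return "step-up"  # e.g., prompt for a second authentication factor
    return "allow"

# Example: a login from a new device at an unusual hour triggers step-up MFA.
print(authentication_decision(LoginContext(True, False, True, 0)))
```

In practice, an AI-enhanced system would learn such weights and thresholds from behavioral data rather than hard-coding them, which is also what exposes it to the adversarial manipulation and bias risks the paper describes.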

3. AI Policy Considerations

Organizations adopting generative AI policies are encouraged to ask critical questions to ensure comprehensive coverage. ISACA’s guidelines include inquiries about the policy’s impact, acceptable behavior, and compliance with legal requirements, fostering a responsible approach to AI integration.

Advancing AI Knowledge and Skills

To equip professionals with the necessary skills to navigate the changing landscape, ISACA has expanded its educational offerings:

1. On-Demand AI Courses

ISACA has launched several on-demand courses, including Machine Learning: Neural Networks, Deep Learning, and Large Language Models, which provide continuing professional education (CPE) credits. These courses are designed to help professionals stay abreast of AI developments and their implications for cybersecurity.

2. Certified Cybersecurity Operations Analyst

Launching in Q1 2025, ISACA’s upcoming Certified Cybersecurity Operations Analyst certification will focus on the technical skills required to evaluate threats, identify vulnerabilities, and recommend countermeasures. This certification aims to prepare professionals for the evolving challenges posed by automated systems and AI technologies.

Conclusion

The integration of AI into cybersecurity practices presents both opportunities and challenges. As organizations increasingly rely on AI to bolster their security measures, it is imperative that cybersecurity professionals are actively involved in the governance and implementation of AI technologies. By fostering collaboration between AI and cybersecurity experts, organizations can ensure that AI is deployed securely and responsibly, ultimately enhancing their overall security posture in an ever-evolving threat landscape. The insights from ISACA’s 2024 survey serve as a crucial reminder of the need for strategic integration of cybersecurity teams in shaping the future of AI in their enterprises.
