Emerging AI Threats: GovTech Advocates for Responsible Use and Regulation

Navigating the Future of AI: Insights from Sherab Gocha at the National Cybersecurity Conference

The rapid evolution of Artificial Intelligence (AI) is reshaping industries and economies worldwide. Recent projections indicate that the global AI market is set to explode from USD 397 billion in 2022 to an astonishing USD 1.58 trillion by 2028, as reported by Grand View Research. Furthermore, PwC estimates that AI could contribute a staggering USD 15.7 trillion to the global economy by 2030. These compelling insights were shared by Sherab Gocha from GovTech during the National Cybersecurity Conference held on October 25.
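
For context, those two projections imply a compound annual growth rate of roughly 26 percent, a quick back-of-the-envelope check that assumes steady year-on-year growth over the six-year span (an assumption made here for illustration, not a claim from the cited reports):

  CAGR = (USD 1,580 billion / USD 397 billion)^(1/6) − 1 ≈ 0.26, i.e. about 26% per year.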

The Importance of Cybersecurity Awareness

In conjunction with the conference, the Bhutan Computer Incident Response Team (BtCIRT), part of the Cybersecurity Division under the GovTech Agency, has been observing October as National Cybersecurity Awareness Month. This year’s theme, "Educate, Empower, Secure: Building a Cyber-Safe Bhutan," underscores the critical need for awareness and education in the face of increasing digital threats. As AI technology proliferates, the importance of cybersecurity becomes even more pronounced.

The Double-Edged Sword of AI Adoption

While the potential benefits of AI are immense, the technology also poses significant risks. A study by McKinsey highlights that AI could displace approximately 400 million workers, or around 15% of the global workforce, between 2016 and 2030. This stark reality raises questions about the future of work and the need for a balanced approach to AI implementation.

During his presentation, Sherab Gocha emphasized the importance of a cautious and responsible approach to AI within the civil service. He discussed the generative AI guidelines for civil servants, advocating for a careful balance between leveraging AI’s benefits and mitigating its risks.

The Need for Regulatory Frameworks

Gocha pointed out that while Bhutan currently lacks specific regulations addressing data protection, there are existing data management guidelines. He stressed the necessity of human oversight when utilizing AI tools, urging users to analyze, fact-check, and make informed decisions based on AI-generated content. Furthermore, he called for the establishment of clear mechanisms to address any issues or accidents that may arise from AI systems.

Privacy and Security Concerns

Privacy and security are paramount concerns in the age of AI. Generative AI models, such as ChatGPT and Google Gemini, collect user data, including logs and usage patterns. Users have the right to control their personal data, often having the option to opt out of data collection or request data deletion. For example, ChatGPT allows users to request that their data not be used for training, with the platform automatically deleting it after a specified period.

Gocha highlighted the risks of sharing unpublished work through generative AI platforms, likening it to disclosing information on social media, which could jeopardize intellectual property rights. He cited a notable incident involving a Toyota employee who inadvertently uploaded sensitive data, resulting in substantial financial losses.

Ensuring Fairness and Inclusivity

AI systems must be safe, reliable, and inclusive, delivering intended outputs while avoiding biases based on race, gender, or other factors. However, challenges such as bias and discrimination persist, particularly if AI systems are trained on biased data. The lack of transparency in complex AI models can further complicate efforts to understand and address potential issues.

To mitigate these risks, Gocha urged the implementation of appropriate regulations. He explained that generative AI relies on complex deep learning models, which can be difficult for users to fully comprehend. This lack of transparency raises concerns about privacy, security, and potential misuse.

Ethical Considerations in AI Usage

As AI systems become more sophisticated, concerns are growing about surveillance and the use of biometric data, such as facial recognition, without explicit user consent. Gocha remarked, “These practices raise ethical questions and potential privacy violations.” Moreover, AI-generated content can be manipulated to spread misinformation and disinformation. He cautioned against treating AI-generated content as a primary source, urging people to always verify information against reliable sources.

Categorizing AI Risks

In his presentation, Gocha categorized AI risks into three levels:

  1. High-Risk AI Systems: These operate in sensitive areas like healthcare, law enforcement, and public services, requiring stringent regulations due to their significant risks.

  2. Limited-Risk AI: This category includes chatbots and recommendation systems, which necessitate some oversight but pose lower risks.

  3. Minimal-Risk AI: These are simple automation tools that typically handle non-sensitive data and operate within clearly defined boundaries. Gocha noted that civil servants have discretion in using such AI products.

Conclusion

As we navigate the complexities of AI adoption, the insights shared by Sherab Gocha at the National Cybersecurity Conference serve as a crucial reminder of the need for a balanced approach. By prioritizing education, regulation, and ethical considerations, we can harness the transformative potential of AI while safeguarding our privacy, security, and societal values. The journey toward a cyber-safe Bhutan is a collective effort, and it begins with informed and responsible engagement with technology.
