Securing the Future: Managing AI Risks in the Age of Automation

Artificial Intelligence (AI) has ushered in a new era in which its long-promised benefits are becoming reality. The emergence of new AI tools, such as OpenAI's ChatGPT, has made life easier and more efficient for users, but it has also opened doors for malicious actors to exploit these tools for personal gain. AI's promised shortcuts make it an enticing option for businesses, which in turn makes establishing corporate protocols governing AI use a high priority for today's Chief Technology Officers.

Cyber risks abound when knowledge and time no longer act as barriers to attack. Check Point Research has shown that "ChatGPT has also made the modern cyber threat landscape spicier by enabling less-skilled attackers to launch cyber attacks effortlessly." This new threat level means CTOs must harden their perimeter defenses against external attacks while also addressing the elevated insider threat posed by employees using this technology.

NIST's AI Risk Management Framework (AI RMF 1.0) highlights this tension: "AI is a challenging technology to deploy and use for both organizations and society. Without proper controls, AI systems can perpetuate or exacerbate undesirable outcomes for individuals and communities, while with proper controls, they can mitigate these risks."

In May 2023, Samsung took decisive action, banning the use of generative AI tools among its employees following a significant security breach. The breach resulted in the leakage of confidential information, including valuable company source code, after employees used ChatGPT for task assistance. Because the data was stored on external servers, Samsung struggled to ascertain the extent of the leak, verify that the data was properly secured, and have it removed. To address these concerns, Samsung is developing an in-house AI service for exclusive use by its employees, mitigating the risk of future breaches.

This incident illustrates the need to balance enhanced productivity with safeguarding sensitive data, reinforcing the importance of responsible AI usage and data security in the workplace. To ensure responsible use of AI and mitigate potential risks, organizations can implement the following strategies:

  • Provide Clear Guidance: Establish guidelines for the appropriate use of AI, including the risks of sharing sensitive information with external services.
  • Address Ethical Considerations: AI algorithms and models should be designed to avoid biases and discrimination.
  • Educate Staff: Provide training and information on the risks associated with AI and best practices for safe use.
  • Continuous Monitoring: Conduct periodic assessments of AI usage across your organization to identify potential vulnerabilities or weaknesses; a minimal sketch of one such control appears after this list.
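
As an illustration of the guidance and monitoring points above, the following minimal Python sketch shows one way an organization might screen outbound prompts for sensitive content before they reach an external AI service. The patterns, names, and functions here are hypothetical placeholders, not any specific product's API; a production control would rely on a vetted data-loss-prevention tool.

    # Minimal sketch of a pre-submission prompt filter. All patterns and
    # names below are hypothetical illustrations, not a specific product.
    import re

    # Example patterns a security team might flag: cloud access keys,
    # private-key headers, and internal project code names (placeholders).
    SENSITIVE_PATTERNS = {
        "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
        "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
        "internal_codename": re.compile(r"\bPROJECT-[A-Z]{3,}\b"),
    }

    def scan_prompt(prompt: str) -> list[str]:
        """Return the names of any sensitive patterns found in the prompt."""
        return [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]

    def submit_prompt(prompt: str) -> None:
        """Block flagged prompts; otherwise hand off to the AI service."""
        findings = scan_prompt(prompt)
        if findings:
            # In practice this event would be logged for periodic review.
            print(f"Blocked: prompt matched {', '.join(findings)}")
        else:
            print("Prompt passed screening; forwarding to AI service.")

    if __name__ == "__main__":
        submit_prompt("Summarize this meeting transcript for me.")
        submit_prompt("Why does key AKIAABCDEFGHIJKLMNOP fail auth?")

Even a simple filter like this turns written guidance into an enforceable control, and the record of blocked prompts it produces feeds directly into the periodic assessments described above.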

AI has the potential to revolutionize the way we work and live. As with any technological advancement, however, its use carries risks. By understanding those risks and implementing proper controls, organizations can unlock the full potential of AI while keeping their systems and data safe.