Securing the Future: Managing AI Risks in the Age of Automation

Artificial Intelligence (AI) has ushered in a new era in which the technology industry's long-promised benefits have become reality. While new AI tools such as OpenAI's ChatGPT have made users' work easier and more efficient, they have also opened doors for malicious actors seeking personal gain. As more business users are enticed by AI's promised shortcuts, Chief Technology Officers (CTOs) must make establishing corporate protocols for AI use a high priority.

CTOs understand that cyber risks increase when knowledge and time are no longer barriers that slow down or block attacks. Check Point Research has noted that "ChatGPT has also made the modern cyber threat landscape spicier by enabling less-skilled attackers to launch cyber attacks effortlessly." This new threat level means CTOs must harden their perimeter defenses against external attacks while also addressing the elevated insider threat posed by employees using this technology.

NIST's AI Risk Management Framework (AI RMF 1.0) highlights this tension: "AI is a challenging technology to deploy and use for both organizations and society. Without proper controls, AI systems can perpetuate or exacerbate undesirable outcomes for individuals and communities, while with proper controls, they can mitigate these risks."

In May 2023, Samsung took decisive action, banning the use of generative AI tools among its employees after a significant security breach. The breach leaked confidential information, including valuable company source code, which employees had fed into ChatGPT while seeking help with tasks. Because the data was stored on external servers, Samsung could not ascertain the extent of the leak, verify that it was properly secured, or have it removed. To address these concerns and reduce the risk of future breaches, Samsung is developing an in-house AI service for exclusive use by its employees.

The sharp increase in cyber attacks since ChatGPT entered widespread business use illustrates the need to balance enhanced productivity against safeguards for sensitive data, and it reinforces the importance of responsible AI usage and data security in the workplace. Organizations that want to ensure responsible use of AI while mitigating potential risks can implement the following strategies:

  • Provide Clear Guidance: Establish guidelines for the appropriate use of AI, including the dangers of sharing sensitive information.
  • Address Ethical Considerations: AI algorithms and models should be designed to avoid biases and discrimination.
  • Educate Staff: Provide training and information on the risks associated with AI and best practices for safe use.
  • Continuous Monitoring: Conduct periodic assessments of AI usage in your organization to identify potential vulnerabilities or weaknesses; a sketch of one such technical control follows this list.
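
To make the first and last items concrete, the sketch below shows one way an organization might screen prompts for sensitive material before they reach an external AI service, logging redactions so they can feed periodic assessments. This is a minimal illustration, not a complete data-loss-prevention solution: the patterns, the `corp.example.com` internal domain, and the `screen_prompt` helper are all assumptions for the example, and a real deployment would use the organization's own classifiers and policy engine.

```python
# Minimal sketch of a pre-submission prompt filter (illustrative only).
# The patterns and the block/redact policy are assumptions, not a real DLP tool.
import re

# Hypothetical patterns for data that should never leave the organization.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "internal_host": re.compile(r"\b[\w.-]+\.corp\.example\.com\b"),  # assumed internal domain
}

def screen_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, sanitized_prompt).

    Redacts any matches and flags the prompt when sensitive material is
    found, so the event can be logged for the periodic assessments
    recommended above.
    """
    findings = []
    sanitized = prompt
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(sanitized):
            findings.append(label)
            sanitized = pattern.sub(f"[REDACTED:{label}]", sanitized)
    return (not findings, sanitized)

if __name__ == "__main__":
    allowed, safe_text = screen_prompt(
        "Debug this: token_AbCdEf1234567890XYZ fails on db1.corp.example.com"
    )
    print(f"allowed={allowed}")  # False: prompt contained sensitive matches
    print(safe_text)             # secrets replaced with [REDACTED:...] tags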
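```

Even a simple gate like this gives a CTO two things at once: employees get immediate feedback about what should not be shared, and the organization accumulates a record of near-misses that can guide training and policy updates.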

AI has the potential to revolutionize the way we work and live. However, as with any technological advancement, there are potential risks associated with its use. By being aware of these risks and implementing proper controls, organizations can unlock the full potential of AI while keeping their systems and data safe.