
The use of Artificial Intelligence (AI) is exploding and severely outpacing cybersecurity

  • Needling Worldwide

AI is being deployed across the internet faster than security measures can keep up. This represents a wide and dangerous gap between AI adoption and cybersecurity.

AI is here to stay. And in many ways, it greatly benefits business operations. Below are just a few findings from a recent study performed by Sandbox AQ:

·       79% of organizations use AI heavily within production environments

·       Approximately 6% have wrapped security protections around both IT and AI-designed systems

·       About 10% of these businesses have a dedicated AI security team

·       Only 28% have conducted a comprehensive AI security risk assessment covering specific AI controls

Traditional security teams are expected to defend machine-speed, logic-bending AI systems with rule-based controls, but this approach isn't providing adequate protection. Most organizations unfortunately lack visibility, governance, or access control over AI components, which undermines zero-trust principles. The Cybersecurity Market Report by Sandbox AQ found that approximately 85% of organizations that took part in the study plan to increase AI security budgets within the next 12-24 months.

Top areas of focus for increased AI security include:

·       Protecting AI training and pipelines

·       Securing non-human identities (NHIs) such as AI agents and embedded ML systems

·       Deploying automated incident response procedures specifically tailored for AI-driven infrastructures
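The last focus area above, automated incident response tailored for AI-driven infrastructure, can be reduced to a simple rule: when an agent's observed behavior drifts far beyond its baseline, its credentials are suspended pending human review. A minimal sketch, assuming hypothetical agent names, telemetry fields, and thresholds:

```python
# Sketch of automated incident response for an AI agent: suspend the agent's
# credential when its activity exceeds its baseline by a tolerance factor.
# Agent records, field names, and thresholds are illustrative assumptions.

def should_suspend(baseline_calls_per_min: float, observed_calls_per_min: float,
                   tolerance: float = 3.0) -> bool:
    """Flag an agent whose call rate exceeds its baseline by the tolerance factor."""
    return observed_calls_per_min > baseline_calls_per_min * tolerance

def respond(agent: dict) -> dict:
    """Disable the agent's credential and mark it for review if it misbehaves."""
    if should_suspend(agent["baseline"], agent["observed"]):
        agent["credential_active"] = False
        agent["needs_review"] = True
    return agent

# Hypothetical telemetry: an agent running at nearly 5x its normal rate.
agent = {"name": "invoice-summarizer", "baseline": 20.0, "observed": 95.0,
         "credential_active": True, "needs_review": False}
respond(agent)  # credential is disabled and the agent is queued for review
```

In a real deployment the suspension would call an identity provider's API rather than flip a dictionary flag, but the control flow, measure, compare, revoke, is the same.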

Most organizations are unable to measure AI security effectively because AI is so new that the foundations for assessing its security haven't been developed yet. Based upon the research study, more mature teams are beginning to shift away from trying to retrofit legacy controls and are instead moving toward evaluating risks and determining how AI systems behave in production. This includes creating the ability to observe and monitor NHIs and cryptographic assets used in AI workflows.

For all cybersecurity practitioners wanting to expand their horizons within AI-driven technology, below is a list of what you’ll need to learn:

·       AI should be treated as a new attack surface, meaning cybersecurity practitioners must understand training pipelines and every agent involved in AI-driven technology

·       These AI-driven components must be inventoried, governed, and monitored like privileged identities

·       All AI systems must be audited by independent auditors using AI-specific threat assessments, not just generic penetration tests or vulnerability scans

·       An organization should be fully committed to security, anticipating future AI-focused cyberattacks
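The inventory-and-governance point above can be sketched in code: each AI component gets a record like any privileged identity, and anything lacking an owner, scoped permissions, or a recent audit gets flagged. The class, field names, and audit window below are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class AIComponent:
    """An AI agent or ML pipeline tracked like a privileged identity."""
    name: str
    owner: str                 # accountable team; empty string means ungoverned
    permissions: list = field(default_factory=list)
    days_since_audit: int = 9999  # "never audited" by default

def ungoverned(components, max_audit_age_days=90):
    """Return names of components missing an owner, permissions, or a recent audit."""
    return [c.name for c in components
            if not c.owner or not c.permissions
            or c.days_since_audit > max_audit_age_days]

# Hypothetical inventory: one governed agent, one orphaned pipeline.
inventory = [
    AIComponent("chat-agent", owner="platform-team",
                permissions=["read:docs"], days_since_audit=30),
    AIComponent("ml-pipeline", owner="", permissions=["write:prod-db"]),
]
```

Running `ungoverned(inventory)` would surface `ml-pipeline`, the component with write access to production but no accountable owner, which is exactly the kind of gap the list above is warning about.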

Based upon information gleaned thus far from AI-driven environments, protecting AI agents starts with applying the same controls used for people: least privilege, automated credential rotation, and regular auditing of AI system usage. AI is exploding so fast that companies are scrambling to figure out how to secure networks built on AI-developed components.
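Two of the people-style controls just mentioned, least privilege and credential rotation, can be sketched for a non-human identity as follows. The scope names and the 30-day rotation interval are illustrative assumptions, not a recommended policy:

```python
from datetime import date, timedelta

# Sketch: enforce least privilege and flag credentials overdue for rotation.
ALLOWED_SCOPES = {"read:tickets", "read:docs"}   # least-privilege allow-list (illustrative)
ROTATION_INTERVAL = timedelta(days=30)           # assumed rotation policy

def excess_scopes(requested: set) -> set:
    """Scopes an agent requested beyond its least-privilege allow-list."""
    return requested - ALLOWED_SCOPES

def rotation_due(issued_on: date, today: date) -> bool:
    """True when a credential has outlived the rotation interval."""
    return today - issued_on >= ROTATION_INTERVAL

print(excess_scopes({"read:tickets", "write:prod"}))     # {'write:prod'}
print(rotation_due(date(2025, 1, 1), date(2025, 3, 1)))  # True
```

Any non-empty result from `excess_scopes` is a request to deny; any credential for which `rotation_due` returns true is rotated automatically rather than waiting on a human.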

Many cyber insurance underwriters are now asking organizations implementing AI-driven technology about their number of service accounts, the scope of those accounts' permissions, and how the accounts are monitored. Developing best practices around regular audits of these systems will be crucial.
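The underwriters' questions above boil down to a reporting exercise: how many service accounts exist, how broad their permissions are, and which ones go unmonitored. A toy summary, with made-up account records and permission names:

```python
# Sketch: answer a cyber-insurance questionnaire from a service-account list.
# Account records and permission names are invented for illustration.

accounts = [
    {"name": "ai-agent-1", "permissions": ["read:docs"], "monitored": True},
    {"name": "ai-agent-2",
     "permissions": ["read:docs", "write:prod-db", "admin:*"],
     "monitored": False},
]

def questionnaire_summary(accounts):
    """Totals and risk flags an underwriter would ask about."""
    return {
        "total_service_accounts": len(accounts),
        "broadly_scoped": [a["name"] for a in accounts
                           if any(p.startswith("admin:") for p in a["permissions"])],
        "unmonitored": [a["name"] for a in accounts if not a["monitored"]],
    }
```

Here the summary would single out `ai-agent-2` on both counts: admin-level scope and no monitoring, the combination insurers are most likely to price against.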

For companies to stay in compliance with the policies insurance requires, they will have to dedicate appropriate budgets to sustain the correct level of security. It's essential that companies quickly learn how to secure an AI-driven environment that's constantly expanding.
