What Is AI-Driven Insider Threat Detection?

Cato Networks named a Leader in the 2024 Gartner® Magic Quadrant™ for Single-Vendor SASE

AI-driven insider threat detection works to identify insider threats using a combination of behavioral analytics, machine learning, and anomaly detection. Insider threats, which include malicious insiders, negligent users, and compromised employees, are a top security challenge since the threat already has legitimate access to an organization’s environment. This makes these threats difficult to identify using traditional security solutions, and privileged access can amplify the impact that the threat has on the organization.

AI offers the potential to identify and block these types of threats by leveraging pattern and anomaly detection to catch the warning signs of these risks. It’s especially effective when implemented as part of a Secure Access Service Edge (SASE) architecture, where centralized visibility offers access to the data the AI needs, and integrated Zero Trust Network Access (ZTNA) provides the ability to identify and block anomalous and potentially malicious requests.

How Do Insider Threats Evolve in Modern SASE Environments?

An insider threat is a security risk posed by someone who already has legitimate access to an organization’s systems, eliminating the need to exploit vulnerabilities or take other “noisy” actions to gain an initial foothold. Insider threats come in a few different forms, including malicious insiders, compromised accounts, and negligent users.

These insider threats are more difficult to detect than external ones, and factors such as remote work, cloud migration, and hybrid access complicate the issue. In this environment, SASE is an invaluable tool, offering the ability to see user traffic, access patterns, and security events in one place.

What Types of Insider Threats Matter Most Today?

Not all insider threats are deliberately trying to harm the business. The three main types of insider threats are:

  • Malicious Insiders: Some insider threats are trusted employees who deliberately take action to harm the business. Examples include a disgruntled employee or a threat actor who took a job with the business to gain access to its environment.
  • Compromised Users: Compromised users are legitimate employees whose accounts or credentials have been taken over by an external attacker, for example via phishing or credential theft. The attacker then operates under the cover of the user’s legitimate access.
  • Negligent Employees: Some insider threats pose a risk to the business by failing to properly follow security policies and protect corporate resources. For example, an employee who copies sensitive data into an LLM or to a personal cloud storage account might be the source of a data breach.

Insider threats can exist anywhere within an organization, but some business roles are riskier than others. For example, developers with access to code repositories, financial professionals, and HR staff have access to highly sensitive data or resources that can cause significant impacts to the business.

Additionally, the concept of the “insider threat” isn’t limited to employees working on-site. With remote work, contractors, and third-party access, the trusted party might be someone physically or organizationally external to the business.

Why Insider Threats Are Difficult to Detect with Traditional Tools

Threat actors that start from outside the organization need to take noisy actions like exploiting vulnerabilities or performing phishing attacks to gain an initial foothold. In contrast, insider threats already have this access, allowing them to use valid credentials and approved devices to perform their attacks. This makes their activity look more legitimate since they’re largely exercising the privileges and permissions that they already have.

For an insider threat, an attack might involve performing more downloads than usual or gradually accessing more new data and applications. These types of activities are easy to write off as benign and can be lost in the noise of the many alerts that security analysts need to evaluate. The sheer volume of alerts can also drive SOC teams to tune settings to ignore these types of low-level threats and likely false positives in favor of identifying the flashier attacks performed by external threat actors.

As a result, traditional systems can struggle to identify insider threats that don’t need malware, vulnerability exploits, or unauthorized access to achieve their goals. Managing these threats requires behavioral analytics designed to identify deviations from normal behavior for a particular user, account, or device.

How Does SASE Improve Visibility for Insider Risk?

SASE is defined by the convergence of network optimization and SSE functionality in a single cloud-delivered platform. This offers insight into user traffic, access decisions, security policy enforcement, and web browsing across the entire corporate WAN.

With this centralization, it’s possible to track a user’s behavior across the entire corporate IT environment from within a single solution. Replacing point security solutions with an integrated platform also reduces the risk of missing threats due to a lack of context or blind spots between standalone security tools. This complete, normalized dataset is also ideal for training AI-based solutions since it avoids coverage gaps and the difficulty of correlating and translating information across different tools and formats.

What Is AI-Driven Insider Threat Detection in Practice?

In practice, AI-driven insider threat detection uses AI to look for the patterns and anomalies that indicate the presence of a potential threat within the organization’s environment. AI tools can build baselines of normal behavior, and then identify deviations from this behavior that could indicate a potential risk to the business. By distilling raw data into high-fidelity signals, the AI reduces alert volume and the load on human analysts, protecting against burnout and expediting incident detection and response.

How Do Behavioral Baselines and Anomaly Detection Work?

Behavioral analytics uses unsupervised learning to identify normal behavior baselines for users. Over time, the AI monitors data about the user’s login times, locations, devices, accessed applications, data transfer volumes, and typical resource combinations. From this, it can extract patterns and an understanding of the user’s normal behavior.

With this baseline in place, the AI can switch to identifying statistically significant variations from the user’s normal behavior. For example, a user might suddenly access databases containing sensitive customer data when they have never done so in the past.

These variations could be benign, potentially a result of a promotion or transfer to a new team. Or they could be indicative of a potential threat. If these anomalies increase the user’s risk score past a particular level, they are reported for review and investigation. The result of this investigation can then be fed back into the AI, allowing baselines to evolve over time to reflect legitimate changes in behavior.
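
As a rough illustration of the baseline-and-deviation approach described above, the sketch below learns a user’s typical daily download volume and flags statistically significant departures. The z-score model, threshold, and numbers are illustrative, not how any particular product scores risk.

```python
# Illustrative sketch: flag deviations from a per-user behavioral
# baseline using a simple z-score. Real systems use richer models;
# all names and numbers here are invented for the example.
from statistics import mean, stdev

def build_baseline(daily_mb):
    """Learn a user's normal daily download volume (MB)."""
    return {"mean": mean(daily_mb), "stdev": stdev(daily_mb)}

def is_anomalous(baseline, observed_mb, threshold=3.0):
    """Report deviations beyond `threshold` standard deviations."""
    if baseline["stdev"] == 0:
        return observed_mb != baseline["mean"]
    z = abs(observed_mb - baseline["mean"]) / baseline["stdev"]
    return z > threshold

history = [120, 95, 110, 130, 105, 115, 100]  # a week of typical activity
baseline = build_baseline(history)

print(is_anomalous(baseline, 118))  # typical day: not anomalous
print(is_anomalous(baseline, 900))  # sudden bulk download: anomalous
```

In practice, the interesting part is the feedback loop the section describes: confirmed-benign anomalies get folded back into `history` so the baseline evolves.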

The Role of User and Entity Behavior Analytics (UEBA)

User and entity behavioral analytics (UEBA) focuses on understanding and scoring the behavior of users and non-human entities such as service accounts, servers, and devices. UEBA collects data from various sources, going beyond log analysis to analyze relationship graphs between users and resources, comparisons to their peer groups, and long-term trends.

UEBA is useful for insider threat detection because deviations from a user’s past behavior or that of their peer group could indicate a potential threat. UEBA can also use machine learning to cluster similar behaviors and identify outliers that could indicate misuse or compromise.
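
A minimal sketch of the peer-group comparison idea: a user whose activity profile sits far from their peer group’s average stands out as a potential outlier. The features and scoring below are illustrative assumptions, not a real UEBA model.

```python
# Illustrative sketch of UEBA-style peer-group comparison: score a user
# by their average distance from the peer group's mean profile.
def peer_outlier_score(user_profile, peer_profiles):
    """Mean absolute deviation of a user from the peer-group average."""
    score = 0.0
    for feature in user_profile:
        peer_avg = sum(p[feature] for p in peer_profiles) / len(peer_profiles)
        score += abs(user_profile[feature] - peer_avg)
    return score / len(user_profile)

peers = [
    {"apps_accessed": 6, "gb_downloaded": 1.2},
    {"apps_accessed": 5, "gb_downloaded": 0.8},
    {"apps_accessed": 7, "gb_downloaded": 1.0},
]
typical = {"apps_accessed": 6, "gb_downloaded": 1.1}
unusual = {"apps_accessed": 25, "gb_downloaded": 40.0}

print(peer_outlier_score(typical, peers) < peer_outlier_score(unusual, peers))
```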

How Are AI-Driven Insights Operationalized by Security Teams?

AI-driven insights are designed to ensure that human security analysts focus their attention on the most likely and significant threats to the business. Analysts receive risk scores, prioritized alerts, timelines of correlated events, and context like user role and asset criticality for a potential threat via SIEM dashboards, SOAR playbooks, ticketing systems, and case management tools.

If a potential threat is reviewed and validated by a human analyst, the system can automatically implement various responses to it. For example, users may be required to perform extra authentication steps or be subject to tighter security policies. During the course of the investigation, users may also be subject to temporary restrictions designed to manage risk until a verdict is reached.

Which Behavioral Signals and Data Sources Matter Most?

Behavioral analysis systems need access to high-value signals to help identify threats and differentiate them from abnormal but benign activity. Some examples of data that should be collected across the enterprise include:

  • Identity and access logs
  • Network telemetry
  • DNS and TLS data
  • Endpoint activity
  • SaaS logs
  • DLP events

With these data sources, AI systems can identify high-value behavioral signals, such as off-hours access, unusual resource combinations, excessive data downloads, or access from unusual locations. This information can also be correlated to common insider use cases, such as data exfiltration, stealthy reconnaissance, privilege abuse, and policy violations, to determine the risk and potential impacts on the organization.
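
The mapping from observed signals to insider use cases can be sketched roughly as follows; the signal names and groupings here are illustrative assumptions, not a standard taxonomy.

```python
# Illustrative sketch: map detected behavioral signals to common insider
# use cases. The mapping itself is invented for the example.
USE_CASE_SIGNALS = {
    "data_exfiltration": {"excessive_downloads", "upload_to_personal_cloud"},
    "stealthy_reconnaissance": {"unusual_resource_combinations", "broad_scanning"},
    "privilege_abuse": {"off_hours_access", "privilege_escalation"},
}

def likely_use_cases(observed_signals):
    """Return use cases sharing at least one signal with the observations."""
    return sorted(
        case for case, signals in USE_CASE_SIGNALS.items()
        if signals & observed_signals
    )

print(likely_use_cases({"excessive_downloads", "off_hours_access"}))
# ['data_exfiltration', 'privilege_abuse']
```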

What Network and Cloud Telemetry Does AI Need?

At the network and cloud level, AI needs access to various data for behavioral analysis. Key telemetry to collect includes:

  • Flow records
  • DNS logs
  • Proxy or SWG logs
  • TLS metadata
  • Firewall events
  • Cloud access logs from SaaS platforms

Combining these various datapoints can help to reconstruct the user journey for an internal threat actor. For example, a user may request access to sensitive corporate data, perform a large download, and then move the data to a personal cloud account.

Putting these pieces together requires the AI to have access to complete, normalized data. If data is not normalized, lacks context about users and devices, or has gaps, then AI systems may overlook a potential incident.
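
A toy illustration of that correlation step: normalized events from different sources (here labeled ZTNA, flow records, and SWG) are merged into a per-user timeline and checked for the exfiltration sequence described above. The event schema and source labels are invented for the example.

```python
# Illustrative sketch: reconstruct a per-user journey from normalized
# multi-source telemetry and look for an ordered exfiltration sequence.
EXFIL_PATTERN = ["sensitive_access", "bulk_download", "personal_cloud_upload"]

def user_timeline(events, user):
    """Order one user's events from all telemetry sources by timestamp."""
    return sorted((e for e in events if e["user"] == user),
                  key=lambda e: e["ts"])

def matches_pattern(timeline, pattern):
    """Check whether the event types occur in order (subsequence match)."""
    types = iter(e["type"] for e in timeline)
    return all(step in types for step in pattern)

events = [
    {"user": "alice", "ts": 3, "source": "swg",  "type": "personal_cloud_upload"},
    {"user": "alice", "ts": 1, "source": "ztna", "type": "sensitive_access"},
    {"user": "alice", "ts": 2, "source": "flow", "type": "bulk_download"},
    {"user": "bob",   "ts": 1, "source": "dns",  "type": "lookup"},
]

print(matches_pattern(user_timeline(events, "alice"), EXFIL_PATTERN))  # True
print(matches_pattern(user_timeline(events, "bob"), EXFIL_PATTERN))    # False
```

Note that this only works because all four records share one normalized schema; with gaps or mismatched formats, the middle step goes missing and the sequence never matches, which is the failure mode described above.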

How Do Identity and Access Signals Shape Insider Risk Scores?

Identity is a fundamental part of insider threat detection. Some of the core signals include:

  • Authentication successes and failures
  • MFA prompts
  • Device posture checks
  • Role assignments
  • Privilege escalations

With this identity data, AI systems can dynamically adjust risk scores based on the perceived threat that a user poses to the business. For example, many failed login attempts or changes in privileges right before access to sensitive data or applications pose an elevated risk. Similarly, unusual events, such as access attempts from new devices or locations, should elevate the risk score.
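
One simple way to picture dynamic scoring is a weighted sum over observed identity signals. The weights and signal names below are illustrative assumptions, not a real product’s scoring model.

```python
# Illustrative sketch: adjust a user's risk score from identity signals.
# Weights are invented for the example and capped at 100.
SIGNAL_WEIGHTS = {
    "failed_login": 5,
    "new_device": 15,
    "new_location": 15,
    "privilege_escalation": 25,
    "sensitive_access_after_priv_change": 30,
}

def risk_score(signals, base=0):
    """Sum weighted identity signals into a 0-100 risk score."""
    score = base + sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)
    return min(score, 100)

print(risk_score(["failed_login"] * 2))  # 10
print(risk_score(["privilege_escalation",
                  "sensitive_access_after_priv_change",
                  "new_device"]))        # 70
```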

What Role Do Content and Data Loss Prevention Signals Play?

Content and data loss prevention (DLP) signals focus on the resources that an insider threat might target. Common signals include:

  • File classification events
  • Clipboard usage
  • Uploads to unsanctioned apps
  • Email attachments
  • Printing of sensitive documents

With baselines in place, an AI can help to identify early signs of exfiltration and differentiate it from legitimate, normal use. For example, a system might find that a user performed significant downloads of sensitive data shortly before they resigned or right after termination.

Policy context can also play a significant role, where actions may be allowed even when considered “risky.” For example, a user may be permitted to download large volumes of data from the customer database; however, the act of doing so raises their risk score and the likelihood of investigation.
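
This allow-but-score behavior can be sketched as a policy check that permits the action while recording its risk contribution; the size threshold and score values are illustrative.

```python
# Illustrative sketch: a permitted-but-risky action. The download is
# allowed by policy, but it still raises the user's risk score and may
# push them over a review threshold. Numbers are invented.
def evaluate_download(user, size_mb, risk_scores, review_threshold=50):
    """Permit the download, but record its risk contribution."""
    risk_scores[user] = risk_scores.get(user, 0)
    if size_mb > 500:                  # large export of customer data
        risk_scores[user] += 40
    flagged = risk_scores[user] >= review_threshold
    return {"allowed": True, "flag_for_review": flagged}

scores = {"carol": 20}
result = evaluate_download("carol", 800, scores)
print(result)  # allowed, but flagged for review (20 + 40 >= 50)
```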

How AI-Driven Insider Detection Fits into SASE and Zero Trust

The Zero Trust security model is focused on managing risk exposure by eliminating implicit trust within an organization. This aligns well with insider threat detection, since insiders are exactly the parties that a business tends to trust implicitly.

SASE supports AI-driven insider detection by offering the data needed to identify threats and the tools needed to enforce least privilege access controls and corporate security policies. Centralized visibility into operations on the corporate network improves threat detection, and tools such as continuous verification and network segmentation offer the ability to continuously monitor user sessions and intervene when risk grows too high.

How Zero Trust Principles Support Insider Threat Programs

The Zero Trust security model is based on the principles of least privilege and explicit validation of access requests. Least privilege reduces the potential impact that an insider can have on the organization, while continuous, explicit verification ensures that monitoring and access management occur throughout the user session, not just at the beginning.

With explicit validation of each access request, Zero Trust systems can consider context, such as device health, location, and user behavior, when making access decisions. AI-based insider threat detection systems can use this data to identify patterns or anomalies that point to significant risks to the business.

What Is the Role of ZTNA in AI-Driven Insider Detection?

ZTNA applies Zero Trust principles to the corporate network. This includes granting users granular, application-level access rather than broad network-level access in response to a request. As a result, it’s a perfect foundation for an AI-driven insider threat detection program.

ZTNA generates granular, per-request logs, and the richness of this data makes it ideal for AI-driven insider threat detection models. With insight into everything that a user requests and does on the system – including user identity, device posture, requested apps, and applied policies – the AI can build a solid baseline of normal user behavior and more effectively identify significant deviations from it.

ZTNA also has the potential to implement additional security in response to AI signals. For example, if AI increases a user’s risk score, ZTNA could implement step-up authentication, terminate their session, or tighten security policies.
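
A minimal sketch of that risk-to-response mapping, with illustrative tiers chosen for the example:

```python
# Illustrative sketch: map an AI-supplied risk score to one of the ZTNA
# responses described above. Tier boundaries are invented.
def ztna_action(risk_score):
    """Select a session response for the current risk level."""
    if risk_score >= 90:
        return "terminate_session"
    if risk_score >= 60:
        return "step_up_authentication"
    if risk_score >= 40:
        return "tighten_policies"
    return "allow"

print(ztna_action(20))   # allow
print(ztna_action(65))   # step_up_authentication
print(ztna_action(95))   # terminate_session
```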

How Does SASE Unify Data for Behavior Analytics?

SASE is defined by its convergence of various capabilities – including SWG, CASB, ZTNA, and FWaaS – into a single, cloud-delivered platform. This ensures that all data is centrally accessible and consistent, eliminating the need for AI systems to stitch together information from a variety of point security products.

This also helps with monitoring a user throughout their entire user journey. A single user may work remotely or from a branch office and access on-prem and cloud-based resources. With SASE, all of these activities are monitored by the same solution with consistent data collection and policy enforcement. As a result, AI can build better baselines, has fewer blind spots, and can more reliably detect anomalies that point to potential threats.

What Are the Key AI Security Risks and Limitations for Insider Detection?

AI offers the potential to dramatically enhance insider threat detection by analyzing large volumes of data to identify anomalies and trends that point to potential incidents. However, introducing AI-powered security can also create new security risks if AI makes mistakes or is targeted by attackers. Organizations should understand the challenges associated with using opaque AI models with limited explainability and implement policies and controls to manage these risks.

How Can AI Models Be Attacked or Manipulated?

AI-powered tools build models from training data, and the quality of this data is vital to the resulting model. If data contains biases or is poisoned by attackers, then the resulting model will have the same issues and produce incorrect results.

In addition to poisoning training or feedback data, attackers may use their understanding of the AI tool to evade detection. For example, an attacker with an understanding of the risk thresholds that trigger additional investigation or action could tailor their malicious activities to stay under these thresholds and evade detection. This could include a “low and slow” attack or distributing malicious activities across multiple compromised accounts.
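
The “low and slow” evasion works precisely because per-event thresholds miss small actions; evaluating cumulative activity over a long window is one countermeasure. The sketch below uses illustrative numbers.

```python
# Illustrative sketch: a per-event threshold vs. a windowed cumulative
# check. Limits are invented for the example.
def per_event_alert(events_mb, limit=100):
    """Alert only if any single transfer exceeds the limit."""
    return any(e > limit for e in events_mb)

def windowed_alert(events_mb, window_limit=500):
    """Alert if total transfer over the window exceeds the limit."""
    return sum(events_mb) > window_limit

# 30 days of 60 MB exfiltration: every event stays under the 100 MB limit.
slow_exfil = [60] * 30

print(per_event_alert(slow_exfil))  # False: evades per-event detection
print(windowed_alert(slow_exfil))   # True: the cumulative view catches it
```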

To manage these threats, organizations should implement logging and monitoring of the AI system itself. Any deviations from normal behavior may signal a threat and trigger additional investigation.

What Governance and Controls Help Reduce Risk?

Managing the risks associated with AI requires AI governance and controls implemented at the enterprise scale. Some key best practices include:

  • Clear ownership for AI models
  • Documented objectives
  • Defined approval processes for changes
  • Versioning models
  • Testing on holdout data sets
  • Monitoring drift
  • Implementing rollback mechanisms for problematic updates
  • Regular model validation, bias checks, and verification that thresholds meet corporate risk tolerance
  • Requiring human approval for high-impact automated actions and irreversible changes
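
As a small illustration of the drift-monitoring control listed above, one crude check compares where recent model scores fall relative to a reference window’s mean. Real drift detection uses richer statistics (e.g., PSI or KS tests); the values here are illustrative.

```python
# Illustrative sketch of drift monitoring: if roughly half of recent
# scores fall above the reference mean, the distribution has likely not
# moved; a large imbalance suggests drift. Values are invented.
def score_shift(reference, recent):
    """Distance of the fraction-above-reference-mean from the ~0.5
    expected when the score distribution has not moved."""
    ref_mean = sum(reference) / len(reference)
    above = sum(1 for s in recent if s > ref_mean) / len(recent)
    return abs(above - 0.5)

reference = [0.2, 0.3, 0.25, 0.35, 0.3]   # scores at deployment time
stable    = [0.28, 0.32, 0.27, 0.3]       # similar distribution
drifted   = [0.7, 0.8, 0.75, 0.9]         # scores shifted sharply upward

print(score_shift(reference, stable) < 0.3)  # True: little drift
print(score_shift(reference, drifted))       # 0.5: maximal shift
```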

Beyond securing AI, it’s also wise to link the findings and decisions made by AI-driven tools to established security policies and risk frameworks. This helps to ensure the explainability of security decisions during compliance audits or as part of incident response processes.

How Should Security Teams Communicate AI Limitations?

AI is a useful tool, but its utility and accuracy can be overblown. Some ways that security teams can communicate the limitations and drawbacks of AI include:

  • Transparency: Security leaders should explain to executives what AI can and cannot reliably detect.
  • Concrete Examples: Identify concrete examples of AI’s limitations, such as difficulty managing unique situations and novel behaviors, dependency on high-quality training data and telemetry, and the potential for false positive and false negative decisions.
  • Operationalized Findings: When presenting AI findings, include confidence scores, key assumptions, and recommended next steps.
  • Realistic Expectations: Avoid overpromising and emphasize that AI can enhance insider threat detection but does not wholly eliminate insider risk.
  • Defense in Depth: Deploy AI-powered tools as one element of a defense-in-depth strategy that includes policy, training, monitoring, and incident response capabilities.

FAQs

How is AI-driven insider threat detection different from traditional UEBA tools?

Traditional UEBA tools are primarily focused on scoring deviations from historical patterns of activity, while modern AI systems analyze data more broadly for patterns, anomalies, and signs of potential threats. AI systems generally have better data coverage and the ability to correlate and enrich data to improve classification accuracy.

What data do we need to feed an AI-driven insider detection program?

An AI-driven insider detection program is most effective when it has access to a wide variety of high-value signals. Identity logs, network telemetry, DNS and TLS metadata, SaaS and collaboration logs, and DLP events are common examples of data to feed to these programs. It’s also important to ensure consistent, complete coverage across all of an organization’s sites and cloud environments.

How do we measure the effectiveness of AI-driven insider detection?

Some key metrics for assessing the effectiveness of AI-driven insider threat detection programs include mean time to detect (MTTD), alert fidelity, analyst workload, and reduction in undetected incidents. These metrics can be tuned and evaluated via controlled testing, such as penetration tests or breach-and-attack simulations.
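
For example, MTTD can be computed directly from incident start and detection timestamps gathered during such tests; the timestamps below are invented for the example.

```python
# Illustrative sketch: compute mean time to detect (MTTD) from
# (started, detected) timestamp pairs, e.g. from breach-and-attack
# simulation runs. Data is invented.
from datetime import datetime

incidents = [
    ("2024-05-01T09:00", "2024-05-01T11:00"),  # detected after 2 hours
    ("2024-05-03T14:00", "2024-05-03T18:00"),  # detected after 4 hours
]

def mttd_hours(incidents):
    """Average hours between incident start and detection."""
    total_seconds = sum(
        (datetime.fromisoformat(d) - datetime.fromisoformat(s)).total_seconds()
        for s, d in incidents
    )
    return total_seconds / len(incidents) / 3600

print(mttd_hours(incidents))  # 3.0
```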

Where does AI-driven insider detection fit in our existing SOC stack?

AI-driven insider threat detection should be integrated with existing SOC tools, including SIEM, SOAR, case management, and threat intelligence workflows. These tools augment the SOC’s ability to identify insider threats and complement existing workflows, tools, and processes.

How should we think about privacy and ethics when monitoring insider behavior?

Policies for monitoring insider behavior should be defined based on a collaboration between security, HR, legal, and compliance. Best practices include transparency, clear acceptable use policies, and alignment with regional regulations.

How AI Addresses Insider Threats

AI-driven insider threat detection tools use behavioral analytics and continuous monitoring to quantify the risk posed by trusted insiders and take action when risk scores grow too high. These solutions are most effective as part of an integrated SASE platform, which offers the visibility and data access needed for AI-driven threat detection and the ability to enforce access and policy changes based on AI signals.

However, AI isn’t a perfect solution, potentially falling prey to false positives, data poisoning, and evasion attacks. As a result, companies should treat AI-driven insider detection as an evolving capability that should be paired with strong zero-trust design, identity governance, and human expertise.
