
AI Security

Protecting sensitive data in the age of AI and GenAI tools

25 articles

AI Security

AI Coding Assistants Leak Production PII

Unit test fixtures with real customer records. Log files with production data for debugging. GitHub found 39 million secrets leaked in 2024.

July 6, 2026 · 8 min
AI Security

Internal Wiki PII: Confluence Customer Data

Support teams document processes with screenshots of customer accounts. Over three years, that adds up to thousands of GDPR data minimization violations.

July 5, 2026 · 6 min
AI Security

Screenshot PII: Leaks in Internal Tools

Slack, Teams, Jira, and email regularly receive screenshots containing customer PII. This access-control violation bypasses every DLP tool.

July 2, 2026 · 6 min
AI Security

PII Highlighting vs Compliance Training

62% of employees who use AI tools for customer data work 'sometimes' forget to remove PII first. Here's why automatic highlighting beats compliance training.

June 23, 2026 · 7 min
AI Security

Real-Time PII Prevention Saves $2.2M

IBM found a $2.2M cost difference between prevention and detection. Here's the math that makes real-time PII interception non-optional for security teams.

June 19, 2026 · 8 min
AI Security

GDPR Art. 32: AI Tools PII Monitoring

Enterprise compliance teams need quantitative evidence of AI tool PII controls. Network DLP misses browser AI interactions.

June 18, 2026 · 7 min
AI Security

Real-Time PII Prevention for AI Data Leaks

When an employee types a customer name into ChatGPT, the data leaves organizational control in real time. Post-hoc DLP cannot un-ring this bell.

June 17, 2026 · 7 min
AI Security

GDPR Support AI: Custom Identifiers

Customer support AI receives customer messages with names, emails, AND order IDs. Standard PII tools strip email addresses but leave order IDs intact.

June 2, 2026 · 7 min
AI Security

Is Your AI Privacy Tool Stealing Your Data?

67% of AI Chrome extensions collect user data. The December 2025 incidents saw 900K users compromised by extensions posing as privacy tools.

April 19, 2026 · 8 min
AI Security

3.8 Daily PII Exposures in Support Teams

Every support agent using ChatGPT makes an average of 3.8 sensitive data pastes per day. For a 100-person team, that's 380 GDPR exposure incidents daily.

April 18, 2026 · 8 min
AI Security

After the 900K-User Extension Incident

In January 2026, two malicious Chrome extensions installed by 900K+ users exfiltrated complete ChatGPT and DeepSeek conversations every 30 minutes.

April 16, 2026 · 8 min
AI Security

Why Policy Fails to Stop ChatGPT PII Leaks

77% of enterprise AI users copy-paste data into chatbot queries. Nearly 40% of uploaded files contain PII or PCI data. HIPAA Security Rule update proposed.

April 15, 2026 · 8 min
AI Security

Enterprise AI: Dev Access Without Risk

Banks banned ChatGPT. Their developers used it from home anyway. 27.4% of all content fed into enterprise AI chatbots contains sensitive data (Zscaler).

April 6, 2026 · 9 min
AI Security

Using Cursor & Claude Without Leaking Code

Cursor loads .env files into AI context by default. A financial services firm lost $12M after proprietary trading algorithms were sent to an AI assistant.

April 5, 2026 · 9 min
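The Cursor article's point about `.env` files suggests a quick mitigation worth noting here. A minimal sketch, assuming your Cursor version honors a `.cursorignore` file in the project root (the exact filename and matching rules are an assumption; verify against current Cursor documentation):

```
# .cursorignore — hypothetical exclusion list to keep secrets out of AI context
.env
.env.*
*.pem
*.key
secrets/
```

The syntax mirrors `.gitignore`; anything matched is excluded from the context the assistant can read.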
AI Security

AI Policy Without Technical Controls Fails

77% of employees share sensitive work data with AI tools despite policies prohibiting it. A government contractor pasted FEMA flood-relief applicant data.

April 4, 2026 · 8 min
AI Security

IDE vs Browser: Developer AI Security

Developers use AI in two environments: IDE (Cursor, VS Code) and browser (Claude.ai, ChatGPT). Each requires different controls.

March 31, 2026 · 8 min
AI Security

83% of AI Extensions Are Never Audited

83% of Chrome extensions with broad permissions have never been security-audited (USENIX 2025). 45% of enterprise employees use unapproved extensions.

March 30, 2026 · 8 min
AI Security

39M GitHub Leaks: AI Coding Risk

67% of developers have accidentally exposed secrets in code (GitGuardian 2025). 39 million secrets leaked on GitHub in 2024, up 25% year-over-year.

March 29, 2026 · 8 min
AI Security

Browser DLP: Blocking vs. Anonymization Approaches 2026

Two approaches to browser DLP: blocking prevents PII submission to AI tools; anonymization transforms data before sending. An objective comparison.

March 14, 2026 · 10 min
AI Security

Samsung Lost Source Code to ChatGPT 3 Times

Three separate Samsung engineering teams pasted proprietary code and confidential data into ChatGPT in April 2023. Each incident revealed a different type of exposure.

March 13, 2026 · 9 min
AI Security

Enterprise AI Bans: Productivity vs Risk

27.4% of enterprise AI chatbot content contains sensitive data, a 156% year-over-year increase.

March 9, 2026 · 9 min
AI Security

Safe AI Privacy Extensions in 2026

In January 2026, two malicious Chrome extensions with 900,000+ users were caught exfiltrating ChatGPT and DeepSeek conversations every 30 minutes.

March 8, 2026 · 8 min
AI Security

Browser DLP for ChatGPT, Claude, and Gemini

Traditional enterprise DLP was built for file transfers and email, not AI chatbots. This guide covers browser-native data loss prevention for ChatGPT, Claude, and Gemini.

March 8, 2026 · 12 min
AI Security

900K Users Had Their AI Chats Stolen

Two malicious Chrome extensions stole ChatGPT conversations from 900,000+ users. One had Google's 'Featured' badge.

February 21, 2026 · 6 min
AI Security

AI: The #1 Data Exfiltration Vector

77% of employees paste sensitive data into AI tools. GenAI now accounts for 32% of all corporate data exfiltration. Learn how to protect your organization.

February 17, 2026 · 8 min

Start Protecting Your Data Today

285+ entity types, 48 languages, enterprise-grade security at startup pricing.