Employees are turning to personal AI tools like ChatGPT, Claude, and Gemini to get work done faster. The risk is that sensitive business data can leave the organisation just as quickly, often through unmanaged accounts and without security oversight. This is shadow AI.
In this article, we explain how data leakage happens, and how we built AI Leak Block: a free, open-source browser extension that detects risky AI usage before sensitive data is sent.
How we built a Chrome extension to stop data leakage from AI tools
It’s 11pm and a deadline is looming. Your HR manager needs a quick summary of a complex document and pastes it straight into ChatGPT. It works great. Nobody thinks twice about it.
That’s shadow AI: employees using whatever AI tool works best or fastest, without much thought about where the data goes. A personal ChatGPT account, a new model to try out, whatever is most convenient.
Depending on the provider, account type, region, and settings, data submitted through personal AI accounts may be retained, reviewed, or used to improve AI systems. That sounds harmless at first: the more you chat with an AI, the smarter it becomes. Except when that data is the sensitive personal customer information in your inbox, your entire codebase, or that kilometres-long spreadsheet of customer records.
The scale is larger than most organisations realise. IBM’s 2025 breach report found that one in five companies has suffered a breach tied to shadow AI, and that 97% of those organisations had no AI access controls in place.
Meanwhile, most organisations are still responding with training sessions and policy documents.
Why we built a browser-based approach to shadow AI
We’ve been researching this space for a while. One of our earlier projects was Prompt Injection for Good, where we used prompt injection as a defensive tool. The idea was to embed hidden instructions inside corporate documents, like Confluence pages or internal PDFs, so that when an employee uploads one to a personal AI tool, the LLM reads the hidden prompt and displays a compliance warning instead of just summarising the content.
But what happens when someone just types sensitive information straight into ChatGPT or Claude, without uploading a file at all?
We’d already built free browser extensions for other threats: ClickFix Block for fake CAPTCHA attacks, AitM Block for adversary-in-the-middle phishing kits.
So we thought: what if we tried a similar approach for shadow AI? Intercept the problem in the browser, right where the data is about to leave.
How we detect Shadow AI in the browser
To detect the usage of popular AI platforms, we first needed to understand what happens under the hood when a user chats with an AI agent on them. So we created personal accounts on the most popular platforms and started intercepting the requests made when a user starts chatting.
We quickly noticed that, immediately after a question is submitted to an AI agent, a POST request follows to the platform’s API. See this POST request to ChatGPT as an example:

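In essence, this detection comes down to matching the request method and URL against known chat endpoints. The sketch below illustrates that idea; the endpoint patterns are illustrative examples of what such rules look like, not the extension’s actual (and regularly updated) rule set.

```javascript
// Illustrative chat-endpoint patterns for popular AI platforms.
// These are placeholders for the sketch, not the extension's real rules.
const CHAT_ENDPOINTS = [
  /^https:\/\/chatgpt\.com\/backend-api\/.*conversation/,
  /^https:\/\/claude\.ai\/api\/.*\/completion/,
  /^https:\/\/gemini\.google\.com\/.*Generate/,
];

// A chat message leaving the browser shows up as a POST to one of
// these endpoints; anything else (page loads, telemetry) is ignored.
function isChatPost(method, url) {
  return method === "POST" && CHAT_ENDPOINTS.some((re) => re.test(url));
}
```

Keeping the check to method plus URL pattern means the extension never needs to read the message body just to decide whether a chat is happening.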
However, that isn’t quite enough. After all, a user could be on a corporate licence of ChatGPT, which would be compliant. So we also created business accounts on those platforms that offer them.
We started intercepting requests again, which in the case of Anthropic (Claude) proved successful: their payload contains an ‘org_type’ field, which proved telling enough.
Unfortunately, that trick did not work for the other commonly used LLMs. But after a little digging, we found a method for those too: ChatGPT stores the account type in localStorage, Mistral places it in the DOM (Le Chat Team), and Gemini and Copilot simply use a different domain for business accounts. Simple but effective.
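Put together, the per-platform checks look roughly like the sketch below. The field names, storage keys, and hostnames are illustrative placeholders for the signals described above, not the platforms’ exact values.

```javascript
// Consumer-facing hostnames for platforms whose business users chat on
// a different domain. Illustrative values for the sketch.
const CONSUMER_HOSTS = new Set(["gemini.google.com", "copilot.microsoft.com"]);

function isBusinessAccount(platform, signals) {
  switch (platform) {
    case "claude":
      // Anthropic: the chat payload carries an 'org_type' field.
      // "business" is a placeholder, not Anthropic's literal value.
      return signals.payloadOrgType === "business";
    case "chatgpt":
      // ChatGPT: the account type can be read from localStorage.
      return signals.localStorageAccountType === "business";
    case "mistral":
      // Mistral: a 'Le Chat Team' marker is present in the DOM.
      return signals.domHasTeamMarker === true;
    case "gemini":
    case "copilot":
      // Gemini and Copilot: business users chat on a different
      // domain, so a hostname check is enough.
      return !CONSUMER_HOSTS.has(signals.hostname);
    default:
      // Unknown platform: assume personal, so the request gets checked.
      return false;
  }
}
```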
Importantly, this means we can adapt our rules so that when someone uses an AI business account, we won’t intercept those requests.
When and how does the browser extension block AI requests?
Being on a website isn’t wrong in itself; in this case, chatting is the dangerous part. So we only block requests when a user starts or continues a chat with an AI agent on a popular AI platform.
We intercept the POST request and prevent it from being sent, while displaying a popup nudging the user towards a business account for the approved AI agent. The popup explains why the request was stopped and points to the corporate AI of choice, which can be set in the extension settings.
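One common way to implement this kind of interception from a content script is to wrap the page’s fetch function. The sketch below is a simplified illustration of that pattern under our own assumptions, not the extension’s actual code: `isChatPost` and `showLeakWarning` are hypothetical helpers standing in for the detection rules and the popup UI.

```javascript
// Simplified intercept-and-confirm flow: hold a chat POST, warn the
// user, and only send it if they explicitly dismiss the warning.
// showLeakWarning() is a hypothetical stand-in for the popup UI.
function makeGuardedFetch(realFetch, isChatPost, showLeakWarning) {
  return async function guardedFetch(url, options = {}) {
    const method = (options.method || "GET").toUpperCase();
    if (isChatPost(method, String(url))) {
      const sendAnyway = await showLeakWarning(String(url));
      if (!sendAnyway) {
        // Blocked: the message never leaves the browser.
        // A sketch marker object; a real wrapper would return a Response.
        return { blocked: true };
      }
    }
    // Allowed, or not a chat request: pass through untouched.
    return realFetch(url, options);
  };
}

// In a page context this would be installed roughly as:
//   window.fetch = makeGuardedFetch(window.fetch, isChatPost, showLeakWarning);
```

Writing the wrapper as a factory keeps the decision logic separate from the browser globals, which makes it easy to test outside the extension.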

However, the end user can still dismiss the popup and send the message anyway. There are plenty of cases where personal AI usage on a business laptop isn’t bad at all, like that quick cooking recipe you need when only your work laptop is lying around. So yes, the intervention is interruptive but reversible, to make sure your food won’t overcook ;).
How the extension protects privacy: no logging of user data
We understand that, while reading this article, you may start to ask yourself some questions. If the extension intercepts my communication with AI agents and constantly monitors my POST requests to make sure it doesn’t miss any, how can I know it doesn’t capture anything I don’t want it to?
We can assure you: we don’t know what is going on in your browser. The extension runs entirely locally and does not send any data anywhere. We do not know what you type, what you send, or what else is happening in your browser.
When you install the extension, feel free to look into its source code. We recommend that people inspect the code of (open-source) software they install anyway ;).
How to install and test the AI Leak Block extension
Make sure you’re using the Chrome browser (or Edge) on your desktop or laptop and go to the AI Leak Block browser extension in the Chrome Web Store. On the extension page, click the blue Add to Chrome button (on Edge: click Get). A popup will appear asking for confirmation.
Click Add Extension to complete the installation. Once installed, you’ll see the extension icon appear in the top-right corner of your browser, next to the address bar. If it is still hidden, click the puzzle piece icon next to the address bar and pin the extension so it is always visible. Click the extension to open the menu and set your corporate AI of choice.

Help us improve AI Leak Block: share your feedback
We want people to test the extension and give us feedback. We want to provide everyone with state-of-the-art cyber protection, one step at a time. But since we don’t log anything, we have no way of knowing when something isn’t working as intended. So if you find a problem, please contact us so we can improve our product.
About Eye Security
We are a European cybersecurity company focused on 24/7 threat monitoring, incident response, and cyber insurance. Our research team performs proactive scans and threat operations across the region to defend customers and their supply chains.
This research was conducted by the Eye Security Threat Research Team, dedicated to detecting and disrupting emerging attack techniques across Europe.
Learn more about Eye Security at https://eye.security/ and follow us on LinkedIn to help us spread the word. You are also invited to read our corporate blog for customers and partners about AI Leak Block.