
Ghosts in the Cloud: Hijacking Orphaned Azure Blob Storage

Feb 26, 2026
By: Jotte Sonneveld and Jacob Virsilas

Before we get to it, a small disclaimer: the majority of this research was performed in 2025, but responsible disclosure and life in general have forced us to delay publishing it. Nevertheless, we believe this research has not lost its relevance and has only gained importance.

This research started with a curiosity about forgotten cloud storage and ended with high-impact disclosures. In this blog, we uncover hijackable scripts on government websites and Microsoft domains, ghosted PowerShell scripts, and millions of requests to long-abandoned Azure Blob Storage accounts.

Earlier this year we watched Watchtowr uncover a massive supply chain threat, which made us shiver. Inspired by the blog, scared by the possibilities, and armed with curiosity and a mission to protect our customers, we started to dig within our own realm of possibilities.

But instead of abandoned AWS S3 Buckets which Watchtowr already covered, we decided to research their Microsoft equivalent: Azure Blob Storage accounts.

We fear this is going to be one of those series that can continue for much longer than desired, but let’s take a look.

What is Azure Blob Storage?

For those unfamiliar with all the A-Z of the Microsoft Suite, Azure Blob Storage is defined as follows:

>> Massively scalable and secure object storage for cloud-native workloads, archives, data lakes, high-performance computing, and machine learning.

This sounds like a great option for storing data and files that should be globally accessible and updated from time to time. And as we found out during this research, these storage accounts are often referenced within websites and applications to host content. Those files are then loaded onto your computer when you open a webpage or execute an application.

Azure Blob Storage account names are globally unique, like domain names, but are hosted on a Microsoft subdomain: {storage_account_name}.blob.core.windows.net. However, there are almost no additional costs and no central registrar involved, so claiming massive amounts of Azure storage account names is almost free of charge. You might see where this is going.
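To make the naming scheme concrete: storage account names are 3–24 characters of lowercase letters and digits, and the public blob endpoint is derived directly from the name. A minimal sketch (the function names are ours, for illustration only):

```javascript
// Azure storage account names: 3-24 characters, lowercase letters and digits.
// The public blob endpoint is derived directly from the account name.
function isValidAccountName(name) {
  return /^[a-z0-9]{3,24}$/.test(name);
}

function blobEndpoint(name) {
  if (!isValidAccountName(name)) {
    throw new Error(`invalid storage account name: ${name}`);
  }
  return `https://${name}.blob.core.windows.net`;
}

console.log(blobEndpoint('contoso123'));
// → https://contoso123.blob.core.windows.net
```

Because the namespace is flat and global, whoever registers a given name anywhere in Azure owns that hostname, and every stale reference to it.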

Claiming storage accounts in bulk

As part of our ongoing threat hunting efforts, we set out to identify Azure Blob Storage accounts that were still being actively referenced, despite no longer being maintained. Once we had a large pool of candidate account names, we used custom scripts to check their registration status in bulk against Azure APIs. The process was surprisingly efficient: verifying whether a storage account exists is free, and claiming an unregistered one costs nothing. So, we scaled up fast.

In total, we checked more than 15,000 blob storage account names for availability and identified and registered 621 that were abandoned, meaning about 4% of all account names we checked were abandoned (!). We observed active incoming requests on about half of these storage accounts.

The moment a storage account name was registered to our Azure tenant, we began logging all incoming traffic via Azure Log Analytics. This gave us detailed insight into what kinds of files were being requested, from which IP addresses, by what user agents, and with what referrer headers.

It wasn’t long before patterns emerged, and so did risks.

Getting insights using access logs

After registering the Azure storage accounts, we were instantly swamped with millions of requests per day, showing we were onto something!

These requests themselves already pose security risks, as many of them exposed API keys and other secrets in their referrer headers.
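To give an idea of what such leakage looks like: a referrer URL can carry credentials as query parameters, which then land verbatim in the logs of whoever controls the requested host. A hypothetical scanner (the parameter names are illustrative, not an exhaustive list):

```javascript
// Flag referrer headers whose query string appears to carry secrets.
// The parameter names below are illustrative examples, not a complete list.
const SECRET_PARAMS = /[?&](api[_-]?key|access[_-]?token|token|sig|sas)=([^&#]+)/gi;

function findLeakedSecrets(referrer) {
  const hits = [];
  for (const m of referrer.matchAll(SECRET_PARAMS)) {
    hits.push({ param: m[1], value: m[2] });
  }
  return hits;
}
```

Anything this flags in an outbound referrer is effectively being handed to a third party on every request.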

The entities referring to resources controlled by us didn’t look too shabby either:

But more on that later! Of course behind every request there is a machine — or an LLM crawler bot — waiting to process a file. For our dataset the top 10 types of files requested during an arbitrary week were as follows:

There is quite a variety in the types of files being requested, as well as in the origin of the requests as fingerprinted by the originating IP and User-Agent. Many files could be used to deface or discredit websites by altering the CSS or images being loaded, or even to serve malicious .apk and .zip files to applications requesting resources. One of the possibilities we uncovered through control of an Azure Blob Storage account was a Remote Code Execution opportunity in Windows Update Health Tools, which is explained in further detail in this write-up. For now, however, we are mainly looking at direct-impact opportunities, while still figuring out who and what all the people and devices are that try to load resources from our storage accounts. Let’s take a look at the types of resources that are executed straight away.

RCE via PowerShell (.ps1)

The most interesting requests we observed were related to PowerShell (.ps1) files being downloaded from a single blob storage account we owned. Analysing the requests to this particular account revealed almost 10,000 unique daily requests from thousands of different IP addresses, which quite frankly amazed us.

This means that by registering this single blob storage account, we had the opportunity to instantly execute code on thousands of devices.

With some OSINT magic, our team managed to contact the original owner of the blob storage account and responsibly returned ownership of the storage to them as soon as possible, accompanied by some security advice about IT hygiene, which to their credit they took very seriously and acted on right away. We decided not to share any specifics about this publicly.

Giving storage accounts back is hard!

After our first responsible disclosure action with PowerShell, we learned that it is impossible to transfer a storage account to another party’s tenant, even through Microsoft Support. The only way is to delete the storage account, wait an arbitrary number of weeks, and then hope that the other side manages to claim the name first.

This is exactly how we transferred storage accounts to most original owners during our research. The only exception is transferring back a storage account to Microsoft itself, which we will cover later. 😀

Moving to JavaScript

Another popular type of file being requested from our storage accounts was JavaScript (.js). As you might know, this can allow code execution within browsers. Let us start off by taking a look at some example files being requested.

A table displaying data from an Azure storage blob including columns for URI, operation name, caller IP address, user agent header, and referrer header. The table contains several entries, with sensitive information partially obscured.

We were initially stunned by the variety and familiarity of the referrer headers, which showed us which websites attempted to include our JavaScript files. With half a million requests per week for a long-forgotten JavaScript file, we couldn’t help but peek at how and where it was being used. We quickly learned that many popular websites and SaaS tools try to include our JavaScript files. 🤯

By serving our own JavaScript files (secured with IP allowlisting, so they were only served to our own IPs), we obtained shockingly sensitive results: in most apps and websites we had access to the entire DOM, including valid session cookies, local storage containing PII, and a full over-the-shoulder view of someone thoughtlessly browsing their favorite websites.

# heavily redacted and filtered full DOM contents
# captured only our own session via IP allowlisting
{"url":"<REDACTED>","cookies":"HostGUID=<REDACTED>; \"accessToken\":\"eyJh<REDACTED>\",\"idToken\":\"eyJh<REDACTED>\"}","oidc.user:https://auth.<REDACTED>":" [...]

The implications of controlling JavaScript on trusted websites are hard to overstate. With full access to the DOM, attackers could invisibly modify site content, inject malicious UI overlays like fake CAPTCHA prompts, or silently harvest credentials: techniques commonly seen in modern infostealer campaigns. This kind of attack surface opens the door to session hijacking, phishing overlays, or even full endpoint compromise via social engineering. When the script is loaded from a domain that users (and browsers) inherently trust, like a previously legitimate blob storage URL, it becomes nearly invisible, making this an ideal vector for large-scale SaaS abuse.

From Microsoft, With Love

Ironically, we were also able to use Microsoft’s own abandoned storage account to inject JavaScript into their portals, because of an orphaned blob storage still being referenced in a live webpage.

Screenshot of a web browser console displaying a message related to an MSRC (Microsoft Security Response Center) report, alongside debugging information from a webpage.

And this wasn’t some legacy website: the number of unique requests was again in the thousands.

A table displaying referrer headers and corresponding counts of caller IP addresses, indicating traffic sources to the logged Azure blob.

Of course, this issue was responsibly disclosed to Microsoft ASAP; they swiftly removed the faulty references to the abandoned storage account and acknowledged us with MSRC credits. We decided not to share any more details about this particular vulnerability and its implications, although we believe our signal is clear.

More JavaScript: Supply chain version

As our storage account monitoring continued, we began noticing requests from high value targets: governmental domains, research institutions, and cybersecurity-related websites.

One hit, in particular, stood out: Ghidra, the widely used reverse engineering tool developed by the NSA. We traced a request back to the domain https://ghidra-sre.org, which now redirects to GitHub. Based on the filename and parameters involved, we couldn’t immediately trace the JavaScript reference. But after comparing similar requests, we tracked it down to a video player script embedded on the homepage. Under the right browser and OS conditions, it still tried to load a JavaScript file from one of our blob storage accounts.

To confirm the impact, we injected a non-intrusive JavaScript snippet (allowlisted to be accessible from our IP only) that logged its execution in the browser console. The result? A clean proof of stored XSS on a tool downloaded by thousands of analysts and researchers worldwide.

Now imagine the what-ifs. This access could be used to manipulate download links, tamper with hash displays, or embed fake interface elements to steal credentials. With that much trusted traffic, even a basic website overlay could scale into something far more dangerous. All of this enabled by a few unclaimed storage accounts and a couple of bucks in log storage.

Many, many more miscellaneous findings

Hopefully it’s now clear that we are on to something. It is not only executable files that are being requested at scale from abandoned accounts; the majority of inbound requests are for static files like media (JPG/MP4) and JSON configs. We did spend significant time trying to assess the impact of these miscellaneous requests, with the goal of responsibly notifying third parties at scale. But it was simply too much to deal with, as there are thousands of third parties, each with their own communication channels and findings.

And yes, this random example is one of the most popular websites on the internet. We tried to get in contact with the company, but never got a response. So we decided to keep the storage accounts for ourselves for now, to make sure no one else can exploit this vulnerability in the future.

It became clear to us how time-consuming responsible disclosure had become at this stage. It literally felt like we were manually cleaning up the internet.

Scaling Responsible Disclosure

With hundreds of blob storage accounts under our control and thousands of requests pouring in from unknown origins, manually notifying every third party simply wasn’t feasible. Instead of embarking on months of cold outreach to scattered email addresses, we opted for a scalable, responsible approach.

For any blob storage accounts linked to potentially sensitive content or active malware delivery, we either sink-holed the traffic or returned ownership to verified original parties where possible. In all other cases, we served a harmless JavaScript snippet that logged a clear message to the browser console, aimed at alerting developers and security teams who might be investigating.

/* START - Proof used to responsibly disclose a vulnerability. Please check the email we sent about it with the subject "Abandoned Storage". */
(function () {
    console.log('Abandoned Azure blob storage used, please fix asap: ', { url: window.location.href });
})();
/* END - Proof used to responsibly disclose a vulnerability. Please check the email we sent about it with the subject "Abandoned Storage". */

This method allowed us to responsibly notify affected parties at scale, without risk, disruption, or ambiguity. If you saw this message in your console and received an email titled “Abandoned Storage”, that was us.

The internet feels broken to us

By proactively claiming these storage accounts, we were able to neutralize risks across hundreds of abandoned storage accounts, redirect malicious traffic, and, where possible, transfer ownership back to verified original maintainers. But make no mistake: the broader threat isn’t going anywhere and will only get bigger.

References to forgotten infrastructure, whether cloud storage, domains, or public buckets, continue to quietly power parts of the internet. As long as legacy links remain embedded in websites, apps, and installers, attackers will keep scanning for ways to hijack them. So in the meantime we will keep registering those legacy accounts, to safeguard the internet to the best of our abilities.

So far, we’ve registered 621 abandoned storage accounts and scanned thousands of candidates. Several have already been handed back to their rightful owners, and we expect this number to grow as our research continues.

Key takeaways

What can you do? For your own infrastructure and websites: practice proper asset management and periodically review your domains and cloud infrastructure. If you encounter an external reference within a website or script that has stopped working, the referenced name may be up for grabs by an adversary.
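For that periodic review, even a simple pass over your deployed HTML and scripts helps. A small regex-based sketch (a first pass only, not a full parser) that collects the storage account names your pages reference, so each can then be checked for ownership and resolvability:

```javascript
// First-pass audit: pull Azure Blob Storage account names out of HTML or
// script source. Each extracted name should then be verified: do you (or a
// party you trust) still own it, and does it still resolve?
const BLOB_URL = /https?:\/\/([a-z0-9]{3,24})\.blob\.core\.windows\.net[^\s"'<>]*/g;

function extractBlobReferences(source) {
  const refs = new Map(); // account name -> first referencing URL seen
  for (const m of source.matchAll(BLOB_URL)) {
    if (!refs.has(m[1])) refs.set(m[1], m[0]);
  }
  return refs;
}
```

Feeding this the output of a crawl of your own sites gives you the inventory that, in our experience, most organisations are missing.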

Beware of supply chain attacks; these are not limited to your direct company relationships, but also extend to the software and websites used within your organisation.

And most importantly: accept that attacks like account hijacking and remote code execution will happen sooner or later. Assume breach, but make sure that you have the tools and processes in place to detect and respond to breaches 24/7. If you don’t, you might need some help from a security partner to unlock this capability for when it really matters.