This de-anonymization attack is difficult to explain but relatively easy to understand once you have the gist. An attacker needs a few things to get started: a website they control, a list of accounts tied to the people they want to identify as having visited that site, and content posted to the platforms of the accounts on their target list that either allows those accounts to view it or blocks them from viewing it – the attack works both ways.
Next, the attacker embeds that content in the malicious website and waits to see who clicks. If anyone on the target list visits the site, the attacker will know who they are by analyzing which users could (or could not) view the embedded content.
The attack exploits a number of features that most people probably take for granted: Many major services – from YouTube to Dropbox – allow users to host media and embed it on a third-party website. Regular users typically have accounts with these ubiquitous services and, crucially, often stay logged into them on their phones or computers. Finally, these services allow users to restrict access to content hosted on them. For example, you can set up your Dropbox account to share a video privately with one or a few other users. Or you can publicly upload a video to Facebook but block certain accounts from viewing it.
These “blocked” or “allowed” relationships are at the heart of how the researchers found they could unmask identities. In the “allow” version of the attack, for example, hackers quietly share a photo on Google Drive with a Gmail address of potential interest. They then embed the photo on their malicious website and lure the target there. When a visitor’s browser tries to load the photo via Google Drive, the attacker can accurately determine whether that visitor has permission to access the content – that is, whether they control the email address in question.
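The targeting logic behind this step can be illustrated with a small, self-contained simulation. All of the names, functions, and data here are hypothetical stand-ins: in the real attack, the access-control list lives on a sharing service like Google Drive, and the content request happens inside the victim’s logged-in browser.

```python
# Hypothetical simulation of the "allow"-based targeting logic.
# Real attack: the ACL is a sharing service's, and the request runs
# in the visitor's browser. Both are simulated here for illustration.

def build_traps(candidate_emails):
    """Privately share one piece of trap content with each candidate
    account, as with a per-user Google Drive share. Returns a map from
    trap content ID to the single email allowed to view it."""
    return {f"trap-{i}": email for i, email in enumerate(candidate_emails)}

def browser_can_load(trap_acl, content_id, visitor_email):
    """Simulate the visitor's logged-in browser requesting the embedded
    content: it loads only if the visitor controls the allowed account."""
    return trap_acl[content_id] == visitor_email

def identify_visitor(trap_acl, visitor_email):
    """Attacker's inference: whichever trap loads reveals the identity."""
    for content_id, allowed_email in trap_acl.items():
        if browser_can_load(trap_acl, content_id, visitor_email):
            return allowed_email
    return None  # visitor is not on the target list

traps = build_traps(["alice@example.com", "bob@example.com"])
print(identify_visitor(traps, "bob@example.com"))    # bob@example.com
print(identify_visitor(traps, "carol@example.com"))  # None
```

The “block” variant inverts the logic: content is shared publicly but blocked for specific accounts, and a failed load is what identifies the visitor.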
Thanks to existing privacy protections on major platforms, an attacker cannot directly check whether a site visitor was able to load the content. But the NJIT researchers realized they could analyze available information about the target’s browser, and how its processor behaves while the request is occurring, to infer whether the content request was allowed or denied.
The technique is known as a “side-channel attack” because the researchers found they could make this determination accurately and reliably by training machine-learning algorithms to analyze seemingly unrelated data about how the victim’s browser and device process the request. Once the attacker knows that the one user they allowed to view the content has done so (or that the one user they blocked has been denied), they have de-anonymized the site visitor.
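The inference step can be sketched with synthetic data. The real attack trains machine-learning models on low-level browser and processor measurements; here a simple learned threshold on simulated request timings stands in for that classifier, and the assumption that allowed and denied requests take measurably different times is made up for illustration.

```python
# Minimal side-channel sketch on synthetic timings (not real measurements).
# Assumption for illustration: denied requests fail fast, allowed ones
# take longer. A learned threshold stands in for the ML classifier.
import random

random.seed(0)

def measure(allowed):
    """Simulated request-handling time in milliseconds."""
    base = 40.0 if allowed else 25.0
    return base + random.gauss(0, 3)

# Calibration: the attacker times requests whose outcomes they control
# (e.g., against accounts they own), then splits the two distributions.
allowed_mean = sum(measure(True) for _ in range(100)) / 100
denied_mean = sum(measure(False) for _ in range(100)) / 100
threshold = (allowed_mean + denied_mean) / 2

def infer_allowed(timing):
    """Classify one observed timing: was the request allowed?"""
    return timing > threshold

# On well-separated synthetic data, the inference is almost always right.
hits = sum(infer_allowed(measure(True)) for _ in range(100))
hits += sum(not infer_allowed(measure(False)) for _ in range(100))
print(f"{hits}/200 correct")
```

The point of the sketch is the structure, not the numbers: a per-outcome statistical signature, a calibration phase the attacker controls, and a classifier applied to a single observation from the victim.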
As complicated as it sounds, the researchers warn that the attack would be easy to pull off once an attacker has done the prep work. It would take only a few seconds to potentially unmask each visitor to a malicious site – and it would be nearly impossible for an unsuspecting user to detect the attack. The researchers have developed a browser extension that can thwart such attacks, available for Chrome and Firefox. But they note that it may affect performance and is not available for all browsers.
Through a major disclosure process involving numerous web services, browsers, and web standards bodies, the researchers say they have started a broader discussion about how to comprehensively address the problem. At the moment, Chrome and Firefox have not publicly released responses. And Curtmola says fixing the problem at the chip level would require fundamental, and likely infeasible, changes to the way processors are designed. Still, he says collaborative discussions through the World Wide Web Consortium or other forums could eventually produce a broad solution.
“Vendors are trying to see if it’s worth the effort to fix this,” he says. “They have to be convinced that it’s a serious enough problem to invest in solving it.”