Imagine if your web browser presented a harmless page to security systems specifically looking for trouble, and a malicious page chock-full of it to you. Imagine no more: Cloaking-as-a-Service is here, and it’s employing AI to serve up the attacks you can’t see coming. While you are likely au fait with news headlines warning of web browser vulnerabilities, dodgy extensions and add-ons, and assorted other nefarious security bypasses, the chances are you have never heard of this one, despite it literally being a black and white browser attack. Welcome to the world of AI-powered web page cloaking, where a ‘black’ malicious page is served up to ordinary users while a harmless ‘white’ page goes to the systems designed to protect you. Cloaking-as-a-Service keeps prying eyes from seeing the phishing threats and malware, while you get scammed.
AI-Driven Traffic Filtering — Hiding Security Threats From Those Who Need To See Them
Hackers are employing the same traffic-filtering techniques, now powered by AI, that were once the domain of shady online advertisers and marketers looking to game the system and boost conversion rates. Thanks to Cloaking-as-a-Service platforms, threat actors can combine AI with scripting techniques, all at the click of a button and for a subscription fee, to render the malicious payloads on their websites invisible to security tools while displaying the phishing pages in their full glory to the intended victim.
SlashNext security researchers revealed precisely how the threat works in a July 17 report, and you’d better read this one, as it concerns every web user. In a nutshell, the con works by presenting two different web pages, a black one and a white one, depending on who is looking. The harmless white page is shown to the automated reviewers, scanners and security tools trying to protect you, while the malicious black one gets served up to you, the potential victim.
An ecosystem of Cloaking-as-a-Service platforms is driving the trend, packaging “JavaScript fingerprinting, device and network profiling, machine learning analysis, and dynamic content swapping,” the researchers said, “into user-friendly platforms that anyone (including criminals) can subscribe to.”
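To make that description concrete, here is a minimal sketch, in TypeScript, of the kind of server-side decision logic such a platform could implement. The signal checks, IP prefixes and page contents below are invented for illustration only; the researchers describe the real services as layering JavaScript fingerprinting and machine-learning scoring on top of simple request checks like these.

```typescript
// Simplified illustration of request-time cloaking logic. All signal names,
// thresholds and page contents are hypothetical, not taken from any real service.
import { createServer, IncomingMessage } from "node:http";

// Crude heuristics of the kind a cloaker might use to spot security scanners.
function looksLikeScanner(req: IncomingMessage): boolean {
  const ua = String(req.headers["user-agent"] ?? "").toLowerCase();
  const botTokens = ["bot", "crawler", "spider", "headless", "scan"];
  if (botTokens.some((token) => ua.includes(token))) return true;

  // Real browsers almost always send Accept-Language; many scanners do not.
  if (!req.headers["accept-language"]) return true;

  // Stand-in for network profiling: flag known datacenter/security-vendor ranges.
  const remote = req.socket.remoteAddress ?? "";
  const datacenterPrefixes = ["64.233.", "66.249."]; // illustrative only
  return datacenterPrefixes.some((prefix) => remote.startsWith(prefix));
}

const whitePage = "<html><body><h1>Welcome to our recipe blog!</h1></body></html>";
const blackPage = "<html><body><h1>Your account is locked - sign in now</h1></body></html>";

createServer((req, res) => {
  // Dynamic content swapping: scanners get the benign 'white' page,
  // everyone else gets the malicious 'black' page.
  const body = looksLikeScanner(req) ? whitePage : blackPage;
  res.writeHead(200, { "content-type": "text/html" });
  res.end(body);
}).listen(8080);
```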
And therein lies the rub: the services themselves are not illegal, but some people are using them for illegal purposes. By using the cloaking systems, hackers are able to increase “the effectiveness of phishing sites, fraudulent downloads, affiliate fraud schemes and spam campaigns,” the researchers warned, “which can stay live longer and snare more victims before being detected.”
Mitigating The Invisible Web Browser AI Hack Threat
I have not named the platforms being exploited in this article, although you can read more in the referenced report, for the very reason explained by Andy Bennett, chief information security officer at Apollo Information Systems: “Just like threat actors use encryption, which is a core security technology, as a weapon to hold organizations for ransom,” Bennett said, “it is no surprise that they are taking an approach designed to help opportunistic marketeers target and engage specific audiences and use it to target specific victims or evade detection.”
That said, the threat itself does need to be addressed. It is, as Mayuresh Dani, security research manager at the Qualys Threat Research Unit, said, “a critical evolution in the cyberthreat landscape demanding immediate attention from security teams and organizations.”
Dani recommends the following mitigations:
- Behavioral and runtime analysis tools that can execute dynamic content analysis in real time (a minimal sketch of this approach follows after the list).
- AI-powered defensive solutions that can adapt to evolving attack patterns as needed to combat AI-powered threats.
- Zero-trust principles that continuously validate all network activity, rather than relying on perimeter-based defenses.
- Incident response procedures specifically designed for AI-enhanced threats.
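On that first recommendation, a minimal sketch of what dynamic content analysis can look like in practice is shown below: request the same URL with a scanner-like profile and a browser-like profile, then compare what comes back. A sharp difference between the two responses is a signal the page may be cloaked. The header values, the hypothetical `looksCloaked` helper and the similarity threshold are assumptions for illustration, not features of any specific tool, which would typically render the page in an instrumented browser rather than compare raw HTML.

```typescript
// Sketch of a cloaking check: fetch one URL under two client profiles and diff them.
// Requires Node 18+ for the global fetch API. All values below are illustrative.
async function fetchAs(url: string, profile: Record<string, string>): Promise<string> {
  const res = await fetch(url, { headers: profile, redirect: "follow" });
  return res.text();
}

export async function looksCloaked(url: string): Promise<boolean> {
  const scannerProfile = { "user-agent": "SecurityScanner/1.0" };
  const browserProfile = {
    "user-agent":
      "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0 Safari/537.36",
    "accept-language": "en-US,en;q=0.9",
  };

  const [asScanner, asBrowser] = await Promise.all([
    fetchAs(url, scannerProfile),
    fetchAs(url, browserProfile),
  ]);

  // Crude similarity check: compare response sizes. Real tooling would compare
  // rendered DOM structure, scripts and redirects, not just length.
  const lengthRatio =
    Math.min(asScanner.length, asBrowser.length) /
    Math.max(asScanner.length, asBrowser.length, 1);
  return lengthRatio < 0.5; // responses differ sharply between profiles
}

// Example usage (URL is hypothetical):
// looksCloaked("https://example.com/landing").then((flag) => console.log({ flag }));
```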