It helps to think of browsers having a “line of death” between pixels controlled by the browser and those under the control of a website, and therefore subject to manipulation by malicious actors.

It is getting harder for web users to tell the difference between trusted websites and malicious content, according to a developer working on Google Chrome.

In a blog post, Google engineer Eric Lawrence said that it helps to think of browsers having a “line of death” between pixels controlled by the browser and those that are under the control of a website and therefore subject to manipulation by a malicious actor.

“In web browsers, the browser itself usually fully controls the top of the window, while pixels under the top are under control of the site. I've recently heard this called the line of death,” he said. “If a user trusts pixels above the line of death, the thinking goes, they'll be safe, but if they can be convinced to trust the pixels below the line, they're gonna die.”

Lawrence added that this crucial demarcation isn't explicitly pointed out to the user, and worse than that, it's not an absolute.

He cited an example where chevrons cross over this line of death so that the browser can show extra information, such as whether a connection is secure. Phishers cannot cross the line themselves, but they can fake interface elements that touch it, and most users will fall for a spoofed chevron and notification which, when clicked, serves up malicious content.

But a bigger problem, as far as Lawrence is concerned, is that some attacker data is allowed above the line of death, such as the icon and page title, which are under the attacker's control because it is the attacker's domain name in the address bar. Lawrence said this content may consist entirely of deception and lies.

Another problem is the web content itself. “Nothing in this area is to be believed. Unfortunately, on windowed operating systems, this is worse than it sounds, because it creates the possibility of picture-in-picture attacks, where an entire browser window, including its trusted pixels, can be faked,” he warned.

He said that even defences such as using a custom theme (which would make a fake window stand out by appearing in default colours) wouldn't protect users against such attacks. These attacks also render Extended Validation (EV) certificates pointless, as a spoofed window can fake the green padlock used to denote validated sites. Lawrence said his favourite mitigation for this kind of attack was a proposal that browsers should use PetNames for site identity.

“Not only would they make every HTTPS site's identity look unique to each user, but this could also be used as a means of detecting fraudulent or mis-issued certificates (in a world before we had certificate transparency),” he said.
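The PetNames idea described above can be sketched simply: derive a memorable name from a per-user secret combined with the site's certificate fingerprint, so the same certificate always looks the same to one user but different to everyone else. The word lists, function name and derivation below are illustrative assumptions, not part of any actual proposal or browser implementation.

```python
import hashlib

# Hypothetical word lists -- any two short dictionaries would do.
ADJECTIVES = ["amber", "brisk", "coral", "dusty", "eager", "frosty", "gentle", "hazel"]
ANIMALS = ["otter", "falcon", "badger", "heron", "lynx", "marmot", "puffin", "stoat"]

def petname(user_secret: bytes, cert_fingerprint: bytes) -> str:
    """Derive a stable, per-user 'pet name' for a site's certificate.

    The same certificate always maps to the same name for one user,
    but different users see different names, so an attacker cannot
    craft a certificate that looks familiar to everyone at once.
    """
    digest = hashlib.sha256(user_secret + cert_fingerprint).digest()
    adjective = ADJECTIVES[digest[0] % len(ADJECTIVES)]
    animal = ANIMALS[digest[1] % len(ANIMALS)]
    suffix = digest[2]  # small number appended to reduce collisions
    return f"{adjective}-{animal}-{suffix}"
```

If a familiar site's pet name suddenly changed, the user (or the browser) would have a signal that a different certificate was in play, which is the fraud-detection property Lawrence alludes to.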

However, the line of death has all but gone with the advent of HTML5-based browsers, whose fullscreen capability allows a page to fill the screen without any address bar or browser chrome. He said that the Metro/Immersive/Modern mode of Internet Explorer in Windows 8 suffered from the same problem; because it was designed with a philosophy of “content over chrome”, there were no reliably trustworthy pixels.

“I begged for a persistent trust badge to adorn the bottom-right of the screen (showing a security origin and a lock) but was overruled. One enterprising security tester in Windows built a visually-perfect spoofing site of PayPal, where even the user gestures that displayed the ephemeral browser UI were intercepted and fake indicators were shown. It was terrifying stuff, mitigated only by the hope that no one would use the new mode,” he said.

He added that virtually all mobile operating systems suffer from the same issue. “Due to UI space constraints, there are no trustworthy pixels, allowing any application to spoof another application or the operating system itself,” said Lawrence.

Kevin Bocek, chief cybersecurity strategist at Venafi, told SC Media UK that to have any guarantee of security, users need to be able to trust the sites they are visiting.

“Go back to the first days of the Internet itself and users had no way of knowing whether a website was real or not and if there was any privacy at all. We applied digital certificates to websites to tell us if websites were real, which also turned on encryption to keep things private,” he said.

“This is the technology behind the ‘green padlock' in the browser bar which gives us that guarantee of security. Yet cyber-criminals are trying to exploit the blind trust we have in this green padlock. And so today, unless people can have faith in that green padlock, the problem of not being able to trust the sites that we visit is not going to go away. If we can verify that these certificates are being used correctly, then we've gone a long way towards solving the original cybersecurity problem – the trust issue.”

Giuliano Fasto, senior security consultant at BSI Espion, told SC Magazine that changes and updates in the way websites are produced and displayed make it even harder for users to understand which key pieces of information they need to double-check in order to ensure that a website is legitimate.

“The increased use of smartphone devices to access the Internet further reduces the attention paid to what gets accessed or submitted,” he said.

“Developers and website owners can help users recognise the legitimate website by using an easily recognisable SSL certificate – i.e. one carrying extended validation information, such as the company name – and by providing guidelines to their users on how to correctly check this information. Although this could help limit the likelihood of a phishing attack being successful, the real effort to prevent it is left to the end user.”