AI browsers face hidden cyber risks as OpenAI raises alarm

OpenAI says that attacker access the credentials, confidential information, or regulated data of users
A photo taken on October 4, 2023, in Manta, near Turin, shows a smartphone and a laptop displaying the logos of the artificial intelligence OpenAI research laboratory and ChatGPT robot. — AFP

A photo taken on October 4, 2023, in Manta, near Turin, shows a smartphone and a laptop displaying the logos of the artificial intelligence OpenAI research laboratory and ChatGPT robot. — AFP

OpenAI has issued a renewed cybersecurity warning, cautioning that the growing use of AI-driven browsers and digital assistants could expose users and organisations to serious security risks if proper safeguards are not in place.

Security researchers from universities, cloud service providers, and enterprise security firms have echoed similar concerns, warning that security protections are not evolving as quickly as AI capabilities.

At the centre of the risk are prompt injection attacks. These attacks involve hiding malicious instructions inside seemingly harmless content such as web pages, emails, PDFs, shared documents, or even customer support tickets. 

When an AI assistant processes this content, it may unknowingly follow the attacker’s instructions instead of the user’s original request.

It has been proven that these types of attacks can lead to leaks of sensitive information, manipulation of outputs, bypassing of safety mechanisms, or disclosure of internal system prompts by these AI-assisted tools. 

In more attacks, these agents have been fooled into accessing forbidden files or disclosing confidential information, something that these agents were meant to prevent.

The reason for this increasing concern is that AI browsers work completely differently from conventional browsers. They do not just render content on the screen. They interpret language and act upon it. 

Various AI applications can now automatically fill out forms, schedule meetings, carry out API requests, download internal company documents, and perform third-party operations.

As far as the working context is concerned, these AI systems function with heightened permissions. This results in the increased impact level of a possible attack. Additionally, the attacker may have access to credentials, confidential information, or regulated data.

According to experts, it is hard to detect the need for prompt injection attacks using common security solutions. This is because the payload in a typical malware or phishing attack contains text which might be buried in comments or formatting that do not get viewed by human users but can be processed by AI systems.

Research also shows that even when trained strongly, very few machine-learning algorithms are able to differentiate effectively between trusted and malicious commands when presented in natural language. The potential threats from more autonomous AI assistants can move from flash-in-the-pan errors to massive security breaches.