CopyPasta license attack: Hidden prompt-injection that could taint AI-written code at Coinbase

HiddenLayer warns of a ‘CopyPasta license attack’ that hides malicious prompts in README/LICENSE files, risking AI-generated code at exchanges such as Coinbase.

Cybersecurity firm HiddenLayer this week disclosed a novel prompt-injection exploit that can poison AI coding assistants by hiding malicious instructions inside markdown comments in common project files such as README.md or LICENSE.txt. Researchers call the tactic a CopyPasta license attack.

The attack is notable because many code assistants treat license or documentation files as authoritative context. If an assistant accepts the embedded ‘license’ text as legitimate, it can replicate the hidden instructions into any files it creates or edits—effectively propagating the payload through a repository without a developer’s explicit action.

HiddenLayer demonstrated the technique against Cursor, an AI coding tool reported to be widely used at Coinbase. Researchers showed how the exploit could trick an assistant into inserting backdoors, siphoning credentials, or adding commands that exhaust resources, all while the malicious content appears as harmless comments or documentation.

Because the injected prompts are disguised inside standard project files, traditional malware scanners and human code reviews can miss them. Unlike earlier prompt‑injection concepts that required user interaction, CopyPasta spreads semi‑autonomously: any AI agent that reads the infected file becomes a new vector.

Coinbase CEO Brian Armstrong has said AI writes a large share of the exchange’s code—around 40%—though he notes AI-generated work is focused on interfaces and non‑critical backends. That reliance raises the stakes: if coding assistants are compromised, supply‑chain style contamination could reach production systems.

Security teams should scan repositories for hidden markdown comments, treat all untrusted inputs used in LLM contexts as potentially malicious, and require manual review of AI-generated changes in sensitive areas. HiddenLayer urges organizations to add systematic detection and gating before prompt-based attacks can scale.

Source: HiddenLayer. Read the original coverage for full details.

Total
0
Shares
Leave a Reply

Your email address will not be published. Required fields are marked *

Related Posts