OpenAI acknowledges that prompt injection attacks that manipulate AI agents through hidden networks or email instructions remain a persistent security challenge for its ChatGPT Atlas browser and are unlikely to be fully resolved. The company is deploying an LLM (Logical Logic Model)-based automated attacker that is trained using reinforcement learning to internally simulate and discover new attack strategies. While security updates have improved detection capabilities—such as the ability to flag previously malicious emails that tricked agents into sending resignation letters—experts point out that intelligent browsers like Atlas, which possess a certain degree of autonomy and high access to sensitive data, may not yet have reached a risk profile that justifies their daily use value.