Off-The-Shelf Image-to-Image Models Are All You Need To Defeat Image Protection Schemes
A study found that off-the-shelf image-to-image AI models can effectively strip the protective perturbations that several existing image protection schemes add to images. This defeats those schemes' goal of preventing misuse.
Why this matters: Reveals a critical vulnerability in current image protection methods, necessitating stronger security benchmarks.
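The attack works because protection schemes embed small, high-frequency perturbations, and image-to-image models regenerate an image from its underlying content, discarding that fine-grained signal. As a toy analogue (not the study's actual method, which uses pretrained image-to-image models), a simple smoothing "purifier" in NumPy shows why low-amplitude perturbations are easy to wash out of smooth image content; all array names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "clean" image: a smooth horizontal gradient.
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))

# Small adversarial-style protective perturbation added on top.
protected = np.clip(clean + 0.05 * rng.standard_normal((64, 64)), 0.0, 1.0)

def box_smooth(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Mean-filter the image with a k x k box kernel (edge-padded)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# "Purify" the protected image; the smooth content survives,
# the high-frequency perturbation is largely averaged away.
purified = box_smooth(protected)

mse_before = float(np.mean((protected - clean) ** 2))
mse_after = float(np.mean((purified - clean) ** 2))
```

A real image-to-image model is a far stronger purifier than a box filter, since it reconstructs plausible image detail rather than blurring it, which is why the paper's attack defeats perturbation-based protections so broadly.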
Disrupting malicious uses of AI | February 2026
OpenAI's February 2026 threat report examines how malicious actors pair AI models with websites and social platforms, and analyzes the implications for detection and defense systems.
Why this matters: Organizations need to understand evolving AI-enabled threats to develop effective security measures and response strategies.
Introducing Lockdown Mode and Elevated Risk labels in ChatGPT
OpenAI is rolling out two new ChatGPT features: Lockdown Mode and Elevated Risk labels. Both are designed to help organizations defend against prompt injection attacks and AI-driven data exfiltration.
Why this matters: This provides new tools for enhancing security when using AI models in sensitive or organizational contexts.