Disrupting malicious uses of AI | February 2026
OpenAI's February 2026 threat report examines how malicious actors combine AI models with websites and social platforms. The report analyzes implications for detection and defense systems.
Why this matters: Organizations need to understand evolving AI-enabled threats to develop effective security measures and response strategies.
Why we no longer evaluate SWE-bench Verified
OpenAI discontinued evaluation on SWE-bench Verified, citing contamination and measurement flaws that made it an unreliable gauge of real coding progress.
Why this matters: Shows the importance of reliable benchmarks for accurately assessing AI coding capabilities.
OpenAI announces Frontier Alliance Partners
OpenAI launched Frontier Alliance Partners to help enterprises transition AI projects from pilots to production deployments.
Why this matters: Addresses the common challenge of scaling AI implementations from experimental to operational stages.
Advancing independent research on AI alignment
OpenAI is committing $7.5 million to The Alignment Project to fund independent AI alignment research. The funding supports work on AGI safety and security.
Why this matters: This investment could accelerate research into making advanced AI systems safer and more reliable.
Introducing OpenAI for India
OpenAI is expanding AI access in India through local infrastructure development and enterprise support. The initiative aims to advance workforce skills across the country.
Why this matters: This could accelerate AI adoption in one of the world's largest markets and create localized AI solutions.
GPT-5.2 derives a new result in theoretical physics
GPT-5.2 has derived a new result in theoretical physics, proposing a formula for a gluon amplitude. The finding was later formally proved and verified by researchers.
Why this matters: This demonstrates AI's potential to contribute to fundamental scientific discovery and verification.
Introducing Lockdown Mode and Elevated Risk labels in ChatGPT
OpenAI is introducing Lockdown Mode and Elevated Risk labels in ChatGPT. These features are designed to help organizations defend against prompt injection and AI-driven data exfiltration.
Why this matters: This provides new tools for enhancing security when using AI models in sensitive or organizational contexts.
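OpenAI hasn't published the mechanics behind Lockdown Mode, but the class of control is familiar: restrict what an AI-assisted session can reach. A minimal sketch, assuming a hypothetical egress allowlist (all names are illustrative), shows the shape of such a defense against AI-driven data exfiltration:

```python
# Illustrative sketch only: an egress allowlist of the kind a "lockdown"
# policy might enforce before an AI session is allowed to fetch a URL.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"wiki.internal.example.com"}  # assumption: org-approved hosts

def permit_fetch(url: str, lockdown: bool) -> bool:
    """Allow outbound fetches only to approved hosts while locked down."""
    host = urlparse(url).hostname or ""
    return (not lockdown) or host in ALLOWED_HOSTS

# An injected instruction pointing at an attacker's server gets blocked:
assert permit_fetch("https://wiki.internal.example.com/doc", lockdown=True)
assert not permit_fetch("https://attacker.example/exfil?d=secret", lockdown=True)
```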
Scaling social science research
OpenAI released GABRIEL, an open-source toolkit that uses GPT to convert qualitative text and images into quantitative data. This tool is designed to help social scientists analyze research at scale.
Why this matters: Provides researchers with automated tools to process large volumes of qualitative data more efficiently.
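The snippet below is not GABRIEL's API; it is a minimal sketch of the underlying technique, using a chat model as a rater that turns free text into a number (the model choice, prompt, and scale are assumptions):

```python
# Minimal sketch of LLM-as-rater: convert qualitative text into a numeric
# score. This shows the general technique, not GABRIEL's actual interface.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def rate_optimism(passage: str) -> int:
    """Score how optimistic a passage is, from 0 to 10."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works
        messages=[
            {"role": "system",
             "content": "Rate the optimism of the user's text on a 0-10 "
                        "scale. Reply with the integer only."},
            {"role": "user", "content": passage},
        ],
    )
    return int(response.choices[0].message.content.strip())

print(rate_optimism("The harvest was the best in a decade."))
```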
Beyond rate limits: scaling access to Codex and Sora
OpenAI developed a real-time access system that combines rate limits, usage tracking, and credits to keep access to Sora and Codex continuous as demand scales.
Why this matters: Enables more reliable and predictable access to advanced AI tools for developers and organizations.
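OpenAI hasn't detailed the implementation, but a minimal sketch of the general pattern, a token bucket backed by a credit balance, illustrates how rate limits and credits can combine so requests degrade gracefully instead of failing at a hard cap (all names and parameters are illustrative):

```python
# Sketch of the general pattern (our assumption, not OpenAI's implementation):
# a token bucket for the rolling rate limit, backed by a credit balance so
# requests past the limit draw down credits instead of failing outright.
import time

class CreditBucket:
    def __init__(self, rate_per_s: float, burst: float, credits: float):
        self.rate, self.burst = rate_per_s, burst
        self.tokens, self.credits = burst, credits
        self.last = time.monotonic()

    def try_request(self, cost: float = 1.0) -> bool:
        # Refill the bucket for the time elapsed since the last request.
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:      # within the rate limit: spend bucket tokens
            self.tokens -= cost
            return True
        if self.credits >= cost:     # over the limit: spend purchased credits
            self.credits -= cost
            return True
        return False                 # out of both; the request is rejected
```

The notable design choice is the fallback order: the bucket absorbs bursts at no cost, and credits only start draining once sustained usage exceeds the rate.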
Introducing GPT-5.3-Codex-Spark
OpenAI introduces GPT-5.3-Codex-Spark, a real-time coding model with faster generation and improved context handling.
Why this matters: This update may impact developers and users of ChatGPT Pro, offering enhanced coding capabilities.
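For developers, a real-time model is mostly about streaming latency. Here is a short sketch using the OpenAI Python SDK's standard streaming interface; the model identifier is taken from the announcement and may differ from the actual API name:

```python
# Sketch: streaming a coding response token by token for low latency.
# Uses the OpenAI Python SDK's standard streaming interface; the model
# identifier below is taken from the announcement and may differ in the API.
from openai import OpenAI

client = OpenAI()
stream = client.chat.completions.create(
    model="gpt-5.3-codex-spark",  # assumption: actual identifier may vary
    messages=[{"role": "user",
               "content": "Write a Python function that reverses a linked list."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```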
Harness engineering: leveraging Codex in an agent-first world
OpenAI describes harness engineering: designing the tooling, context, and feedback loops that surround Codex so coding agents can work effectively.
Why this matters: In an agent-first world, the harness around a model can matter as much as the model itself for how useful an agent is in practice.
Testing ads in ChatGPT
OpenAI starts testing ads in ChatGPT with user control and clear labeling.
Why this matters: This update affects users' experience and the sustainability of free access to ChatGPT.
Bringing ChatGPT to GenAI.mil
OpenAI has deployed a custom ChatGPT on GenAI.mil for secure AI use by U.S. defense teams.
Why this matters: It shows frontier AI tools being adapted to the security requirements of government and defense environments.
Making AI work for everyone, everywhere: our approach to localization
OpenAI shares its approach to AI localization, adapting frontier models to local languages, laws, and cultures without compromising safety.
Why this matters: This approach aims to make AI more accessible and usable for people worldwide.