Learnings from COBOL modernization in the real world
AWS shared learnings from real-world COBOL modernization projects. Successful modernization requires both reverse engineering to create specs and forward engineering with AI coding assistants.
Why this matters: Offers practical guidance for organizations modernizing legacy systems, a common challenge in enterprise IT.
Reinforcement fine-tuning for Amazon Nova: Teaching AI through feedback
AWS describes reinforcement fine-tuning for Amazon Nova models, which customizes AI through feedback rather than imitation. The post covers applications from code generation to customer service and implementation options.
Why this matters: This technique could help enterprises better tailor AI models to specific business needs without extensive labeled data.
Nano Banana 2: Combining Pro capabilities with lightning-fast speed
Google DeepMind released Nano Banana 2, an image generation model with advanced capabilities and fast processing. The model offers production-ready specifications and subject consistency.
Why this matters: Provides a faster, more capable tool for content creation and visual design applications.
Pacific Northwest National Laboratory and OpenAI partner to accelerate federal permitting
Pacific Northwest National Laboratory and OpenAI introduced DraftNEPABench, a benchmark for AI coding agents in federal permitting. The tool shows potential to reduce NEPA drafting time by up to 15%.
Why this matters: Demonstrates how AI could accelerate infrastructure project reviews by automating regulatory documentation.
OpenAI Codex and Figma launch seamless code-to-design experience
OpenAI Codex and Figma launched an integration connecting code and design workflows. The tool aims to help teams iterate and ship products faster.
Why this matters: Streamlines collaboration between developers and designers, potentially reducing development cycles.
Efficiently serve dozens of fine-tuned models with vLLM on Amazon SageMaker AI and Amazon Bedrock
AWS demonstrates how to efficiently serve multiple fine-tuned models using vLLM on Amazon SageMaker and Amazon Bedrock. The implementation includes kernel-level optimizations for Mixture of Experts models.
Why this matters: This enables enterprises to deploy and manage multiple specialized AI models cost-effectively while maintaining performance.
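For a sense of how many fine-tunes can share one engine, here is a minimal sketch of vLLM's multi-LoRA serving, assuming the fine-tuned models are LoRA adapters; the base model name and adapter paths are placeholders rather than the setup from the AWS post.

```python
# Minimal sketch: serving several LoRA fine-tunes from one vLLM engine.
# Base model and adapter paths are placeholders, not the AWS post's setup.
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# One base model is loaded once; adapters are attached per request.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", enable_lora=True, max_loras=8)

params = SamplingParams(temperature=0.2, max_tokens=256)

# Route a request to a specific fine-tuned adapter via a LoRARequest.
outputs = llm.generate(
    ["Summarize the customer ticket: ..."],
    params,
    lora_request=LoRARequest("support-summarizer", 1, "/opt/ml/model/adapters/support"),
)
print(outputs[0].outputs[0].text)
```

Because only adapter weights differ between models, dozens of variants can be hot-swapped against a single copy of the base weights instead of running one endpoint per fine-tune.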
Building intelligent event agents using Amazon Bedrock AgentCore and Amazon Bedrock Knowledge Bases
AWS shows how to build intelligent event agents using Amazon Bedrock AgentCore and Knowledge Bases. The system maintains attendee preferences and provides personalized experiences.
Why this matters: This provides a template for organizations to create scalable, personalized AI assistants without extensive custom infrastructure development.
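As a rough illustration of the retrieval side of such an agent, the sketch below queries a Bedrock Knowledge Base with boto3; the knowledge base ID and query text are placeholders, and the post's agent orchestration layer is omitted.

```python
# Minimal sketch: retrieving session recommendations from a Bedrock Knowledge
# Base, the kind of call an event agent would wrap. IDs and query text are
# illustrative placeholders.
import boto3

kb_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = kb_runtime.retrieve(
    knowledgeBaseId="KB1234567890",  # placeholder knowledge base ID
    retrievalQuery={"text": "hands-on sessions about serverless for a Python developer"},
    retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 5}},
)

for result in response["retrievalResults"]:
    print(result["content"]["text"], result.get("score"))
```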
Recovered in Translation: Efficient Pipeline for Automated Translation of Benchmarks and Datasets
Researchers present an automated framework for translating AI benchmarks and datasets while preserving quality. The method addresses semantic drift and context loss in existing translations.
Why this matters: Accurate multilingual benchmarks are essential for properly evaluating AI models across different languages and regions.
SumTablets: A Transliteration Dataset of Sumerian Tablets
Researchers released SumTablets, a dataset of 91,606 Sumerian cuneiform tablets that pairs glyphs with their transliterations. This addresses a gap that previously hindered NLP applications to Sumerian texts.
Why this matters: Enables computational analysis of ancient Sumerian, potentially accelerating historical and linguistic research.
Off-The-Shelf Image-to-Image Models Are All You Need To Defeat Image Protection Schemes
A study found that off-the-shelf image-to-image AI models can effectively remove protective perturbations from images. This defeats multiple existing image protection schemes designed to prevent misuse.
Why this matters: Reveals a critical vulnerability in current image protection methods, necessitating stronger security benchmarks.
Disrupting malicious uses of AI | February 2026
OpenAI's February 2026 threat report examines how malicious actors combine AI models with websites and social platforms. The report analyzes implications for detection and defense systems.
Why this matters: Organizations need to understand evolving AI-enabled threats to develop effective security measures and response strategies.
Train CodeFu-7B with veRL and Ray on Amazon SageMaker Training jobs
AWS published a technical guide for training the CodeFu-7B model with the veRL reinforcement learning framework and Ray. The process runs as distributed training jobs on SageMaker.
Why this matters: It provides a replicable framework for organizations to train specialized, large-scale AI models efficiently.
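As a hedged sketch of what launching such a job could look like with the SageMaker Python SDK (the post's actual veRL and Ray configuration is not reproduced; the entry point, role, instance types, and S3 paths are placeholders):

```python
# Minimal sketch: launching a multi-node SageMaker training job that could run
# a veRL + Ray RL recipe. Entry point, framework versions, instance types, and
# S3 URIs are placeholders, not the ones from the AWS post.
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train_codefu_ppo.py",   # hypothetical launcher starting Ray + veRL
    source_dir="src",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder
    instance_count=2,
    instance_type="ml.p4d.24xlarge",
    framework_version="2.1",
    py_version="py310",
    distribution={"torch_distributed": {"enabled": True}},
    hyperparameters={"base_model": "CodeFu-7B", "algorithm": "ppo"},
)

estimator.fit({"train": "s3://my-bucket/codefu/rl-prompts/"})  # placeholder S3 URI
```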
Generate structured output from LLMs with Dottxt Outlines in AWS
An AWS blog post details a method for generating structured outputs from large language models. The approach uses the Dottxt Outlines framework within SageMaker.
Why this matters: This enables more reliable integration of LLMs into applications that require consistent data formats.
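For orientation, here is a minimal sketch of constrained JSON generation with Outlines using its pre-1.0 interface; the model name and schema are illustrative, and the exact API differs in newer Outlines releases.

```python
# Minimal sketch: constraining an LLM to emit JSON matching a Pydantic schema
# with Outlines (pre-1.0 interface). Model name and schema are illustrative.
from pydantic import BaseModel
import outlines

class SupportTicket(BaseModel):
    category: str
    priority: int
    summary: str

# Load a small instruct model through the transformers backend.
model = outlines.models.transformers("microsoft/Phi-3-mini-4k-instruct")

# Generation is constrained so the output always validates against the schema.
generator = outlines.generate.json(model, SupportTicket)
ticket = generator("Extract a support ticket from: 'The checkout page times out for EU users.'")
print(ticket.category, ticket.priority, ticket.summary)
```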
Global cross-Region inference for latest Anthropic Claude Opus, Sonnet and Haiku models on Amazon Bedrock in Thailand, Malaysia, Singapore, Indonesia, and Taiwan
AWS has expanded global cross-Region inference for Anthropic's Claude models to five Asia-Pacific markets. The announcement includes technical implementation guidance and quota management best practices.
Why this matters: Enables enterprises in these regions to deploy Claude models with improved resilience and lower latency for AI applications.
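As an illustration, invoking a global cross-Region inference profile looks much like a regular Bedrock Converse call; the profile ID below is illustrative, so check the Bedrock console or documentation for the exact identifier available in your Region.

```python
# Minimal sketch: calling Claude through a global cross-Region inference
# profile on Bedrock. The profile ID is illustrative, not confirmed.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="ap-southeast-1")

response = bedrock.converse(
    modelId="global.anthropic.claude-sonnet-4-5-20250929-v1:0",  # illustrative profile ID
    messages=[{"role": "user", "content": [{"text": "Summarize our Q3 results in Thai."}]}],
    inferenceConfig={"maxTokens": 512},
)
print(response["output"]["message"]["content"][0]["text"])
```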
Introducing Amazon Bedrock global cross-Region inference for Anthropic's Claude models in the Middle East Regions (UAE and Bahrain)
AWS has launched global cross-Region inference for Anthropic's Claude AI models in the UAE and Bahrain. The post details model capabilities and resilience benefits, and includes implementation code.
Why this matters: Allows Middle Eastern businesses to build generative AI applications with enhanced performance and reliability.
Scaling data annotation using vision-language models to power physical AI systems
Bedrock Robotics uses vision-language models to analyze construction footage and generate labeled datasets for autonomous equipment training. This collaboration with AWS aims to scale data annotation.
Why this matters: Automates labor-intensive data preparation for physical AI systems in industrial settings.
A Very Big Video Reasoning Suite
Researchers introduced a large-scale dataset and benchmark for evaluating video reasoning in AI models. The suite aims to systematically study capabilities like understanding continuity and causality in videos.
Why this matters: Provides tools to measure and improve AI's ability to reason about dynamic visual scenes.
How Sonrai uses Amazon SageMaker AI to accelerate precision medicine trials
Sonrai implemented an MLOps framework using Amazon SageMaker AI for precision medicine trials. The system maintains traceability and reproducibility required in regulated healthcare environments.
Why this matters: Enables compliant AI deployment in regulated clinical trial settings.
Accelerating AI model production at Hexagon with Amazon SageMaker HyperPod
Hexagon scaled AI model production by pretraining segmentation models using Amazon SageMaker HyperPod infrastructure. This collaboration with AWS accelerated their model development pipeline.
Why this matters: Reduces infrastructure management overhead for enterprise AI model training.
Agentic AI with multi-model framework using Hugging Face smolagents on AWS
The Hugging Face smolagents library integrates with AWS services to build agentic AI solutions. The demonstration includes a healthcare agent with multi-model deployment and clinical decision support capabilities.
Why this matters: Simplifies development of specialized AI agents for domain-specific applications.
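For a feel of the building blocks, here is a minimal smolagents sketch with a single tool and a Bedrock-hosted model reached through LiteLLM; the tool, model ID, and prompt are illustrative rather than the healthcare agent from the AWS post.

```python
# Minimal sketch: a smolagents CodeAgent with one tool and a Bedrock model via
# LiteLLM. Tool logic, model ID, and prompt are illustrative placeholders.
from smolagents import CodeAgent, LiteLLMModel, tool

@tool
def lookup_drug_interactions(drug_a: str, drug_b: str) -> str:
    """Return a short note on known interactions between two drugs.

    Args:
        drug_a: First drug name.
        drug_b: Second drug name.
    """
    # Placeholder logic; a real agent would query a clinical database here.
    return f"No interaction data loaded for {drug_a} + {drug_b}."

model = LiteLLMModel(model_id="bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0")
agent = CodeAgent(tools=[lookup_drug_interactions], model=model)

print(agent.run("Check whether ibuprofen and lisinopril interact."))
```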