Column schema (for string and list columns, min/max are lengths; for numeric and timestamp columns, value ranges):

| column | dtype | min | max |
|---|---|---|---|
| id | string (length) | 10 | 10 |
| url | string (length) | 42 | 42 |
| title | string (length) | 5 | 214 |
| average_rating | float64 | -1 | 8.5 |
| average_confidence | float64 | -1 | 5 |
| ratings | list (length) | 0 | 9 |
| confidences | list (length) | 0 | 9 |
| reviewers_num | int64 | 0 | 9 |
| keywords | list (length) | 1 | 42 |
| abstract | string (length) | 26 | 4.31k |
| tldr | string (length) | 0 | 250 |
| primary_area | string (21 classes) | | |
| pdf_url | string (length) | 40 | 40 |
| submission_date | timestamp[s] | 2025-09-01 19:59:51 | 2025-09-20 20:18:08 |
| total_reviews | int64 | 0 | 18 |
| reviews | list (length) | 0 | 9 |
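The schema above defines one record per submission, with `average_rating` and `average_confidence` derived from the raw `ratings` and `confidences` lists. A minimal sketch of that relationship, using field names from the schema and sample values from the "Trail Mix" row below; treating -1 as a no-reviews sentinel is an assumption inferred from the column minima and the empty-list lower bound:

```python
# Recompute the derived average_* columns from the raw rating lists of one
# record. Sample values are taken from the "Trail Mix" row of this dump.
record = {
    "id": "USy8iyZnjK",
    "title": "Trail Mix: Adaptive Interpolation of Optimizers with Convergence Guarantees",
    "ratings": [6, 4, 6, 4],
    "confidences": [2, 4, 4, 4],
    "reviewers_num": 4,
}

def averages(rec):
    """Return (average_rating, average_confidence); -1 when no reviews
    (assumed sentinel, matching the -1 column minima in the schema)."""
    if not rec["ratings"]:
        return -1.0, -1.0
    n = len(rec["ratings"])
    return sum(rec["ratings"]) / n, sum(rec["confidences"]) / n

print(averages(record))  # (5.0, 3.5) — matches the row's stored averages
```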
jPNg2Yytaf
https://openreview.net/forum?id=jPNg2Yytaf
Fit-LoRA: Fit Your LoRAs to Pruned LLMs Without Additional Training or Data
3
4
[ 2, 6, 0, 4 ]
[ 3, 5, 5, 3 ]
4
[ "Parameter Efficient Fine-Tuning", "Pruning", "Sparsity", "Large Language Models", "Portability", "Low-Rank Adaptation" ]
Personalization of LLMs via fine-tuning has become a popular way to enhance performance on downstream tasks. However, the model adaptation obtained after fine-tuning is specific to the base model. Any modifications made to the structure of the base model require users to fine-tune on the downstream task again. During deployment, a base model may be modified using pruning to obtain several LLM scales tailored to specific compute requirements. In this scenario, it becomes challenging to keep up with personalization, since each derived model must be individually fine-tuned. To address this challenge, we explore the possibility of leveraging the base model's fine-tuned knowledge to personalize any derived models. In this paper, we present Fit-LoRA, a framework that enables fine-tuning knowledge transfer between a base LLM and derived LLMs of smaller scales without needing any training or access to the original fine-tuning data. We validate our approach by conducting extensive experiments covering representative datasets such as BoolQ, SST-2, MRPC, RTE, and WinoGrande, across various model architectures including Llama-2, Llama-3.1, Mistral, and Gemma-2. Furthermore, we show the effectiveness of our approach by demonstrating its compatibility across multiple types of state-of-the-art LLM pruning methods, including depth pruning, structured pruning, and sparsification.
other topics in machine learning (i.e., none of the above)
https://openreview.net/pdf?id=jPNg2Yytaf
2025-09-20T15:12:03
7
[ { "id": "BgLcwKCpLD", "forum": "jPNg2Yytaf", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission24065/Reviewer_fccj", "reviewer_name": "Reviewer_fccj", "rating": 2, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 3, "summary": "This ...
USy8iyZnjK
https://openreview.net/forum?id=USy8iyZnjK
Trail Mix: Adaptive Interpolation of Optimizers with Convergence Guarantees
5
3.5
[ 6, 4, 6, 4 ]
[ 2, 4, 4, 4 ]
4
[ "Optimizers", "interpolation", "convergence" ]
Optimizers are central to modern deep learning, yet no single algorithm consistently excels across architectures or datasets. Existing methods of adaptively mixing optimizers to combine complementary strengths are promising, but are restricted to narrow optimizer families or lack rigorous guarantees, leaving a gap between theory and practice. To fill this gap, we present TrailMix, an adaptive interpolation framework that is general across all first- and quasi-second-order methods. On the theoretical front, we prove that convex combinations of optimizers satisfying a mild alignment condition preserve standard convergence rates in non-convex, convex, and strongly convex or PL regimes. For the challenging same-timescale setting, we establish a novel analysis method by lifting the stochastic dynamics to a population-level Fokker-Planck PDE, for which we prove stability using a joint free-energy Lyapunov function. Algorithmically, we extend this framework with fairness normalization, trust-region clipping, and a curvature-awareness reward that stabilizes the meta-weights and enables smoother training. These additions allow TrailMix to behave like an ensemble when optimizers are complementary and to concentrate weight when one dominates, without breaking convexity. Our empirical evaluations on an optimizer set including AdamW, Lion, SOAP, Scion, and MARS show that TrailMix consistently matches or outperforms the strongest single optimizer across a wide range of analytic loss surfaces.
Trail Mix is a convex framework that provably preserves convergence rates while adaptively interpolating a wide range of optimizers, acting like an ensemble when they are complementary and collapsing onto the best one when it dominates.
optimization
https://openreview.net/pdf?id=USy8iyZnjK
2025-09-20T13:56:38
4
[ { "id": "E4woi37AmE", "forum": "USy8iyZnjK", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission23738/Reviewer_fa5n", "reviewer_name": "Reviewer_fa5n", "rating": 6, "confidence": 2, "soundness": 3, "contribution": 4, "presentation": 3, "summary": "The c...
HVKB5DM5n7
https://openreview.net/forum?id=HVKB5DM5n7
Approximate Equivariance via Projection-Based Regularisation
6.5
3.5
[ 6, 8, 6, 6 ]
[ 4, 4, 2, 4 ]
4
[ "equivariance theory", "spectral decomposition", "geometric deep learning" ]
Equivariance is a powerful inductive bias in neural networks, improving generalisation and physical consistency. Recently, however, non-equivariant models have regained attention, due to their better runtime performance and imperfect symmetries that might arise in real-world applications. This has motivated the development of approximately equivariant models that strike a middle ground between respecting symmetries and fitting the data distribution. Existing approaches in this field usually apply sample-based regularisers which depend on data augmentation at training time, incurring a high sample complexity, in particular for continuous groups such as $SO(3)$. This work instead approaches approximate equivariance via a projection-based regulariser which leverages the orthogonal decomposition of linear layers into equivariant and non-equivariant components. In contrast to existing methods, this penalises non-equivariance at an operator level across the full group orbit, rather than point-wise. We present a mathematical framework for computing the non-equivariance penalty exactly and efficiently in both the spatial and spectral domain. In our experiments, our method consistently outperforms prior approximate equivariance approaches in both model performance and efficiency, achieving substantial runtime gains over sample-based regularisers.
We propose an efficient projection-based regulariser for approximate equivariance based on an orthogonal decomposition into equivariant and non-equivariant function spaces.
learning on graphs and other geometries & topologies
https://openreview.net/pdf?id=HVKB5DM5n7
2025-09-20T01:44:21
4
[ { "id": "3hdgZKIWP0", "forum": "HVKB5DM5n7", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission20224/Reviewer_wSLS", "reviewer_name": "Reviewer_wSLS", "rating": 6, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This ...
Dnto7O7p3W
https://openreview.net/forum?id=Dnto7O7p3W
Evading Protections Against Unauthorized Data Usage via Limited Fine-tuning
3.6
3.8
[ 2, 4, 2, 6, 4 ]
[ 4, 4, 5, 3, 3 ]
5
[ "Text-to-Image Models", "Copyright Infringement", "Watermarking" ]
Text-to-image diffusion models, such as Stable Diffusion, have demonstrated exceptional potential in generating high-quality images. However, recent studies highlight concerns about the use of unauthorized data in training these models, which can lead to intellectual property infringement or privacy violations. A promising approach to mitigating these issues is to embed a signature in the model that can be detected or verified from its generated images. Existing works also aim to fully prevent training on protected images by degrading generation quality, achieved by injecting adversarial perturbations onto training data. In this paper, we propose RATTAN, which effectively evades such protection methods by removing the protective perturbations from images and catastrophically forgetting such learned features in a model. It leverages the diffusion process for controlled image generation on the protected input, preserving high-level features while ignoring the low-level details utilized by the embedded pattern. A small number of our generated images (e.g., 10) are then used to fine-tune marked models to remove the learned features. Our experiments on four datasets, two different IP protection methods, and 300 text-to-image diffusion models reveal that while some protections already suffer from weak memorization, RATTAN can reliably bypass stronger defenses, exposing fundamental limitations of current protections and highlighting the need for stronger defenses.
alignment, fairness, safety, privacy, and societal considerations
https://openreview.net/pdf?id=Dnto7O7p3W
2025-09-20T01:31:53
5
[ { "id": "BLrHtIZzl8", "forum": "Dnto7O7p3W", "review_number": 5, "reviewer_id": "ICLR.cc/2026/Conference/Submission20155/Reviewer_7qvf", "reviewer_name": "Reviewer_7qvf", "rating": 2, "confidence": 4, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "This ...
CSeX6I85Bp
https://openreview.net/forum?id=CSeX6I85Bp
Can Models Learn From Arbitrary Pairs?
3
3.75
[ 4, 4, 2, 2 ]
[ 3, 5, 3, 4 ]
4
[ "representation learning", "contrastive learning", "supervised learning" ]
Representation learning traditionally follows a simple principle: pull semantically similar samples together and push dissimilar ones apart. This principle underlies most existing approaches, including supervised classification, self-supervised learning, and contrastive methods, and it has been central to their success. Yet it overlooks an important source of information: even when classes appear unrelated, their samples often share latent visual attributes such as shapes, textures, or structural patterns. For example, cats, dogs, and cattle all have fur and four limbs. These overlooked commonalities raise a fundamental question: *can models learn from arbitrary pairs without explicit guidance?* We show that the answer is yes. The primary challenge lies in learning from dissimilar samples while preserving the notion of semantic distance. We resolve this by proving that for any pair of classes, there exists a subspace in which their shared features are discriminative with respect to other classes. To uncover these subspaces we propose **SimLAP**, a **Sim**ple framework to **L**earn from **A**rbitrary **P**airs. SimLAP uses a lightweight feature filter to adaptively activate shared attributes for any given pair. Through extensive experiments, we show that models trained via SimLAP can indeed learn effectively from arbitrary pairs. Remarkably, models learned from arbitrary pairs are more transferable than those learned with traditional representation learning methods and exhibit greater resistance to representation collapse. Our findings suggest that arbitrary pairs, often dismissed as irrelevant, are in fact a rich, complementary, and untapped source of supervision. By learning from them we move beyond rigid notions of similarity. We hope SimLAP will open an additional pathway toward more general and robust representation learning.
SimLAP learns robustly from arbitrary pairs of classes, pulling distinct pairs close in shared subspaces while preserving class separability in the global space
unsupervised, self-supervised, semi-supervised, and supervised representation learning
https://openreview.net/pdf?id=CSeX6I85Bp
2025-09-16T18:56:23
4
[ { "id": "PVaaGmRFd9", "forum": "CSeX6I85Bp", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission7405/Reviewer_9VN7", "reviewer_name": "Reviewer_9VN7", "rating": 4, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "This p...
owv9VvPwOW
https://openreview.net/forum?id=owv9VvPwOW
OpenFake: An Open Dataset and Platform Toward Real-World Deepfake Detection
3.5
4.75
[ 6, 2, 2, 4 ]
[ 4, 5, 5, 5 ]
4
[ "Deepfake Detection", "Misinformation", "Disinformation", "Dataset Benchmark", "Crowdsourcing", "Generative AI", "Synthetic Images" ]
Deepfakes, synthetic media created using advanced AI techniques, pose a growing threat to information integrity, particularly in politically sensitive contexts. This challenge is amplified by the increasing realism of modern generative models, which our human perception study confirms are often indistinguishable from real images. Yet, existing deepfake detection benchmarks rely on outdated generators or narrowly scoped datasets (e.g., single-face imagery), limiting their utility for real-world detection. To address these gaps, we present OpenFake, a large politically grounded dataset specifically crafted for benchmarking against modern generative models with high realism, and designed to remain extensible through an innovative crowdsourced adversarial platform that continually integrates new hard examples. OpenFake comprises nearly four million total images: three million real images paired with descriptive captions and almost one million synthetic counterparts from state-of-the-art proprietary and open-source models. Detectors trained on OpenFake achieve near-perfect in-distribution performance, strong generalization to unseen generators, and high accuracy on a curated in-the-wild social media test set, significantly outperforming models trained on existing datasets. Overall, we demonstrate that with high-quality and continually updated benchmarks, automatic deepfake detection is both feasible and effective in real-world settings.
A politically grounded deepfake detection dataset with realistic synthetic images and a crowdsourced adversarial platform for adaptive detection.
datasets and benchmarks
https://openreview.net/pdf?id=owv9VvPwOW
2025-09-20T05:40:37
4
[ { "id": "lk29Dqlw4j", "forum": "owv9VvPwOW", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission21478/Reviewer_uQDg", "reviewer_name": "Reviewer_uQDg", "rating": 6, "confidence": 4, "soundness": 4, "contribution": 3, "presentation": 3, "summary": "This ...
QkUOeLomLt
https://openreview.net/forum?id=QkUOeLomLt
Seeing Like Humans: Task-Driven Token Reduction for Accelerated ViT in Robotic Navigation
4.5
3.5
[ 4, 4, 4, 6 ]
[ 3, 4, 5, 2 ]
4
[ "Vision Transformers", "Robotic Navigation", "Computational Efficiency", "Adaptive Attention", "Edge Computing" ]
In robotics, vision is critical for enabling agents to perceive and interact with their environment. Recent advancements in vision models, particularly Vision Transformers (ViTs), have shown remarkable performance in pure vision tasks like object recognition and scene understanding, showing great potential for robotic applications such as object navigation. However, their computational cost grows quadratically with respect to the number of tokens, posing significant challenges for real-time deployment on resource-constrained robotic platforms. To enhance ViT efficiency in robotic tasks, we propose a biologically-inspired token reduction framework that dynamically allocates computation to task-relevant regions in images while neglecting those irrelevant regions for efficiency. Our method introduces two key components: (1) a task-driven spatial attention mechanism that selectively prunes redundant tokens based on the current task, and (2) a temporal feature reusing module that reuses stable visual features across frames to minimize redundant computation. Together, these components enable the visual perception model to focus only on relevant regions, significantly improving inference speed. Experiments show that our method notably reduces inference time in object navigation tasks without significant performance degradation. Additionally, it enables practical ViT deployment on edge devices such as the Jetson Orin (high-performance GPU) and Raspberry Pi 4B (lightweight CPU), achieving 56.5 FPS and 2 FPS, respectively. This represents a 1.5~3× speedup over standard ViTs, making real-time robotic vision more feasible.
This work presents a task-driven token reduction method for Vision Transformers, boosting inference speed (1.5× to 3×) in robotic navigation while maintaining performance, with scalability across high-performance and resource-constrained platforms.
applications to robotics, autonomy, planning
https://openreview.net/pdf?id=QkUOeLomLt
2025-09-19T10:32:16
4
[ { "id": "GMotnU8aRB", "forum": "QkUOeLomLt", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission15230/Reviewer_qd63", "reviewer_name": "Reviewer_qd63", "rating": 4, "confidence": 3, "soundness": 3, "contribution": 2, "presentation": 2, "summary": "This ...
QzCOeNN3vJ
https://openreview.net/forum?id=QzCOeNN3vJ
Memory-Augmented Functional Koopmanism for Interpretable Learning of Spatiotemporal Dynamics
4.5
3.5
[ 4, 4, 6, 4 ]
[ 4, 4, 3, 3 ]
4
[ "partial differential equations", "Koopman learning", "reduced order modeling", "non-Markovian", "spatiotemporal forecasting" ]
Precise prediction of spatiotemporal dynamics over predictive horizons is constrained by the computational cost of high-fidelity solvers and the sparsity, noise, and irregularity of data. We introduce MERLIN, a Koopman-based framework that lifts dynamics to the evolution of learned \textit{observation functionals} with near-linear progression, enabling full-field reconstruction at arbitrary resolutions. Theoretically, we develop a functional Koopman theory for PDEs and compensate for the loss of finite-dimensional linear invariance via the Mori–Zwanzig formalism, which augments the linear backbone with non-Markovian memory terms to improve predictive accuracy. Practically, MERLIN employs discretization-invariant \textit{function encoders} that map partial, irregular observations to observables, and resolution-free \textit{function decoders} that reconstruct states at arbitrary query points. Training under linear constraints yields an interpretable, low-dimensional model that captures principal modes, supports reduced-order modeling, and—augmented with memory correction—delivers stable long-horizon rollouts even in ultra-low-dimensional latent spaces.
A data-driven functional Koopmanism with memory correction is proposed for modeling spatiotemporal processes.
learning on time series and dynamical systems
https://openreview.net/pdf?id=QzCOeNN3vJ
2025-09-19T16:44:36
4
[ { "id": "IkEgt0vBRl", "forum": "QzCOeNN3vJ", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission17040/Reviewer_WoP5", "reviewer_name": "Reviewer_WoP5", "rating": 4, "confidence": 4, "soundness": 3, "contribution": 2, "presentation": 2, "summary": "The a...
OrgL5DsU0f
https://openreview.net/forum?id=OrgL5DsU0f
DrivingGen: A Comprehensive Benchmark for Generative Video World Models in Autonomous Driving
6.5
4.5
[ 6, 6, 6, 8 ]
[ 4, 4, 5, 5 ]
4
[ "Benchmark", "Autonomous Driving", "Generative World Model" ]
Video generation models, as one form of world model, have emerged as one of the most exciting frontiers in AI, promising agents the ability to imagine the future by modeling the temporal evolution of complex scenes. In autonomous driving, this vision gives rise to driving world models—generative simulators that imagine ego and agent futures, enabling scalable simulation, safe testing of corner cases, and rich synthetic data generation. Yet, despite fast-growing research activity, the field lacks a rigorous benchmark to measure progress and guide priorities. Existing evaluations remain limited: generic video metrics overlook safety-critical imaging factors; trajectory plausibility is rarely quantified; temporal and agent-level consistency is neglected; and controllability with respect to ego conditioning is ignored. Moreover, current datasets fail to cover the diversity of conditions required for real-world deployment. To address these gaps, we present DrivingGen, the first comprehensive benchmark for generative driving world models. DrivingGen combines a diverse evaluation dataset—curated from both driving datasets and internet-scale video sources, spanning varied weather, time of day, geographic regions, and complex maneuvers—with a suite of new metrics that jointly assess visual realism, trajectory plausibility, temporal coherence, and controllability. Benchmarking 14 state-of-the-art models reveals clear trade-offs: general models look better but break physics, while driving-specific ones capture motion realistically but lag in visual quality. DrivingGen offers a unified evaluation framework to foster reliable, controllable, and deployable driving world models, enabling scalable simulation, planning, and data-driven decision-making.
datasets and benchmarks
https://openreview.net/pdf?id=OrgL5DsU0f
2025-09-18T13:46:00
4
[ { "id": "9RToozh3aA", "forum": "OrgL5DsU0f", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission10514/Reviewer_2jMw", "reviewer_name": "Reviewer_2jMw", "rating": 6, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This ...
RAUMsywTko
https://openreview.net/forum?id=RAUMsywTko
GazeVLM: Gaze-Guided Vision-Language Models for Efficient and Robust Inference
4
4
[ 4, 6, 4, 4, 2 ]
[ 3, 5, 3, 4, 5 ]
5
[ "Efficient VLM", "Gaze Guidance", "Robust Preprocessing", "Token Dropping", "Human Computer Interaction" ]
Vision-language models (VLMs) are emerging as a core building block of modern intelligent assistants, enabling real-time human-machine interactions based on natural language and vision. However, the excessive number of visual tokens generated from images results in high latency, low throughput, and memory bottlenecks, which hinder real-time interactions in resource-constrained settings. To address this, we aim to reduce the number of tokens by prioritizing tokens for efficient inference using user-relevant context. With the growing usage of smart glasses, eye gaze has emerged as a promising sensing modality that can naturally convey the user intent and interests based on the user's viewing context. Therefore, it can provide useful hints for efficient inference. However, the robustness of gaze-aware VLM depends highly on the quality of gaze data. When gaze data is inaccurate, the model may overlook informative visual content, leading to degraded inference accuracy. To this end, we introduce GazeVLM, a novel gaze-guided context-aware VLM framework for efficient and robust inference under a token budget constraint. GazeVLM consists of two key phases: (i) GazeVLM-Pre: a gaze-aware preprocessing mechanism before image encoding that extracts user-attentive scenes while not losing the global understanding for robust inference; (ii) GazeVLM-Post: a gaze-guided token selection method after image encoding that prioritizes tokens around the gazing area for efficient inference under the token budget constraint. Through extensive experiments using two visual question answering datasets with real human eye-tracking data, we demonstrate that GazeVLM achieves both efficiency and robustness under varying token budgets and gaze data qualities, outperforming diverse gaze-aware and gaze-agnostic baselines. Specifically, given the budget of 500 tokens ($\approx$22\% of the tokens of the vanilla architecture), we can achieve up to 1.9$\times$ higher throughput and 37\% lower latency while slightly improving accuracy compared to the vanilla architecture.
We propose a gaze-guided framework for efficient and robust VLM inference under a token budget constraint.
other topics in machine learning (i.e., none of the above)
https://openreview.net/pdf?id=RAUMsywTko
2025-09-17T00:38:35
5
[ { "id": "8dAxWltFWD", "forum": "RAUMsywTko", "review_number": 6, "reviewer_id": "ICLR.cc/2026/Conference/Submission7879/Reviewer_zVVV", "reviewer_name": "Reviewer_zVVV", "rating": 4, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "The pa...
jGXTx64gal
https://openreview.net/forum?id=jGXTx64gal
FERD: Fairness-Enhanced Data-Free Adversarial Robustness Distillation
6
3.75
[ 6, 6, 4, 8 ]
[ 4, 3, 4, 4 ]
4
[ "Data-Free Robustness Distillation; Robust Fairness" ]
Data-Free Robustness Distillation (DFRD) aims to transfer robustness from the teacher to the student without accessing the training data. While existing methods focus on overall robustness, they overlook robust fairness issues, leading to severe disparities in robustness across categories. In this paper, we identify two key problems: (1) a student model distilled with equal class proportions behaves significantly differently across distinct categories; and (2) the student model's robustness is not stable across different attack targets. To bridge these gaps, we present the first Fairness-Enhanced Data-Free Robustness Distillation (FERD) framework, which adjusts the proportion and distribution of adversarial examples. For the proportion, FERD adopts a robustness-guided class reweighting strategy that synthesizes more samples for the less robust categories, thereby improving their robustness. For the distribution, FERD generates complementary data samples for advanced robustness distillation. It generates Fairness-Aware Examples (FAEs) by enforcing a uniformity constraint on feature-level predictions, which suppresses the dominance of class-specific non-robust features and provides a more balanced representation across all categories. FERD then constructs Uniform-Target Adversarial Examples (UTAEs) from FAEs by applying a uniform target-class constraint to avoid biased attack directions, which distributes the attack targets across all categories and prevents overfitting to specific vulnerable categories. Extensive experiments on three public datasets show that FERD achieves state-of-the-art worst-class robustness under all adversarial attacks (e.g., worst-class robustness under FGSM and AutoAttack is improved by 15.1% and 6.4% using MobileNetV2 on CIFAR-10), demonstrating superior performance in both robustness and fairness. Our code is available at: [https://anonymous.4open.science/r/FERD-2A48/](https://anonymous.4open.science/r/FERD-2A48/).
transfer learning, meta learning, and lifelong learning
https://openreview.net/pdf?id=jGXTx64gal
2025-09-19T22:13:56
4
[ { "id": "j8JYU3Tqdo", "forum": "jGXTx64gal", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission18832/Reviewer_wkEv", "reviewer_name": "Reviewer_wkEv", "rating": 6, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 4, "summary": "This ...
iTK8BZ8i3J
https://openreview.net/forum?id=iTK8BZ8i3J
Vision Language Models Cannot Reason About Physical Transformation
4
4
[ 4, 2, 6 ]
[ 3, 4, 5 ]
3
[ "vision language models", "physical transformation", "multi-image understanding", "spurious correlation" ]
The ability to comprehend physical transformations is essential for reasoning in dynamic, real-world environments, yet it remains unclear whether vision–language models (VLMs) possess this capacity. To address this gap, we evaluate whether VLMs exhibit conservation—the understanding that physical quantities remain invariant despite changes in appearance. Inspired by Piaget’s framework, we design a systematic benchmark for conservation reasoning across four quantitative domains: number, length, volume, and size. Each task requires models to integrate visual evidence across time to identify invariant properties under transformation. To guard against spurious success, we also curated a set of negative control tasks paired one-to-one with the benchmark tasks, in which the targeted quantities are not conserved. Both benchmarks and controls incorporate four prompt types, three frame extraction methods, and 48 task variations per domain, yielding 384 baseline tasks and 13,824 total questions. Evaluating 34 VLMs, we find that none achieve systematic success: across conserving and non-conserving tasks, models consistently perform only marginally above chance, with those excelling on conservation tasks performing inversely on controls, indicating rigid biases in reasoning about physical processes. Moreover, models show no benefit from higher temporal resolution or prompt design, suggesting a reliance on static visual cues. Together, these findings indicate that current VLMs fail to internalize the structured representations necessary for systematic physical inference.
datasets and benchmarks
https://openreview.net/pdf?id=iTK8BZ8i3J
2025-09-20T13:25:17
3
[ { "id": "UwtNzGovuw", "forum": "iTK8BZ8i3J", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission23583/Reviewer_X9Du", "reviewer_name": "Reviewer_X9Du", "rating": 4, "confidence": 3, "soundness": 3, "contribution": 2, "presentation": 2, "summary": "The p...
UN26B1826Y
https://openreview.net/forum?id=UN26B1826Y
AbBiBench: A Benchmark for Antibody Binding Affinity Maturation and Design
3
4
[ 6, 2, 2, 2 ]
[ 4, 4, 5, 3 ]
4
[ "benchmark", "benchmark and dataset", "antibody design", "protein language models", "binding affinity", "antibody-antigen complex" ]
We introduce **AbBiBench** (**A**nti**b**ody **Bi**nding **Bench**marking), a benchmarking framework for antibody binding affinity maturation and design. Unlike previous strategies that evaluate antibodies in isolation, typically by comparing them to natural sequences with metrics such as amino acid recovery rate or structural RMSD, AbBiBench instead treats the antibody–antigen (Ab–Ag) complex as the fundamental unit. It evaluates an antibody design’s binding potential by measuring how well a protein model scores the full Ab–Ag complex. We first curate, standardize, and share more than 186,580 experimental measurements of antibody mutants across 13 antibodies and 9 antigens—including influenza, lysozyme, HER2, VEGF, integrin, Ang2, and SARS-CoV-2—covering both heavy-chain and light-chain mutations. Using these datasets, we systematically compare 15 protein models including masked language models, autoregressive language models, inverse folding models, diffusion-based generative models, and geometric graph models by comparing the correlation between model likelihood and experimental affinity values. Additionally, to demonstrate AbBiBench’s generative utility, we apply it to antibody F045-092 in order to introduce binding to influenza H1N1. We sample new antibody variants with the top-performing models, rank them by the structural integrity and biophysical properties of the Ab–Ag complex, and assess them with in vitro ELISA binding assays. Our findings show that structure-conditioned inverse folding models outperform others in both affinity correlation and generation tasks. Overall, AbBiBench provides a unified, biologically grounded evaluation framework to facilitate the development of more effective, function-aware antibody design models.
datasets and benchmarks
https://openreview.net/pdf?id=UN26B1826Y
2025-09-19T02:07:09
4
[ { "id": "iUshoKp7By", "forum": "UN26B1826Y", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission13514/Reviewer_F8BK", "reviewer_name": "Reviewer_F8BK", "rating": 6, "confidence": 4, "soundness": 3, "contribution": 4, "presentation": 3, "summary": "The a...
Nbbzu9SDzM
https://openreview.net/forum?id=Nbbzu9SDzM
Recasting Transformer Layers as Energy Models
5.5
3.25
[ 4, 6, 6, 6 ]
[ 4, 3, 2, 4 ]
4
[ "transformers", "energy-based models", "layer design", "language modeling" ]
Foundation models rely on sequence-to-sequence mappings parameterized by neural networks, and the design space of these layers continues to expand. Transformer layers remain the dominant choice due to their strong performance and high parallelism, though many design decisions are still empirically based. We introduce causal energy minimization (CEM), a framework that interprets each transformer layer as an algorithm for solving an energy minimization problem with causal structure. This perspective separates the mathematical interpretation of a layer from its numerical realization, offering a unifying lens for layer design and motivating principled architectural innovations. Within CEM, multi-head attention emerges as a gradient step on an interaction energy under a weight-sharing constraint, while gated MLPs correspond to element-wise energies. The form of transformer components within CEM suggests a weight-sharing scheme in both attention and MLP blocks: we show that this yields parameter-efficient layers with negligible performance loss. Further, the CEM interpretation suggests appealing extensions to the transformer architecture: preconditioner matrices for residual connections, diagonal matrices for inter-token distances in attention, and multiple gradient steps (a form of layer reuse) for both attention and MLP blocks. We show that these ideas, which arise naturally in CEM, lead to improvements on language modelling tasks, positioning CEM as a blueprint for principled and extensible architecture design.
foundation or frontier models, including LLMs
https://openreview.net/pdf?id=Nbbzu9SDzM
2025-09-19T20:57:20
4
[ { "id": "ivStY9lc3Q", "forum": "Nbbzu9SDzM", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission18341/Reviewer_j463", "reviewer_name": "Reviewer_j463", "rating": 4, "confidence": 4, "soundness": 3, "contribution": 4, "presentation": 3, "summary": "This ...
I8a6bc9rmz
https://openreview.net/forum?id=I8a6bc9rmz
ROSA: Harnessing Robot States for Vision-Language and Action Alignment
3
3.75
[ 2, 2, 4, 4 ]
[ 4, 4, 4, 3 ]
4
[ "Vision-Language-Action Model", "Robot Manipulation" ]
Vision-Language-Action (VLA) models have recently made significant advances in multi-task, end-to-end robotic control, due to the strong generalization capabilities of Vision-Language Models (VLMs). A fundamental challenge in developing such models is effectively aligning the vision-language space with the robotic action space. Existing approaches typically rely on directly fine-tuning VLMs using expert demonstrations. However, this strategy suffers from a spatio-temporal gap, resulting in considerable data inefficiency and heavy reliance on human labor. Spatially, VLMs operate within a high-level semantic space, whereas robotic actions are grounded in low-level 3D physical space; temporally, VLMs primarily interpret the present, while VLA models anticipate future actions. To overcome these challenges, we propose a novel training paradigm, ROSA, which leverages robot state estimation to improve alignment between vision-language and action spaces. By integrating robot state estimation data obtained via an automated process, ROSA enables the VLA model to gain enhanced spatial understanding and self-awareness, thereby boosting performance and generalization. Extensive experiments in both simulated and real-world environments demonstrate the effectiveness of ROSA, particularly in low-data regimes.
applications to robotics, autonomy, planning
https://openreview.net/pdf?id=I8a6bc9rmz
2025-09-18T22:43:29
5
[ { "id": "coWTWcz0tu", "forum": "I8a6bc9rmz", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission12251/Reviewer_CUZU", "reviewer_name": "Reviewer_CUZU", "rating": 2, "confidence": 4, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "The p...
GrsofC2FqF
https://openreview.net/forum?id=GrsofC2FqF
Detection of unknown unknowns in autonomous systems
5.333333
3.666667
[ 6, 2, 8 ]
[ 4, 3, 4 ]
3
[ "unknown unknowns", "autonomous systems", "conformal bounds" ]
Unknown unknowns (U2s) are deployment-time scenarios absent from development/testing. Unlike conventional anomalies, U2s are not out-of-distribution (OOD); they stem from changes in underlying system dynamics without a distribution shift from normal data. Thus, existing multi-variate time series anomaly detection (MTAD) methods—which rely on distribution-shift cues—are ill-suited for U2 detection. Specifically: (i) we show most anomaly datasets exhibit distribution shift between normal and anomalous data and therefore are not representative of U2s; (ii) we introduce eight U2 benchmarks where training data contain OOD anomalies but no U2s, while test sets contain both OOD anomalies and U2s; (iii) we demonstrate that state-of-the-art (SOTA) MTAD results often depend on impractical enhancements: point adjustment (PA) (uses ground truth to flip false negatives to true positives, inflating precision) and threshold learning with data leakage (TL) (tuning thresholds on test data and labels); (iv) with PA+TL, even untrained deterministic methods can match or surpass MTAD baselines; (v) without PA/TL, existing MTAD methods degrade sharply on U2 benchmarks. Finally, we present sparse model identification–enhanced anomaly detection (SPIE-AD), a model-recovery-and-conformance, zero-shot MTAD approach that outperforms baselines on all eight U2 benchmarks and on six additional real-world MTAD datasets—without PA or TL.
We formalize U2 (non-OOD dynamic changes without distribution shift), release 8 U2 benchmarks, and propose SPIE-AD—a zero-shot U2 detection method.
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
https://openreview.net/pdf?id=GrsofC2FqF
2025-09-20T17:42:04
5
[ { "id": "PEwS41DiLv", "forum": "GrsofC2FqF", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission24864/Reviewer_u3Fj", "reviewer_name": "Reviewer_u3Fj", "rating": 6, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This ...
yli4zJhJB0
https://openreview.net/forum?id=yli4zJhJB0
NGS-Marker: Robust Native Watermarking for 3D Gaussian Splatting
5
3.75
[ 6, 6, 2, 6 ]
[ 3, 3, 4, 5 ]
4
[ "3D Gaussian Splatting", "digital asset watermarking", "copyright protection" ]
With the rapid development and adoption of 3D Gaussian Splatting (3DGS), the need for effective copyright protection has become increasingly critical. Existing watermarking techniques for 3DGS mainly focus on protecting rendered images via pre-trained decoders, leaving the underlying 3D Gaussian primitives vulnerable to misuse. In particular, they are ineffective against **Partial Infringement**, where an adversary extracts and reuses only a subset of Gaussians. In this paper, we propose **NGS-Marker**, a novel native watermarking framework for 3DGS. It integrates a jointly trained watermark injector and message decoder, and employs a gradient-based progressive injection strategy to ensure full-scene coverage. This enables robust ownership decoding from any local region. We further extend NGS-Marker with hybrid protection (combining native and indirect watermarks) and support for multimodal personalized watermarking. Extensive experiments demonstrate that NGS-Marker effectively defends against partial infringement while offering practical flexibility for real-world deployment.
applications to computer vision, audio, language, and other modalities
https://openreview.net/pdf?id=yli4zJhJB0
2025-09-08T14:29:30
4
[ { "id": "UJOFfmXvv6", "forum": "yli4zJhJB0", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission2991/Reviewer_ERh9", "reviewer_name": "Reviewer_ERh9", "rating": 6, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This p...
FpjH12hcqt
https://openreview.net/forum?id=FpjH12hcqt
Taming OOD Actions for Offline Reinforcement Learning: An Advantage-Based Approach
4
4
[ 4, 2, 4, 4, 6, 4 ]
[ 4, 5, 3, 4, 4, 4 ]
6
[ "Offline reinforcement learning", "out-of-distribution actions", "actor-critic" ]
Offline reinforcement learning (RL) learns policies from fixed datasets without online interactions, but suffers from distribution shift, causing inaccurate evaluation and overestimation of out-of-distribution (OOD) actions. Existing methods counter this by conservatively discouraging all OOD actions, which limits generalization. We propose Advantage-based Diffusion Actor-Critic (ADAC), which evaluates OOD actions via an advantage-like function and uses it to modulate the Q-function update discriminatively. Our key insight is that the (state) value function is generally learned more reliably than the action-value function; we thus use the next-state value to indirectly assess each action. We develop a PointMaze environment to clearly visualize that advantage modulation effectively selects superior OOD actions while discouraging inferior ones. Moreover, extensive experiments on the D4RL benchmark show that ADAC achieves state-of-the-art performance, with especially strong gains on challenging tasks.
In this paper, we propose Advantage-based Diffusion Actor-Critic (ADAC), a novel method that systematically evaluates OOD actions using the batch-optimal value function.
reinforcement learning
https://openreview.net/pdf?id=FpjH12hcqt
2025-09-18T17:14:59
6
[ { "id": "cNRqSNy7wm", "forum": "FpjH12hcqt", "review_number": 6, "reviewer_id": "ICLR.cc/2026/Conference/Submission11012/Reviewer_SCwX", "reviewer_name": "Reviewer_SCwX", "rating": 4, "confidence": 4, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "The p...
TRgEiJ5yN0
https://openreview.net/forum?id=TRgEiJ5yN0
Beyond RLHF: A Theoretical Framework of Alignment as Distribution Learning
3.5
3.25
[ 2, 2, 4, 6 ]
[ 4, 4, 3, 2 ]
4
[ "alignment", "large language models", "RLHF", "DPO" ]
Alignment via reinforcement learning from human feedback (RLHF) has become the dominant paradigm for controlling the quality of outputs from large language models (LLMs). However, the standard RLHF objective lacks formal justification and incentivizes degenerate, deterministic LMs in the asymptotic regime. We ask under what assumptions we can derive RLHF or other novel objectives with rigorous learning-theoretic guarantees, without relying on an \emph{a priori} notion of reward maximization. To this end, we reframe alignment as \emph{distribution learning} from pairwise preferences, formalizing our approach with a probabilistic assumption describing how preferences reveal information about the target (oracle) LM. This leads us to propose three principled alignment objectives: preference maximum likelihood estimation, preference distillation, and reverse KL minimization. We prove that all three approaches enjoy strong non-asymptotic $O(1/n)$ convergence to the target LM, naturally avoiding degeneracy. In particular, reverse KL highly resembles the RLHF objective, providing strong justification for RLHF with a minor correction. Furthermore, our theory confirms the empirical finding that on-policy objectives (e.g., RLHF) often outperform likelihood-style objectives (e.g., DPO). Finally, we empirically show that our proposed methods consistently match or outperform baselines across various tasks and models.
learning theory
https://openreview.net/pdf?id=TRgEiJ5yN0
2025-09-08T05:31:22
4
[ { "id": "KUUYiOH1mf", "forum": "TRgEiJ5yN0", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission2850/Reviewer_LNwF", "reviewer_name": "Reviewer_LNwF", "rating": 2, "confidence": 4, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "Propos...
80YSL3oy7z
https://openreview.net/forum?id=80YSL3oy7z
MASC: Boosting Autoregressive Image Generation with a Manifold-Aligned Semantic Clustering
4.5
4
[ 4, 4, 6, 4 ]
[ 4, 4, 4, 4 ]
4
[ "Autoregressive Image Generation", "Representation Learning", "Hierarchical Clustering", "Semantic Manifold", "Plug-and-Play Module" ]
Autoregressive (AR) models have shown great promise in image generation, yet they face a fundamental inefficiency stemming from their core component: a vast, unstructured vocabulary of visual tokens. This conventional approach treats tokens as a flat vocabulary, disregarding the intrinsic structure of the token embedding space where proximity often correlates with semantic similarity. This oversight results in a highly complex prediction task, which hinders training efficiency and limits final generation quality. To resolve this, we propose **M**anifold-**A**ligned **S**emantic **C**lustering (MASC), a principled framework that constructs a hierarchical semantic tree directly from the codebook's intrinsic structure. MASC employs a novel geometry-aware distance metric and a density-driven agglomerative construction to model the underlying manifold of the token embeddings. By transforming the flat, high-dimensional prediction task into a structured, hierarchical one, MASC introduces a beneficial inductive bias that significantly simplifies the learning problem for the AR model. MASC is designed as a plug-and-play module, and our extensive experiments validate its effectiveness: it accelerates training by up to 57\% and significantly improves generation quality, reducing the FID of LlamaGen-XL from 2.87 to 2.58. MASC elevates existing AR frameworks to be highly competitive with state-of-the-art methods, establishing that structuring the prediction space is as crucial as architectural innovation for scalable generative modeling. Our code is open-sourced via \url{https://anonymous.4open.science/r/anonymous_MASC-F3D2/}.
MASC resolves a core inefficiency in autoregressive image generation by transforming flat visual vocabulary into a semantic hierarchy, simplifying the prediction task to accelerate training and improve generation quality.
generative models
https://openreview.net/pdf?id=80YSL3oy7z
2025-09-10T10:12:18
4
[ { "id": "jWlBk0CHf6", "forum": "80YSL3oy7z", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission3542/Reviewer_T1kK", "reviewer_name": "Reviewer_T1kK", "rating": 4, "confidence": 4, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "In aut...
VMLUFna6VI
https://openreview.net/forum?id=VMLUFna6VI
Reasoning Under Uncertainty: Exploring Probabilistic Reasoning Capabilities of LLMs
3.5
4
[ 2, 4, 2, 6 ]
[ 5, 3, 4, 4 ]
4
[ "Large Language Models (LLMs)", "In-Context Learning", "Probabilistic Reasoning", "Discrete Probability Distributions" ]
Despite widespread success in language understanding and generation, large language models (LLMs) exhibit unclear and often inconsistent behavior when faced with tasks that require probabilistic reasoning. In this work, we present the first comprehensive study of the reasoning capabilities of LLMs over explicit discrete probability distributions. Given observations from a probability distribution, we evaluate models on three carefully designed tasks—mode identification, maximum likelihood estimation, and sample generation—by prompting them to provide responses to queries about either the joint distribution or its conditionals. These tasks thus probe a range of probabilistic skills, including frequency analysis, marginalization, and generative behavior. Through comprehensive empirical evaluations, we demonstrate that there exists a clear performance gap between smaller and larger models, with the latter demonstrating stronger inference and surprising capabilities in sample generation. Furthermore, our investigations reveal notable limitations, including sensitivity to variations in the notation utilized to represent probabilistic outcomes and performance degradation of over 60% as context length increases. Together, our results provide a detailed understanding of the probabilistic reasoning abilities of LLMs and identify key directions for future improvement.
We present the first systematic evaluation of LLMs on reasoning over discrete probability distributions across different tasks.
foundation or frontier models, including LLMs
https://openreview.net/pdf?id=VMLUFna6VI
2025-09-20T04:03:24
4
[ { "id": "CLmHS24kqB", "forum": "VMLUFna6VI", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission20968/Reviewer_upbq", "reviewer_name": "Reviewer_upbq", "rating": 2, "confidence": 5, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "Summa...
W8Pzzxn4hm
https://openreview.net/forum?id=W8Pzzxn4hm
Differentiable Cluster Discovery in Temporal Graphs
4
3
[ 4, 6, 2 ]
[ 2, 4, 3 ]
3
[ "Temporal Graph", "Clustering", "Gumbel Softmax" ]
Existing temporal graph clustering methods suffer from poor optimization dynamics due to reliance on heuristically initialized cluster assignment distribution without considering the dynamic nature of the evolving graph. The target cluster assignment distribution often conflicts with evolving temporal representations, leading to oscillatory gradients and unstable convergence. Motivated by the need for differentiable and adaptive clustering in dynamic settings, we propose $\textbf{TGRAIL}$ ($\textbf{T}$emporal $\textbf{Gr}$aph $\textbf{A}$lignment and $\textbf{I}$ndex $\textbf{L}$earning), a novel framework for temporal graph clustering based on Gumbel–Softmax sampling. TGRAIL enables discrete cluster assignments while maintaining gradient flow. To ensure stable training, we formulate the clustering objective as an expectation over Monte Carlo samples and show that this estimator is both unbiased and variance-reduced. Furthermore, we incorporate a temporal consistency loss to preserve the order of interactions across time. Extensive experiments on six real-world temporal graph datasets demonstrate that our approach consistently outperforms state-of-the-art baselines, achieving higher clustering accuracy and robustness. Our results validate the effectiveness of jointly optimizing temporal dynamics and discrete cluster assignments in evolving graphs.
We propose a differentiable cluster assignment framework for temporal graphs.
unsupervised, self-supervised, semi-supervised, and supervised representation learning
https://openreview.net/pdf?id=W8Pzzxn4hm
2025-09-20T00:53:51
3
[ { "id": "aYfbHNSYka", "forum": "W8Pzzxn4hm", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission19942/Reviewer_ss9m", "reviewer_name": "Reviewer_ss9m", "rating": 4, "confidence": 2, "soundness": 3, "contribution": 3, "presentation": 4, "summary": "The p...
IZHk6BXBST
https://openreview.net/forum?id=IZHk6BXBST
Rodrigues Network for Learning Robot Actions
6
3.5
[ 8, 8, 6, 2 ]
[ 3, 4, 3, 4 ]
4
[ "Robot learning", "Action understanding", "Neural architecture" ]
Understanding and predicting articulated actions is important in robot learning. However, common architectures such as MLPs and Transformers lack inductive biases that reflect the underlying kinematic structure of articulated systems. To this end, we propose the **Neural Rodrigues Operator**, a learnable generalization of the classical forward kinematics operation, designed to inject kinematics-aware inductive bias into neural computation. Building on this operator, we design the **Rodrigues Network (RodriNet)**, a novel neural architecture specialized for processing actions. We evaluate the expressivity of our network on two synthetic tasks on kinematic and motion prediction, showing significant improvements compared to standard backbones. We further demonstrate its effectiveness in two realistic applications: (i) imitation learning on robotic benchmarks with the Diffusion Policy, and (ii) single-image 3D hand reconstruction. Our results suggest that integrating structured kinematic priors into the network architecture improves action learning in various domains.
We design a new neural network, the Rodrigues Network (RodriNet), that addresses the kinematic structural priors in articulated robot action learning.
applications to robotics, autonomy, planning
https://openreview.net/pdf?id=IZHk6BXBST
2025-09-19T07:52:29
4
[ { "id": "ZpYgdlPP4q", "forum": "IZHk6BXBST", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission14589/Reviewer_fHxR", "reviewer_name": "Reviewer_fHxR", "rating": 8, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This ...
Uih66XPzHU
https://openreview.net/forum?id=Uih66XPzHU
CLINIC : Evaluating Multilingual Trustworthiness in Language Models for Healthcare
4.5
3.5
[ 4, 2, 6, 6 ]
[ 4, 4, 3, 3 ]
4
[ "Multilingual", "Trustworthiness", "Language Models" ]
Integrating language models (LMs) in healthcare systems holds great promise for improving medical workflows and decision-making. However, a critical barrier to their real-world adoption is the lack of reliable evaluation of their trustworthiness, especially in multilingual healthcare settings. Existing LMs are predominantly trained in high-resource languages, making them ill-equipped to handle the complexity and diversity of healthcare queries in mid- and low-resource languages, posing significant challenges for deploying them in global healthcare contexts where linguistic diversity is key. In this work, we present \textsc{Clinic}, a \textbf{C}omprehensive Mu\textbf{l}tilingual Benchmark to evaluate the trustworth\textbf{i}ness of la\textbf{n}guage models \textbf{i}n health\textbf{c}are. \textsc{Clinic} systematically benchmarks LMs across five key dimensions of trustworthiness: truthfulness, fairness, safety, robustness, and privacy, operationalized through 18 diverse tasks, spanning 15 languages (covering all the major continents), and encompassing a wide array of critical healthcare topics like disease conditions, preventive actions, diagnostic tests, treatments, surgeries, and medications. Our extensive evaluation reveals that LMs struggle with factual correctness, demonstrate bias across demographic and linguistic groups, and are susceptible to privacy breaches and adversarial attacks. By highlighting these shortcomings, \textsc{Clinic} lays the foundation for enhancing the global reach and safety of LMs in healthcare across diverse languages.
Multilingual Trustworthiness Benchmark for HealthCare
datasets and benchmarks
https://openreview.net/pdf?id=Uih66XPzHU
2025-09-19T23:14:01
4
[ { "id": "DXkkNT9ksZ", "forum": "Uih66XPzHU", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission19256/Reviewer_qWCd", "reviewer_name": "Reviewer_qWCd", "rating": 4, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This ...
lBctELT2f9
https://openreview.net/forum?id=lBctELT2f9
OrtSAE: Orthogonal Sparse Autoencoders Uncover Atomic Features
6
3.666667
[ 4, 8, 6 ]
[ 3, 5, 3 ]
3
[ "sparse autoencoder", "mechanistic interpretability", "language model", "representation learning", "feature disentanglement", "regularization" ]
Sparse autoencoders (SAEs) are a technique for sparse decomposition of neural network activations into human-interpretable features. However, current SAEs suffer from feature absorption, where specialized features capture instances of general features, creating representation holes, and feature composition, where independent features merge into composite representations. In this work, we introduce Orthogonal SAE (OrtSAE), a novel approach aimed to mitigate these issues by enforcing orthogonality between the learned features. By implementing a new training procedure that penalizes high pairwise cosine similarity between SAE features, OrtSAE promotes the development of disentangled features while scaling linearly with the SAE size, avoiding significant computational overhead. We train OrtSAE across different models and layers and compare it with other methods. We find that OrtSAE discovers 9% more distinct features, reduces feature absorption (by 65%) and composition (by 15%), improves performance on spurious correlation removal (+6%), and achieves on-par performance for other downstream tasks compared to traditional SAEs.
We introduce Orthogonal Sparse Autoencoders (OrtSAE), a novel approach to train SAEs that enforces orthogonality between learned features, reducing feature absorption and composition while enhancing performance on downstream tasks.
interpretability and explainable AI
https://openreview.net/pdf?id=lBctELT2f9
2025-09-19T20:36:22
3
[ { "id": "eMKi53GUZh", "forum": "lBctELT2f9", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission18226/Reviewer_9DwE", "reviewer_name": "Reviewer_9DwE", "rating": 4, "confidence": 3, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "The p...
PcgckhFQja
https://openreview.net/forum?id=PcgckhFQja
STR-Bamba: Multimodal Molecular Textual Representation Encoder-Decoder Foundation Model
4.5
4
[ 4, 6, 4, 4 ]
[ 4, 4, 4, 4 ]
4
[ "Foundation Model", "Transformer", "Mamba-2", "SMILES", "SELFIES", "InChI", "IUPAC Name", "Molecular Formula", "Polymer SMILES", "Electrolyte Formulation" ]
Most large-scale chemical language models are trained on a single textual molecular representation using self-supervised learning over large unlabeled corpora. These models excel in tasks such as property prediction and molecule generation by learning contextualized representations of input tokens. However, relying solely on one representation may result in the loss of structural or semantic information captured by alternative formats and may limit the model's ability to generalize across diverse molecular encodings. To address this limitation, we incorporate multiple textual molecular representations—including SMILES, SELFIES, molecular formula, IUPAC name, International Chemical Identifier (InChI), serialized polymer graph (SPG), and electrolyte formulations—in a unified vocabulary to harness the unique strengths of each format. Here, we introduce a large encoder-decoder chemical foundation model based on the Bamba architecture, a hybrid of Transformers and Mamba-2 layers, designed to support multi-representational inputs. The model is pre-trained in a BERT-style on 588 million samples, resulting in a corpus of approximately 29 billion molecular tokens. These models serve as a foundation for language chemical research in supporting different complex tasks, including molecular property prediction, classification, and molecular translation. Furthermore, extensive studies of the multimodal molecular latent space indicate cross-representation alignment and reveal how different textual encodings of the same molecule can converge toward a unified semantic representation. This shared space may facilitate deeper insights into molecular structure, enhance generalization, and support a broad range of downstream applications.
foundation or frontier models, including LLMs
https://openreview.net/pdf?id=PcgckhFQja
2025-09-20T02:57:28
4
[ { "id": "WiFPn1XCoL", "forum": "PcgckhFQja", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission20618/Reviewer_VYHY", "reviewer_name": "Reviewer_VYHY", "rating": 4, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 2, "summary": "This ...
gyPUMAq5xN
https://openreview.net/forum?id=gyPUMAq5xN
Steering Risk Preferences in Large Language Models by Aligning Behavioral and Neural Representations
4
4
[ 2, 4, 6, 4 ]
[ 4, 3, 4, 5 ]
4
[ "risky choices", "steering", "large language model", "representation engineering", "AI safety" ]
Changing the behavior of large language models (LLMs) can be as straightforward as editing the Transformer’s residual streams using appropriately constructed "steering vectors." These modifications to internal neural activations, a form of representation engineering, offer an effective and targeted means of influencing model behavior without retraining or fine-tuning the model. But how can such steering vectors be systematically identified? We propose a principled approach, which we call self-alignment, that uncovers steering vectors by aligning latent representations elicited through behavioral methods (specifically, Markov chain Monte Carlo with LLMs) with their neural counterparts. To evaluate this approach, we focus on extracting latent risk preferences from LLMs and steering their risk-related outputs using the aligned representations as steering vectors. We show that the resulting steering vectors successfully and reliably modulate LLM outputs in line with the targeted behavior.
We propose a self-alignment method to derive steering vectors by aligning behavioral and neural representations of risk.
applications to neuroscience & cognitive science
https://openreview.net/pdf?id=gyPUMAq5xN
2025-09-20T03:00:40
4
[ { "id": "wscj5tfibH", "forum": "gyPUMAq5xN", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission20637/Reviewer_4JjW", "reviewer_name": "Reviewer_4JjW", "rating": 2, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 3, "summary": "The p...
fJYsE7Kq7o
https://openreview.net/forum?id=fJYsE7Kq7o
Quantifying the noise sensitivity of the Wasserstein metric for images
4
3.666667
[ 4, 0, 8 ]
[ 3, 5, 3 ]
3
[ "optimal transport", "earth mover's distance", "cryo-electron microscopy", "image similarity", "robustness" ]
Wasserstein metrics are increasingly being used in domains like generative modeling and computer vision as similarity scores for images represented as discrete measures on a grid, yet their behavior under noise remains poorly understood. In this work, we consider the sensitivity of the (signed) Wasserstein distance with respect to pixel-wise additive noise and derive exact (non-asymptotic) bounds. Among other results, we prove that the error in the signed 2-Wasserstein distance scales with the square root of the noise standard deviation, whereas the $L_2$ norm scales linearly. We present experiments that support our theoretical findings and point to a peculiar phenomenon where increasing the level of noise can decrease the Wasserstein distance. A case study on cryo-electron microscopy images demonstrates that the Wasserstein metric can preserve the geometric structure even when the $L_2$ metric fails to do so.
We study the impact of pixel-wise noise when comparing images via the (signed) Wasserstein distance
other topics in machine learning (i.e., none of the above)
https://openreview.net/pdf?id=fJYsE7Kq7o
2025-09-20T05:19:30
3
[ { "id": "YyM17b3j6e", "forum": "fJYsE7Kq7o", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission21368/Reviewer_PgAu", "reviewer_name": "Reviewer_PgAu", "rating": 4, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "The p...
pbzlzndDKZ
https://openreview.net/forum?id=pbzlzndDKZ
SyMerge: From Non-Interference to Synergistic Merging via Single-Layer Adaptation
4.8
3.6
[ 8, 2, 6, 4, 4 ]
[ 4, 3, 4, 4, 3 ]
5
[ "Model merging", "Multi-task learning", "Task conflicts", "Task vectors", "Task-specific layers" ]
Model merging offers an efficient alternative to multi-task learning by combining independently fine-tuned models, but most prior approaches focus mainly on avoiding task interference. We argue instead that the real potential of merging lies in achieving synergy, where tasks enhance one another. Our intuition comes from a pilot study showing that when a classifier trained on one task is paired with the encoder of another, the resulting cross-task performance strongly predicts merge quality. Moreover, adapting even a single task-specific layer can substantially improve this compatibility, suggesting a simple yet powerful lever for synergy. Building on this insight, we introduce SyMerge, a lightweight framework that jointly optimizes one task-specific layer and the merging coefficients. To ensure stability without labels, SyMerge employs a robust self-labeling strategy guided by expert model predictions, avoiding the pitfalls of entropy-based adaptation. This minimalist yet principled design achieves state-of-the-art results across vision, dense prediction, and NLP benchmarks, while also producing adapted layers that transfer effectively to other merging methods.
transfer learning, meta learning, and lifelong learning
https://openreview.net/pdf?id=pbzlzndDKZ
2025-09-14T13:32:40
5
[ { "id": "jEBut7QOT8", "forum": "pbzlzndDKZ", "review_number": 5, "reviewer_id": "ICLR.cc/2026/Conference/Submission4986/Reviewer_BQ5a", "reviewer_name": "Reviewer_BQ5a", "rating": 8, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "The pa...
ebgsbC4x5W
https://openreview.net/forum?id=ebgsbC4x5W
Online Rubrics Elicitation from Pairwise Comparisons
6.5
3.75
[ 6, 6, 6, 8 ]
[ 4, 4, 3, 4 ]
4
[ "rubrics", "checklists", "post-training", "reward hacking", "reinforcement learning" ]
Rubrics provide a flexible way to train LLMs on open-ended long-form answers where verifiable rewards are not applicable and human preferences provide coarse signals. Prior work shows that reinforcement learning with rubric-based rewards leads to consistent gains in LLM post-training. Most existing approaches rely on rubrics that remain static over the course of training. Such static rubrics, however, are vulnerable to reward-hacking type behaviors and fail to capture emergent desiderata that arise during training. We introduce Online Rubrics Elicitation (OnlineRubrics), a method that dynamically curates evaluation criteria in an online manner through pairwise comparisons of responses from current and reference policies. This online process enables continuous identification and mitigation of errors as training proceeds. Empirically, this approach yields consistent improvements of up to 8% over training exclusively with static rubrics across AlpacaEval, GPQA, ArenaHard as well as the validation sets of expert questions and rubrics. We qualitatively analyze the elicited criteria and identify prominent themes such as transparency, practicality, organization, and reasoning.
We are eliciting criteria for online RL training by contrasting a pair of responses from current and reference policies.
foundation or frontier models, including LLMs
https://openreview.net/pdf?id=ebgsbC4x5W
2025-09-19T12:46:06
4
[ { "id": "TQNzuZ1GUn", "forum": "ebgsbC4x5W", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission15925/Reviewer_sQcC", "reviewer_name": "Reviewer_sQcC", "rating": 6, "confidence": 4, "soundness": 2, "contribution": 3, "presentation": 3, "summary": "This ...
2EYzLUrAS4
https://openreview.net/forum?id=2EYzLUrAS4
Fixing Incomplete Value Function Decomposition for Multi-Agent Reinforcement Learning
5
4
[ 8, 2, 6, 4 ]
[ 4, 4, 3, 5 ]
4
[ "multi-agent", "reinforcement-learning", "value-function-decomposition", "cooperative", "dec-pomdps" ]
Value function decomposition methods for cooperative multi-agent reinforcement learning compose joint values from individual per-agent utilities, and train them using a joint objective. To ensure that the action selection process between individual utilities and joint values remains consistent, it is imperative for the composition to satisfy the individual-global max (IGM) property. Although satisfying IGM itself is straightforward, most existing methods (e.g., VDN, QMIX) have limited representation capabilities and are unable to represent the full class of IGM values, and the one exception that has no such limitation (QPLEX) is unnecessarily complex. In this work, we present a simple formulation of the full class of IGM values that naturally leads to the derivation of QFIX, a novel family of value function decomposition models that expand the representation capabilities of prior models via a thin "fixing" layer. We derive multiple variants of QFIX, and implement three variants in two well-known multi-agent frameworks. We perform an empirical evaluation on multiple SMACv2 and Overcooked environments, which confirms that QFIX (i) succeeds in enhancing the performance of prior methods, (ii) learns more stably and performs better than its main competitor QPLEX, and (iii) achieves this while employing the simplest and smallest mixing models.
We provide a simple formulation for IGM-complete value function decomposition, and develop a novel family of value function decomposition models based on it.
reinforcement learning
https://openreview.net/pdf?id=2EYzLUrAS4
2025-09-20T06:32:08
4
[ { "id": "5rX8kJ9GaZ", "forum": "2EYzLUrAS4", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission21736/Reviewer_sZyZ", "reviewer_name": "Reviewer_sZyZ", "rating": 8, "confidence": 4, "soundness": 4, "contribution": 4, "presentation": 3, "summary": "The p...
mHAm5Yf1Z5
https://openreview.net/forum?id=mHAm5Yf1Z5
Trust the Process? Backdoor Attack against Vision–Language Models with Chain-of-Thought Reasoning
4
3.5
[ 4, 2, 2, 8 ]
[ 4, 4, 3, 3 ]
4
[ "backdoor attack", "vision-language model", "chain-of-thought" ]
Vision Language Models (VLMs) have demonstrated remarkable capabilities in multimodal understanding, with the integration of Chain-of-Thought (CoT) further enhancing their reasoning abilities. By generating a step-by-step thought process, CoT significantly enhances user trust in the model's outputs. However, we contend that CoT also poses serious security risks as it can be exploited by attackers to execute far more covert backdoor attacks, a threat that remains unexplored by prior work. In this paper, we present the first systematic investigation into the vulnerability of the CoT process in VLMs to backdoor attacks. We introduce **ReWire**, a novel and stealthy backdoor attack that leverages data poisoning to hijack the model's reasoning process. Unlike typical label attacks, ReWire initially generates a correct and plausible reasoning chain consistent with the visual input. Subsequently, it injects a predefined "pivot statement" that stealthily redirects the reasoning path toward a malicious, attacker-specified conclusion. We conduct extensive experiments on several mainstream open-source VLMs across four distinct datasets, demonstrating that ReWire uniformly achieves an attack success rate of over 97\%. Furthermore, the attack stealth has been fully validated, as the malicious CoT it generates accurately reflects the image's visual content (fidelity), is presented in fluent, natural language (coherence), and forms a logically sound, albeit manipulated, progression to the final malicious answer (consistency). Our findings uncover a critical new security risk in VLM reasoning systems and underscore the urgent need to develop more robust defense mechanisms.
We introduce ReWire, the first backdoor attack specifically designed to hijack the reasoning process (Chain-of-Thought) in Vision Language Models (VLMs).
alignment, fairness, safety, privacy, and societal considerations
https://openreview.net/pdf?id=mHAm5Yf1Z5
2025-09-16T00:04:43
5
[ { "id": "Zkc5DCQmIK", "forum": "mHAm5Yf1Z5", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission6077/Reviewer_9AEJ", "reviewer_name": "Reviewer_9AEJ", "rating": 4, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 3, "summary": "This p...
bEL7xaBdJf
https://openreview.net/forum?id=bEL7xaBdJf
Centrality Graph Shift Operators for Graph Neural Networks
4
3.75
[ 8, 4, 2, 2 ]
[ 4, 3, 3, 5 ]
4
[ "Graph Neural Networks", "Graph Shift Operators", "Centrality" ]
Graph Shift Operators (GSOs), such as the adjacency and graph Laplacian matrices, play a fundamental role in graph theory and graph representation learning. Traditional GSOs are typically constructed by normalizing the adjacency matrix by the degree matrix, a local centrality metric. In this work, we instead propose and study Centrality GSOs (CGSOs), which normalize adjacency matrices by global centrality metrics such as the PageRank, $k$-core or count of fixed length paths. We study spectral properties of the CGSOs, allowing us to get an understanding of their action on graph signals. We confirm this understanding by defining and running the spectral clustering algorithm based on different CGSOs on several synthetic and real-world datasets. We furthermore outline how our CGSO can act as the message passing operator in any Graph Neural Network and in particular demonstrate strong performance of a variant of the Graph Convolutional Network and Graph Attention Network using our CGSOs on several real-world benchmark datasets.
We propose and study Centrality Graph Shift Operators (CGSOs), which normalize adjacency matrices by global centrality metrics. We furthermore outline how CGSOs can act as message passing operators in any Graph Neural Network.
learning on graphs and other geometries & topologies
https://openreview.net/pdf?id=bEL7xaBdJf
2025-09-06T21:42:34
4
[ { "id": "G9k1z2Wbx9", "forum": "bEL7xaBdJf", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission2619/Reviewer_LfRT", "reviewer_name": "Reviewer_LfRT", "rating": 8, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This p...
vqNg2Vl8o1
https://openreview.net/forum?id=vqNg2Vl8o1
Constraint Matters: Multi-Modal Representation for Reducing Mixed-Integer Linear programming
5.5
4
[ 6, 8, 4, 4 ]
[ 3, 5, 4, 4 ]
4
[ "Mixed-integer Linear Programming", "Learning to Optimize", "Model Reduction" ]
Model reduction, which aims to learn a simpler model of the original mixed-integer linear programming (MILP) problem, can solve large-scale MILP problems much faster. Most existing model reduction methods are based on variable reduction, which predicts a solution value for a subset of variables. From a dual perspective, constraint reduction that transforms a subset of inequality constraints into equalities can also reduce the complexity of MILP, but has been largely ignored. Therefore, this paper proposes a novel constraint-based model reduction approach for MILPs. Constraint-based MILP reduction has two challenges: 1) which inequality constraints are critical such that reducing them can accelerate MILP solving while preserving feasibility, and 2) how to predict these critical constraints efficiently. To identify critical constraints, we label the tight-constraints at the optimal solution as potential critical constraints and design an information theory-guided heuristic rule to select a subset of critical tight-constraints. Theoretical analyses indicate that our heuristic mechanism effectively identifies the constraints most instrumental in reducing the solution space and uncertainty. To learn the critical tight-constraints, we propose a multi-modal representation that integrates information from both instance-level and abstract-level MILP formulations. The experimental results show that, compared to the state-of-the-art MILP solvers, our method improves the quality of the solution by over 50\% and reduces the computation time by 17.47\%.
optimization
https://openreview.net/pdf?id=vqNg2Vl8o1
2025-09-20T11:57:09
4
[ { "id": "W720uBnvgs", "forum": "vqNg2Vl8o1", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission23183/Reviewer_ajfo", "reviewer_name": "Reviewer_ajfo", "rating": 6, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This ...
Ys2OgNLNV7
https://openreview.net/forum?id=Ys2OgNLNV7
Think First, Then Select and Verify with Query–Key Alignment
2
3.666667
[ 4, 2, 0 ]
[ 3, 3, 5 ]
3
[ "Query–Key alignment", "attention heads", "white-box selection", "white-box verification", "chain-of-thought (CoT)", "self-consistency", "permutation robustness" ]
We demonstrate that a “think-first” phase via chain-of-thought (CoT) prompting systematically strengthens internal query–key (QK) alignment, improving the ability to select and verify answers directly from model activations, rather than from decoded tokens. Building on robust multiple-choice evaluation with MMLU-Pro (10 options) and extending to free-form reasoning on MATH-500, GSM8K, and our variant of Humanity’s Last Exam (HLE), we evaluate three settings: (i) MCQA vs MCQA+CoT with QK-based selection; (ii) GSM8K candidate generation with/without CoT followed by QK-based selection among self-proposed answers; and (iii) QK-based verification of LLM solutions and conjectures. We analyze QK-score accuracy, permutation robustness, and diagnostics relating alignment strength to correctness. This design situates QK-score selection and verification alongside CoT and self-consistency baselines on canonical reasoning tasks, yielding a white-box, computation-efficient decision rule that aims to match or exceed decoded choices. We argue that these results offer a simple, reproducible path to more reliable reasoning, turning CoT from a purely generative aid into a deliberation-then-selection mechanism grounded in the model’s own representations.
A brief CoT “think-first” phase sharpens QK alignment so we can select and verify answers directly from model activations, outperforming decoded-token choices on MMLU-Pro, GSM8K, MATH-500, and HLE
foundation or frontier models, including LLMs
https://openreview.net/pdf?id=Ys2OgNLNV7
2025-09-20T02:50:51
3
[ { "id": "zYEwaxeRWd", "forum": "Ys2OgNLNV7", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission20582/Reviewer_MDeN", "reviewer_name": "Reviewer_MDeN", "rating": 4, "confidence": 3, "soundness": 2, "contribution": 3, "presentation": 3, "summary": "This ...
heVn5cNfje
https://openreview.net/forum?id=heVn5cNfje
Unified Data Selection for LLM Reasoning
3.5
3.5
[ 4, 4, 2, 4 ]
[ 4, 4, 2, 4 ]
4
[ "llms", "reasoning", "data selection" ]
Effectively training LLMs for complex, long-CoT reasoning is often bottlenecked by the need for massive high-quality reasoning data. Existing methods are either computationally expensive or fail to reliably distinguish high- from low-quality reasoning samples. To address this, we propose High-Entropy Sum (HES)—a training-free metric that sums only the entropy of the top 0.5\% highest-entropy tokens in each reasoning sequence, focusing on critical forking points to better capture reasoning quality. We validate HES across three mainstream training paradigms: SFT, RFT, and RL. In SFT, training on just the top 20\% of data ranked by HES matches full-dataset performance, while using the lowest-HES data severely degrades it. In RFT, HES-based selection outperforms the random baseline. In RL, pairing highest-HES successful trajectories with random failed ones enables the model to learn both strong reasoning patterns and diverse failure modes, significantly surpassing existing training-free selection methods. Our findings establish HES as a robust, training-free metric that enables a unified, data-centric approach to efficiently developing advanced reasoning in LLMs.
We propose a novel metric that measures reasoning quality, enabling a unified and more efficient data-centric approach to training powerful LLMs.
foundation or frontier models, including LLMs
https://openreview.net/pdf?id=heVn5cNfje
2025-09-17T17:59:33
4
[ { "id": "iihHh2Qtc8", "forum": "heVn5cNfje", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission8937/Reviewer_vsxf", "reviewer_name": "Reviewer_vsxf", "rating": 4, "confidence": 4, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "The pa...
jCFzagXnoy
https://openreview.net/forum?id=jCFzagXnoy
Beyond Context Limits: Subconscious Threads for Long-Horizon Reasoning
4
3.25
[ 6, 4, 2, 4 ]
[ 4, 3, 3, 3 ]
4
[ "reasoning", "inference", "system", "reinforcement learning" ]
To break the context limits of large language models (LLMs) that bottleneck reasoning accuracy and efficiency, we propose the Thread Inference Model (TIM), a family of LLMs trained for recursive and decompositional problem solving, and the corresponding context pruning mechanism, a rule-based context management strategy enabling long-horizon structured reasoning beyond context limits. With this structure-aware context pruning, TIM supports virtually unlimited working memory and multi-hop tool calls within a single language model inference, overcoming output limits, positional-embedding constraints, and GPU-memory bottlenecks. Performance is achieved by modeling natural language as reasoning trees measured by both length and depth instead of linear sequences. The reasoning trees consist of tasks with thoughts, recursive subtasks, and conclusions based on the concept proposed in Schroeder et al., 2025. During generation, we maintain a working memory that retains only the key-value states of the most relevant context tokens, selected by a rule-based subtask-pruning mechanism, enabling reuse of positional embeddings and GPU memory pages throughout reasoning. Experimental results show that our system sustains high inference throughput, even when manipulating up to 90% of the KV cache in GPU memory. It also delivers accurate reasoning on mathematical tasks and handles information retrieval challenges that require long-horizon reasoning and multi-hop tool use.
foundation or frontier models, including LLMs
https://openreview.net/pdf?id=jCFzagXnoy
2025-09-19T22:36:21
4
[ { "id": "OEYXSLfFYy", "forum": "jCFzagXnoy", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission18988/Reviewer_jC79", "reviewer_name": "Reviewer_jC79", "rating": 6, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 3, "summary": "The p...
yOEmEXmbV8
https://openreview.net/forum?id=yOEmEXmbV8
Seeing Through Words: Controlling Visual Retrieval Quality with Language
5
3.5
[ 6, 6, 4, 4 ]
[ 3, 4, 4, 3 ]
4
[ "Large Language Models", "Vision-Language Models", "Query Completion" ]
Text-to-image retrieval is a fundamental task in vision--language learning, yet in real-world scenarios it is often challenged by short and underspecified user queries. Such queries are typically only one or two words long, making them semantically ambiguous, prone to collisions across diverse visual interpretations, and lacking explicit control over the quality of retrieved images. To address these issues, we propose a new paradigm of quality-controllable retrieval, which enriches short queries with contextual details while incorporating explicit notions of image quality. Our key idea is to leverage a generative large language model as a query completion function, extending underspecified queries into descriptive forms that capture fine-grained visual attributes such as pose, scene, and aesthetics. We introduce a training framework that conditions query completion on discretized quality levels, derived from relevance and aesthetic scoring models, so that query enrichment is not only semantically meaningful but also quality-aware. The resulting system provides three key advantages: (1) flexibility, as it is compatible with any pretrained vision--language model without modification; (2) transparency, since enriched queries are explicitly interpretable by users; and (3) controllability, enabling retrieval results to be steered toward user-preferred quality levels. Extensive experiments demonstrate that our proposed approach significantly improves retrieval results and provides effective quality control, bridging the gap between the expressive capacity of modern vision--language models and the underspecified nature of short user queries.
applications to computer vision, audio, language, and other modalities
https://openreview.net/pdf?id=yOEmEXmbV8
2025-09-10T00:24:16
4
[ { "id": "9M3z9Etnbn", "forum": "yOEmEXmbV8", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission3459/Reviewer_7fCB", "reviewer_name": "Reviewer_7fCB", "rating": 6, "confidence": 3, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "This p...
H1ncX6O6Yh
https://openreview.net/forum?id=H1ncX6O6Yh
Orak: A Foundational Benchmark for Training and Evaluating LLM Agents on Diverse Video Games
5.5
4.5
[ 8, 4, 2, 8 ]
[ 5, 4, 4, 5 ]
4
[ "LLM", "Agents", "Benchmark", "Games" ]
Large Language Model (LLM) agents are reshaping the game industry, by enabling more intelligent and human-preferable characters. Yet, current game benchmarks fall short of practical needs: they lack evaluations of diverse LLM capabilities across various game genres, studies of agentic modules crucial for complex gameplay, and fine-tuning datasets to adapt pre-trained LLMs into gaming agents. To fill these gaps, we present Orak, a benchmark for training and evaluating LLM agents across 12 popular video games spanning all major genres. Using a plug-and-play interface built on Model Context Protocol (MCP), Orak supports systematic and reproducible studies of agentic modules in varied game scenarios. We further release a fine-tuning dataset of expert LLM gameplay trajectories spanning multiple genres, turning general LLMs into effective game agents. Orak offers a comprehensive evaluation framework, including game leaderboards, LLM battle arenas, and in-depth analyses of input modality, agentic strategies, and fine-tuning effects, establishing a foundation towards versatile gaming agents. Code is available at https://anonymous.4open.science/r/Orak-5013/.
We introduce a comprehensive benchmark for training and evaluating LLM agents on diverse real-world video games
datasets and benchmarks
https://openreview.net/pdf?id=H1ncX6O6Yh
2025-09-19T10:15:05
4
[ { "id": "b7QkQicpx6", "forum": "H1ncX6O6Yh", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission15134/Reviewer_mcXu", "reviewer_name": "Reviewer_mcXu", "rating": 8, "confidence": 5, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "The p...
VG5iE3rzLz
https://openreview.net/forum?id=VG5iE3rzLz
ReGuidance: Diffusion Steering with Strong Latent Initializations Solves Hard Inverse Problems
5
3.75
[ 8, 4, 4, 4 ]
[ 3, 4, 4, 4 ]
4
[ "diffusion", "guidance", "steering", "initializations" ]
In recent years there has been a flurry of activity around using pretrained diffusion models as informed data priors for solving inverse problems, and more generally around steering these models towards certain reward models. Training-free methods like gradient guidance have offered simple, flexible approaches for these tasks, but when the reward is not informative enough, e.g., in inverse problems with highly compressive measurements, these techniques can veer off the data manifold, failing to produce realistic data samples. To address this challenge, we devise a simple algorithm, ReGuidance, that leverages prior methods' solutions as strong initializations and substantially enhances their realism. Given a candidate solution $x$ produced by a given method, we propose inverting the solution by running the unconditional probability flow ODE in reverse starting from $x$, and then using the resulting latent as an initialization for a simple instantiation of diffusion guidance. In toy settings, we provide theoretical justification for why this technique boosts the reward and brings $x$ closer to the data manifold. Empirically, we evaluate our algorithm on difficult image restoration tasks including large box inpainting, heavily downscaled superresolution, and high noise deblurring with both linear and nonlinear blurring operations. We find that, using a wide range of baseline methods as initializations, applying our method results in much stronger samples with better realism and measurement consistency.
We show that using strong noise initializations alongside diffusion guidance can provably and experimentally solve fundamentally hard reward guidance problems.
generative models
https://openreview.net/pdf?id=VG5iE3rzLz
2025-09-19T12:37:03
4
[ { "id": "QEwFRygU3J", "forum": "VG5iE3rzLz", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission15887/Reviewer_ZaHb", "reviewer_name": "Reviewer_ZaHb", "rating": 8, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This ...
efPOXi8P8J
https://openreview.net/forum?id=efPOXi8P8J
LPO: Towards Accurate GUI Agent Interaction via Location Preference Optimization
4.4
3
[ 4, 4, 4, 6, 4 ]
[ 2, 4, 2, 3, 4 ]
5
[ "GUI Agent Interaction", "Location Preference Optimization" ]
The advent of autonomous agents is transforming interactions with Graphical User Interfaces (GUIs) by employing natural language as a powerful intermediary. Despite the predominance of supervised fine-tuning (SFT) methods in current GUI agents for achieving spatial localization, these methods face substantial challenges due to their limited capacity to accurately perceive positional data. Existing strategies, such as reinforcement learning, often fail to assess positional accuracy effectively, thereby restricting their utility. In response, we introduce Location Preference Optimization (LPO), a novel approach that leverages locational data to optimize interaction preferences. LPO uses information entropy to predict interaction positions by focusing on zones rich in information. We further introduce a dynamic location reward function based on physical distance, reflecting the varying importance of interaction positions. Supported by Group Relative Preference Optimization (GRPO), LPO facilitates an extensive exploration of GUI environments and significantly enhances interaction precision. Comprehensive experiments demonstrate LPO's superior performance, achieving SOTA results across both offline benchmarks and real-world online evaluations. Our code will be made publicly available soon.
We introduce Location Preference Optimization (LPO), a novel method that enhances GUI interactions by utilizing locational data and information entropy to improve spatial accuracy.
applications to robotics, autonomy, planning
https://openreview.net/pdf?id=efPOXi8P8J
2025-09-17T14:24:39
5
[ { "id": "1vjJSyHDbt", "forum": "efPOXi8P8J", "review_number": 5, "reviewer_id": "ICLR.cc/2026/Conference/Submission8538/Reviewer_JXud", "reviewer_name": "Reviewer_JXud", "rating": 4, "confidence": 2, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "This p...
gjtHK8xXZK
https://openreview.net/forum?id=gjtHK8xXZK
Depth in Motion: Robust Self-Supervised Learning via Representation-Optimization-Supervision Synergy
4.5
4.5
[ 6, 4, 6, 2 ]
[ 5, 4, 5, 4 ]
4
[ "Self-supervised monocular depth estimation", "Dynamic scene understanding", "Cost-volume consistency", "Robust photometric supervision", "Frequency-domain uncertainty modeling" ]
Self-supervised monocular depth estimation recovers scene geometry from unlabeled monocular videos, yet its reliance on photometric constancy tends to cause failures in dynamic scenes: motion and occlusion corrupt correspondences, bias optimization toward texture-sparse regions, and drive residuals into heavy-tailed distributions that undermine supervision. To address these challenges, we propose a Representation–Optimization–Supervision Synergy Network (ROSS-Net), which establishes a holistic defense by restructuring the entire estimation flow to mitigate interlinked failure modes. At the representation level, the Spatio-Temporal Epipolar Calibrator (STEC) validates correspondences across appearance, feature, and temporal cues to filter motion-induced mismatches while preserving dynamic evidence. At the optimization level, the Entropy-Guided Spectral Integrator (EGSI) calibrates depth-axis spectra to counter low-frequency optimization bias while adding no inference-time overhead. At the supervision level, the Order-Statistic Consensus Operator (OSCO) trims and reweights outlier residuals, converting noisy reprojections into robust supervision. Experiments on KITTI and NYUv2 show that ROSS-Net significantly outperforms prior methods under motion and occlusion, and generalizes strongly to unseen domains such as Make3D and ScanNet.
unsupervised, self-supervised, semi-supervised, and supervised representation learning
https://openreview.net/pdf?id=gjtHK8xXZK
2025-09-13T11:23:15
4
[ { "id": "VfhgSoTbVT", "forum": "gjtHK8xXZK", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission4647/Reviewer_abFq", "reviewer_name": "Reviewer_abFq", "rating": 6, "confidence": 5, "soundness": 4, "contribution": 4, "presentation": 4, "summary": "This p...
cZFgsLq8Gs
https://openreview.net/forum?id=cZFgsLq8Gs
DeepScientist: Advancing Frontier-Pushing Scientific Findings Progressively
4
3.75
[ 2, 4, 6, 4 ]
[ 4, 4, 3, 4 ]
4
[ "Automated Scientific Discovery", "Large Language Models (LLMs)", "AI Scientist" ]
While previous AI Scientist systems can generate novel findings, they often lack the focus to produce scientifically valuable contributions that address pressing human-defined challenges. We introduce DeepScientist, a system designed to overcome this by conducting goal-oriented, fully autonomous scientific discovery over month-long timelines. It formalizes discovery as a Bayesian Optimization problem, using a cumulative Findings Memory to intelligently balance the exploitation of promising avenues with the exploration of novel hypotheses. Consuming over 20,000 GPU hours, the system generated about 5,000 unique ideas and experimentally validated approximately 1100, ultimately surpassing human-designed 2025 state-of-the-art (SOTA) methods on three frontier AI tasks by 183.7\%, 1.9\%, and 7.9\%. Crucially, this was achieved by autonomously redesigning core methodologies, not merely recombining existing techniques. In a striking demonstration, the system achieved progress on AI text detection in just two weeks that is comparable to three years of cumulative human research. This work provides the first large-scale evidence of an AI achieving discoveries that progressively surpass human SOTA on scientific tasks, producing valuable findings that genuinely push the frontier forward. To facilitate further research into this process, we will open-source all experimental logs and system code.
This is the first empirical demonstration of an AI that acts as an autonomous scientist to progressively push research frontiers, successfully discovering novel methods that outperform the human SOTA across multiple domains.
applications to computer vision, audio, language, and other modalities
https://openreview.net/pdf?id=cZFgsLq8Gs
2025-09-02T22:27:04
4
[ { "id": "Uy9eevrqvA", "forum": "cZFgsLq8Gs", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission911/Reviewer_rLBb", "reviewer_name": "Reviewer_rLBb", "rating": 2, "confidence": 4, "soundness": 2, "contribution": 3, "presentation": 2, "summary": "This pa...
khSwvAYhNU
https://openreview.net/forum?id=khSwvAYhNU
Multi Perspective Actor Critic: Adaptive Value Decomposition for Robust and Safe Reinforcement Learning
2.4
3.4
[ 2, 2, 2, 2, 4 ]
[ 3, 5, 3, 3, 3 ]
5
[ "Reinforcement Learning", "Robust Reinforcement Learning", "Multi-Objective Reinforcement Learning", "Value Decomposition", "Safe Reinforcement Learning" ]
Real-world deployment of reinforcement learning requires simultaneously handling multiple objectives, safety constraints, and model uncertainty, yet existing methods address these challenges in isolation. We present Multi-Perspective Actor-Critic (MPAC), a novel framework that integrates all three aspects. MPAC combines value decomposition with component-specific risk assessment, enabling different objectives to maintain appropriate uncertainty tolerance, with collision avoidance employing extreme conservatism while efficiency permits optimistic planning. A novel influence-based mechanism dynamically adjusts component weights based on their decision relevance and learning progress, eliminating the need for fixed weights or prior reward knowledge. This yields policies that are simultaneously safe, robust to model perturbations, and less conservative than prior approaches. We prove that MPAC converges to a fixed point corresponding to a distributionally robust optimization problem with component-specific ambiguity sets, providing theoretical justification for its design. Empirically, across continuous-control benchmarks with safety constraints and perturbed dynamics, MPAC achieves superior Pareto trade-offs: it maintains high reward while matching or exceeding safety baselines. These results demonstrate that adaptively weighting decomposed objectives under uncertainty is a principled and practical path toward robust safe RL.
reinforcement learning
https://openreview.net/pdf?id=khSwvAYhNU
2025-09-17T19:15:55
5
[ { "id": "Ale65k2OuF", "forum": "khSwvAYhNU", "review_number": 5, "reviewer_id": "ICLR.cc/2026/Conference/Submission9027/Reviewer_8Yua", "reviewer_name": "Reviewer_8Yua", "rating": 2, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "- The ...
QP7WX3XmEy
https://openreview.net/forum?id=QP7WX3XmEy
SafeProtein: Red-Teaming Framework and Benchmark for Protein Foundation Models
4.5
3
[ 4, 4, 4, 6 ]
[ 3, 3, 3, 3 ]
4
[ "Biosafety", "Protein Red-teaming", "Protein Jailbreak" ]
Proteins play crucial roles in almost all biological processes. The advancement of deep learning has greatly accelerated the development of protein foundation models, leading to significant successes in protein understanding and design. However, the lack of systematic red-teaming for these models has raised serious concerns about their potential misuse, such as generating proteins with biological safety risks. This paper introduces **SafeProtein**, the first red-teaming framework designed for protein foundation models to the best of our knowledge. SafeProtein combines multimodal prompt engineering and heuristic beam search to systematically design red-teaming methods and conduct tests on protein foundation models. We also curated **SafeProtein-Bench**, which includes a manually constructed red-teaming benchmark dataset and a comprehensive evaluation protocol. SafeProtein achieved continuous jailbreaks on state-of-the-art protein foundation models (up to 70% attack success rate for ESM3), revealing potential biological safety risks in current protein foundation models and providing insights for the development of robust security protection technologies for frontier models. The codes will be made publicly available.
applications to physical sciences (physics, chemistry, biology, etc.)
https://openreview.net/pdf?id=QP7WX3XmEy
2025-09-18T09:05:46
4
[ { "id": "VTzyRcUFU4", "forum": "QP7WX3XmEy", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission10036/Reviewer_vm5Z", "reviewer_name": "Reviewer_vm5Z", "rating": 4, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "This ...
qSaRIuBuYx
https://openreview.net/forum?id=qSaRIuBuYx
Topological Retrieval-Augmented Generation via Intersecting Evidence Paths
4.5
3.75
[ 8, 6, 2, 2 ]
[ 3, 4, 4, 4 ]
4
[ "Retrieval-Augmented Generation", "Topological Reranking", "Evidence Path", "Lowest Common Ancestor" ]
Retrieval-Augmented Generation (RAG) struggles with complex queries. While multi-query rewriting enhances recall by capturing diverse semantic dimensions, existing methods falter by consolidating retrieved documents into a flat list for reranking. This discards the crucial structural information from the rewriting process and fails to prioritize documents that bridge different query aspects. To address this issue, we propose HPT-TRACE, a framework that centers on a novel topology-aware reranking mechanism. This framework functions within a topological space defined by our Hierarchical Partition Tree (HPT), which is construction-efficient and does not rely on Large Language Models (LLMs). Our innovative Topological Reranking via Ancestor Convergence Evaluation (TRACE) algorithm operates within this HPT-defined space. Rather than scoring documents in isolation, TRACE considers each document's lineage in the tree as an evidence path. It then reranks candidates by assessing the intersection length of evidence paths originating from different semantic dimensions of the user's query. A document is deemed essential for synthesizing a comprehensive answer if its path contributes to an intersection of substantial length. By explicitly modeling the relationships between intersecting evidence paths, HPT-TRACE provides a framework that is both highly effective and computationally efficient, excelling at identifying the most salient and holistic information to significantly enhance retrieval for complex queries.
generative models
https://openreview.net/pdf?id=qSaRIuBuYx
2025-09-18T18:58:54
4
[ { "id": "n39QevjF3W", "forum": "qSaRIuBuYx", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission11199/Reviewer_9RKV", "reviewer_name": "Reviewer_9RKV", "rating": 8, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This ...
T4pK6ByRit
https://openreview.net/forum?id=T4pK6ByRit
LLaVA-UHD v3: Progressive Visual Compression for Efficient Naive-Resolution Encoding in MLLMs
4
4
[ 4, 4, 4 ]
[ 5, 3, 4 ]
3
[ "Multimodal Large Language Model" ]
Visual encoding followed by token condensing has become the standard architectural paradigm in multi-modal large language models (MLLMs). Many recent MLLMs increasingly favor global naive-resolution visual encoding over slice-based methods. To investigate this trend, we systematically compare their behavior on vision-language understanding and attention patterns, revealing that global encoding enhances overall capability but at the expense of greater computational overhead. To address this issue, we present LLaVA-UHD v3, an MLLM centered upon our proposed Progressive Visual Compression (PVC) method, which can be seamlessly integrated into a standard Vision Transformer (ViT) to enable efficient naive-resolution encoding. The PVC approach consists of two key modules: (i) refined patch embedding, which supports flexible patch-size scaling for fine-grained visual modeling, and (ii) windowed token compression, hierarchically deployed across ViT layers to progressively aggregate local token representations. Jointly modulated by these two modules, a widely pretrained ViT can be reconfigured into an efficient architecture while largely preserving generality. Evaluated across extensive benchmarks, the transformed ViT, termed ViT-UHD, demonstrates competitive performance with MoonViT while reducing TTFT (time-to-first-token) by 2.4$\times$, when developed within an identical MLLM architecture. Building upon ViT-UHD, LLaVA-UHD v3 also achieves competitive performance to Qwen2-VL, while further reducing TTFT by 1.9$\times$. We will release all code and checkpoints to support future research on efficient MLLMs.
We introduce LLaVA-UHD v3, which achieves competitive performance with state-of-the-art MLLMs. With Progressive Visual Compression inside ViT, ViT-UHD improves efficiency by 2.4×, and LLaVA-UHD v3 reduces inference latency by 1.9×.
foundation or frontier models, including LLMs
https://openreview.net/pdf?id=T4pK6ByRit
2025-09-15T09:15:40
3
[ { "id": "Gs4Haa2Dh5", "forum": "T4pK6ByRit", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission5312/Reviewer_kLee", "reviewer_name": "Reviewer_kLee", "rating": 4, "confidence": 5, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "This p...
zeqCjGQB4U
https://openreview.net/forum?id=zeqCjGQB4U
Why Keep Your Doubts to Yourself? Trading Visual Uncertainties in Multi-Agent Bandit Systems
6.5
2.75
[ 6, 8, 6, 6 ]
[ 2, 2, 4, 3 ]
4
[ "Agent", "Vision Language Model", "Uncertainty" ]
Vision-Language Models (VLMs) enable powerful multi-agent systems, but scaling them is economically unsustainable: coordinating heterogeneous agents under information asymmetry often causes costs to spiral. Existing paradigms, such as Mixture-of-Agents and knowledge-based routers, rely on heuristic proxies that ignore costs and collapse uncertainty structure, leading to provably suboptimal coordination. We introduce Agora, a framework that reframes coordination as a decentralized market for uncertainty. Agora formalizes epistemic uncertainty into a structured, tradable asset (perceptual, semantic, inferential), and enforces profitability-driven trading among agents based on rational economic rules. A market-aware broker, extending Thompson Sampling, initiates collaboration and guides the system toward cost-efficient equilibria. Experiments on five multimodal benchmarks (MMMU, MMBench, MathVision, InfoVQA, CC-OCR) show that Agora outperforms strong VLMs and heuristic multi-agent strategies, e.g., achieving +8.5% accuracy over the best baseline on MMMU while reducing cost by over 3×. These results establish market-based coordination as a principled and scalable paradigm for building economically viable multi-agent visual intelligence systems.
We present a framework that reframes coordination as a decentralized market for uncertainty.
foundation or frontier models, including LLMs
https://openreview.net/pdf?id=zeqCjGQB4U
2025-09-02T02:13:31
4
[ { "id": "aj5Eea9564", "forum": "zeqCjGQB4U", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission582/Reviewer_oqqC", "reviewer_name": "Reviewer_oqqC", "rating": 6, "confidence": 2, "soundness": 4, "contribution": 3, "presentation": 4, "summary": "This pa...
OR4anjIaT3
https://openreview.net/forum?id=OR4anjIaT3
The Attack Means Nothing: Test-time Adversarial Defense Improves Zero-shot Adversarial Robustness for Medical Vision-Language Models
3
3.25
[ 4, 2, 2, 4 ]
[ 3, 3, 3, 4 ]
4
[ "test-time adversarial defense", "medical vision-language model", "classification", "deep learning" ]
Vision-language models (VLMs), exemplified by CLIP, have achieved remarkable zero-shot generalization but remain highly vulnerable to imperceptible adversarial perturbations, posing significant safety threats, particularly in medical scenarios. In this paper, we first prove that VLMs are much more robust than adversarial attacks when faced with weak transformations. Building upon this insight, we propose The Attack Means Nothing (TAME), a simple yet effective test-time defense paradigm for improving the zero-shot adversarial robustness of medical VLMs. We conduct comprehensive experiments on 11 medical datasets across 9 imaging modalities against three representative white-box attacks (PGD, C&W, and AutoAttack). BiomedCLIP with a ViT-B/16 backbone is used as the victim model. Extensive experimental results demonstrate that TAME consistently outperforms other defense methods across all attack types, boosting vanilla BiomedCLIP by +47.47% under PGD, +46.73% under C&W, and +47.79% under AutoAttack, while maintaining competitive clean accuracy. These significant improvements also suggest a potential risk of label leakage during attacks. Furthermore, TAME is plug-and-play and can be integrated with other adversarially fine-tuned VLMs to further enhance their defense capabilities. These findings support a practical and generalizable approach to deploying medical VLMs in clinical scenarios in the presence of adversaries. Codes will be available on GitHub.
applications to computer vision, audio, language, and other modalities
https://openreview.net/pdf?id=OR4anjIaT3
2025-09-17T20:19:05
5
[ { "id": "3r00cOY0D6", "forum": "OR4anjIaT3", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission9110/Reviewer_Xs2R", "reviewer_name": "Reviewer_Xs2R", "rating": 4, "confidence": 3, "soundness": 3, "contribution": 2, "presentation": 2, "summary": "The pa...
yK2ELO31lT
https://openreview.net/forum?id=yK2ELO31lT
NOCTA: Non-Greedy Objective Cost-Tradeoff Acquisition for Longitudinal Data
3.5
3
[ 4, 4, 2, 4 ]
[ 3, 3, 2, 4 ]
4
[ "Active Feature Acquisition", "Longitudinal", "Feature Selection" ]
In many critical applications, resource constraints prevent observing all features at test time, motivating selective information acquisition for prediction. For example, in healthcare, patient data spans diverse features ranging from lab tests to imaging studies, each of which may carry different information and must be acquired at a cost of time, money, or risk to the patient. Moreover, temporal prediction tasks, where both instance features and labels evolve over time, introduce additional complexity in deciding when or what information is important. In this work, we propose NOCTA, a Non-Greedy Objective Cost-Tradeoff Acquisition method that sequentially acquires the most informative features at inference time while accounting for both temporal dynamics and acquisition cost. We first introduce a cohesive estimation target for our NOCTA setting, and then develop two complementary estimators: 1) a non-parametric method based on nearest neighbors to guide acquisitions (NOCTA-NP), and 2) a parametric method that directly predicts the utility of potential acquisitions (NOCTA-P). Experiments on synthetic and real-world medical datasets demonstrate that both NOCTA variants outperform existing baselines, achieving higher accuracy at lower acquisition costs.
other topics in machine learning (i.e., none of the above)
https://openreview.net/pdf?id=yK2ELO31lT
2025-09-19T07:01:23
4
[ { "id": "q7qDOEcp27", "forum": "yK2ELO31lT", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission14465/Reviewer_73S5", "reviewer_name": "Reviewer_73S5", "rating": 4, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 3, "summary": "This ...
laaPiJJuAZ
https://openreview.net/forum?id=laaPiJJuAZ
LogSTOP: Temporal Scores over Prediction Sequences for Matching and Retrieval
5
2.75
[ 6, 4, 4, 6 ]
[ 3, 2, 3, 3 ]
4
[ "temporal logic", "video search", "quantitative semantics" ]
Neural models such as YOLO and HuBERT can be used to detect local properties such as objects ("car") and emotions ("angry") in individual frames of videos and audio clips respectively. The likelihood of these detections is indicated by scores in [0, 1]. Lifting these scores to temporal properties over sequences can be useful for several downstream applications such as query matching (e.g., "does the speaker eventually sound happy in this audio clip?"), and ranked retrieval (e.g., "retrieve top 5 videos with a 10 second scene where a car is detected until a pedestrian is detected"). In this work, we formalize this problem of assigning Scores for TempOral Properties (STOPs) over sequences, given potentially noisy score predictors for local properties. We then propose a scoring function called LogSTOP that can efficiently compute these scores for temporal properties represented in Linear Temporal Logic. Empirically, LogSTOP with YOLO and HuBERT, outperforms Large Vision / Audio Language Models and other Temporal Logic-based baselines by at least 16% on query matching with temporal properties over objects-in-videos and emotions-in-speech respectively. Similarly, on ranked retrieval with temporal properties over objects and actions in videos, LogSTOP with Grounding DINO and SlowR50 reports at least a 19% and 16% increase in mean average precision and recall over zero-shot text-to-video retrieval baselines respectively.
We propose scores for temporal properties over sequences (videos / speech) which can be used for query matching (does a video or audio clip satisfy a temporal property) and ranked retrieval (top-k videos with a given temporal event)
neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
https://openreview.net/pdf?id=laaPiJJuAZ
2025-09-17T01:32:31
4
[ { "id": "vjh4vCXd1O", "forum": "laaPiJJuAZ", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission7929/Reviewer_FdQM", "reviewer_name": "Reviewer_FdQM", "rating": 6, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 4, "summary": "This p...
CB6Ds5T4ae
https://openreview.net/forum?id=CB6Ds5T4ae
RADAR: Reasoning–Ability and Difficulty-Aware Routing in Language Models
5
3.25
[ 4, 4, 4, 8 ]
[ 4, 3, 3, 3 ]
4
[ "routing", "adaptive reasoning", "item response theory", "reasoning models", "large language models" ]
Reasoning language models have demonstrated remarkable performance on many challenging tasks in math, science, and coding. Choosing the right reasoning model for practical deployment involves a performance and cost tradeoff at two key levels: model size and reasoning budget, where larger models and higher reasoning budget lead to better performance but with increased cost and latency. In this work, we tackle this tradeoff from the angle of model configuration routing for different queries, and present RADAR (Reasoning–Ability and Difficulty-Aware Routing), a lightweight, interpretable, and scalable routing framework. Inspired by psychometrics, RADAR learns an item response model from model responses with different budgets to different queries, with interpretable parameters including query difficulties and model-budget abilities. RADAR then routes queries with higher difficulty to model-budget pairs with higher ability, and vice versa. We conduct extensive experiments on 8 widely used challenging reasoning benchmarks, demonstrating the superior performance of RADAR compared to state-of-the-art model routing methods. RADAR also exhibits query generalization capabilities, showing strong performance on out-of-distribution queries in all benchmarks. RADAR is also scalable and can efficiently integrate additional models, by dynamically selecting a small set of evaluation queries to estimate their abilities.
foundation or frontier models, including LLMs
https://openreview.net/pdf?id=CB6Ds5T4ae
2025-09-18T04:16:12
4
[ { "id": "AeLC845er9", "forum": "CB6Ds5T4ae", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission9782/Reviewer_St66", "reviewer_name": "Reviewer_St66", "rating": 4, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "This p...
cGjTMuhqz3
https://openreview.net/forum?id=cGjTMuhqz3
From Minor Adjustment to Major Gains: Soft Logit Normalization Loss Enhances Representations and Generalization
5
3
[ 8, 4, 2, 6 ]
[ 2, 3, 5, 2 ]
4
[ "cross-entropy loss", "soft logit normalization loss", "generalization improvement", "ImageNet-1K", "BERT" ]
Developing novel loss functions that allow small models to attain performance parity with their larger counterparts is an active research area in artificial intelligence. We propose the Soft Logit Normalization (SLN) loss, which normalizes the logit vector by its powered L2-norm before applying the standard softmax function. Compared with the classical cross-entropy loss, SLN loss significantly improves generalization across multiple vision benchmarks, including CIFAR-10 and ImageNet-1K, enabling small models to match the performance of models with approximately three times more parameters—an improvement comparable to that achieved by advanced knowledge distillation techniques. Beyond vision tasks, experiments on language tasks with large transformer-based models (e.g., BERT$_{LARGE}$ with 340M parameters) demonstrate the versatility of SLN loss across modalities. Theoretical analysis further shows that SLN loss facilitates more separable penultimate-layer representations, which contributes to better generalization, as numerically validated on diverse datasets. This work not only advances the practical deployment of efficient models on resource-constrained devices but also opens new directions for research into loss function design.
optimization
https://openreview.net/pdf?id=cGjTMuhqz3
2025-09-20T14:05:35
4
[ { "id": "9rAP4AFSCW", "forum": "cGjTMuhqz3", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission23779/Reviewer_aqLd", "reviewer_name": "Reviewer_aqLd", "rating": 8, "confidence": 2, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "The p...
KTjAeX6u2a
https://openreview.net/forum?id=KTjAeX6u2a
AURA: Structural and Semantic Calibration for Robust Federated Graph Learning
4
4.25
[ 2, 4, 8, 2 ]
[ 4, 4, 5, 4 ]
4
[ "Federated Learning", "Graph Learning", "Robustness", "Geometric Learning" ]
Training a highly generalizable server model in Federated Graph Learning requires data from multiple sources. However, noisy labels increasingly undermine federated systems due to the propagation of erroneous information between nodes. Compounding this issue, significant variations in data distribution among clients make noisy-node detection more challenging. In this work, we propose AURA, an effective structural and semantic calibration framework for Robust Federated Graph Learning. We observe that spectral discrepancies across different clients adversely affect noise detection. To address this, we employ SVD for self-supervision, compelling the model to learn an intrinsic and consistent structural representation of the data, thereby effectively attenuating local high-frequency perturbations induced by noisy nodes. We introduce two metrics, namely "Depth Influence" and "Breadth Influence". Based on these metrics, the framework judiciously selects and aggregates the most consensual knowledge from the class prototypes uploaded by each client. Concurrently, clients perform knowledge distillation by minimizing the KL divergence between their local model's output distribution and that of the global model, which markedly enhances the model's generalization performance and convergence stability in heterogeneous data environments. AURA demonstrates remarkable robustness across multiple datasets, for instance achieving a $7.6\%$ increase in F1-macro score under 20\% uniform noise on Cora. The code is available for anonymous access at \url{https://anonymous.4open.science/r/AURA-F351/}.
learning on graphs and other geometries & topologies
https://openreview.net/pdf?id=KTjAeX6u2a
2025-09-02T00:48:22
4
[ { "id": "ShKdWqB9jq", "forum": "KTjAeX6u2a", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission542/Reviewer_hrJw", "reviewer_name": "Reviewer_hrJw", "rating": 2, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 1, "summary": "The pap...
IhTrFvY7p3
https://openreview.net/forum?id=IhTrFvY7p3
MeSH: Memory-as-State-Highways for Recursive Transformers
4.5
2.75
[ 0, 8, 6, 4 ]
[ 2, 4, 2, 3 ]
4
[ "Recursive Transformer", "Language Model", "Parameter Sharing", "Parameter Efficiency" ]
Recursive transformers reuse parameters and iterate over hidden states multiple times, decoupling compute depth from parameter depth. However, under matched compute, recursive models with fewer parameters often lag behind non-recursive counterparts. By probing hidden states, we trace this performance gap to two primary bottlenecks: __undifferentiated computation__, where the core is forced to adopt a similar computational pattern at every iteration, and __information overload__, where long-lived and transient information must coexist in a single hidden state. To address these issues, we introduce a **Me**mory-as-**S**tate-**H**ighways **(MeSH)** scheme, which externalizes state management into an explicit memory buffer and employs lightweight routers to dynamically diversify computation across iterations. Probing visualizations confirm that MeSH successfully resolves these pathologies by inducing functional specialization across iterations. On the Pythia suite (160M–1.4B), MeSH-enhanced recursive transformers consistently improve over recursive baselines and outperform their larger non-recursive counterpart at the 1.4B scale, improving average downstream accuracy by +1.06\% with 33\% fewer non-embedding parameters. Our analysis establishes MeSH as a scalable and principled architecture for building stronger recursive models.
We diagnose why recursive transformers underperform and propose a targeted solution for building stronger recursive backbones.
foundation or frontier models, including LLMs
https://openreview.net/pdf?id=IhTrFvY7p3
2025-09-18T11:46:33
4
[ { "id": "QZzI3mN4uC", "forum": "IhTrFvY7p3", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission10326/Reviewer_4cbP", "reviewer_name": "Reviewer_4cbP", "rating": 0, "confidence": 2, "soundness": 1, "contribution": 1, "presentation": 1, "summary": "This ...
AnY0zJ8adz
https://openreview.net/forum?id=AnY0zJ8adz
FaithfulFaces: Pose-Faithful Facial Identity Preservation for Text-to-Video Generation
4.5
3.75
[ 6, 4, 4, 4 ]
[ 4, 4, 3, 4 ]
4
[ "Pose-Faithful Facial Identity Preservation", "Identity-Preserving Text-to-Video Generation" ]
Identity-preserving text-to-video generation (IPT2V) empowers users to produce diverse and imaginative videos with consistent human facial identity. Although existing open-source and commercial methods have demonstrated impressive performance in typical scenarios, they still face significant limitations when confronted with challenging cases, such as large facial pose variations or facial occlusions. These challenges frequently result in identity distortion in the generated videos. In this paper, we propose FaithfulFaces, a pose-faithful facial identity preservation learning framework to improve IPT2V in complex dynamic scenes. Specifically, FaithfulFaces first proposes a pose-shared identity aligner that refines and aligns facial poses across distinct views via a pose-shared dictionary and a pose variation–identity invariance constraint. Then, the well-learned aligner can capture the global facial pose representation from the input single-view face image with explicit Euler angle embeddings, which could provide a pose-faithful facial prior for foundational generative models to better preserve identity in the generated videos. In particular, we develop a high-quality video dataset pipeline featuring substantial facial pose variations specifically for our FaithfulFaces to facilitate robust training. Compared to other IPT2V methods, FaithfulFaces achieves state-of-the-art performance across multiple metrics, generating high-quality videos with clear facial structures and consistent identity preservation, even as facial pose changes and occlusions occur. The code and dataset pipeline will be released.
applications to computer vision, audio, language, and other modalities
https://openreview.net/pdf?id=AnY0zJ8adz
2025-09-11T10:35:54
4
[ { "id": "d5Ts1jSzIe", "forum": "AnY0zJ8adz", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission3869/Reviewer_qQ3h", "reviewer_name": "Reviewer_qQ3h", "rating": 6, "confidence": 4, "soundness": 3, "contribution": 2, "presentation": 2, "summary": "This p...
l4noTuvYMP
https://openreview.net/forum?id=l4noTuvYMP
Unsupervised Reinforcement Learning with Verifiable Rewards via First Repeat Criterion
5
3.25
[ 4, 6, 6, 4 ]
[ 3, 3, 4, 3 ]
4
[ "Reinforcement Learning; Unsupervised Learning" ]
Recent advances in Large Language Models (LLMs) show that Reinforcement Learning with Verifiable Rewards (RLVR) can enhance the reasoning capabilities of LLMs using only automatically verifiable signals. However, collecting ground-truth answers for reasoning models remains a challenging and labor-intensive task, especially in resource-constrained scenarios. In this paper, we investigate the potential of unsupervised reinforcement learning with verifiable rewards, and propose the uns-GRPO framework to improve the math reasoning of small LLMs. First, we design an unsupervised reward model that generates pseudo answers via a first repeat criterion: it treats the first repeated answer in a sequence of generated responses as the ground truth, which we demonstrate to be efficient and reliable in resource-constrained settings. Second, we propose an adaptive KL regularization to mitigate the noise introduced by pseudo answers. We observe a consistent relationship between pseudo-answer confidence and accuracy rewards. Guided by the accuracy rewards, the adaptive KL regularization enforces conservative optimization when confidence is low, while encouraging diverse exploration when confidence is high. Experimental results demonstrate that our unsupervised approach achieves stable improvements across diverse models, training datasets, and evaluation tasks.
reinforcement learning
https://openreview.net/pdf?id=l4noTuvYMP
2025-09-16T22:25:35
4
[ { "id": "prCKYDa7E1", "forum": "l4noTuvYMP", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission7700/Reviewer_QgqW", "reviewer_name": "Reviewer_QgqW", "rating": 4, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "This p...
J1FIthPL7T
https://openreview.net/forum?id=J1FIthPL7T
Layer-Wise Feedback Signals: Dynamic Regulation for Continual Learning
3
4
[ 2, 4, 2, 4 ]
[ 4, 4, 4, 4 ]
4
[ "continual learning", "dynamic feedback" ]
Continual learning aims to acquire new tasks while preserving performance on previously learned ones, but most methods struggle with catastrophic forgetting. Existing approaches typically treat all layers uniformly, often trading stability for plasticity or vice versa. However, different layers naturally exhibit varying levels of uncertainty (entropy) when classifying tasks. High-entropy layers tend to underfit by failing to capture task-specific patterns, while low-entropy layers risk overfitting by becoming overly confident and specialized. To address this imbalance, we propose an entropy-aware continual learning method that employs a dynamic feedback mechanism to regulate each layer based on its entropy. Specifically, our approach reduces entropy in high-entropy layers to mitigate underfitting and increases entropy in overly confident layers to alleviate overfitting. This adaptive regulation encourages the model to converge to wider local minima, which have been shown to improve generalization. Our method is general and can be seamlessly integrated with both replay- and regularization-based approaches. Experiments on Split-CIFAR100 and Tiny-ImageNet demonstrate substantial performance gains over state-of-the-art baselines.
transfer learning, meta learning, and lifelong learning
https://openreview.net/pdf?id=J1FIthPL7T
2025-09-19T02:46:35
4
[ { "id": "0Ng9WDIcgp", "forum": "J1FIthPL7T", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission13705/Reviewer_oJLf", "reviewer_name": "Reviewer_oJLf", "rating": 2, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 3, "summary": "The p...
5T0BXtJxzN
https://openreview.net/forum?id=5T0BXtJxzN
Can LLMs Reason Soundly in Law? Auditing Inference Patterns for Legal Judgment
4.5
2.25
[ 4, 4, 4, 6 ]
[ 3, 3, 1, 2 ]
4
[ "Large Language Model", "Value Alignment", "Trustworthiness" ]
This paper presents a method to analyze the inference patterns used by Large Language Models (LLMs) for judgment in a case study on legal LLMs, so as to identify potentially incorrect representations of the LLM according to human domain knowledge. Unlike traditional evaluations of language generation results, we propose to evaluate the correctness of the detailed inference patterns of an LLM behind its seemingly correct outputs. To this end, we quantify the interactions between input phrases used by the LLM as primitive inference patterns, because recent theoretical achievements have proven several mathematical guarantees of the faithfulness of the interaction-based explanation. We design a set of metrics to evaluate the detailed inference patterns of LLMs. Experiments show that even when the language generation results appear correct, a significant portion of the inference patterns used by the LLM for legal judgment may represent misleading or irrelevant logic.
This paper presents a method to analyze the inference patterns used by the Large Language Model for legal judgment.
alignment, fairness, safety, privacy, and societal considerations
https://openreview.net/pdf?id=5T0BXtJxzN
2025-09-18T14:42:34
4
[ { "id": "33UTkBtzmv", "forum": "5T0BXtJxzN", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission10619/Reviewer_B4o6", "reviewer_name": "Reviewer_B4o6", "rating": 4, "confidence": 3, "soundness": 2, "contribution": 3, "presentation": 2, "summary": "The p...
d2tMZHTFWv
https://openreview.net/forum?id=d2tMZHTFWv
Bandit Learning for Online Scheduling with Immediate Decision
5
3
[ 6, 4, 4, 6 ]
[ 4, 3, 3, 2 ]
4
[ "multi-armed bandit", "online scheduling" ]
Online scheduling has been extensively studied in computer science and economics owing to its broad applications. Motivated by streaming task processing in domains such as IoT data streaming and cloud resource allocation, we investigate an online scheduling setting where the scheduler must immediately decide whether to accept an incoming task. Consider a system with $M$ identical machines. At each time step, multiple tasks arrive, and each machine must immediately assign itself to a task or remain idle. Tasks that are not processed immediately are abandoned and cannot be revisited. Upon completion, a task yields a reward, which may be stochastic and initially unknown. Through repeated task completions, the scheduler can learn the reward distributions over time. In this work, we formalize this problem as online scheduling with immediate decision. We first analyze the setting with known rewards, for which we derive a worst-case competitive ratio and propose a near-optimal online algorithm. For the case of unknown and random rewards, we design an efficient bandit algorithm that balances exploration and exploitation, achieving an $O(\log T)$ regret over a time horizon $T$. Experimental results demonstrate the efficacy of the proposed algorithms.
learning theory
https://openreview.net/pdf?id=d2tMZHTFWv
2025-09-18T21:49:22
4
[ { "id": "ag9OfRYNdy", "forum": "d2tMZHTFWv", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission11728/Reviewer_WaxM", "reviewer_name": "Reviewer_WaxM", "rating": 6, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 2, "summary": "This ...
ofYuPZ0sK0
https://openreview.net/forum?id=ofYuPZ0sK0
Optimizing Data Augmentation through Bayesian Model Selection
5.5
3.5
[ 6, 4, 4, 8 ]
[ 3, 3, 4, 4 ]
4
[ "Bayesian Neural Network", "Variational Inference", "Data Augmentation" ]
Data Augmentation (DA) has become an essential tool for improving the robustness and generalization of modern machine learning. However, when deciding on DA strategies it is critical to choose parameters carefully, and this can be a daunting task traditionally left to trial and error or expensive optimization based on validation performance. In this paper, we counter these limitations by proposing a novel framework for optimizing DA. In particular, we take a probabilistic view of DA, which leads to the interpretation of augmentation parameters as model (hyper)-parameters, and the optimization of the marginal likelihood with respect to these parameters as a Bayesian model selection problem. Due to its intractability, we derive a tractable ELBO, which allows us to optimize augmentation parameters jointly with model parameters. We provide extensive theoretical results on variational approximation quality, generalization guarantees, invariance properties, and connections to empirical Bayes. Through experiments on computer vision tasks, we show that our approach improves calibration and yields robust performance over fixed or no augmentation. Our work provides a rigorous foundation for optimizing DA through Bayesian principles with significant potential for robust machine learning.
We present OPTIMA, a Bayesian framework that learns augmentation and model parameters jointly, with theory and experiments showing improved accuracy, calibration, and OOD robustness over strong baselines.
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
https://openreview.net/pdf?id=ofYuPZ0sK0
2025-09-19T23:22:19
4
[ { "id": "tHZUyPuPgS", "forum": "ofYuPZ0sK0", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission19316/Reviewer_5Upk", "reviewer_name": "Reviewer_5Upk", "rating": 6, "confidence": 3, "soundness": 2, "contribution": 3, "presentation": 3, "summary": "This ...
Vq9vhod9fL
https://openreview.net/forum?id=Vq9vhod9fL
Linear Separability in Contrastive Learning via Neural Training Dynamics
3.5
3.75
[ 2, 4, 4, 4 ]
[ 4, 4, 4, 3 ]
4
[ "contrastive learning", "neural network", "linear separability", "training dynamics", "variational analysis", "gradient flow", "neural tangent kernel" ]
The SimCLR method for contrastive learning of invariant visual representations has become extensively used in supervised, semi-supervised, and unsupervised settings, due to its ability to uncover patterns and structures in image data that are not directly present in the pixel representations. However, this success is still not well understood; neither the loss function nor invariance alone explains it. In this paper, we present a mathematical analysis that clarifies how the geometry of the learned latent distribution arises from SimCLR. Despite the nonconvex SimCLR loss and the presence of many undesirable local minimizers, we show that the training dynamics driven by gradient flow tend toward favorable representations. In particular, early training induces clustering in feature space. Under a structural assumption on the neural network, our main theorem proves that the learned features become linearly separable with respect to the ground-truth labels. To support our theoretical insights, we present numerical results that align with our theoretical predictions.
We present a novel theoretical result showing that SimCLR training dynamics lead to clustering and linear separability, despite nonconvex loss and poor local minima.
unsupervised, self-supervised, semi-supervised, and supervised representation learning
https://openreview.net/pdf?id=Vq9vhod9fL
2025-09-20T00:46:13
4
[ { "id": "0EyeQ1M23Y", "forum": "Vq9vhod9fL", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission19900/Reviewer_vGFP", "reviewer_name": "Reviewer_vGFP", "rating": 2, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "This ...
jMbyMp5DCh
https://openreview.net/forum?id=jMbyMp5DCh
Evaluating Cross-Modal Reasoning Ability and Problem Characteristics with Multimodal Item Response Theory
5.5
3.5
[ 6, 4, 6, 6 ]
[ 4, 3, 4, 3 ]
4
[ "VLM", "Evaluation", "IRT" ]
Multimodal Large Language Models (MLLMs) have recently emerged as general architectures capable of reasoning over diverse modalities. Benchmarks for MLLMs should measure their ability for cross‑modal integration. However, current benchmarks are filled with shortcut questions, which can be solved using only a single modality, thereby yielding unreliable rankings. For example, in vision-language cases, the correct answer can often be found without the image or without the text. These low-quality questions unnecessarily increase the size and computational requirements of benchmarks. We introduce a multi-modal and multidimensional item response theory framework (M$^3$-IRT) that extends classical IRT by decomposing both model ability and item difficulty into image‑only, text‑only, and cross‑modal components. M$^3$-IRT estimates the cross‑modal ability of MLLMs and each question’s cross‑modal difficulty, enabling compact, high‑quality subsets that better reflect multimodal reasoning. Across 24 VLMs on three benchmarks, M$^3$-IRT prioritizes genuinely cross‑modal questions over shortcuts and preserves ranking fidelity even when 50\% of items are artificially generated low‑quality questions, thereby reducing evaluation cost while improving reliability. M$^3$-IRT thus offers a practical tool for assessing cross‑modal reasoning and refining multimodal benchmarks.
datasets and benchmarks
https://openreview.net/pdf?id=jMbyMp5DCh
2025-09-18T13:10:38
4
[ { "id": "n9SgRRXCBz", "forum": "jMbyMp5DCh", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission10457/Reviewer_5cL5", "reviewer_name": "Reviewer_5cL5", "rating": 6, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "In th...
f9R6eIsGh1
https://openreview.net/forum?id=f9R6eIsGh1
Primal-Dual Direct Preference Optimization for Constrained LLM Alignment
3
2.5
[ 2, 4, 4, 2 ]
[ 2, 2, 3, 3 ]
4
[ "Large language models (LLMs)", "safety or constrained alignment", "rigorous theoretical guarantees for LLM alignment", "primal-dual direct preference optimization (DPO)" ]
The widespread application of Large Language Models (LLMs) imposes increasing demands on safety, such as reducing harmful content and fake information, and avoiding certain forbidden tokens due to rules and laws. While there have been several recent works studying safe alignment of LLMs, these works either require the training of reward and cost models and incur high memory and computational costs, or need prior knowledge about the optimal solution. Motivated by this fact, we study the problem of constrained alignment in LLMs, i.e., maximizing the output reward while restricting the cost due to potentially unsafe content to stay below a threshold. For this problem, we propose a novel primal-dual DPO approach, which first trains a model using standard DPO on reward preference data to provide reward information, and then adopts a rearranged Lagrangian DPO objective utilizing the provided reward information to fine-tune LLMs on cost preference data. Our approach significantly reduces memory and computational costs, and does not require extra prior knowledge. Moreover, we establish rigorous theoretical guarantees on the suboptimality and constraint violation of the output policy. We also extend our approach to an online data setting by incorporating exploration bonuses, which enables our approach to explore uncovered prompt-response space, and then provide theoretical results that remove the dependence on preference data coverage. Experimental results on the widely-used preference dataset PKU-SafeRLHF demonstrate the effectiveness of our approach.
alignment, fairness, safety, privacy, and societal considerations
https://openreview.net/pdf?id=f9R6eIsGh1
2025-09-17T11:01:21
4
[ { "id": "7ojSpqQSmJ", "forum": "f9R6eIsGh1", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission8307/Reviewer_VPsQ", "reviewer_name": "Reviewer_VPsQ", "rating": 2, "confidence": 2, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "The pa...
PbYw721DhY
https://openreview.net/forum?id=PbYw721DhY
Cell2Text: Multimodal LLM for Generating Single-Cell Descriptions from RNA-Seq Data
3.5
4
[ 2, 6, 4, 2 ]
[ 5, 4, 3, 4 ]
4
[ "AI for Science", "Large Language Model", "Multimodality", "Bioinformatics" ]
Single-cell RNA sequencing has transformed biology by enabling the measurement of gene expression at cellular resolution, providing information for cell types, states, and disease contexts. Recently, single-cell foundation models have emerged as powerful tools for learning transferable representations directly from expression profiles, improving performance on classification and clustering tasks. However, these models are limited to discrete prediction heads, which collapse cellular complexity into predefined labels that fail to capture the richer, contextual explanations biologists need. We introduce Cell2Text, a multimodal generative framework that translates scRNA-seq profiles into structured natural language descriptions. By integrating gene-level embeddings from single-cell foundation models with pretrained large language models, Cell2Text generates coherent summaries that capture cellular identity, tissue origin, disease associations, and pathway activity, generalizing to unseen cells. Empirically, Cell2Text outperforms baselines on classification accuracy, demonstrates strong ontological consistency using PageRank-based similarity metrics, and achieves high semantic fidelity in text generation. These results demonstrate that coupling expression data with natural language offers both stronger predictive performance and inherently interpretable outputs, pointing to a scalable path for label-efficient characterization of unseen cells.
applications to physical sciences (physics, chemistry, biology, etc.)
https://openreview.net/pdf?id=PbYw721DhY
2025-09-19T18:16:05
4
[ { "id": "Ga3gZNshA5", "forum": "PbYw721DhY", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission17506/Reviewer_k6Tb", "reviewer_name": "Reviewer_k6Tb", "rating": 2, "confidence": 5, "soundness": 1, "contribution": 2, "presentation": 2, "summary": "This ...
a7n5ZrsD2U
https://openreview.net/forum?id=a7n5ZrsD2U
Generalization Error Bound via Embedding Dimension and Network Lipschitz Constant
4
3.5
[ 2, 4, 4, 6 ]
[ 3, 4, 4, 3 ]
4
[ "Generalization Error Bound", "Intrinsic Dimension", "Wasserstein Distance", "Lipschitz continuity" ]
Modern deep networks generalize well even in heavily over-parameterized regimes, where traditional parameter-based bounds become vacuous. We propose a representation-centric view of generalization, showing that the generalization error is controlled jointly by: (i) the intrinsic dimension of learned embeddings, which reflects how much the data distribution is compressed and determines how quickly the empirical distribution of embeddings converges to the population distribution in Wasserstein distance, and (ii) the sensitivity of the downstream mapping from embeddings to predictions, quantified by Lipschitz constants. Together these factors yield a new generalization error bound that explicitly links embedding dimension with network architecture. At the final embedding layer, architectural sensitivity vanishes, and the bound is driven more strongly by embedding dimension, explaining why final-layer dimensionality is often a strong empirical predictor of generalization. Experiments across datasets, architectures and controlled interventions validate the theoretical predictions and demonstrate the practical value of embedding-based diagnostics. Overall, this work shifts the focus of generalization analysis from parameter to representation geometry, offering both theoretical insight and actionable tools for deep learning practice.
learning theory
https://openreview.net/pdf?id=a7n5ZrsD2U
2025-09-17T17:32:36
4
[ { "id": "yOUt9J64hO", "forum": "a7n5ZrsD2U", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission8893/Reviewer_NeUQ", "reviewer_name": "Reviewer_NeUQ", "rating": 2, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "This p...
tnlNnQIqeA
https://openreview.net/forum?id=tnlNnQIqeA
Open-Vocabulary Object Detection for Low-Altitude Scenarios Using RGB-Infrared Data: A Benchmark and A New Method
3
4.25
[ 4, 4, 2, 2 ]
[ 5, 4, 4, 4 ]
4
[ "Open-Vocabulary Object Detection", "Low-Altitude Scenarios", "Using Aligned RGB-Infrared Data" ]
Traditional object detection methods are limited by closed datasets, while open-vocabulary object detection (OVOD) overcomes this limitation. However, most existing OVOD approaches are trained on natural scene images and struggle to generalize to low-altitude scene images (e.g., UAV-captured images) due to domain differences between the datasets. Therefore, this paper aims to advance research on open-vocabulary object detection in low-altitude scenarios. Unlike most existing open-vocabulary methods, which are trained solely on RGB images and corresponding textual annotations, this paper presents the first low-altitude open-vocabulary object detection dataset using aligned RGB-Infrared images, named the RGB-Infrared OVOD Dataset (RIOVOD), equipped with vocabulary-level annotations. We aim to leverage the complementary information between the two modalities, specifically by utilizing the texture features of RGB images and the thermal radiation information from infrared images, to further enhance the performance of OVOD methods. Building on this, we propose a new architecture for open-vocabulary object detection in low-altitude scenarios using RGB-Infrared data, named LSRI. Extensive experiments show that our method outperforms other approaches, achieving a 0.356 $AP_{50}$ on the RGB-Infrared OVOD Dataset, compared to 0.246 and 0.302 $AP_{50}$ achieved by single-modal (RGB and Infrared) methods, respectively.
Open-Vocabulary Object Detection for Low-Altitude Scenarios Using Aligned RGB-Infrared Data
other topics in machine learning (i.e., none of the above)
https://openreview.net/pdf?id=tnlNnQIqeA
2025-09-17T10:13:15
4
[ { "id": "WoBmtnRt3T", "forum": "tnlNnQIqeA", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission8229/Reviewer_M1GP", "reviewer_name": "Reviewer_M1GP", "rating": 4, "confidence": 5, "soundness": 2, "contribution": 2, "presentation": 3, "summary": "This p...
2GhjkigNTx
https://openreview.net/forum?id=2GhjkigNTx
CDTP: A Large-Scale Chinese Data-Text Pair Dataset for Comprehensive Evaluation of Chinese LLMs
6
4.25
[ 4, 8, 8, 4 ]
[ 4, 4, 4, 5 ]
4
[ "Chinese Data-Text Pair Dataset", "Large Language Model", "Chinese Evaluation" ]
Large Language Models (LLMs) have achieved remarkable success across a wide range of natural language processing tasks. However, Chinese LLMs face unique challenges, primarily due to the dominance of unstructured free text and the lack of structured representations in Chinese corpora. While existing benchmarks for LLMs partially assess Chinese LLMs, they are still predominantly English-centric and fail to address the unique linguistic characteristics of Chinese, lacking structured datasets essential for robust evaluation. To address these challenges, we present a \underline{\textbf{C}}omprehensive \underline{\textbf{B}}enchmark for \underline{\textbf{E}}valuating \underline{\textbf{C}}hinese \underline{\textbf{L}}arge \underline{\textbf{L}}anguage \underline{\textbf{M}}odels (CB-ECLLM) based on the newly constructed Chinese Data-Text Pair (CDTP) dataset. Specifically, CDTP comprises over 7 million aligned text pairs, each consisting of unstructured text coupled with one or more corresponding triples, alongside a total of 15 million triples spanning four critical domains. The core contributions of CDTP are threefold: (i) enriching Chinese corpora with high-quality structured information; (ii) enabling fine-grained evaluation tailored to knowledge-driven tasks; and (iii) supporting multi-task fine-tuning to assess generalization and robustness across scenarios, including Knowledge Graph Completion, Triple-to-Text generation, and Question Answering. Furthermore, we conduct rigorous evaluations through extensive experiments and ablation studies to assess the benchmark's effectiveness, the impact of Supervised Fine-Tuning (SFT), and its robustness. To support reproducible research, we offer an open-source codebase\footnote{Code and data are available at: \href{https://anonymous.4open.science/r/CDTP-2D04}{https://github.com/CDTP.git}} and outline potential directions for future investigations based on our insights.
datasets and benchmarks
https://openreview.net/pdf?id=2GhjkigNTx
2025-09-18T12:50:32
4
[ { "id": "bKzffBxn3J", "forum": "2GhjkigNTx", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission10428/Reviewer_KL2W", "reviewer_name": "Reviewer_KL2W", "rating": 4, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "The p...
IMSa6CPFoG
https://openreview.net/forum?id=IMSa6CPFoG
Dream2Learn: Structured Generative Dreaming for Continual Learning
4
3.75
[ 0, 6, 2, 8 ]
[ 5, 3, 3, 4 ]
4
[ "Continual Learning", "Off-line brain states", "Generative Latent Space Manipulation" ]
Continual learning struggles with balancing plasticity and stability while mitigating catastrophic forgetting. Inspired by human sleep and dreaming mechanisms, we propose Dream2Learn (D2L), a generative approach that enables models, trained in a continual learning setting, to synthesize structured additional training signals driven by their internal knowledge. Unlike prior methods that rely on real data to simulate the dreaming process, D2L autonomously constructs semantically distinct yet structurally coherent dreamed classes, conditioning a diffusion model via soft prompt optimization. These dynamically generated samples expand the classifier’s representation space, reinforcing past knowledge while structuring features in a way that facilitates adaptation to future tasks. In particular, by integrating dreamed classes into training, D2L enables the model to self-organize its latent space, improving generalization and adaptability to new data. Experiments on Mini-ImageNet, FG-ImageNet, and ImageNet-R show that D2L surpasses existing methods across all evaluated metrics. Notably, it achieves positive forward transfer, confirming its ability to enhance adaptability by structuring representations for future tasks.
transfer learning, meta learning, and lifelong learning
https://openreview.net/pdf?id=IMSa6CPFoG
2025-09-18T23:27:48
4
[ { "id": "ULTeTue1pu", "forum": "IMSa6CPFoG", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission12652/Reviewer_HqT1", "reviewer_name": "Reviewer_HqT1", "rating": 0, "confidence": 5, "soundness": 2, "contribution": 1, "presentation": 2, "summary": "The p...
0Ow7PTK0Qj
https://openreview.net/forum?id=0Ow7PTK0Qj
FastEdit: Low-Rank Structured Regularization for Efficient Model Editing
4
3.75
[ 4, 6, 4, 2 ]
[ 4, 4, 3, 4 ]
4
[ "Large Language Models", "Model Editing", "Knowledge Updating" ]
When new knowledge emerges, it is crucial to efficiently update large language models (LLMs) to reflect the latest information. However, state-of-the-art methods widely adopted in the model editing community --- such as MEMIT, PRUNE, and AlphaEdit --- suffer from prohibitively slow editing speeds, often taking 6 to 14 hours to sequentially edit just 2000 facts on models like LLaMA-3-8B, making real-time updates impractical, especially as model scale increases. Moreover, they require extensive pre-computation to sample pre-edit knowledge --- a step that can take over 24 hours --- severely limiting their deployability. In this paper, we present \textbf{FastEdit}, a highly efficient editing framework that enables rapid and scalable model updates. Our key insight is to exploit the low-rank structure inherent in editing updates through a structured regularizer, allowing us to avoid costly inversions via the Sherman-Morrison-Woodbury (SMW) identity. This drastically accelerates the computation of update matrices while preserving edit quality. Crucially, \textbf{FastEdit} requires only a small number of pre-edit samples, reducing both memory and computational overhead. On 2000 sequential edits, \textbf{FastEdit} completes the process in just \textbf{1 hour} -- an order of magnitude faster than prior work -- without sacrificing accuracy. Our method significantly lowers the barrier to practical model editing, enabling timely and scalable knowledge updates in large models.
applications to computer vision, audio, language, and other modalities
https://openreview.net/pdf?id=0Ow7PTK0Qj
2025-09-20T19:26:30
4
[ { "id": "kAg9ViFHdm", "forum": "0Ow7PTK0Qj", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission25400/Reviewer_ES6g", "reviewer_name": "Reviewer_ES6g", "rating": 4, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 3, "summary": "The p...
q0TTxW6vEe
https://openreview.net/forum?id=q0TTxW6vEe
Sample-Efficient Pruning Model Selection via Lasso
3.5
3
[ 2, 2, 4, 6 ]
[ 3, 4, 3, 2 ]
4
[ "model selection", "neural network pruning", "generalization error", "sample complexity" ]
We study the problem of selecting a pruned neural network from a set of candidates generated by various pruning methods. The goal of the learner is to identify a near-optimal model that achieves low generalization error. Although model selection techniques such as cross-validation are widely used in practice, they often fail to provide guarantees on generalization error or offer only asymptotic guarantees. To address these limitations, we propose an algorithm that jointly selects a pruned network and updates its parameters using $L_1$-regularization, thereby encouraging sparsity while ensuring low generalization error. For a given error tolerance $\epsilon$, we establish a sample complexity lower bound of $\Omega\left(\frac{1}{\epsilon^2} \log M\right)$, where $M$ is the number of candidate models, demonstrating that our algorithm remains sample-efficient even when the candidate pool is large. Extensive numerical experiments confirm both the practical effectiveness and the theoretical guarantees of the proposed method.
We propose a novel elimination-based algorithm that identifies a near-optimal pruned network from a pool of pruned neural networks.
learning theory
https://openreview.net/pdf?id=q0TTxW6vEe
2025-09-16T09:48:11
4
[ { "id": "m2FDdRkhoj", "forum": "q0TTxW6vEe", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission6503/Reviewer_oCNt", "reviewer_name": "Reviewer_oCNt", "rating": 2, "confidence": 3, "soundness": 3, "contribution": 2, "presentation": 2, "summary": "This p...
oEjqJEMMyi
https://openreview.net/forum?id=oEjqJEMMyi
Eidolon: Unleashing Stealthy Backdoor Pandemic by Infecting a Single Diffusion Model
-1
-1
[]
[]
0
[ "Diffusion Model", "Backdoor Attack" ]
The remarkable success of modern Deep Neural Networks (DNNs) can be primarily attributed to having access to compute resources and high-quality labeled data, which is often costly and challenging to acquire. Recently, text-to-image Diffusion Models (DMs) have emerged as powerful data generators to augment training datasets. Machine learning practitioners often utilize off-the-shelf third-party DMs for generating synthetic data without domain-specific expertise or adaptation. Such a practice leads to a novel and insidious threat: a diffusion model infected with a backdoor can effectively spread into a large number of downstream models, causing a backdoor pandemic. To demonstrate this threat for the first time, we propose Eidolon, designed and optimized to stealthily transfer the backdoor injected into a single diffusion model into virtually an infinite number of downstream models without any active attacker role in the downstream training tasks. The proposed Eidolon not only makes the attack stealthier and more effective, it also operates under a stricter threat model for injecting the backdoor into downstream models than conventional backdoor attacks. We propose four necessary tests that a successful backdoor attack on the diffusion model should pass to cause a backdoor pandemic. Our evaluation across a wide range of benchmark datasets and model architectures shows that only our attack successfully passes these tests, causing a widespread pandemic across many downstream classifiers.
We introduce Eidolon, the first backdoor attack on diffusion model that stealthily causes a widespread backdoor pandemic by passively transferring the attack to downstream models through synthetic data generation
applications to computer vision, audio, language, and other modalities
https://openreview.net/pdf?id=oEjqJEMMyi
2025-09-19T04:31:40
1
[]
UAUimofy3W
https://openreview.net/forum?id=UAUimofy3W
Non-Collaborative User Simulators for Tool Agents
4.666667
3
[ 6, 4, 4 ]
[ 3, 3, 3 ]
3
[ "Tool Agent", "User Simulator", "Non-collaborative User", "Dialogue Simulation" ]
Tool agents interact with users through multi-turn dialogues to accomplish various tasks. Recent studies have adopted user simulation methods to develop these agents in multi-turn settings. However, existing user simulators tend to be agent-friendly, exhibiting only cooperative behaviors, which fails to train and test agents against non-collaborative users in the real world. To address this, we propose a novel user simulator architecture that simulates four categories of non-collaborative behaviors: requesting unavailable services, digressing into tangential conversations, expressing impatience, and providing incomplete utterances. Our user simulator can simulate challenging and natural non-collaborative behaviors while reliably delivering all intents and information necessary to accomplish the task. Our experiments on MultiWOZ and $\tau$-bench reveal significant performance degradation in state-of-the-art tool agents when encountering non-collaborative users. We provide detailed analyses of agents' weaknesses under each non-collaborative condition, such as escalated hallucinations and dialogue breakdowns. Ultimately, we contribute an easily extensible user simulation framework to help the research community develop tool agents and preemptively diagnose them under challenging real-world conditions within their own services.
A non-collaborative user simulation method for tool agents.
datasets and benchmarks
https://openreview.net/pdf?id=UAUimofy3W
2025-09-19T17:04:54
3
[ { "id": "HppYz2JLdI", "forum": "UAUimofy3W", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission17142/Reviewer_LnJA", "reviewer_name": "Reviewer_LnJA", "rating": 6, "confidence": 3, "soundness": 2, "contribution": 3, "presentation": 3, "summary": "This ...
a8QTAl5Hnb
https://openreview.net/forum?id=a8QTAl5Hnb
When Style Breaks Safety: Defending LLMs Against Superficial Style Alignment
5
4.25
[ 6, 6, 2, 6 ]
[ 4, 4, 5, 4 ]
4
[ "Safety Alignment", "Jailbreak", "Large Language Model" ]
Large language models (LLMs) can be prompted with specific styles (e.g., formatting responses as lists), including in malicious queries. Prior jailbreak research mainly augments these queries with additional string transformations to maximize attack success rate (ASR). However, the impact of style patterns in the original queries that are semantically irrelevant to the malicious intent remains unclear. In this work, we seek to understand whether style patterns compromise LLM safety, how superficial style alignment increases model vulnerability, and how best to mitigate these risks during alignment. We first define ASR inflation as the increase in ASR due to style patterns in existing jailbreak benchmark queries. By evaluating $32$ LLMs across seven benchmarks, we find that nearly all models exhibit ASR inflation. Notably, the inflation correlates with an LLM's relative attention to style patterns, which also overlap more with its instruction-tuning data when inflation occurs. We then investigate superficial style alignment, and find that fine-tuning with specific styles makes LLMs more vulnerable to jailbreaks of those same styles. Finally, we propose SafeStyle, a defense strategy that incorporates a small amount of safety training data augmented to match the distribution of style patterns in the fine-tuning data. Across three LLMs, six fine-tuning style settings, and two real-world instruction-tuning datasets, SafeStyle consistently outperforms baselines in maintaining LLM safety.
We investigate how style patterns compromise LLM safety and propose SafeStyle to defend LLMs against superficial style alignment.
alignment, fairness, safety, privacy, and societal considerations
https://openreview.net/pdf?id=a8QTAl5Hnb
2025-09-19T04:59:06
4
[ { "id": "IuAN1m0sOK", "forum": "a8QTAl5Hnb", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission14141/Reviewer_Eupf", "reviewer_name": "Reviewer_Eupf", "rating": 6, "confidence": 4, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "This ...
S4PCF1YxoR
https://openreview.net/forum?id=S4PCF1YxoR
Representation-Based Exploration for Language Models: From Test-Time to Post-Training
5.5
4
[ 4, 8, 2, 8 ]
[ 4, 4, 4, 4 ]
4
[ "Exploration", "language models", "reinforcement learning", "test-time scaling" ]
Reinforcement learning (RL) promises to expand the capabilities of language models, but it is unclear if current RL techniques promote the discovery of novel behaviors, or simply sharpen those already present in the base model. In this paper, we investigate the value of deliberate exploration---explicitly incentivizing the model to discover novel and diverse behaviors---and aim to understand how the knowledge in pre-trained models can guide this search. Our main finding is that exploration with a simple, principled, representation-based bonus derived from the pre-trained language model's hidden states significantly improves diversity and pass@k rates---both for post-training, and in a novel inference-time scaling setting we introduce. (1) For inference-time, exploration with representation-based diversity improves efficiency, consistently improving pass@k rates across a variety of models and reasoning tasks. For example, for Qwen-2.5-14b-Instruct we obtain over 50\% improvement in verifier efficiency on almost all considered tasks. (2) For post-training, we show that integrating this exploration strategy into an RL pipeline improves reasoning performance over that of the initial model and over standard RL post-training. For example, on AIME 2024, our post-trained Qwen-2.5-7b-Instruct's pass@80 matches the pass@256 of GRPO on the same model, demonstrating a 3x improvement in test-time sample efficiency. Overall, our findings suggest that deliberate exploration---with the right notion of diversity---is a practical path toward discovery of new behaviors beyond sharpening.
We find representation-based exploration for language models is helpful both at test-time and post-training time.
reinforcement learning
https://openreview.net/pdf?id=S4PCF1YxoR
2025-09-19T22:57:12
4
[ { "id": "6HJp0t7uvu", "forum": "S4PCF1YxoR", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission19142/Reviewer_ppuG", "reviewer_name": "Reviewer_ppuG", "rating": 4, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This ...
cf604XrSHa
https://openreview.net/forum?id=cf604XrSHa
Transforming Language Models into Program Interpreters via Execution Trace Chain of Thought
-1
-1
[]
[]
0
[ "large language models", "chain of thought", "code execution" ]
Code execution reasoning (CER), the ability to predict code execution on a given input, has emerged as an important aspect of language models' (LMs) coding capabilities. However, many open-source small- to medium-sized LMs continue to perform poorly on simple code snippets, and effective methodologies to enhance CER capability have not yet been established. In this context, we first highlight the limitations of LMs in basic operations in CER. Through our custom tests, including a test that measures the understanding of individual grammar rules, we show that code understanding in natural language does not imply actual procedural understanding of code, and that it is necessary to accumulate reasoning steps at a granularity finer than a line in a structured manner. Motivated by these insights, we investigate ET-CoT (Execution Trace Chain of Thought), a method in which execution traces are generated with our custom code interpreter PyTracify and used as chain-of-thought rationales, in order to transform 8B-class LMs into code interpreters specialized for CER. After fine-tuning with 127k examples, we demonstrate the effectiveness of ET-CoT, improving Qwen2.5-7B-Instruct to $70.0\%$ on CruxEval-O and to $88.3\%$ on LiveCodeBench (execution), thereby setting new baselines for the class.
We introduce ET-CoT, an approach where LLMs are fine-tuned on systematic program execution traces to learn to predict code outcomes by generating these traces as a chain of thought.
unsupervised, self-supervised, semi-supervised, and supervised representation learning
https://openreview.net/pdf?id=cf604XrSHa
2025-09-14T18:05:24
1
[]
17iH7ElJOV
https://openreview.net/forum?id=17iH7ElJOV
Environment-Aware On-Manifold 3D Texture Camouflage for Physical Attacks on Vehicle Detectors
2.5
3.75
[ 4, 0, 4, 2 ]
[ 3, 5, 3, 4 ]
4
[ "Physical adversarial attack; vehicle detectors; 3D texture camouflage; environment appearance transfer; affine transform; StyleGAN prior; on-manifold optimization; printability; multi-view robustness." ]
We study full-coverage, printable 3D camouflage attacks on vehicle detectors. Our pipeline decouples photorealism from attackability by combining a closed-form \emph{Intrinsic Appearance Transfer} (IAT) module with an on-manifold StyleGAN texture prior under Expectation-over-Transformations (EOT) focused on camera and environment. IAT carries exposure/white balance/tone and veiling from a reference frame to the render via per-pixel affine carriers and is training-free at test time; adversarial textures are optimized only through early StyleGAN layers to preserve material plausibility. On a scene-controlled CARLA corpus spanning 22 weather/time presets, 8 azimuths, 9 elevations, 6 distances, and 3 locations, our method---optimized white-box on YOLOv3 and evaluated black-box on Faster R-CNN, RetinaNet, RTMDet, and DINO---reduces AP@0.5 from \mbox{0.75} to \mbox{0.11} on YOLOv3 ($-85.8\%$), with corresponding drops to \mbox{0.13} ($-82.5\%$) on Faster R-CNN, \mbox{0.22} ($-68.7\%$) on RetinaNet, \mbox{0.26} ($-67.1\%$) on RTMDet, and \mbox{0.59} ($-31.7\%$) on DINO. Averaged over detectors, AP@0.5 decreases from \mbox{0.7538} to \mbox{0.2863} ($\approx 62\%$). Ablations show that (i) sRGB-domain affine fits excel on unseen \emph{colors}, while linear-RGB fits excel on unseen \emph{textures}; and (ii) cross-color U-Net training with a content loss yields the best perceptual fidelity among learned baselines. Overall, a simple, differentiable IAT combined with a layer-restricted generative prior offers a practical path to robust, photorealistic 3D camouflage that transfers across models and conditions.
applications to robotics, autonomy, planning
https://openreview.net/pdf?id=17iH7ElJOV
2025-09-20T10:16:26
5
[ { "id": "iN8uWvpsQT", "forum": "17iH7ElJOV", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission22706/Reviewer_VapM", "reviewer_name": "Reviewer_VapM", "rating": 4, "confidence": 3, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "This ...
xK2EcRC3xJ
https://openreview.net/forum?id=xK2EcRC3xJ
Variance Matters: Improving Domain Adaptation via Stratified Sampling
3
3.75
[ 2, 2, 6, 2 ]
[ 4, 3, 4, 4 ]
4
[ "stochastic variance reduction", "unsupervised domain adaptation", "maximum mean discrepancy", "correlation alignment", "kernel k-means clustering" ]
Domain shift remains a key challenge in deploying machine learning models to the real world. Unsupervised domain adaptation (UDA) aims to address this by minimising domain discrepancy during training, but the discrepancy estimates suffer from high variance in stochastic settings, which can stifle the theoretical benefits of the method. This paper proposes Variance-Reduced Domain Adaptation via Stratified Sampling (VaRDASS), the first specialised stochastic variance reduction technique for UDA. We consider two specific discrepancy measures – correlation alignment and the maximum mean discrepancy (MMD) – and derive ad hoc stratification objectives for these terms. We then present expected and worst-case error bounds, and prove that our proposed objective for the MMD is theoretically optimal (i.e., minimises the variance) under certain assumptions. Finally, a practical k-means style optimisation algorithm is introduced and analysed. Experiments on three domain shift datasets demonstrate improved discrepancy estimation accuracy and target domain performance.
A novel stochastic variance reduction technique for unsupervised domain adaptation based on stratified sampling, specifically targeting the MMD and CORAL losses
unsupervised, self-supervised, semi-supervised, and supervised representation learning
https://openreview.net/pdf?id=xK2EcRC3xJ
2025-09-18T06:38:41
4
[ { "id": "jRs3JDFnrW", "forum": "xK2EcRC3xJ", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission9916/Reviewer_oBLc", "reviewer_name": "Reviewer_oBLc", "rating": 2, "confidence": 4, "soundness": 2, "contribution": 3, "presentation": 2, "summary": "The pa...
M7NTM8vhB8
https://openreview.net/forum?id=M7NTM8vhB8
Abductive Explanations for Groups of Similar Samples
3
4
[ 2, 4, 2, 4 ]
[ 4, 4, 4, 4 ]
4
[ "explainable AI", "robust explanations", "group explanations", "abductive explanations", "neural network verification" ]
Explaining the decisions of machine learning models is crucial as their use becomes widespread. While many approaches to explanation are based on heuristics or surrogate models without formal guarantees, formal explanations provide reasoning for a particular decision that is guaranteed to be valid. We focus on abductive explanations (AXp) that identify sufficient subsets of input features for a given classification. We extend AXp to not only cover a particular sample, but to cover all of the samples whose features are within a given interval, providing explanations that remain valid even when the features in the explanation vary by up to $\delta$. In addition to applying this notion of $\delta$-robust AXp to a single sample, we also consider \emph{group explanations} ($\delta$-gAXp), which give a common explanation for a group of samples that share the same classification. We evaluate our approach by producing explanations for neural networks with the help of Marabou, a neural network verifier. The evaluation shows that, compared to a recent approach for finding a maximally ``inflated'' explanation, a $\delta$-robust AXp covers a significant volume of the inflated explanation with a dramatically lower runtime. Our evaluation also provides evidence that group explanations capture important features for all the samples within the group much faster than computing explanations for each sample separately.
We introduce $\delta$–robust abductive explanations, which allow for producing feature selection explanations valid for a group of similar samples
interpretability and explainable AI
https://openreview.net/pdf?id=M7NTM8vhB8
2025-09-20T16:43:25
4
[ { "id": "7VVxmrwRq4", "forum": "M7NTM8vhB8", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission24541/Reviewer_Rg8d", "reviewer_name": "Reviewer_Rg8d", "rating": 2, "confidence": 4, "soundness": 1, "contribution": 2, "presentation": 3, "summary": "This ...
K5tcKEQaUr
https://openreview.net/forum?id=K5tcKEQaUr
Frequency-Balanced Retinal Representation Learning with Mutual Information Regularization
4
3.75
[ 4, 6, 4, 2 ]
[ 3, 3, 4, 5 ]
4
[ "Masked Image Modeling", "Masked Autoencoders", "Representation Learning", "Mutual Information", "Retinal Imaging", "Medical Imaging" ]
We propose a frequency-oriented perspective on retinal representation learning by analyzing masked autoencoders (MAE) through the lens of spatial frequency. Our analysis shows that MAE favors low-frequency content while under-encoding diagnostically critical high-frequency structures in retinal images. Because retinal pathology often manifests in high-frequency detail, this bias limits diagnostic performance and motivates frequency-balanced representations. Within a mutual-information (MI) formulation of MAE, we introduce the \emph{Frequency-Balanced Retinal Masked Autoencoder (RetMAE)}, which augments the reconstruction objective with a MI regularizer that suppresses low-frequency redundancy and accentuates clinically salient high-frequency information. Without altering architecture, RetMAE learns frequency-balanced features that surpass those of MAE-based retinal encoders in both quantitative and qualitative evaluations. These results suggest that a frequency-oriented view provides a principled foundation for future advances in ophthalmic modeling.
unsupervised, self-supervised, semi-supervised, and supervised representation learning
https://openreview.net/pdf?id=K5tcKEQaUr
2025-09-15T13:29:39
4
[ { "id": "kzlLGsluKs", "forum": "K5tcKEQaUr", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission5485/Reviewer_Rstj", "reviewer_name": "Reviewer_Rstj", "rating": 4, "confidence": 3, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "This p...
23adDIOrjB
https://openreview.net/forum?id=23adDIOrjB
The Intrinsic Dimension of Prompts in Internal Representations of Large Language Models
4
3
[ 4, 4, 6, 2 ]
[ 2, 2, 4, 4 ]
4
[ "intrinsic dimension", "internal representations", "LLMs", "jailbreaks" ]
We study the geometry of token representations at the prompt level in large language models through the lens of intrinsic dimension. Viewing transformers as mean-field particle systems, we estimate the intrinsic dimension of the empirical measure at each layer and demonstrate that it correlates with next-token uncertainty. Across models and intrinsic dimension estimators, we find that intrinsic dimension peaks in early to middle layers and increases under semantic disruption (by shuffling tokens), and that it is strongly correlated with average surprisal, with a simple analysis linking logits geometry to entropy via softmax. As a case study in practical interpretability and safety, we train a linear probe on the per-layer intrinsic dimension profile to distinguish malicious from benign prompts before generation. This probe achieves 90–95\% accuracy across different datasets, outperforming widely used guardrails such as Llama Guard and Gemma Shield. We further compare against linear probes built from layerwise entropy derived via the Tuned Lens and find that the intrinsic dimension-based probe is competitive and complementary, offering a compact, interpretable signal distributed across layers. Our findings suggest that prompt-level geometry provides actionable signals for monitoring and controlling LLM behavior, and offers a bridge between mechanistic insights and practical safety tools.
We show that the intrinsic dimension of prompt-level token representations peaks in early–middle layers, increases under shuffling and correlates with surprisal. A simple linear probe based on ID flags malicious vs benign prompts with high accuracy.
interpretability and explainable AI
https://openreview.net/pdf?id=23adDIOrjB
2025-09-19T18:32:15
4
[ { "id": "k4MqYUb1Ew", "forum": "23adDIOrjB", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission17592/Reviewer_NKRy", "reviewer_name": "Reviewer_NKRy", "rating": 4, "confidence": 2, "soundness": 2, "contribution": 1, "presentation": 1, "summary": "The p...
Pf8OJjKeT5
https://openreview.net/forum?id=Pf8OJjKeT5
MindAttention: Foveated Visual Encoding for Neural Response Synthesis and Concept-selective Region Localization
4
3.25
[ 8, 2, 4, 2 ]
[ 4, 4, 3, 2 ]
4
[ "Brain Encoding" ]
Synthesizing brain activity via generative models to localize concept-selective cortical regions represents a promising advancement beyond traditional experimental paradigms. However, existing methods largely overlook the spatial selectivity of visual attention -- when visual stimuli contain multiple central targets. The spatial selectivity of human attention significantly reduces the signal intensity of unattended targets during neural encoding, leading to suppressed neural representations and consequently causing bias or failure in data-driven neural concept localization. To address this *synthesis-attention misalignment* problem, we propose *MindAttention*, a generative brain visual encoding framework that anchors concept representation to foveal gaze position. Grounded in the neuroscientific principle that only high-acuity foveal input reliably drives semantic-level cortical responses, we thereby construct a gaze-conditioned generator: simulated activation of a target concept is triggered only when the corresponding object falls within the foveal field. Experiments show that *MindAttention* significantly outperforms existing generative methods in localization accuracy. The incorporation of spatial attention constraints endows the framework with neuro-mechanistic interpretability and cognitive plausibility, establishing a more reliable and biologically grounded paradigm for data-driven exploration of brain concept maps.
applications to neuroscience & cognitive science
https://openreview.net/pdf?id=Pf8OJjKeT5
2025-09-19T01:21:46
4
[ { "id": "H4tZxcsIIE", "forum": "Pf8OJjKeT5", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission13295/Reviewer_hm28", "reviewer_name": "Reviewer_hm28", "rating": 8, "confidence": 4, "soundness": 2, "contribution": 3, "presentation": 3, "summary": "The p...
KL1LNgg73s
https://openreview.net/forum?id=KL1LNgg73s
FedGMR: Federated Learning with Gradual Model Restoration under Asynchrony and Model Heterogeneity
4
3.333333
[ 4, 4, 4 ]
[ 4, 1, 5 ]
3
[ "Federated Learning", "Heterogeneity", "Model Pruning", "Model-heterogeneous" ]
Federated learning (FL) holds strong potential for distributed machine learning. Yet in heterogeneous environments, Bandwidth-Constrained Clients (BCCs) often fail to participate effectively due to limited communication capacity, leading to slow convergence and degraded generalization. To tackle this issue, we propose FedGMR—Federated Learning with Gradual Model Restoration under Asynchrony and Model Heterogeneity. FedGMR progressively increases each client's model capacity during training, enabling BCCs to contribute constantly throughout the training. In addition, a tailored transmission and aggregation mechanism is designed to better accommodate system-level heterogeneity. We establish convergence guarantees under mask-aware aggregation, showing that time-averaged, coverage-weighted densities govern enlarged errors, and that GMR provably tightens the gap to full-model FL. Extensive experiments on FEMNIST, CIFAR-10, and ImageNet-100 show that FedGMR achieves faster convergence and higher accuracy, especially under high heterogeneity and non-IID settings.
infrastructure, software libraries, hardware, systems, etc.
https://openreview.net/pdf?id=KL1LNgg73s
2025-09-18T22:08:54
3
[ { "id": "El4MPeQMYH", "forum": "KL1LNgg73s", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission11900/Reviewer_q72V", "reviewer_name": "Reviewer_q72V", "rating": 4, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "This ...
4qj7qO1fTJ
https://openreview.net/forum?id=4qj7qO1fTJ
BottleneckMLP: Graph Explanation via Implicit Information Bottleneck
3.5
4
[ 4, 4, 4, 2 ]
[ 4, 4, 4, 4 ]
4
[ "Graph Neural Networks", "Explainability", "Information Bottleneck", "Mutual Information", "Representation Learning" ]
The success of Graph Neural Networks (GNNs) in modeling unstructured data has heightened the demand for explainable AI (XAI) methods that provide transparent, interpretable rationales for their predictions. A prominent line of work leverages the Information Bottleneck (IB) principle, which frames explanation as optimizing for representations that maximize predictive information $I(Z;Y)$ while minimizing input dependence $I(X;Z)$. We show that explicit IB-based losses in GNN explainers provide little benefit beyond standard training: the fitting and compression phases of IB emerge naturally, whereas the variational bounds used in explicit objectives are too loose to meaningfully constrain mutual information. To address this, we propose BottleneckMLP, a simple architectural module that implicitly enforces the IB principle. By injecting Gaussian noise inversely scaled by node importance, followed by architectural compression, BottleneckMLP amplifies the reduction of $I(X;Z)$ while increasing $I(Z;Y)$. This yields embeddings where important nodes remain structured and clustered, while unimportant nodes drift toward Gaussianized, high-entropy distributions, consistent with progressive information loss under IB. BottleneckMLP integrates seamlessly with current explainers, as well as subgraph recognition tasks, replacing explicit IB terms and consistently improving predictive performance and explanation quality across diverse datasets.
BottleneckMLP is a general module which implicitly enforces IB, effectively replacing explicit IB loss terms in existing ante-hoc graph explanation frameworks.
interpretability and explainable AI
https://openreview.net/pdf?id=4qj7qO1fTJ
2025-09-20T07:23:43
4
[ { "id": "EBoTsAjG9S", "forum": "4qj7qO1fTJ", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission21969/Reviewer_k24x", "reviewer_name": "Reviewer_k24x", "rating": 4, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 4, "summary": "The p...
5YCdnxerit
https://openreview.net/forum?id=5YCdnxerit
Towards Visual Text Grounding of Multimodal Large Language Model
4.5
3.75
[ 4, 4, 6, 4 ]
[ 4, 3, 3, 5 ]
4
[ "Vision Question Answering", "Multimodal Large Language Models", "Visual Grounding" ]
Despite the existing evolution of Multimodal Large Language Models (MLLMs), a non-neglectable limitation remains in their struggle with visual text grounding, especially in text-rich images of documents. Document images, such as scanned forms and infographics, highlight critical challenges due to their complex layouts and textual content. However, current benchmarks do not fully address these challenges, as they mostly focus on visual grounding on natural images, rather than text-rich document images. Thus, to bridge this gap, we introduce TRIG, a novel task with a newly designed instruction dataset for benchmarking and improving the Text-Rich Image Grounding capabilities of MLLMs in document question-answering. Specifically, we propose an OCR-LLM-human interaction pipeline to create 800 manually annotated question-answer pairs as a benchmark and a large-scale training set of k synthetic data based on four diverse datasets. A comprehensive evaluation of various MLLMs on our proposed benchmark exposes substantial limitations in their grounding capability on text-rich images. In addition, we propose two simple and effective TRIG methods based on general instruction tuning and plug-and-play efficient embedding, respectively. By finetuning MLLMs on our synthetic dataset, they promisingly improve spatial reasoning and grounding capabilities.
We introduce TRIG, a novel task with a newly designed instruction dataset for benchmarking and improving the Text-Rich Image Grounding capabilities of MLLMs in document question-answering.
datasets and benchmarks
https://openreview.net/pdf?id=5YCdnxerit
2025-09-19T05:31:08
4
[ { "id": "8QiVmLZKvd", "forum": "5YCdnxerit", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission14244/Reviewer_rDRd", "reviewer_name": "Reviewer_rDRd", "rating": 4, "confidence": 4, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This ...
JwWaFaH86M
https://openreview.net/forum?id=JwWaFaH86M
AbNanolizer: An AI Agent for Converting Antibodies to Nanobodies
1.333333
4.333333
[ 2, 2, 0 ]
[ 5, 4, 4 ]
3
[ "nanobody design", "antibody engineering", "weakly supervised learning", "cross-modal retrieval", "Pareto optimization", "contrastive learning", "protein representation learning", "drug discovery" ]
Nanobodies, the naturally occurring single-chain antibodies derived from camelids, have emerged as highly promising therapeutic molecules due to their high stability, small size, and ease of engineering. However, generating nanobody candidate sequences from conventional antibodies—one of the primary routes for nanobody development—remains challenging, as rational design is limited by the scarcity of paired data and the complexity of molecular recognition mechanisms. To address this, we propose \textbf{AbNanolizer}, a physics-guided, weakly supervised AI framework for converting conventional antibodies into nanobody candidates. We formalize the task as antigen-conditioned cross-modal retrieval and multi-objective ranking, and design a noise-robust learning scheme to handle weakly paired and mismatched training signals. The framework employs an antigen-conditioned dual-encoder to align sequence representations of conventional antibodies and nanobodies, and jointly optimizes a noise-robust contrastive objective with differentiable Pareto ranking. Optional structural and energetic proxy signals, together with developability predictions, are integrated into a unified optimization. To support reliable decision-making, we perform coverage-guaranteed confidence calibration on retrieval scores. We further construct a rigorous public benchmark and evaluation protocol to enable comparison against strong baselines. Across multiple metrics, AbNanolizer demonstrates consistent improvements and showcases end-to-end applications on three approved drug targets amenable to nanobodies.
applications to physical sciences (physics, chemistry, biology, etc.)
https://openreview.net/pdf?id=JwWaFaH86M
2025-09-01T20:20:58
3
[ { "id": "Fnh2Vece3u", "forum": "JwWaFaH86M", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission132/Reviewer_cRuU", "reviewer_name": "Reviewer_cRuU", "rating": 2, "confidence": 5, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "This pa...
3iHQ97INBP
https://openreview.net/forum?id=3iHQ97INBP
Learning Interpretable Models Using Uncertainty Oracles
3.333333
3.666667
[ 0, 4, 6 ]
[ 4, 3, 4 ]
3
[ "interpretability", "dirichlet_process", "bayesian_optimization" ]
A desirable property of interpretable models is small size, so that they are easily understandable by humans. This leads to the following challenges: (a) small sizes typically lead to diminished accuracy, and, (b) different techniques offer bespoke levers, e.g., L1 regularization, for making this size-accuracy trade-off that might be insufficient to reach the desired balance. We address these challenges here. Earlier work has shown that learning the training distribution creates accurate small models. Our contribution is a new technique that exploits this idea. The training distribution is modeled as a Dirichlet Process for flexibility in representation. Its parameters are learned using Bayesian Optimization; a design choice that makes the technique applicable to non-differentiable loss functions. To avoid challenges with high data dimensionality, the data is first projected down to one-dimension using uncertainty scores of a separate probabilistic model, that we refer to as the uncertainty oracle. 
Based on exhaustive experiments we show that this technique possesses multiple merits: (1) it significantly enhances small model accuracies, (2) is versatile: it may be applied to different model families with varying notions of size, e.g., depth of a decision tree, non-zero coefficients in a linear model, simultaneously the maximum depth of a tree and number of trees in Gradient Boosted Models, (3) is practically convenient because it needs only one hyperparameter to be set and works with non-differentiable losses, (4) works across different feature spaces between the uncertainty oracle and the interpretable model, e.g., a Gated Recurrent Unit trained using character sequences may be used as an oracle for a Decision Tree that uses character n-grams, and, (5) may augment the accuracies of fairly old techniques to be competitive with recent task-specialized techniques, e.g., CART Decision Tree (1984) vs Iterative Mistake Minimization (2020), on the task of cluster explanation.
interpretability and explainable AI
https://openreview.net/pdf?id=3iHQ97INBP
2025-09-20T08:02:46
3
[ { "id": "SSZVSoAF7S", "forum": "3iHQ97INBP", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission22131/Reviewer_CnJ6", "reviewer_name": "Reviewer_CnJ6", "rating": 0, "confidence": 4, "soundness": 2, "contribution": 1, "presentation": 2, "summary": "In th...
4y25Ifytn8
https://openreview.net/forum?id=4y25Ifytn8
CFT-RAG: An Entity Tree Based Retrieval Augmented Generation Algorithm With Cuckoo Filter
4
3
[ 4, 4, 4, 4 ]
[ 2, 3, 4, 3 ]
4
[ "Retrieval-Augmented Generation", "Tree-RAG", "Cuckoo Filter", "Knowledge Retrieval" ]
Although retrieval-augmented generation (RAG) significantly improves generation quality by retrieving external knowledge bases and integrating generated content, it faces computational efficiency bottlenecks, particularly in knowledge retrieval tasks involving hierarchical structures for Tree-RAG. This paper proposes a Tree-RAG acceleration method based on an improved Cuckoo Filter, which optimizes entity localization during the retrieval process to achieve significant performance improvements. Tree-RAG effectively organizes entities through the introduction of a hierarchical tree structure, while the Cuckoo Filter serves as an efficient data structure that supports rapid membership queries and dynamic updates. The experimental results demonstrate that our method is much faster than baseline methods while maintaining high levels of generative quality. For instance, our method is more than 800% faster than naive Tree-RAG on the DART dataset. Our work is available at https://github.com/TUPYP7180/CFT-RAG-2025.
applications to computer vision, audio, language, and other modalities
https://openreview.net/pdf?id=4y25Ifytn8
2025-09-20T14:55:31
4
[ { "id": "f4EawC2y0P", "forum": "4y25Ifytn8", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission23984/Reviewer_rJQQ", "reviewer_name": "Reviewer_rJQQ", "rating": 4, "confidence": 2, "soundness": 2, "contribution": 3, "presentation": 3, "summary": "This ...
m8sPQEd71W
https://openreview.net/forum?id=m8sPQEd71W
Unified Multimodal Model as Auto-Encoder
3.5
3.5
[ 2, 8, 2, 2 ]
[ 3, 3, 4, 4 ]
4
[ "Multimodal", "Unified Multimodal Model", "Generative Model" ]
The pursuit of unified multimodal models (UMMs) has long been hindered by a fundamental schism between multimodal understanding and generation. Current approaches typically disentangle the two and treat them as separate endeavors with disjoint objectives, missing the mutual benefits. We argue that true unification requires more than just merging two tasks. It requires a unified, foundational objective that intrinsically links them. In this paper, we introduce an insightful paradigm through the **Auto-Encoder lens**, *i.e.*, regarding understanding as the encoder (I2T) that compresses images into text, and generation as the decoder (T2I) that reconstructs images from that text. We argue that: *if the encoder truly "understands" the image, its description should capture all essential structure, and if the decoder truly "understands" the text, it should recover that structure faithfully.* Hence, high-fidelity reconstruction serves as a powerful perspective for genuine multimodal unification, evidencing near-lossless, bidirectional information flow between the two processes. To implement this, we propose **UAE**, where we begin by pre-training the decoder with the proposed 700k long-context image-caption pairs to direct it to "understand" the fine-grained and complex semantics from the text, as longer intermediate text, in our Auto-Encoder framework, can preserve more information from the input image for reconstruction. We then propose **Unified-GRPO** via reinforcement learning (RL) to unify the two, which covers two complementary stages: (1) *Generation for Understanding*, where the encoder is trained to generate informative captions that maximize the decoder's reconstruction quality, enhancing its visual perception; (2) *Understanding for Generation*, where the decoder is refined to reconstruct from these captions, forcing it to leverage every detail and improving its long-context instruction following and generation fidelity. 
Our empirical results suggest that understanding can largely enhance generation (verified on GenEval), while generation, in turn, notably strengthens fine-grained visual perception like small object and color recognition (verified on MMT-Bench). This bidirectional improvement reveals a deep synergy: under the unified reconstruction objective, generation and understanding can mutually benefit each other, moving closer to truly unified multimodal intelligence.
Exploring synergy between visual generation and perception by formulating the unified multimodal model as autoencoder.
applications to computer vision, audio, language, and other modalities
https://openreview.net/pdf?id=m8sPQEd71W
2025-09-01T20:13:14
5
[ { "id": "Blla09cGcp", "forum": "m8sPQEd71W", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission104/Reviewer_N9UK", "reviewer_name": "Reviewer_N9UK", "rating": 2, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This pa...
d0xqdsR41U
https://openreview.net/forum?id=d0xqdsR41U
WebChoreArena: Evaluating Web Browsing Agents on Realistic Tedious Web Tasks
4.5
3.75
[ 6, 4, 4, 4 ]
[ 3, 4, 4, 4 ]
4
[ "benchmark", "web browsing agent" ]
Powered by a large language model (LLM), a web browsing agent operates web browsers in a human-like manner and offers a highly transparent path toward automating a wide range of everyday tasks. As web agents become increasingly capable and demonstrate proficiency in general browsing tasks, a critical question emerges: $\textit{Can they go beyond general browsing to robustly handle tasks that are tedious and complex, or chores that humans often avoid doing themselves?}$ In this paper, we introduce \textbf{WebChoreArena}, a new fully reproducible benchmark comprising 532 carefully curated tasks over 300+ hours, designed to address more labor-intensive and tedious tasks. WebChoreArena systematically integrates three key challenges: (i) $\textbf{Massive Memory}$ tasks requiring accurate retrieval of large amounts of information in the observations, (ii) $\textbf{Calculation}$ tasks demanding precise mathematical reasoning, and (iii) $\textbf{Long-Term Memory}$ tasks necessitating long-term memory across multiple webpages. Built on top of the four fully reproducible and widely adopted WebArena environments, WebChoreArena ensures strict reproducibility and enables fair, direct comparisons with the established WebArena benchmark, offering key insights into agent progress. Our experimental results demonstrate that as LLMs evolve, significant performance improvements are observed on WebChoreArena. These findings suggest that WebChoreArena is well-suited to measure the advancement of state-of-the-art LLMs with greater clarity. Nevertheless, the results also indicate that even with GPT-5, there remains substantial room for improvement compared to WebArena, highlighting the increased challenges posed by WebChoreArena.
We propose WebChoreArena, a benchmark of 532 complex and tedious web tasks. State-of-the-art LLM agents show notable performance drops, highlighting their limitations beyond general browsing.
datasets and benchmarks
https://openreview.net/pdf?id=d0xqdsR41U
2025-09-13T16:55:12
4
[ { "id": "p48iNYwdhD", "forum": "d0xqdsR41U", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission4726/Reviewer_cMr3", "reviewer_name": "Reviewer_cMr3", "rating": 6, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "The pa...
Jq0KN0lgtM
https://openreview.net/forum?id=Jq0KN0lgtM
FRL-SAGE: Stackelberg Game-Theoretic Defense Against Adaptive Adversaries in Federated Reinforcement Learning
2
4
[ 0, 4, 2, 2, 2 ]
[ 5, 4, 3, 4, 4 ]
5
[ "Federated reinforcement learning", "adversarial attacks", "defenses", "two-player game", "Stackelberg game" ]
Federated Reinforcement Learning (FRL) enables multiple agents to collaboratively train policies without sharing raw trajectories, but remains highly vulnerable to adversarial clients. Unlike supervised FL, FRL’s sequential and policy-driven nature allows attackers to adapt strategies across rounds, while defenders must covertly reallocate protections in response. This evolving interaction naturally resembles a two-player strategic game, yet existing defenses assume static adversaries and fail to capture such dynamics. We propose FRL-SAGE (Stackelberg Adversarial Game Equilibrium in Federated Reinforcement Learning), the first framework to formalize attacker–defender dynamics in FRL as a Stackelberg security game. The defender, acting as leader, commits to client-level protections under a budget, while the attacker, as follower, best responds by selecting clients to compromise. We define asymmetric utilities: attacker utility is damage inflicted minus attack cost, while defender utility is the negative sum of residual damage and defense costs. The attacker’s optimization reduces to a 0/1 knapsack problem, solvable via dynamic programming or greedy heuristics, while the defender’s bilevel planning is NP-hard but tractable through exact enumeration or scalable relaxation-based routines. To evaluate the framework concretely, we instantiate an adversary that uses gradient-noise injection and analyze four representative regimes, ranging from static single-client compromise to dynamic multi-client reshuffling with heterogeneous client importance. We theoretically establish equilibrium existence, prove computational hardness, and provide approximation guarantees for scalable solvers. Experiments on CartPole, a standard FRL testbed, illustrate that FRL-SAGE reduces attack-induced performance loss while operating within realistic defense budgets, supporting its role as a principled game-theoretic foundation for proactive defense in adversarial FRL.
We model adversarial federated reinforcement learning as a Stackelberg game and propose FRL-SAGE, a framework that optimizes defender strategies to maximize expected utility under dynamic attacks.
alignment, fairness, safety, privacy, and societal considerations
https://openreview.net/pdf?id=Jq0KN0lgtM
2025-09-17T23:14:07
5
[ { "id": "ekfIZNwg3J", "forum": "Jq0KN0lgtM", "review_number": 5, "reviewer_id": "ICLR.cc/2026/Conference/Submission9425/Reviewer_Spi5", "reviewer_name": "Reviewer_Spi5", "rating": 0, "confidence": 5, "soundness": 1, "contribution": 1, "presentation": 1, "summary": "The pa...
bvaaydGKYp
https://openreview.net/forum?id=bvaaydGKYp
From Experience to Strategy: Empowering LLM Agents with Trainable Graph Memory
5
3.25
[ 8, 2, 6, 4 ]
[ 3, 4, 3, 3 ]
4
[ "Large Language Model Agents", "Graph Memory", "Reinforcement Learning" ]
Large Language Model (LLM)-based agents have demonstrated remarkable potential in autonomous task-solving across complex, open-ended environments. A promising approach for improving the reasoning capabilities of LLM agents is to better utilize prior experiences in guiding current decisions. However, LLMs acquire experience either through implicit memory via training, which suffers from catastrophic forgetting and limited interpretability, or explicit memory via prompting, which lacks adaptability. In this paper, we introduce a novel agent-centric, trainable, multi-layered graph memory framework and evaluate how context memory enhances the ability of LLMs to utilize parametric information. The graph abstracts raw agent trajectories into structured decision paths in a state machine and further distills them into high-level, human-interpretable strategic meta-cognition. In order to make memory adaptable, we propose a reinforcement-based weight optimization procedure that estimates the empirical utility of each meta-cognition based on reward feedback from downstream tasks. These optimized strategies are then dynamically integrated into the LLM agent’s training loop through meta-cognitive prompting. Empirically, the learnable graph memory delivers robust generalization, improves LLM agents' strategic reasoning performance, and provides consistent benefits during Reinforcement Learning (RL) training.
foundation or frontier models, including LLMs
https://openreview.net/pdf?id=bvaaydGKYp
2025-09-18T15:35:50
4
[ { "id": "u8XT0rqcDD", "forum": "bvaaydGKYp", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission10748/Reviewer_m6Dm", "reviewer_name": "Reviewer_m6Dm", "rating": 8, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 3, "summary": "This ...
ourMOktSsW
https://openreview.net/forum?id=ourMOktSsW
Rule-Bottleneck RL: Learning to Decide and Explain for Sequential Resource Allocation via LLM Agents
4.5
3.5
[ 4, 4, 4, 6 ]
[ 4, 4, 4, 2 ]
4
[ "Joint decision and explanation", "rule-bottleneck", "constrained-resource allocation", "agent" ]
Deep Reinforcement Learning (RL) has demonstrated remarkable success in solving sequential resource allocation problems, but often suffers from limited explainability and adaptability---barriers to integration with human decision-makers. In contrast, LLM agents, powered by large language models (LLMs), provide human-understandable reasoning but may struggle with effective sequential decision making. To bridge this gap, we introduce Rule-Bottleneck RL (RBRL), a novel LLM agent framework for resource allocation problems that jointly optimizes language-based decision policy and explainability. At each step within RBRL, an LLM first generates candidate rules---language statements capturing decision priorities tailored to the current state. RL then optimizes rule selection to maximize environmental rewards and explainability, with the LLM acting as a judge. Finally, an LLM chooses the action (optimal allocation) based on the rule. We provide conditions for RBRL performance guarantees as well as the finite-horizon evaluation gap of the learned RBRL policy. Furthermore, we provide evaluations in real-world scenarios, particularly in public health, showing that RBRL not only improves the performance of baseline LLM agents, but also approximates the performance of Deep RL while producing more desirable human-readable explanations. We conduct a survey validating the improvement in the quality of the explanations.
We design a novel rule-based RL framework that provides joint explanation and decision optimization for high-stakes resource allocation problems.
alignment, fairness, safety, privacy, and societal considerations
https://openreview.net/pdf?id=ourMOktSsW
2025-09-19T01:37:00
4
[ { "id": "uKAQpoEuk7", "forum": "ourMOktSsW", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission13366/Reviewer_gFYN", "reviewer_name": "Reviewer_gFYN", "rating": 4, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 2, "summary": "This ...
og9fnUKHjw
https://openreview.net/forum?id=og9fnUKHjw
Geometry-Editable and Appearance-Preserving Object Composition
3
3.5
[ 4, 2, 4, 2 ]
[ 3, 4, 4, 3 ]
4
[ "Object Composition", "Generative Models" ]
General object composition (GOC) aims to seamlessly integrate a target object into a background scene with desired geometric properties, while simultaneously preserving its fine-grained appearance details. Recent approaches derive semantic embeddings and integrate them into advanced diffusion models to enable geometry-editable generation. However, these highly compact embeddings encode only high-level semantic cues and inevitably discard fine-grained appearance details. We introduce a Disentangled Geometry-editable and Appearance-preserving Diffusion (DGAD) model that first leverages semantic embeddings to implicitly capture the desired geometric transformations and then employs a cross-attention retrieval mechanism to align fine-grained appearance features with the geometry-edited representation, facilitating both precise geometry editing and faithful appearance preservation in object composition. Specifically, DGAD builds on CLIP/DINO-derived and reference networks to extract semantic embeddings and appearance-preserving representations, which are then seamlessly integrated into the encoding and decoding pipelines in a disentangled manner. We first integrate the semantic embeddings into pre-trained diffusion models that exhibit strong spatial reasoning capabilities to implicitly capture object geometry, thereby facilitating flexible object manipulation and ensuring effective editability. Then, we design a dense cross-attention mechanism that leverages the implicitly learned object geometry to retrieve and spatially align appearance features with their corresponding regions, ensuring faithful appearance consistency. Extensive experiments on public benchmarks demonstrate the effectiveness of the proposed DGAD framework.
We propose DGAD, a disentangled diffusion model for object composition. It uses semantic embeddings for geometry control and a dense cross-attention to preserve appearance.
generative models
https://openreview.net/pdf?id=og9fnUKHjw
2025-09-16T19:57:30
4
[ { "id": "v6h2xF0O71", "forum": "og9fnUKHjw", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission7477/Reviewer_s7VW", "reviewer_name": "Reviewer_s7VW", "rating": 4, "confidence": 3, "soundness": 2, "contribution": 3, "presentation": 2, "summary": "This p...
W63sPIQPz4
https://openreview.net/forum?id=W63sPIQPz4
Learn to change the world: Multi-level reinforcement learning with model-changing actions
4.666667
3.666667
[ 2, 8, 4 ]
[ 3, 5, 3 ]
3
[ "Reinforcement learning", "Markov decision process", "Robust learning", "Optimization" ]
Reinforcement learning usually assumes a given or sometimes even fixed environment in which an agent seeks an optimal policy to maximize its long-term discounted reward. In contrast, we consider agents that are not limited to passive adaptations: they instead have model-changing actions that actively modify the RL model of world dynamics itself. Reconfiguring the underlying transition processes can potentially increase the agents' rewards. Motivated by this setting, we introduce the multi-layer configurable time-varying Markov decision process (MCTVMDP). In an MCTVMDP, the lower-level MDP has a non-stationary transition function that is configurable through upper-level model-changing actions. The agent's objective consists of two parts: Optimize the configuration policies in the upper-level MDP and optimize the primitive action policies in the lower-level MDP to jointly improve its expected long-term reward.
Construct a multi-layer MDP system to configure the RL agent's environment
reinforcement learning
https://openreview.net/pdf?id=W63sPIQPz4
2025-09-20T02:53:56
3
[ { "id": "2wiuduJCAe", "forum": "W63sPIQPz4", "review_number": 3, "reviewer_id": "ICLR.cc/2026/Conference/Submission20590/Reviewer_P38m", "reviewer_name": "Reviewer_P38m", "rating": 2, "confidence": 3, "soundness": 1, "contribution": 2, "presentation": 2, "summary": "The p...
FQ2dMjf88y
https://openreview.net/forum?id=FQ2dMjf88y
Fully Dynamic Coreset Spectral Clustering
4
3.5
[ 4, 6, 2, 4 ]
[ 3, 4, 3, 4 ]
4
[ "Clustering", "coresets", "spectral clustering", "dynamic data structures" ]
We present a fully dynamic data structure that supports edge and node updates and cluster membership queries for spectral clustering with strong theoretical guarantees. Furthermore, our data structure outperforms the state of the art significantly on real world datasets. At the heart of our data structure is the novel notion of *Just-in-Time Sampling Trees*. The worst-case edge update time of our data structure is $O(\log n)$ and the worst-case query time is $O(d_{\max}^2\log^3(n) + \text{vol}(Y))$ where $d_{\max}$ is the maximum degree of the current graph and $\text{vol}(Y)$ is the sum of the unweighted degrees of all nodes in $Y$. Assuming $d_{\max}$ is polylogarithmic, as is the case with many sparse real-world graphs, our method achieves the best known trade-off between query time and update time.
We present the first fully dynamic coreset data structure for spectral clustering
learning on graphs and other geometries & topologies
https://openreview.net/pdf?id=FQ2dMjf88y
2025-09-18T16:57:03
4
[ { "id": "y8PKFFXaA0", "forum": "FQ2dMjf88y", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission10967/Reviewer_HQRM", "reviewer_name": "Reviewer_HQRM", "rating": 4, "confidence": 3, "soundness": 3, "contribution": 3, "presentation": 2, "summary": "This ...
Anv4gdNFaL
https://openreview.net/forum?id=Anv4gdNFaL
One-Shot Exemplars for Class Grounding in Self-Supervised Learning
6
3
[ 6, 4, 4, 8, 8 ]
[ 3, 1, 3, 4, 4 ]
5
[ "Self-supervised learning", "One-shot exemplar", "Representation learning" ]
Self-Supervised Learning (SSL) has recently achieved remarkable progress by leveraging large-scale unlabeled data. However, SSL pretrains models without relying on human annotation, so it usually does not specify the class space. This inevitably weakens the effectiveness of the learned representation in most downstream tasks that have the intrinsic class structure. In this work, we introduce the new easy setting of One-Shot Exemplar Self-Supervised Learning (OSESSL), requiring only one instance annotation for each class. By introducing this extremely sparse supervision, OSESSL provides the minimum class information to guide the exploration of unlabeled data, achieving significant performance boosts with negligible annotation cost (i.e., a complexity of $\mathcal{O}(1)$ w.r.t. the sample size). In this OSESSL setting, we propose a simple yet effective framework that leverages the single-labeled exemplar to build the class-specific prototype for learning reliable representations from the huge unlabeled data. To this end, we also build a novel consistency regularization, which extends the sparse exemplar supervision into the decision boundaries, thus improving the robustness of the learned representation. Extensive experiments on real-world datasets clearly validate the reliability of this simple and practical setting. The proposed approach successfully outperforms the state-of-the-art methods, achieving gains of approximately 3\% and 6\% $k$-NN accuracy on CIFAR-100 and ImageNet-100, respectively.
We introduce a new one-shot exemplar self-supervised learning setting that enhances representation learning with just a single annotation per class.
unsupervised, self-supervised, semi-supervised, and supervised representation learning
https://openreview.net/pdf?id=Anv4gdNFaL
2025-09-16T09:40:27
5
[ { "id": "SXOPqQAJtk", "forum": "Anv4gdNFaL", "review_number": 5, "reviewer_id": "ICLR.cc/2026/Conference/Submission6489/Reviewer_Jd4w", "reviewer_name": "Reviewer_Jd4w", "rating": 6, "confidence": 3, "soundness": 4, "contribution": 4, "presentation": 4, "summary": "This p...
yumDmlGCc9
https://openreview.net/forum?id=yumDmlGCc9
Expressive and Invariant Graph Learning via Canonical Tree Cover Neural Networks
5
4
[ 6, 6, 4, 4 ]
[ 4, 3, 4, 5 ]
4
[ "graph neural networks", "canonicalization", "invariance", "tree", "molecule graph" ]
While message-passing neural networks (MPNNs) are naturally invariant on graphs, they are fundamentally limited in expressive power. Canonicalization offers a powerful alternative by mapping each graph to a unique, invariant representation on which expressive encoders can operate. However, existing approaches rely on a single canonical sequence, which flattens the structure, distorts graph distances, and restricts expressivity. To address these limitations, we introduce Canonical Tree Cover Neural Networks (CTNNs), which represent the graph with a canonical spanning tree cover, i.e., a small collection of canonical trees covering all edges. Each tree is then processed with an existing expressive tree encoder. Theoretically, tree covers better preserve graph distances than sequences, and on sparse graphs, the cover recovers all edges with a logarithmic number of trees in the graph size, making CTNNs strictly more expressive than sequence-based canonicalization pipelines. Empirically, CTNNs consistently outperform invariant GNNs, random samplers, and sequence canonicalizations across graph classification benchmarks. Overall, CTNNs advance graph learning by providing an efficient, invariant, and expressive representation learning framework via tree cover-based canonicalization.
learning on graphs and other geometries & topologies
https://openreview.net/pdf?id=yumDmlGCc9
2025-09-20T05:32:50
4
[ { "id": "3dZhMG4uKj", "forum": "yumDmlGCc9", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission21437/Reviewer_1KKi", "reviewer_name": "Reviewer_1KKi", "rating": 6, "confidence": 4, "soundness": 2, "contribution": 3, "presentation": 3, "summary": "This ...
p6mPrnFp8N
https://openreview.net/forum?id=p6mPrnFp8N
Beyond the Shot: Rethinking Cinematography Understanding with Foundational Skill Evaluation
3.5
3.25
[ 4, 2, 4, 4 ]
[ 3, 4, 4, 2 ]
4
[ "multimodal large language models", "cinematography understanding", "dataset and benchmark" ]
Cinematography understanding refers to the ability to recognize not only the visual content of a scene but also the cinematic techniques that shape narrative meaning. This capability is attracting increasing attention, as it enhances multimodal understanding in real-world applications and underpins coherent content creation in film and media. As the most comprehensive benchmark for this task, ShotBench spans a wide range of cinematic concepts and VQA-style evaluations, with ShotVL achieving state-of-the-art results on it. However, our analysis reveals that ambiguous option design in ShotBench and ShotVL’s shortcomings in reasoning consistency and instruction adherence undermine evaluation reliability, limiting fair comparison and hindering future progress. To overcome these issues, we systematically refine ShotBench through consistent option restructuring, conduct the first critical analysis of ShotVL’s reasoning behavior, and introduce an extended evaluation protocol that jointly assesses task accuracy and core model competencies. These efforts lead to ShotBench++, a refined and expanded benchmark that enables more reliable assessment and fosters future advances in cinematography understanding. The benchmark and code will be publicly released.
alignment, fairness, safety, privacy, and societal considerations
https://openreview.net/pdf?id=p6mPrnFp8N
2025-09-17T08:57:53
4
[ { "id": "hvaf9wpC2C", "forum": "p6mPrnFp8N", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission8154/Reviewer_MnGU", "reviewer_name": "Reviewer_MnGU", "rating": 4, "confidence": 3, "soundness": 3, "contribution": 2, "presentation": 3, "summary": "This p...
7CFlXvCoN6
https://openreview.net/forum?id=7CFlXvCoN6
Group-Relative REINFORCE Is Secretly an Off-Policy Algorithm: Demystifying Some Myths About GRPO and Its Friends
4.5
3.5
[ 4, 4, 4, 6 ]
[ 4, 4, 3, 3 ]
4
[ "Large Language Models", "Reinforcement Learning", "LLM Post-Training", "Off-Policy RL", "GRPO" ]
Off-policy reinforcement learning (RL) for large language models (LLMs) is attracting growing interest, driven by practical constraints in real-world applications, the complexity of LLM-RL infrastructure, and the need for further innovations of RL methodologies. While classic REINFORCE and its modern variants like Group Relative Policy Optimization (GRPO) are typically regarded as on-policy algorithms with limited tolerance of off-policyness, we present in this work a first-principles derivation for *group-relative REINFORCE* without assuming a specific training data distribution, showing that it admits a *native off-policy interpretation*. This perspective yields two general principles for adapting REINFORCE to truly off-policy settings: regularizing policy updates, and actively shaping the data distribution. Our analysis demystifies some myths about the roles of importance sampling and clipping in GRPO, unifies and reinterprets two recent algorithms --- Online Policy Mirror Descent and Asymmetric REINFORCE --- as regularized forms of the REINFORCE loss, and offers theoretical justification for seemingly heuristic data-weighting strategies. Our findings lead to actionable insights that are validated with extensive empirical studies, and open up new opportunities for principled algorithm design in off-policy RL for LLMs.
We present a native off-policy interpretation for group-relative REINFORCE, and its broad implications.
foundation or frontier models, including LLMs
https://openreview.net/pdf?id=7CFlXvCoN6
2025-09-12T13:20:00
4
[ { "id": "0Noqvecg6M", "forum": "7CFlXvCoN6", "review_number": 4, "reviewer_id": "ICLR.cc/2026/Conference/Submission4283/Reviewer_vzAn", "reviewer_name": "Reviewer_vzAn", "rating": 4, "confidence": 4, "soundness": 2, "contribution": 2, "presentation": 4, "summary": "The au...