| id | url | title | average_rating | average_confidence | ratings | confidences | reviewers_num | keywords | abstract | tldr | primary_area | pdf_url | submission_date | total_reviews | reviews |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
KV9hrBIqA9 | https://openreview.net/forum?id=KV9hrBIqA9 | Think-on-Graph 3.0: Efficient and Adaptive LLM Reasoning on Heterogeneous Graphs via Multi-Agent Dual-Evolving Context Retrieval | 3 | 4.5 | [
2,
2,
2,
6
] | [
5,
4,
5,
4
] | 4 | [
"Retrieval-Augmented Generation (RAG)",
"Multi-Agent",
"Dual-Evolving",
"Heterogeneous Graph"
] | Retrieval-Augmented Generation (RAG) and Graph-based RAG have become important paradigms for enhancing Large Language Models (LLMs) with external knowledge.
However, existing approaches face a fundamental trade-off. Graph-based methods are inherently dependent on high-quality graph structures, yet these are subject to significant practical constraints: manually constructed knowledge graphs are prohibitively expensive to scale, while graphs automatically extracted from corpora are limited by the performance of the underlying LLM extractors, especially when using smaller, locally deployed models.
This paper presents Think-on-Graph 3.0 (ToG-3), a novel framework that introduces a Multi-Agent Context Evolution and Retrieval (MACER) mechanism to overcome these limitations.
Our core innovation is the dynamic construction and refinement of a Chunk-Triplets-Community heterogeneous graph index, which, for the first time, incorporates a dual-evolution mechanism of Evolving Query and Evolving Sub-Graph for precise evidence retrieval.
This approach addresses a critical limitation of prior Graph-based RAG methods, which typically construct a static graph index in a single pass without adapting to the actual query.
A multi-agent system, comprising Constructor, Retriever, Reflector, and Responser agents, collaboratively engages in an iterative process of evidence retrieval, answer generation, sufficiency reflection, and, crucially, evolving query and subgraph. This dual-evolving multi-agent system allows ToG-3 to adaptively build a targeted graph index during reasoning, mitigating the inherent drawbacks of static, one-time graph construction and enabling deep, precise reasoning even with lightweight LLMs.
Extensive experiments demonstrate that ToG-3 outperforms competitive baselines on both deep and broad reasoning benchmarks, and ablation studies confirm the efficacy of the components of the MACER framework. | We introduce Think-on-Graph 3.0 (ToG-3), which provides a unified, efficient, and adaptive solution for complex knowledge reasoning tasks (including deep reasoning and broad reasoning tasks) via a Multi-Agent Dual-Evolving Context Retrieval Loop. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=KV9hrBIqA9 | 2025-09-12T21:21:53 | 4 | [
{
"id": "kDTRB3yb1C",
"forum": "KV9hrBIqA9",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4456/Reviewer_ZyKa",
"reviewer_name": "Reviewer_ZyKa",
"rating": 2,
"confidence": 5,
"soundness": 1,
"contribution": 1,
"presentation": 3,
"summary": "The paper presents ToG 3.0, a multi-agent RAG framework mainly focusing on a dynamic heterogeneous graph (chunk-triple-community) and a dual-evolving retrieval mechanism for both query and graph. While the idea of adaptive graph refinement is interesting, the work suffers from critical flaws in novelty, experimental validity, and practicality that undermine its contributions.",
"strengths": "- The paper clearly identifies limitations in static GraphRAG methods, especially under resource-constrained settings with lightweight LLMs, among them, I deeply agree with the extraction performance when LLMs cannot follow the instructions with proper format.\n- The integration of multi-agent collaboration is designed properly, though it's not new, with iterative graph refinement attempts to address query-dependent reasoning.",
"weaknesses": "- The biggest problem is found in Table 1, the EM scores surprisingly exceed F1 scores (e.g., HotpotQA: EM=0.520 vs. F1=0.312). This inversion contradicts typical QA evaluations, where F1 most usually surpasses EM, which is also widely reported in the baselines. It suggests obvious flaws in answer extraction, metric computation, or dataset alignment.\n- Bad performance. The proposed method could hardly outperform the baselines, especially when there are concerns about the token costs and the efficiency problem.\n- Limited Novelty. The \"Chunk-Triple-Community\" graph schema is not fundamentally novel. Similar multi-layer graph structures (e.g., E2GraphRAG's hierarchy, RAPTOR’s recursive summarization, Youtu-GraphRAG's knowledge tree, and G-Reasoner's Quato Graph) have been explored extensively. The paper fails to delineate clear advancements.\n- The dual-evolving mechanism resembles iterative retrieval-generation paradigms like Self-RAG, which makes sense but not that novel. This design also brings concerns that:\n - Costs. Token costs will increase incredibly when including reasoning, reflection and graph refinement.\n - Latency and efficiency. similar reasons as above.\n- MDP formulation is not well justified. It is more like a story.",
"questions": "- Why does EM significantly outperform F1? Please provide proper error analysis and clarify it.\n- Does the MDP converge reliably with lightweight LLMs?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T22:19:21",
"modification_date": "2025-11-12T11:16:14",
"review_url": "https://openreview.net/forum?id=KV9hrBIqA9&noteId=kDTRB3yb1C",
"license": "CC BY 4.0"
},
{
"id": "J4SWxM6YJJ",
"forum": "KV9hrBIqA9",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4456/Reviewer_c8gm",
"reviewer_name": "Reviewer_c8gm",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces Think-on-Graph 3.0 (ToG-3), a novel framework that addresses key limitations in graph-based Retrieval-Augmented Generation (RAG) systems by proposing a Multi-Agent Context Evolution and Retrieval (MACER) mechanism. The approach dynamically constructs and refines a Chunk-Triplets-Community heterogeneous graph index through a dual-evolution process involving evolving queries and evolving sub-graphs. A multi-agent system, comprising Constructor, Retriever, Reflector, and Responser agents, collaboratively performs iterative evidence retrieval, answer generation, and context refinement.",
"strengths": "1. The proposed MACER mechanism, together with its dual-evolution process of queries and sub-graphs at its core, has a very intuitive design that is well-motivated.\n\n2. Extensive experiments on a comprehensive suite of benchmarks, including both deep multi-hop and broad reasoning tasks, demonstrate the effectiveness and superior performance of ToG-3. \n\n3. Detailed ablation studies are also included to further validate the contribution of each core component.",
"weaknesses": "1. Marginal Performance Gains: Despite the complex architecture that involves multi-agent collaboration, dual-evolving mechanisms, and heterogeneous graph construction, ToG 3.0 only achieves marginal improvements compared to much simpler baselines. According to Table 1, it has only a minor edge in performance over the more lightweight HippoRAG-2. On specific cases, such as Musique under the EM metric, it is even surpassed by the simplest baseline, NaiveRAG. This raises questions about the practical cost-benefit trade-off of such a complex system.\n\n2. The key baselines are missing. The experimental evaluation does not compare with various important and recent state-of-the-art Graph RAG methods [1-6]. As a result, without considering these relevant baselines, it is hard to verify the claimed superiority of ToG-3.\n\n3. The efficiency analysis is incomplete. The authors have mainly compared ToG-3 against GraphRAG and LightRAG, which are known to be computationally heavy. HippoRAG-2 and other more recent, efficient baselines [1-6] are omitted in the efficiency comparison. This omission weakens the practical efficiency and deployment advantages of ToG-3. A wider comparison is required to solidify its positioning in regard to efficiency.\n\n4. The detailed analysis of the RL-based training is missing. The online MACER process is formulated as a Markov Decision Process, with its dependence on reinforcement learning principles. This introduces significant complexity regarding training stability, reward function and policy hyperparameter tuning, and overall reproducibility. Further discussion on the engineering challenges, convergence guarantees in practice, or sensitivity of the performance to the design of the reward signal is lacking in the paper.\n\n5. While the proposed framework is built upon a graph structure, the paper does not engage or benchmark against established graph algorithms for subgraph retrieval or graph refinement. This omission makes it difficult to assess whether the performance gains come from the novel multi-agent, dual-evolving mechanism or could be partially achieved by applying more specialized graph-theoretic methods to the same heterogeneous graph index. Such a comparison would enhance the contribution and professionalism of the paper.\n\n6. The work presents a multi-agent system but fails to make a compelling case that this is a necessary architecture. The different roles of the agents seem to be a modular decomposition of what could well be a monolithic reasoning process, named Constructor, Retriever, Reflector, and Responser. An ablation study, removing the multi-agent system and using a single, well-prompted LLM in its place, executing the same iterative process, would be far more convincing.\n\n7. Each step in the framework requires multiple calls to the LLM for retrieval, reflection, query evolution, and subgraph evolution. It would be beneficial for the paper to provide an in-depth analysis of how this scales with graph size and query complexity and discuss practical trade-offs for how to balance the achieved performance gains with such a significant increase in computational requirements over simpler, single-shot retrieval methods.\n\n\n[1] E²GraphRAG: Streamlining Graph-based RAG for High Efficiency and Effectiveness.\n\n[2] GraphRAG-R1: Graph Retrieval-Augmented Generation with Process-Constrained Reinforcement Learning.\n\n[3] Graph-R1: Towards agentic graphrag framework via end-to-end reinforcement learning.\n\n[4] KET-RAG: A Cost-Efficient Multi-Granular Indexing Framework for Graph-RAG.\n\n[5] HyperGraphRAG: Retrieval-Augmented Generation via Hypergraph-Structured Knowledge Representation.\n\n[6] Align-GRAG: Reasoning-Guided Dual Alignment for Graph Retrieval-Augmented Generation.",
"questions": "See above.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T10:06:39",
"modification_date": "2025-11-12T11:16:14",
"review_url": "https://openreview.net/forum?id=KV9hrBIqA9&noteId=J4SWxM6YJJ",
"license": "CC BY 4.0"
},
{
"id": "f5gIOJLNHO",
"forum": "KV9hrBIqA9",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4456/Reviewer_J92F",
"reviewer_name": "Reviewer_J92F",
"rating": 2,
"confidence": 5,
"soundness": 2,
"contribution": 1,
"presentation": 1,
"summary": "The paper introduces Think-on-Graph 3.0, a graph-based RAG framework over heterogeneous graphs.\n\nThe core innovation is the multi-agent context evolution and retrieval mechanism, called MACER in the draft. It employs a set of agents to iteratively refine the query and a chunk-triplet-community subgraph during reasoning. The dynamic nature of this framework addresses the limitation of previous GraphRAG algorithms that rely on a static graph. And the approach is formalized as a Markov decision process with a dual-evolution loop for adaptive evidence retrieval.",
"strengths": "1. The paper tackles a practical challenge in graph-based RAG: the trade-off between graph quality and scalability, especially for open-source, lightweight LLMs in offline or private deployments. \n\n2. ToG-3 mitigates issues like incomplete triplet extraction, insufficient details by dynamic, query-adaptive refinement on a dynamically updated graph.",
"weaknesses": "1. The presentation is poor; instead of adopting a style of professional academic writing, it resembles a course project report. To be honest, most of the content is inconsistent, poorly organized, and reads like AI-generated text.\n2. The overall design is purely engineering-oriented, achieved by stacking a set of agents. Due to the lack of preliminary studies and theoretical analysis, the insights behind the design are unclear.\n3. The agents are applied iteratively; thus, there is no global planning to control the refinement direction. The refinement of the graph has no supervision information and may easily lead to collapsed results rather than good ones, especially when the input corpora become large-scale or heterogeneous.\n4. No efficiency analysis is given to demonstrate its complexity (time and memory).\n\n---\nBelow are weaknesses on experiments.\n\n5. The baselines are limited. Only HippoRAG-2 and four GraphRAG methods with poor performance (worse than NaiveRAG) are included. Stronger baselines are needed, e.g., GFM[1], RAPTOR[2], KGP[3]\n6. The empirical accuracy improvements are marginal across the three datasets. The \"Average\" column in Table 1 is unnecessary and redundant.\n7. Table 2 only compares the indexing and retrieval times, and no token consumption is given. Based on my understanding, a multi-agent system should incur significant token costs.\n8. Table 2 does not compare with other baselines (HippoRAG-2, ToG-2). Based on my experience, the indexing and inference times of GraphRAG and LightRAG are quite high, almost the highest in the GraphRAG family. Comparing with these low-efficiency methods undermines the convincingness of your work's efficiency claims. More efficient GraphRAG methods should be compared, such as HippoRAG-2, RAPTOR[2], E^2GraphRAG[4].\n\n\n\n\n\n- [1] GFM-RAG: Graph Foundation Model for Retrieval Augmented Generation\n- [2] RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval\n- [3] Knowledge Graph Prompting for Multi-Document Question Answering\n- [4] E^2GraphRAG: Streamlining Graph-based RAG for High Efficiency and Effectiveness",
"questions": "plz see above",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T19:34:33",
"modification_date": "2025-11-12T11:16:15",
"review_url": "https://openreview.net/forum?id=KV9hrBIqA9&noteId=f5gIOJLNHO",
"license": "CC BY 4.0"
},
{
"id": "ajbCYDHArH",
"forum": "KV9hrBIqA9",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4456/Reviewer_W2uk",
"reviewer_name": "Reviewer_W2uk",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This paper proposes Think-on-Graph 3.0, a framework for RAG on heterogeneous graphs. The core is a Multi-Agent Coupled Evolutionary Retrieval–Generation loop (MACER), where during answering, the query is iteratively refined by a Reflector, while the evidence subgraph is simultaneously expanded or pruned by a Constructor until a Reflector determines that “sufficient evidence” has been reached. Offline, ToG-3 builds a heterogeneous graph with Chunk–Triplet–Community node types. Online, four agents collaborate in a closed loop that formalizes the process as a Markov Decision Process, with a binary sufficiency reward controlling the stopping condition. Experiments show that ToG-3 achieves the best average results.",
"strengths": "S1. Combining dual evolution (query + subgraph) with a multi-agent loop is an elegant and effective fix to the brittleness of static GraphRAG systems under noisy LLM extraction formalized via MDP.\n\nS2. The Chunk–Triplet–Community graph unifies multi-granular retrieval in a single embedding space, bridging fine-grained evidence and coarse community reasoning.\n\nS3. ToG-3 achieves top or near-top EM/F1 on HotpotQA, 2WikiMultihopQA, and MuSiQue, and demonstrates domain generalization through ELO win-rates across four UltraDomain subsets.",
"weaknesses": "W1. The binary reward depends on whether Suff(q, G_k, a_k)=1, but the paper does not explain the exact implementation, thresholding, or its correlation with EM/F1.\n\nW2. The paper should compare with 1-2 more multi-agent baselines released in recent years, e.g. HM-RAG (with the single modality setting) [1] or Graph Counselor [2].\n\nW3. The authors should polish the writing. For example, Section 3.2.1 contains many unnecessary sparse lines and overly detailed yet non-essential descriptions. It could be written more concisely as a single continuous paragraph instead of being broken into multiple bullet points.\n\n[1] HM-RAG: Hierarchical Multi-Agent Multimodal Retrieval Augmented Generation\n\n[2] Graph Counselor: Adaptive Graph Exploration via Multi-Agent Synergy to Enhance LLM Reasoning",
"questions": "Q1. How is max iteration K chosen? What is the accuracy–latency tradeoff under fixed inference budgets?\n\nQ2. What happens if only evolving-query or evolving-subgraph is introduced into GraphRAG/LightRAG? Or conversely, ToG-3 is reduced to a static graph?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-15T12:30:15",
"modification_date": "2025-11-12T11:16:16",
"review_url": "https://openreview.net/forum?id=KV9hrBIqA9&noteId=ajbCYDHArH",
"license": "CC BY 4.0"
}
] |
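The ToG-3 record above describes an iterative loop: the Constructor evolves the sub-graph, the Retriever gathers evidence, the Reflector judges sufficiency (and evolves the query when evidence is insufficient), and the Responser produces the answer. The Python sketch below illustrates only that control flow; every function body and stub rule here is a hypothetical placeholder, not the paper's implementation, where each role is an LLM-driven agent over a Chunk-Triplets-Community index.

```python
# Illustrative control-flow sketch of a MACER-style dual-evolving loop.
# All agent logic below is a hypothetical stand-in for LLM-driven agents.
from dataclasses import dataclass, field


@dataclass
class SubGraph:
    triplets: set = field(default_factory=set)


def constructor(query, graph):
    # Evolving Sub-Graph (stub): merge one query-keyed triplet into the index.
    graph.triplets.add((query, "supported_by", f"chunk_for:{query}"))
    return graph


def retriever(query, graph):
    # Retrieve evidence whose subject matches the current query.
    return [t for t in graph.triplets if t[0] == query]


def reflector(graph):
    # Sufficiency reflection (stub rule): require at least two triplets.
    return len(graph.triplets) >= 2


def evolve_query(query, step):
    # Evolving Query (stub): refine the question for the next iteration.
    return f"{query} [refined {step}]"


def responser(evidence):
    # Compose an answer from the retrieved evidence (stub: first triplet).
    return evidence[0] if evidence else None


def macer_loop(query, graph, max_iters=4):
    """Iterate construct -> retrieve -> reflect -> evolve until sufficient."""
    for step in range(max_iters):
        graph = constructor(query, graph)
        evidence = retriever(query, graph)
        if reflector(graph):
            return responser(evidence), step + 1
        query = evolve_query(query, step)
    return None, max_iters
```

With the stub sufficiency rule, the first pass adds one triplet (insufficient), the query is refined, and the second pass succeeds, which is enough to show why a static, one-pass index cannot exhibit this query-adaptive behavior.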
noLMXTqgCp | https://openreview.net/forum?id=noLMXTqgCp | Decoupled-Value Attention for Prior-Data Fitted Networks: GP-Inference for Physical Equations | 4 | 2.75 | [
6,
4,
4,
2
] | [
3,
3,
4,
1
] | 4 | [
"Gaussian Process",
"Meta-Learning",
"Prior-data Fitted Networks",
"Learning of Physics"
] | Prior-data fitted networks (PFNs) are a promising alternative to time-consuming Gaussian process (GP) inference for creating fast surrogates of physical systems. PFN reduces the computational burden of GP-training by replacing Bayesian inference in GP with a single forward pass of a learned prediction model. However, with standard Transformer attention, PFNs show limited effectiveness on high-dimensional regression tasks. We introduce Decoupled-Value Attention (DVA)-- motivated by the GP property that the function space is fully characterized by the kernel over inputs and the predictive mean is a weighted sum of training targets. DVA computes similarities from inputs only and propagates labels solely through values. Thus, the proposed DVA mirrors the GP update while remaining kernel-free. We demonstrate that the crucial factor for scaling PFNs is the attention rule rather than the architecture itself. Specifically, our results demonstrate that (a) localized attention consistently reduces out-of-sample validation loss in PFNs across different dimensional settings, with validation loss reduced by more than 50\% in five- and ten-dimensional cases, and (b) the role of attention is more decisive than the choice of backbone architecture, showing that CNN-based PFNs can perform at par with their Transformer-based counterparts. The proposed PFNs provide 64-dimensional power flow equation approximations with a mean absolute error of the order of $10^{-3}$, while being over $80\times$ faster than exact GP inference. | Decoupled-Value Attention (DVA) separates input similarity from label propagation, mirroring Gaussian process updates and enabling scalable, kernel-free PFNs. This achieves architecture-agnostic and scalable PFNs. | transfer learning, meta learning, and lifelong learning | https://openreview.net/pdf?id=noLMXTqgCp | 2025-09-19T11:28:34 | 4 | [
{
"id": "2ZehxxBZnR",
"forum": "noLMXTqgCp",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15541/Reviewer_YKgM",
"reviewer_name": "Reviewer_YKgM",
"rating": 6,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 2,
"summary": "Prior-Data Fitted Networks (PFNs) are aimed to replace costly Gaussian process (GP) inference with a single learned forward pass for fast physical surrogates, but standard PFNs struggle in high dimensions. Authors introduce Decoupled-Value Attention (DVA), which bases attention weights only on inputs and passes labels only through values, resembling GP-style weighted averaging of targets. DVA improves PFN training in 5D-10D (over 50% lower validation loss), lets CNN backbones match Transformers (implying attention design matters more than architecture), and yields a 64D power flow surrogate with ~10^-3 MAE and a reported >80x inference-time speedup over their GP baseline.",
"strengths": "1) Paper includes simple, targeted architectural contribution (Decoupled-Value Attention) with clear motivation.\n\n2) Authors demonstrate that “vanilla attention PFNs” stall in 5D-10D tasks, where validation loss flattens early and final MSE remains high. With DVA, PFNs continue improving and achieve dramatically lower validation NLL and MSE in 5D and 10D (often >50% reduction in their reported residual bias).\n\n3) Even accounting for fairness caveats (see weaknesses), once trained, PFN+DVA can generate all bus voltages for 4,500 samples in ~0.13–0.17 seconds that is essentially instant compared to AC power flow solvers, and is about two orders of magnitude faster than their GP baseline.",
"weaknesses": "1) A defining feature of GP inference is not just accurate point predictions, but calibrated posterior uncertainty. The paper does not measure calibration, posterior variance quality, credible interval coverage, or any decision-making utility based on uncertainty. All reported metrics are pointwise MSE/MAE and wallclock time. This is a mismatch between the claims and the experimental support. \n\n2) Authors claim >80x faster than exact GP inference, citing ~0.13 s for PFN vs ~11 s for GP on the 64D power flow task, but the GP baseline is actually 32 separately trained single-output GPs evaluated together, while PFN is a single multi-output model that predicts all 32 voltages in one batched forward pass; PFN inference likely ran on GPU after ~14 hours of pretraining, while the GP hardware/batching setup is not described. Can you clarify exactly how GP inference time was measured (hardware, batching, parallelization) and restate the 80x claim with those details?\n\n3) Authors claim DVA “reduces residual bias by more than 50%,” but “bias” here is defined informally as the gap between the learned posterior predictive distribution and the “true” posterior predictive distribution; in practice you just show a lower validation negative log-likelihood curve and assert that variance is negligible. There is no explicit measurement of posterior predictive bias, and no decomposition of NLL. Will you either justify formally that your NLL reduction corresponds to a 50% reduction in posterior predictive bias, or restate the claim in terms of “lower validation NLL” instead of “lower bias”?\n\n4) There is a conclusion that attention choice (Vanilla vs DVA) matters more than the backbone (Transformer vs CNN), based on similar performance between CNN+DVA and Transformer+DVA and large gains over Vanilla Attention. But the CNN and Transformer models differ by orders of magnitude in parameter count and were tuned with different search spaces. This claim either requires more evidence or needs to be adjusted.",
"questions": "Please refer to the weaknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T06:15:41",
"modification_date": "2025-11-12T13:36:56",
"review_url": "https://openreview.net/forum?id=noLMXTqgCp&noteId=2ZehxxBZnR",
"license": "CC BY 4.0"
},
{
"id": "GNgaJfW4sR",
"forum": "noLMXTqgCp",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15541/Reviewer_Bxip",
"reviewer_name": "Reviewer_Bxip",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes Decoupled-Value Attention (DVA) for Prior-Data Fitted Networks (PFNs), arguing that standard PFN attention mechanisms fail to scale beyond ~10 input dimensions. DVA computes attention affinities (queries and keys) purely from inputs while propagating labels solely through values, motivated by the structure of Gaussian process inference where kernels operate on inputs and predictions are weighted sums of training outputs. The authors demonstrate that DVA reduces validation loss by >50% in 5D and 10D settings compared to vanilla attention, and show that CNN-based PFNs with DVA can match Transformer performance. They apply their method to 64-dimensional power flow equations, achieving MAE ~10^-3 while being 80× faster than exact GP inference.",
"strengths": "## Strengths \n\n* Clear problem identification and motivation: The paper clearly identifies a real limitation of existing PFNs—their inability to scale beyond ~10 dimensions—and provides intuitive motivation for the proposed solution through the lens of GP inference. The observation that standard PFN attention couples inputs and outputs in ways that break localization is well-articulated.\n\n* Comprehensive experimental validation: The systematic evaluation across multiple dimensions (1D, 2D, 5D, 10D, 64D), architectures (CNN, Transformer), and attention mechanisms (VA, DVA, kernel-based) is thorough. The validation loss curves in Figure 2 clearly demonstrate the benefit of DVA, especially the striking saturation of VA-based models in 10D.\n\n* Architecture-agnostic insight: Demonstrating that CNN-based PFNs can match Transformer performance when equipped with proper attention is valuable. This challenges the implicit assumption in prior PFN work that Transformers are necessary, and suggests attention design is the critical factor.\n\n* Practical application: The 64D power flow experiments demonstrate real-world applicability. Achieving 80× speedup over GP while maintaining acceptable accuracy (MAE ~10^-3) is practically significant for power grid applications.\n\n* Reproducibility: The appendix provides detailed hyperparameter ranges, architectural specifications, and implementation details that aid reproducibility (relatedly, particularly appreciate the code in supplementary materials)",
"weaknesses": "## Weaknesses\n\n* Limited novelty of the core idea: Decoupling inputs and outputs in attention is not particularly novel. The paper positions DVA as specifically designed for PFNs and GP-mimicking, but the core mechanism (computing affinities from inputs only) is a straightforward design choice that has been explored in various forms in the attention literature. The contribution feels incremental rather than introducing fundamentally new concepts.\n\n* Overstated connection to Gaussian processes: The claim that DVA \"mirrors GP inference\" is overstated. While there are superficial similarities, critical differences remain:\n\n1. GP weights β(x*) can be negative and don't sum to 1, while DVA's softmax produces non-negative normalized weights\n2. GPs have a principled covariance kernel with theoretical properties; DVA learns arbitrary similarity via dot products\n3. The authors acknowledge this in Section 3.1 but still heavily market the \"GP alignment\" throughout\n\nThe connection is more of a loose analogy than a rigorous correspondence. The paper would be stronger if it positioned DVA as \"inspired by\" rather than \"mirroring\" GP inference.\n\n* Insufficient comparison to related work: The paper lacks comparison to other localized attention mechanisms or other recent PFN improvements. For example:\n\n1. How does DVA compare to explicit localization post-processing (Nagler 2023)?\n2. What about other input-localized attention mechanisms from the broader literature?\n3. The only comparison to kernel-based attention is in Figure 1— and authors test on functions mismatched to the RBF kernel rather than showing where kernel-based attention might excel\n\n* Limited theoretical understanding: While the paper demonstrates empirically that DVA works, it provides little theoretical insight into why it works. Questions remain:\n\n1. Under what conditions does input-only attention provably improve over joint attention?\n2. Can you characterize when DVA will succeed vs. fail?\n3. What is the sample complexity with DVA vs. VA?\n\nThe paper mentions Nagler (2023)'s theoretical results on bias but doesn't rigorously connect DVA to that framework to my understanding.\n\n\n* Missing ablations and analysis:\n\n1. No ablation on the value encoder φ_y—does its design matter?\n2. What happens with different query/key projections W_q, W_k?\n3. The paper claims DVA \"remains kernel-free\" as an advantage, but doesn't test whether a learnable kernel parameterization might work even better\n\n\n* Evaluation metrics: For the power flow application, MSE and MAE are reported, but uncertainty quantification—one of the main advantages of GPs—is not evaluated. Can DVA-based PFNs provide calibrated uncertainty estimates? This seems like a critical missing piece given the motivation.",
"questions": "## Questions for Authors\n\n* Theoretical characterization: Can you provide any theoretical analysis of when and why DVA reduces bias compared to VA? Even in a simplified setting (e.g., linear case), formal guarantees would strengthen the contribution.\n\n* Kernel-based attention: The experiment in Figure 1 seems designed to make kernel-based attention look bad. Can you show cases where kernel-based attention with an appropriate kernel (e.g., Matérn for less smooth functions) actually works well? What is the fundamental limitation?\n\n* Uncertainty quantification: One major advantage of GPs is calibrated uncertainty. Do PFNs with DVA produce well-calibrated predictive distributions? Can you compare predictive variance quality to exact GP?\n\n* Scalability limits: Where does DVA break down? The 10D experiments show clear improvements, but the 64D power flow uses only 500 training samples. Have you tested truly high-dimensional problems (e.g., 100D+) with sufficient data?\n\n* Value encoding: You mention that \"the final head g(·) and encoding outputs in the value V\" help adjust DVA output toward the true GP posterior mean. Can you ablate this claim? How sensitive is performance to the value encoder design?\n\n* Comparison to Nagler 2023: Nagler argues for post-hoc localization. Can you compare DVA to their approach? Does DVA achieve better or different localization than explicit post-processing?\n\n* Output information: DVA completely removes output information from attention affinities. Are there cases where having some output information is beneficial? Could a weighted combination of input-only and joint attention work better?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T01:09:10",
"modification_date": "2025-11-12T13:36:56",
"review_url": "https://openreview.net/forum?id=noLMXTqgCp&noteId=GNgaJfW4sR",
"license": "CC BY 4.0"
},
{
"id": "IDfMU6W7KC",
"forum": "noLMXTqgCp",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15541/Reviewer_9Cj7",
"reviewer_name": "Reviewer_9Cj7",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "The paper presents decoupled-value attention for prior-data fitted networks (PFNs). It computes $Q$/$K$ only from inputs and passes labels only through $V$. It ensures that attention weights depend purely on input similarity while output flows via values. The design mirrors GP conditioning and aims to reduce the bias observed in PFNs that attend over concatenated (x,y) embeddings. The method is instantiated with both Transformer and CNN based PFNs and is evaluated over synthetic GP tasks, Rosenbrock, and an AC power-flow surrogate. Reported results show lower NLL and inference speedup vs exact GPs.",
"strengths": "- simple and clear method\n\n- DVA reduces validation NLL across dimensions, specifically in 5D/10D where vanilla attention saturates early.\n\n- the method is backbone-agnostic, and the authors have shown similar improvements for both CNNs and Transformers.\n\n- high-dimensional experiments (64D) with inference speedups show practical utility.",
"weaknesses": "- theory is very light: the localization argument is intuitive and refers to prior PFN theory, but there is no formal generalization/bias-consistency result for DVA itself. The softmax non-negativity vs. possibly signed GP coefficients is acknowledged but not analyzed beyond a short discussion.\n\n- scalability analysis: the attention compute remains quadratic in context size. There is no complexity/memory study comparing to linear/Performer-style attention under PFN training.\n\n- weak baselines: power-flow experiments compare to exact GP and PFN+VA and are missing strong sparse GP baselines, kernel regression / kNN, RFF/linear attention variants in higher-D, and physics-informed surrogates common in this domain.\n\n- the appendix should have a consolidated hyperparameter table.\n\n- thin ablations",
"questions": "- could you add higher-D comparisons to (a) sparse/variational GP, (b) kNN / kernel regression, (c) performer/linear attention PFNs, and (d) a physics-informed baseline on power flow?\n\n- can you report ablations on (a) capacity of $\\phi_x$/$\\phi_y$ and prediction head $g()$, (b) effect of bucket count on NLL/MSE.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T17:10:11",
"modification_date": "2025-11-12T13:36:57",
"review_url": "https://openreview.net/forum?id=noLMXTqgCp&noteId=IDfMU6W7KC",
"license": "CC BY 4.0"
},
{
"id": "4TpkPhpsb0",
"forum": "noLMXTqgCp",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15541/Reviewer_KGsD",
"reviewer_name": "Reviewer_KGsD",
"rating": 2,
"confidence": 1,
"soundness": 1,
"contribution": 1,
"presentation": 1,
"summary": "This paper proposes Decoupled Value Attention (DVA), a new architecture for prior-data fitted networks (PFNs), with the main contribution being an attention model that restricts the Key-Query interaction to the features (x), providing weights that are multiplied by the values y. This decoupling makes the estimation process more intuitive, as it measures the similarity between features via the Key-Query matrices to create weights multiplied by the matching labels (y), similarly to a Gaussian process. The authors claim the method they provide leads to the following main contributions: \n\n \n1) The authors show that DVA reduces the difference between the predicted and true posterior distribution in PFN training. \n2) The paper shows that a CNN-based PFN equipped with DVA performs comparably to a Transformer-based DVA-PFN. \n3) The authors show that DVA enables PFNs to scale to complex, high-dimensional problems. The authors demonstrate this on a 64-dimensional power flow simulation.",
"strengths": "1. The premise of the paper is reasonable and seems to be well-founded. The summation of the label and feature embeddings in the original PFN appears to be hurting the results in the provided synthetic cases. \n \n\n2. The \"Attention is More Important than Architecture\" finding is a significant contribution. By showing that a CNN+DVA can match a Transformer+DVA, the authors successfully decouple the PFN concept from a strict reliance on the Transformer, opening the door to other, potentially more efficient options.",
"weaknesses": "1. The key demonstration of the original PFN's power was its ability to learn a complex hierarchical model, and from my understanding this is the main reason to implement the K-dataset training they suggest. However, the authors only test the mechanism against a simple, fixed-hyperparameter RBF kernel. Thus, it is a significant omission not to test whether DVA retains the ability to approximate these more complex, mixed priors, which was a primary advantage of the PFN framework. \n \n\n2. The original PFN paper validated its method on a wide range of real-world tabular data. This paper ignores those benchmarks and instead introduces a new, highly specific physics problem (power flow). Without a direct comparison on the original paper's benchmarks, it is impossible to assess if DVA is an improvement or a specialized architecture that only excels on certain tasks. \n \n\n3. The authors claim DVA succeeds in high-dimensional regimes where the VA PFN simply stops learning; they link this to a threshold of D = 10 in their GP tests, but when looking at the Müller et al. works, they consider datasets with a large number of features (e.g., covertype = 55), of a similar order to the 64-dimensional feature space considered in the paper's “power flow” problem. \n\n \n\n4. Section 2.2 sketches the PFN training objective, but a reader unfamiliar with PFNs will still need (Müller et al.) to follow training/inference. \n\nThe paper’s central contribution is the DVA method, which adjusts the original PFN method. This idea seems sensible, as the usage of the attention Key-Query to generate weights for the labels is highly intuitive and closely matches the established GP methods. Still, as the main claim of the paper is to provide a better alternative to an existing method, it was not made clear enough that the advantages appear numerically and that there is a clear advantage for high-dimensional cases.",
"questions": "1. A key result of the original PFN paper (Muller et al) was its ability to learn an intractable posterior from a hierarchical GP prior. Can the authors provide results showing DVA's performance on this task? \n \n\n2. The original PFN paper (Muller et al) performed a high variety of tests and comparisons with a great contribution for tabular data, while this paper focuses mainly on synthetic data sets and the “power flow” problem, is there any reason to choose that specific problem? Could the method be applied to additional cases?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T12:32:52",
"modification_date": "2025-11-12T13:36:57",
"review_url": "https://openreview.net/forum?id=noLMXTqgCp&noteId=4TpkPhpsb0",
"license": "CC BY 4.0"
}
] |
Ba5hOI2SkF | https://openreview.net/forum?id=Ba5hOI2SkF | Learning Deep Modality-Shared Self-Expressiveness for Image Clustering with Textual Information | 5 | 4 | [
6,
6,
4,
4
] | [
4,
4,
4,
4
] | 4 | [
"deep clustering",
"self-expressive model",
"multimodal"
] | Leveraging textual information for image clustering has emerged as a promising direction, driven by the powerful representations of vision-language models. However, existing approaches usually leverage modality alignment, which merely shapes the representations implicitly, failing to preserve and exploit modality-specific structures, and leaving the overall representation distribution unclear. In this paper, we propose a simple but principled approach, termed deep modality-shared self-expressive model (DeepMORSE), which simultaneously learns structured representations that conform to the union of modality-specific subspace structures and, via a modality-shared self-expressive model, discovers structures shared across modalities. We evaluate our DeepMORSE approach on seven widely used image clustering benchmarks and observe performance improvements exceeding 4\% on the UCF-101, DTD-47, and ImageNet-Dogs datasets. In addition, we demonstrate the strong transferability of the learned representations by achieving state-of-the-art performance on downstream tasks such as image retrieval and zero-shot classification—without requiring any task-specific losses or post-processing. | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=Ba5hOI2SkF | 2025-09-18T21:23:43 | 4 | [
{
"id": "qLvkuymQcJ",
"forum": "Ba5hOI2SkF",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11568/Reviewer_JHm3",
"reviewer_name": "Reviewer_JHm3",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper aims to leverage textual information for image clustering. Specifically, this work assumes that there is a modality-invariant relationship within both the vision and text subspaces, i.e., each data point can be linearly represented by the same set of other data points using the same coefficients for both vision and text after appropriate transformations. Thereafter, enhanced vision representations can be learned and regularized by the textual information. For each image, the textual information is learned by a sparse coding using the dictionary to cover the image's representation. The proposed method shows better performance on various datasets. Moreover, the contribution of each component from the proposed method is well demonstrated and the proposed combination shows the best performance.",
"strengths": "1) To leverage the textual information for clustering images, this work proposes to learn enhanced image representations constrained by the modality-invariant relationship between data points. Specifically, after appropriate transformations, each data point can be represented by a linear combination of other data points using the same coefficients for both the vision and textual representations. The proposed optimization framework is sound accordingly.\n\n2) Compared to the reported baselines, the proposed method provides better performance on various datasets, which demonstrates the effectiveness of the proposal. Moreover, the ablation study shows the contribution of each component well.\n\n3) The paper is well written, motivated by sound discussion, and easy to follow.",
"weaknesses": "1) The proposed method employs a set of transformations f, g for each modality. It would be interesting to show that this transformation is necessary through the experiments. For example, how about the performance without doing the transformation, while keeping all other learning objectives?\n\n2) Given the vision and text representations, it is applicable to treat each as a view for each data point. Showing the state-of-the-art multi-view clustering using both of them would help sufficiently demonstrate that multi-view clustering is not that helpful compared to the proposed method. \n\n3) Some strong unimodal deep clustering methods are not discussed or compared, e.g., CoKe, SeCu. Any reasons for that?",
"questions": "Related questions can be found in the weakness section.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T03:14:56",
"modification_date": "2025-11-12T12:44:13",
"review_url": "https://openreview.net/forum?id=Ba5hOI2SkF&noteId=qLvkuymQcJ",
"license": "CC BY 4.0"
},
{
"id": "dp8j0NEkIw",
"forum": "Ba5hOI2SkF",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11568/Reviewer_D4J1",
"reviewer_name": "Reviewer_D4J1",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "- This paper introduces the DeepMORSE to address the challenge in image clustering with textual information, where existing modality alignment methods often fail to preserve modality-specific structures and leave the overall representation distribution unclear. \n\n- DeepMORSE operates by simultaneously learning structured representations that conform to the union of modality-specific subspace structures and explicitly discovering patterns shared across modalities via a modality-shared self-expressive model. \n\n- In practical scenarios where text counterparts are not readily available, the approach generates necessary textual data for each image by solving a cross-modal sparse coding problem to ensure both semantic accuracy and adherence to a union-of-subspaces structure. \n\n- Extensive experiments demonstrate that DeepMORSE achieves state-of-the-art clustering performance on seven benchmarks, observing performance improvements exceeding 4% on the UCF-101, DTD-47, and ImageNet-Dogs datasets, while also showing strong transferability to image retrieval and zero-shot classification without requiring task-specific optimization.",
"strengths": "This paper's core strength lies in its simplicity. DeepMORSE overcomes limitations of prior alignment methods by simultaneously learning structured representations and explicitly discovering patterns shared across modalities through a modality-shared model. \n\nThe approach achieves decent clustering performance across seven benchmarks, reporting improvements in clustering accuracy, including gains exceeding 4% on the UCF-101, DTD-47, and ImageNet-Dogs datasets. \n\nFurthermore, the learned structured representations demonstrate transferability and robustness, achieving comparable results on downstream tasks such as image retrieval and zero-shot classification without requiring any additional optimization.",
"weaknesses": "1. One weakness of the current framework is its limitation in scope to vision-language data, and the necessary extension of the method to other modalities, such as acoustics or hyperspectral imagery, has yet to be investigated. \n\n2. While exhibiting low memory consumption, DeepMORSE requires a slightly longer total training and testing time compared to baselines such as TAC, leading to challenges in real-world application deployment.\n\n3. Leveraging textual information necessitates a crucial pre-processing step, utilizing cross-modal sparse coding and a predefined dictionary to generate textual counterparts when image-text pairs are unavailable, thereby introducing external complexity and dependence on the quality and sparsity of this synthetic data. This unpaired data situation is very common in real-world settings. Furthermore, ablation studies reveal that DeepMORSE suffers a sharp degradation in clustering performance when components relying on the textual modality are removed, demonstrating that the overall effectiveness is highly dependent on the presence and successful integration of both modality expressions.\n\n4. Although generally robust to hyperparameter choices, the model requires task-specific adjustments. Specifically, increasing the output dimensions and balancing the hyperparameter **($\gamma$)** for downstream evaluations on datasets containing more categories suggests that the default configuration is not universally optimal for larger class counts.",
"questions": "1. Please answer the questions in the weakness section.\n\n2. The paper explicitly lists the limitation that the theoretical underpinnings of modality-shared self-expression remain largely unexplored, leaving the fundamental working mechanism insufficiently understood. Can the authors elaborate on the specific challenges encountered when attempting to derive theoretical guarantees for the shared coefficient matrix $C$ (Equation 4) in the multimodal setting, similar to those established for unimodal sparse subspace clustering? What are the most promising theoretical avenues for future research to address this gap?\n\n3. The authors noted that for datasets with more than 128 categories (StanfordCars, SUN397), it was necessary to increase the output dimension $d$ and enlarge the balancing hyperparameter $\\gamma$. Is there a principled rule or heuristic that can guide the selection of $d$ and $\\gamma$ based on the number of classes $C$ or the complexity of the dataset to ensure the model maintains optimal performance and avoids convergence issues, instead of relying on manual tuning?\n\n4. Ablation studies show that DeepMORSE, which uses modality-shared self-expression yields significantly larger improvements than simply combining coefficients derived independently from each modality. Can the authors provide a more detailed, perhaps qualitative, explanation of why the rigid constraint of enforcing the exact same coefficient matrix $C$ across both image and text representations (in Equation 4) is crucial for uncovering robust shared structures, compared to merely integrating two separate affinity matrices?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T09:29:19",
"modification_date": "2025-11-12T12:44:14",
"review_url": "https://openreview.net/forum?id=Ba5hOI2SkF&noteId=dp8j0NEkIw",
"license": "CC BY 4.0"
},
{
"id": "w6qWUJmQ78",
"forum": "Ba5hOI2SkF",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11568/Reviewer_4ETM",
"reviewer_name": "Reviewer_4ETM",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper addresses the challenge of cross-modal retrieval, where the goal is to retrieve relevant samples across different modalities (e.g., retrieving images given text queries, or vice versa). The authors propose a modality-specific deep learning framework that explicitly learns separate but aligned representations for each modality.",
"strengths": "1. Experimental results show consistent improvements over strong baselines (e.g., DCCA, Corr-AE, CCA) across multiple datasets, suggesting the proposed approach generalizes well.\n\n2. The paper clearly identifies the limitations of enforcing overly tight shared embedding spaces. The motivation for learning modality-specific representations is intuitive.",
"weaknesses": "1. While the concept of modality-specific embeddings is valuable, the implementation primarily extends known ideas (e.g., DCCA) rather than introducing a fundamentally new network design.\n\n2. The paper could benefit from a deeper theoretical justification for the chosen balance between intra- and inter-modal losses. The trade-off parameter is empirically chosen without clear reasoning.\n\n3. It’s unclear how much each component (e.g., intra-modal loss, modality-specific subnetworks) contributes to the final performance. A comprehensive ablation table would strengthen the claims.\n\n4. The two-stream modality-specific design likely doubles training cost, but the paper doesn’t quantify this or discuss efficiency trade-offs.",
"questions": "1. How sensitive is the retrieval performance to the weighting between intra- and inter-modal losses? Is there a principled way to select this parameter?\n\n2. Can this approach handle large-scale multimodal datasets (e.g., millions of image–text pairs) without significant computational overhead?\n\n3. Does the method use explicit negative sampling or rely entirely on pairwise constraints? Could incorporating contrastive loss improve robustness?\n\n4. Could the same framework extend naturally to more than two modalities (e.g., audio–video–text)?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T02:24:12",
"modification_date": "2025-11-12T12:44:14",
"review_url": "https://openreview.net/forum?id=Ba5hOI2SkF&noteId=w6qWUJmQ78",
"license": "CC BY 4.0"
},
{
"id": "bcUbxENNWN",
"forum": "Ba5hOI2SkF",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11568/Reviewer_xkPq",
"reviewer_name": "Reviewer_xkPq",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes a deep modality-shared self-expressive model for multi-modal image clustering, which can simultaneously learn structured representations conforming to the union of modality-specific subspaces and discover structures shared across modalities. Experiments on image clustering benchmarks and several downstream tasks demonstrate the effectiveness of the proposed method.",
"strengths": "1. This paper proposes a simple-yet-effective method for multi-modal image clustering, which relies on a deep modality-shared self-expressive model.\n2. The proposed model can jointly learn representations conforming to a union of modality-specific subspaces and discover shared structures across modalities.\n3. The learned structured representations can be directly applied to downstream tasks including image retrieval and zero-shot classification.",
"weaknesses": "1. The motivation for this work requires clarification. The paper repeatedly notes that \"the distribution of the aligned representation in existing methods remains unclear,\" but the term \"unclear\" is not sufficiently defined. It would be helpful to illustrate this limitation more concretely, for instance, by providing visualizations on toy examples. Furthermore, the rationale behind why the proposed deep modality-shared self-expressive model can make sense remains unclear. It’s better to provide intuitive explanations or theoretical/experimental analysis to establish its foundation.\n2. Regarding Table 1, several details require clarification. First, the distinction between \"CLIP (k-means)\" and \"CLIP (zero-shot)\" is unclear. Why is the former without textual information while the latter is with textual information? The authors should provide the details about these two settings in the paper. Second, the superscript attached to \"PRO-DSC\" is undefined—please clarify if this denotes some missing information or is simply a typographical error. \n3. Regarding the second row of Table 2, the configuration is described as utilizing only the textual modality. However, the corresponding task is image clustering. Does this imply that image clustering is performed directly using the image representations from the pre-trained CLIP model, without any fine-tuning? \n4. Regarding Figure 4, the analysis of hyperparameter sensitivity, which is currently conducted on only two datasets, may not be sufficient to draw general conclusions about the model's robustness. To provide a more comprehensive evaluation, it is essential to extend this analysis to include all datasets used in the study, as different datasets may exhibit varying sensitivities to hyperparameter changes.\n5. For zero-shot classification, the authors utilize the known categories of the datasets to construct the dictionary, which contains the embeddings of prompts “A photo of class” extracted by a pretrained CLIP text encoder. And the image embeddings are also sourced from the pretrained CLIP model. When using Eq. (11) to obtain the textual counterpart for each image embedding, the “zero-shot” scenario seems to be broken since the model gains prior knowledge of the test categories.\n6. How about generating text captions for the images using MLLMs and then encoding the image/caption pairs into the embedding space using CLIP?",
"questions": "See Weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-25T20:52:38",
"modification_date": "2025-11-12T12:44:15",
"review_url": "https://openreview.net/forum?id=Ba5hOI2SkF&noteId=bcUbxENNWN",
"license": "CC BY 4.0"
}
] | |
aX3E6LirK5 | https://openreview.net/forum?id=aX3E6LirK5 | pFedMMA: Personalized Federated Fine-Tuning with Multi-Modal Adapter for Vision-Language Models | 4.5 | 3.5 | [
6,
4,
4,
4
] | [
4,
4,
4,
2
] | 4 | [
"Multi-Modal Adapter",
"Personalized Federated Fine-Tuning",
"Few-Shot Learning of Vision Language Models"
] | Vision-Language Models (VLMs) like CLIP have demonstrated remarkable generalization in zero- and few-shot settings, but adapting them efficiently to decentralized, heterogeneous data remains a challenge. While prompt tuning has emerged as a popular parameter-efficient approach in personalized federated learning, existing methods often sacrifice generalization in favor of personalization, struggling particularly on unseen classes or domains. In this work, we propose pFedMMA, the first personalized federated learning framework that leverages multi-modal adapters for vision-language tasks. Each adapter contains modality-specific up- and down-projection layers alongside a globally shared projection that aligns cross-modal features. Our optimization strategy allows clients to locally adapt to personalized data distributions while collaboratively training the shared projection to improve global generalization. This design is also communication-efficient, as only the shared component is exchanged during communication rounds. Through extensive experiments across eleven datasets, including domain- and label-shift scenarios, we show that pFedMMA achieves state-of-the-art trade-offs between personalization and generalization, outperforming recent federated prompt tuning methods. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=aX3E6LirK5 | 2025-09-18T17:43:15 | 4 | [
{
"id": "NILAgvFwdI",
"forum": "aX3E6LirK5",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11070/Reviewer_VEYb",
"reviewer_name": "Reviewer_VEYb",
"rating": 6,
"confidence": 4,
"soundness": 4,
"contribution": 2,
"presentation": 3,
"summary": "The paper designs an algorithm (pFedMMA) to personalize Vision-Language Models in a federated learning setup to adapt to client-side data heterogeneity. The design is based on client-specific multi-modal parallel adapters with each adapter's cross-modal shared projection layer aggregated for generalization under federated learning. The paper presents extensive empirical evidence (under data heterogeneity, label-shift, and feature-shift) that pFedMMA improves the generalization-personalization trade-off compared to personalized federated prompt tuning methods. The adapters are restricted to higher layers of the image and text encoders of the VLM to keep communication cost of aggregating the shared projection layers in check while capturing a large proportion of the full potential of the design w.r.t. generalization and personalization.",
"strengths": "(S1) The paper is very well-organized and well-written. It is delightfully easy to read. Intuitions from related work are provided in several places for proper contextualization. The problem, algorithm design, and experimental setups are all well motivated. Results and intuition are well communicated.\n\n(S2) Experiments and metrics presented provide good quality empirical evidence to support the claims in the paper. A diversity of datasets and experimental conditions covering data heterogeneity, label-shift, and feature-shift have been considered. The results communicate that a demonstrated improvement is achieved by pFedMMA in the generalization-personalization trade-off in federated learning of VLMs for several datasets.",
"weaknesses": "(W1) While related work is mostly well cited, I believe that one relevant paper [FedDAT] is missing. Contributions of this manuscript should be contextualized and differentiated w.r.t. this reference. [FedDAT] Chen, H., Zhang, Y., Krompass, D., Gu, J., & Tresp, V. (2024). FedDAT: An Approach for Foundation Model Finetuning in Multi-Modal Heterogeneous Federated Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(10), 11285-11293.\n\n(W2) Lines 461-462: Based on Fig. 4, one can only comment about evolution of personalization accuracy, not generalization. To support the analogous generalization claim, a plot of the base-to-novel generalization vs communication round is needed.\n\n(W3) Line 357: It would be good to have results with one more optimizer (Muon or Adam) besides SGD to ensure that the conclusions hold true on change of optimizer. Results can be in the appendix, but referenced & commented on in the main body.\n\nThings to improve the paper that did not impact the score:\n- Lines 363-365: It would be good to have a cited reference for the use of \"base-to-novel generalization\" metric.\n- Section 4.3: Among all the studies presented, only the \"Adapting Variant Options for PFL\" is an ablation study. The others are hyperparameter choice experiments.",
"questions": "(Q1) Lines 18-20 - \"In this work, we propose pFedMMA, the first personalized federated learning framework that leverages multi-modal adapters for vision-language tasks.\" I feel that this is a bit too strong of a claim, since several elements of the design already appear in the related works - a) [FedDAT] incorporates adapters in FL, and b) FedPGP deals with generalization-personalization trade-offs. Is it more apt to state this work as an advancement (with explicit clarifications on what parts are advancements)? Same comment for Lines 86-88.\n\n(Q2) Would pFedMMA translate well to state-of-the-art privacy and security aware enhancements to the aggregation method? Some explicit commentary on this aspect is warranted in the paper's main body (or in the appendix and referenced in the main body).\n\n(Q3) Lines 264-265: Should this aggregation formula be changed when number of samples varies across clients? How are the empirical results generated for large differences in number of samples?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-06T07:47:51",
"modification_date": "2025-11-12T12:37:28",
"review_url": "https://openreview.net/forum?id=aX3E6LirK5&noteId=NILAgvFwdI",
"license": "CC BY 4.0"
},
{
"id": "Sox5rrPEe2",
"forum": "aX3E6LirK5",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11070/Reviewer_Tmrt",
"reviewer_name": "Reviewer_Tmrt",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper proposes pFedMMA, a personalized federated learning framework for Vision-Language Models (VLMs), which for the first time incorporates multi-modal adapters into federated fine-tuning. The proposed architecture includes modality-specific up- and down-projection layers along with a globally shared projection layer. During training, all components are updated locally, while only the shared projection is aggregated globally. Extensive experiments across multiple datasets demonstrate that pFedMMA achieves a SOTA trade-off between personalization and generalization.",
"strengths": "1. pFedMMA effectively introduces multi-modal adapters into personalized federated learning, balancing personalization and generalization. It addresses the poor generalization of existing prompt-tuning methods on unseen classes.\n\n2. The asymmetric training mechanism, which aggregates only the shared projection layer, reduces communication costs while retaining modality-specific up- and down-projections locally to adapt to local data distributions.\n\n3. Through extensive evaluation across diverse data heterogeneity scenarios, pFedMMA is shown to surpass prior prompt-based PFL techniques in generalization across both domains and categories, without compromising its personalization strength.",
"weaknesses": "1. Although communication cost is reduced, the total number of trainable parameters introduced by pFedMMA is significantly larger than mainstream prompt-tuning methods, increasing local computational and memory burdens, which may not be friendly to resource-constrained devices.\n\n2. Despite achieving the best harmonic mean (HM) performance, pFedMMA shows noticeably lower local accuracy than pFedMoAP on several datasets (e.g., Flowers102 and DTD), indicating that its personalization capability is sacrificed in certain scenarios. The overall performance is sensitive to dataset distributions and lacks stability.\n\n3. In the domain generalization experiments on DomainNet and Office-Caltech10, the experiments do not include federated baselines explicitly developed for domain or feature shift scenarios, which weakens the credibility of their claims regarding domain generalization capability.",
"questions": "1. The paper claims that the shared projection layer improves generalization to unseen classes, but it does not explain why this structure effectively generalizes to semantic categories completely absent during training.\n\n2. The motivation for using harmonic mean (HM) as the main evaluation metric, rather than arithmetic mean, is not sufficiently justified. Moreover, no references are provided to support the use of HM for evaluating the balance between personalization and generalization.\n\n3. The main contribution appears to be a direct adaptation of the centralized MMA to the federated setting, with the added strategy of aggregating only the shared projection layer globally. This raises concerns about limited novelty.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T16:20:39",
"modification_date": "2025-11-12T12:37:29",
"review_url": "https://openreview.net/forum?id=aX3E6LirK5¬eId=Sox5rrPEe2",
"license": "CC BY 4.0"
},
{
"id": "uWk04pGT1z",
"forum": "aX3E6LirK5",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11070/Reviewer_quYy",
"reviewer_name": "Reviewer_quYy",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The paper proposes pFedMMA, a novel personalized federated learning (PFL) framework for fine-tuning vision-language models (VLMs) like CLIP in decentralized, heterogeneous settings. It introduces multi-modal adapters with modality-specific down- and up-projection layers and a shared projection layer, inserted into upper transformer blocks of both image and text encoders. Clients locally update all adapter components to adapt to their data distributions, while only the shared projection is globally aggregated via FedAvg, balancing personalization and generalization with communication efficiency.",
"strengths": "Comprehensive Evaluation: The study spans diverse heterogeneity scenarios (label shifts via non-overlapping classes, feature shifts via multi-domain datasets like DomainNet), using Dirichlet partitioning for realistic non-IID data. Testing extensive datasets, two backbones (ViT-B/16, ViT-B/32), and few-shot regimes, provides robust evidence of applicability and interpretability.\n\nEfficiency and Scalability: As a parameter-efficient fine-tuning (PEFT) method, it freezes the VLM backbone, training only lightweight adapters. The focus on shared projection aggregation reduces communication costs.",
"weaknesses": "Unverified Cross-Modal Alignment: The core claim of achieving cross-modal consistency via the shared projection lacks rigorous validation. The parallel adapter design with a shared layer assumes modality interaction without explicit mechanisms (e.g., attention or fusion gates), and no quantitative evidence (e.g., cosine similarity, t-SNE visualizations) confirms reduced modality gaps or alignment under federated heterogeneity. \n\nInsufficient Motivation and Problem Framing: The motivation relies on prompt-based PFL methods, sacrificing generalization for personalization, but lacks deep analysis on why adapters inherently outperform prompts or why a hybrid prompt-adapter approach is not explored. Baseline selection is biased toward prompt methods, omitting adapter-based methods such as FedCLIP (Lu et al., 2023).",
"questions": "Please see the weaknesses above.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-26T12:17:34",
"modification_date": "2025-11-12T12:37:30",
"review_url": "https://openreview.net/forum?id=aX3E6LirK5¬eId=uWk04pGT1z",
"license": "CC BY 4.0"
},
{
"id": "LTlOEvyjcX",
"forum": "aX3E6LirK5",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11070/Reviewer_742x",
"reviewer_name": "Reviewer_742x",
"rating": 4,
"confidence": 2,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper proposes pFedMMA for personalized federated learning (PFL) on vision–language models (VLMs). The core idea is to adopt a Multi-Modal Adapter structure, aggregating only the shared projection on the server while keeping the up/down projections as local personalized parameters. This aims to balance personalization vs. generalization under limited communication cost.",
"strengths": "* The method is intuitive and easy to implement.\n* It has an advantage in terms of communication efficiency.",
"weaknesses": "1. While the approach is effective for personalized federated VLMs, I have concerns about novelty: the work largely looks like replacing the backbone with a Multi-Modal Adapter in a standard FL pipeline. The substantive difference from prior PEFT/Adapter + FL lines is not sufficiently quantified.\n2. The paper emphasizes applicability in out-of-distribution (OOD) scenarios, which is closely related to Federated Domain Generalization (FedDG). However, related work is not discussed in depth and experiments do not compare against FedDG-style algorithms also targeting federated VLMs (e.g., PLAN [1]).\n\n```\nReference\n[1] Shuai Gong, Chaoran Cui, Chunyun Zhang, Wenna Wang, Xiushan Nie, and Lei Zhu. Federated domain generalization via prompt learning and aggregation. arXiv:2411.10063, 2024.\n```",
"questions": "1. Why does pFedMMA achieve only 9.26% on the Amazon domain, significantly below FedPGP’s 20.34%? Please provide an explanation/diagnosis.\n2. Could you add comparisons of vision-only / text-only / both-sides shared projection to localize the main information-sharing channel and potential side effects?\n3. Please include the HM formula and a brief rationale for choosing it in the main text.\n4. Fairer baselines beyond prompts: Have you considered other VLM fine-tuning approaches such as CLIP-Adapter + FL (and more general PEFT baselines), given that current baselines are mostly prompt-based?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-22T19:22:37",
"modification_date": "2025-11-12T12:37:31",
"review_url": "https://openreview.net/forum?id=aX3E6LirK5¬eId=LTlOEvyjcX",
"license": "CC BY 4.0"
}
] | |
ZmGfCj1n2P | https://openreview.net/forum?id=ZmGfCj1n2P | A robust PPG foundation model using multimodal physiological supervision | 4 | 4 | [
4,
4,
4
] | [
4,
4,
4
] | 3 | [
"Photoplethysmography (PPG)",
"health",
"ubiquitous computing",
"foundation model",
"wearables",
"representation learning",
"multimodal",
"self-supervised learning",
"time series",
"physiology"
] | Photoplethysmography (PPG), a non-invasive measure of changes in blood volume, is widely used in both wearable devices and clinical settings. Although recent work has explored PPG foundation models using large-scale intensive care unit (ICU) datasets, these efforts often assume the need for clean and high-quality signals. In contrast, we argue that the inherent noise and variability in ICU datasets can be harnessed to build more robust and generalizable representations. To address this, we propose a PPG foundation model that leverages accompanying electrocardiogram and respiratory signals in ICU datasets to select contrastive samples during pretraining. Our approach allows the model to retain and learn from noisy PPG segments, improving robustness without requiring multimodal inputs at inference. Our model, pretrained on 3x fewer subjects than existing state-of-the-art approaches, achieves performance improvements of up to 36\% in classification and 42\% in regression on 14 out of 15 diverse downstream tasks, including stress and heart rate prediction. Our results demonstrate that multimodal supervision can leverage clinical data to enable the development of robust, unimodal foundation models for both clinical and consumer-level data. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=ZmGfCj1n2P | 2025-09-19T22:51:20 | 3 | [
{
"id": "xvsg1yN8mg",
"forum": "ZmGfCj1n2P",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19085/Reviewer_ye6c",
"reviewer_name": "Reviewer_ye6c",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper proposes a PPG foundation model pretrained on noisy ICU PPG by using ECG and respiration only during pretraining to compute HR, RMSSD, breathing rate, breathing amplitude, and RVT. A rank-n-contrast loss then pulls PPG embeddings closer when the targets are similar, aiming to learn noise-robust representations.",
"strengths": "Clear, physiologically grounded supervision: Using ECG/RESP to form targets avoids brittle PPG morphology extraction, yet preserves unimodal inference. The target set (HR, RMSSD, RR, RA, RVT) is plausible for 10-s windows and filtered for physiological ranges.",
"weaknesses": "- The contribution of the method is trivial and uses the common infoNCE loss. The authors only consider the physiological parameters to decide the positive and negative pairs.\n- The robustness of HR/RMSSD/RR/RA/RVT estimation on 10-s windows (detectors, failure handling, thresholds) is crucial; more explicit error rates/quality filters would strengthen claims about the stability of the metric space.\n- The final backbone checkpoint is chosen by VitalVideos systolic BP probe performance for practicality. This could inadvertently bias toward that dataset/task; a small sensitivity analysis (random/earliest/best-avg across a subset) would help.\n- The final backbone is ~28.8M params vs PaPaGei’s ~5–5.7M. The architecture ablation shows gains even when the architecture differs, but fully disentangling capacity from supervision remains tricky without equal-capacity baselines for all comparisons.\n- No fine-tuning heads or end-to-end adaptation are reported; it’s unclear how the model behaves under modest supervised finetuning, which is typical in practice.",
"questions": "Please see the weakness part.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-07T15:38:12",
"modification_date": "2025-11-12T15:05:07",
"review_url": "https://openreview.net/forum?id=ZmGfCj1n2P¬eId=xvsg1yN8mg",
"license": "CC BY 4.0"
},
{
"id": "LV7HqoG7kG",
"forum": "ZmGfCj1n2P",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19085/Reviewer_zKep",
"reviewer_name": "Reviewer_zKep",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces a PPG foundation model pretrained on multimodal ICU data, where synchronized ECG and respiratory signals guide contrastive learning to derive robust and generalizable PPG representations. Compared with prior single-modality approaches (e.g., PaPaGei), the model leverages noise and signal variability more effectively, achieving substantial performance gains on 14 out of 15 downstream tasks across six unseen datasets.",
"strengths": "1. This study leverages ECG and respiratory signals to enhance PPG representation learning, demonstrating a solid understanding of the physiological mechanisms underlying PPG.\n2. It evaluates both cross-subject and within-subject settings, providing a comprehensive view of the model’s generalization performance.\n3. The proposed model achieves significant performance improvements over PaPaGei across multiple downstream tasks.",
"weaknesses": "Key Concerns:\n1. The study compares the proposed model only with PaPaGei, which limits the comprehensiveness of the evaluation. It is recommended to include additional self-supervised or pretrained baselines (e.g., SimCLR, PulsePPG) to strengthen the experimental validity.\n2. The paper claims that the inherent noise and variability in ICU data can be leveraged to improve model robustness. However, the proposed pretraining approach relies on five contrastive objectives computed from synchronized ECG and respiratory signals. It remains unclear how the authors ensure that these auxiliary signals remain reliable when the PPG signal quality deteriorates (e.g., due to patient motion). As a result, this claim may be somewhat overstated, since the method appears to focus more on multimodal assistance for PPG representation learning rather than on directly addressing signal quality issues.\n3. Although the proposed model performs well on downstream tasks, concerns remain regarding its cross-dataset generalization. The model is pretrained solely on the MIMIC dataset, while larger and more diverse publicly available datasets such as MESA or VitalDB could have been incorporated to build a more comprehensive pretraining corpus. Although the authors mention that sleep or anesthesia data may contain relatively stationary signals, incorporating more heterogeneous datasets could capture a wider range of physiological patterns and improve the model’s robustness and generalization.\n4. The method employs only derived features from ECG and respiratory signals instead of the raw multimodal inputs, which may constrain the model’s ability to capture complex temporal dependencies.\n\nMinor Concerns: \n1. Although the number of subjects used is one-third of that in PaPaGei, the total number of data segments is comparable, thus the claim of higher data efficiency is not entirely justified.\n2. 
The method employs only derived features from ECG and respiratory signals instead of the raw multimodal inputs, which may constrain the model’s ability to capture complex temporal dependencies.\n3. Table 1 does not report the number of subjects and total samples for each downstream dataset, which makes it somewhat difficult to fully assess data balance and generalization stability.",
"questions": "1. Could the authors clarify whether the auxiliary signals (ECG and respiration) remain reliable for contrastive supervision when the PPG signal quality is low, for instance due to patient motion?\n2. Would it be possible to include additional self-supervised baselines such as SimCLR, BYOL, or PulsePPG for a more comprehensive comparison?\n3. Could the authors elaborate on the rationale for using derived metrics instead of full multimodal inputs, and whether any experiments were conducted to validate this design choice?\n4. It would be helpful to provide statistics on the number of subjects and total samples for each downstream dataset, to better contextualize task scale and model performance.\n5. Incorporating downstream tasks related to cardiac arrhythmias (e.g., atrial fibrillation) could further demonstrate the model’s ability to handle abnormal cardiac patterns.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T21:11:15",
"modification_date": "2025-11-12T15:05:08",
"review_url": "https://openreview.net/forum?id=ZmGfCj1n2P¬eId=LV7HqoG7kG",
"license": "CC BY 4.0"
},
{
"id": "uDDB9noDXN",
"forum": "ZmGfCj1n2P",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19085/Reviewer_PDvh",
"reviewer_name": "Reviewer_PDvh",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper proposes a robust PPG foundation model that leverages ECG and respiratory signals to guide sample selection during contrastive learning. The paper emphasizes that by integrating complementary biosignal modalities, the proposed approach effectively mitigates the limitations of unimodal, morphology-based contrastive targets, resulting in substantially improved robustness, generalization, and downstream task performance.",
"strengths": "- The paper’s focus on multi-modal supervision is well-motivated, particularly given that health is inherently multi-modal while most existing foundation models remain unimodal.\n- In general, the ablation and case studies offer meaningful insights and contribute to understanding the model’s behavior and robustness.\n- The paper is clearly written, well-organized, and easy to follow.",
"weaknesses": "**Method:** The proposed approach employs five key physiological metrics from ECG and RESP: HR, RMSSD, RR, RA, and RV to guide multi-modal supervision during pre-training, enabling the model to learn corresponding representations in the latent space. However, many of the downstream tasks evaluated (e.g., HR, BP, and activity recognition) are directly related to these same input measures. This raises concerns about potential task overlap and limited generalization. How can this design choice be justified? If the input features and downstream tasks are closely aligned, it becomes unclear whether the learned representations truly generalize beyond the pre-training objectives. Perhaps, consider evaluations on tasks that are novel and not previously explored. \n\n**Experimental Design**: \nThere are two major concerns with the experimental design: **(1) the choice of baseline** and **(2) the use of derived metrics for training**.\n\nFirst, the experiments rely heavily on PaPaGei as the sole baseline. While PaPaGei is a reasonable point of comparison, it cannot be the only one. The key issue is that PaPaGei is trained exclusively on PPG signals, whereas the proposed model leverages multi-modal supervision, including ECG and respiratory signals. Comparing a unimodal foundation model with a multi-modal supervised one introduces an inherent imbalance and may not provide a fair assessment of performance gains.\n\nSecond, the rationale for using derived physiological metrics (HR, RMSSD, RR, RA, and RVT) during pre-training needs stronger justification, especially since these metrics are closely correlated with several downstream tasks. It remains unclear whether their inclusion in pre-training offers benefits beyond what could be achieved by incorporating them at the linear probing stage alongside the learned embeddings. More fundamentally, how would a simple baseline model trained directly on these five derived features perform on the same downstream tasks? 
Addressing this question would help clarify the true contribution of the proposed approach.\n\n**Minor:** \nFigure 3 — The comparison of UMAP plots for heart rate may not be meaningful, as the proposed approach uses heart rate as part of its pre-training objectives, whereas the baseline models do not. Consequently, it is expected that the proposed model exhibits a clearer gradient structure in the latent space, which limits the interpretive value of this comparison.\n\n**Ablation Study:** It would be valuable to analyze the individual contribution of each computed metric derived from the co-recorded signals. Specifically, examining the effect of using HR, RMSSD, RR, RA, and RVT during pre-training, either by incorporating one metric at a time or by comparing groups of metrics (e.g., ECG-based vs. respiratory-based). This could provide deeper insights into which modalities or features most influence model performance.\n\n**Open-Source**: The models and code are not publicly released, even though the proposed approach is trained and evaluated on open-source datasets. This limits the reproducibility of the work and weakens its overall contribution, particularly given that other open-source PPG foundation models already exist [1, 2].\n\nOverall, the main contribution of this work appears to be the inclusion of additional biosignal modalities during contrastive pre-training, which improves the performance of a PPG foundation model. While this idea is interesting, the contribution is not sufficiently significant, as prior studies have already explored unimodal versus multimodal representations in similar contexts [3, 4]. Importantly, given the limitations in the experimental design, methodological justification, and lack of open-source release discussed above, I lean toward a weak reject recommendation.\n\n[1] Pillai, A., Spathis, D., Kawsar, F., & Malekzadeh, M. (2024). Papagei: Open foundation models for optical physiological signals. 
_arXiv preprint arXiv:2410.20542_.\n\n[2] Saha, M., Xu, M. A., Mao, W., Neupane, S., Rehg, J. M., & Kumar, S. (2025). Pulse-ppg: An open-source field-trained ppg foundation model for wearable applications across lab and field settings. _Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies_, _9_(3), 1-35.\n\n[3] Zhou, Y., Khasentino, J., Yun, T., Biradar, M. I., Shreibati, J., Lai, D., ... & Hormozdiari, F. (2025). Applying multimodal AI to physiological waveforms improves genetic prediction of cardiovascular traits. _The American Journal of Human Genetics_.\n\n[4] Ezzameli, K., & Mahersia, H. (2023). Emotion recognition from unimodal to multimodal analysis: A review. _Information Fusion_, _99_, 101847.",
"questions": "- Are there other ECG and RESP metrics that can be used during contrastive pre-training?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T15:18:42",
"modification_date": "2025-11-12T15:05:08",
"review_url": "https://openreview.net/forum?id=ZmGfCj1n2P¬eId=uDDB9noDXN",
"license": "CC BY 4.0"
}
] | |
YyPZPrPjQD | https://openreview.net/forum?id=YyPZPrPjQD | TableMaster: A Recipe to Advance Table Understanding with Language Models | 4.666667 | 3 | [
6,
4,
4
] | [
3,
3,
3
] | 3 | [
"Table Understanding",
"Table Reasoning",
"Large Language Model",
"Natural Language Processing"
] | Tables serve as a fundamental format for representing structured relational data. While current language models (LMs) excel at many text-based tasks, they still face challenges in table understanding due to the complex characteristics of tabular data, such as their structured nature. In this paper, we aim to enhance LMs for improved table understanding. We identify four key challenges: 1) difficulty in locating target data, 2) deficiency in table semantics, 3) numerical inaccuracies in textual reasoning, and 4) semantic inflexibility in symbolic reasoning. To address these issues, we propose TableMaster, a recipe and comprehensive framework that integrates multiple solutions to overcome these obstacles. TableMaster first extracts relevant table content and verbalizes it with enriched semantic context. Additionally, we introduce adaptive reasoning, a flexible approach that dynamically adjusts between textual and symbolic reasoning, tailoring the reasoning process to each query. Extensive analyses and experiments demonstrate our findings and the effectiveness of TableMaster. On the WikiTQ dataset, TableMaster achieves an accuracy of 78.13% using GPT-4o-mini, surpassing existing baselines. | TableMaster analyzes the challenges of table understanding with language models and provides a comprehensive recipe and framework to address them. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=YyPZPrPjQD | 2025-09-19T11:42:08 | 3 | [
{
"id": "0KGpPCYL1a",
"forum": "YyPZPrPjQD",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15626/Reviewer_MV4i",
"reviewer_name": "Reviewer_MV4i",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper introduces TableMaster, a novel framework designed to enhance how Large Language Models (LLMs) understand tabular data. The research addresses four key obstacles: difficulty in data localization, semantic deficiency, numerical inaccuracies, and inflexible symbolic reasoning. TableMaster employs a multi-faceted strategy, beginning by isolating relevant data into a \"table-of-focus\" and then using \"verbalization\" to enrich it with semantic context. The framework integrates program-aided reasoning and features an adaptive mechanism that dynamically balances textual and symbolic approaches based on the query. This method has achieved state-of-the-art performance on the WikiTQ and TabFact benchmarks, notably reaching 78.13% accuracy on WikiTQ with GPT-4o-mini, significantly surpassing existing baselines.",
"strengths": "1. The paper demonstrates strong empirical rigor through comprehensive experiments across multiple benchmark datasets and baselines. The thorough ablation studies effectively validate the contributions of individual components, providing clear evidence of the method's effectiveness.\n2. The authors developed a robust system for analyzing and extracting information from general tabular data, with carefully designed modules compatible with various language model backbones.",
"weaknesses": "1. Several key experiments are missing from the main paper, such as the analysis of adaptive reasoning. Including these results in the main body would strengthen the paper.\n2. The framework is thoughtfully designed and comprehensive, but many of its sub-tasks have been extensively studied, with closely related methods already proposed. As a result, the incremental novelty appears limited. \n3. In the related work section the connections of similar methods to this work are not clear. It would help to position the framework relative to each major line of work (what is shared, what differs, and why those differences matter), and to articulate the specific gaps in prior methods that this paper addresses.",
"questions": "Please refer to the weaknesses part.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T17:35:42",
"modification_date": "2025-11-12T13:38:10",
"review_url": "https://openreview.net/forum?id=YyPZPrPjQD¬eId=0KGpPCYL1a",
"license": "CC BY 4.0"
},
{
"id": "8r3qbKILjp",
"forum": "YyPZPrPjQD",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15626/Reviewer_sJMb",
"reviewer_name": "Reviewer_sJMb",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 1,
"presentation": 2,
"summary": "This paper addresses table understanding with language models by identifying four key challenges: (i) difficulty in locating target data, (ii) deficiency of table semantics, (iii) numerical inaccuracy in textual reasoning, and (iv) semantic inflexibility in symbolic reasoning. The authors propose TableMaster, a comprehensive framework that integrates multiple solutions including table-of-focus construction, table verbalization, and adaptive reasoning that dynamically switches between textual and symbolic approaches. The method is evaluated on WikiTQ, TabFact, and FetaQA datasets, showing improvements over existing baselines.",
"strengths": "- The paper provides a thorough empirical analysis of challenges in table understanding, with systematic experiments examining the impact of table size, verbalization, and different reasoning approaches.\n- TableMaster achieves notable improvements across various large-scale LLMs (GPT-3.5-turbo, GPT-4o-mini, LLaMA-3 70B).",
"weaknesses": "- While the integration is well-executed, most individual components (sub-table extraction, table verbalization, program-aided reasoning) have been proposed in prior literature. The novelty primarily lies in their combination rather than in introducing fundamentally new techniques.\n- The section 3-4 can be condensed to leave more space for experiment and analysis. Currently, most the results are in the appendix.\n- The evaluation focuses exclusively on large-scale models. It remains unclear how TableMaster performs on smaller (7–8B) models or what minimal model capabilities are required for it to function effectively.",
"questions": "1. What are the minimum model capabilities required for TableMaster? Have you tested on 7-13B parameter models? At what model size does the framework start to break down?\n2. Given that each component is well-established, what specific insights or contributions does TableMaster provide beyond engineering integration?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T12:14:20",
"modification_date": "2025-11-12T13:38:10",
"review_url": "https://openreview.net/forum?id=YyPZPrPjQD¬eId=8r3qbKILjp",
"license": "CC BY 4.0"
},
{
"id": "jniY3cUyfG",
"forum": "YyPZPrPjQD",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15626/Reviewer_oMBW",
"reviewer_name": "Reviewer_oMBW",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper proposes TableMaster, a framework designed to enhance language models' table understanding capabilities. The paper identifies several challenges: difficulty in locating target data, semantic deficiency in tables, numerical inaccuracies in textual reasoning, and semantic inflexibility in symbolic reasoning. To address these, TableMaster integrates multiple solutions including table-of-focus construction, table verbalization, program-aided reasoning, and adaptive reasoning that dynamically selects between textual and symbolic approaches. Extensive experiments on WikiTQ, TabFact, and FetaQA datasets show that TableMaster achieves good performance.",
"strengths": "1. The paper provides a structured analysis of four fundamental challenges in table understanding, with each challenge directly linked to a targeted solution. \n\n2. TableMaster integrates multiple techniques (table-of-focus, verbalization, adaptive reasoning) into a pipeline. \n\n3. The paper conducts extensive experiments across diverse datasets and LLMs.",
"weaknesses": "1. The core contributions of the paper are primarily engineering-focused. The paper lacks novel advancements in LM architecture or reasoning mechanisms specific to table understanding.\n\n2. Experiments are concentrated on clean, structured tables from specific domains. The framework's performance on real-world noisy tables, hierarchical tables remains insufficiently explored, for example, the BIRD dataset. \n\n3. The dynamic strategy selection relies on LM judgment without robust error handling. \n\n4. While efficiency is discussed, no actual latency measurements or comparison with simpler baselines are provided. The multi-step process (verbalization, reconstruction, adaptive reasoning) likely introduces significant inference time overhead.",
"questions": "please refer to the weakness",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T12:02:22",
"modification_date": "2025-11-12T13:38:11",
"review_url": "https://openreview.net/forum?id=YyPZPrPjQD¬eId=jniY3cUyfG",
"license": "CC BY 4.0"
}
] |
IFsvqHlMPq | https://openreview.net/forum?id=IFsvqHlMPq | Multimodal Masked Polymer Autoencoder for Unified Polymer Informatics | 3 | 3.5 | [
6,
2,
2,
2
] | [
3,
4,
3,
4
] | 4 | [
"Polymer Informatics",
"Multimodal Learning",
"Scientific discovery",
"Data-driven polymer development",
"Multi-view representation learning"
Recent advances in large-scale sequence modeling have opened new opportunities for polymer informatics, enabling both property prediction from structures and inverse design of structures from desired properties. Most existing approaches, however, model these tasks as separate mappings, limiting their flexibility and robustness. We propose a multimodal representation learning framework that unifies diverse polymer informatics tasks within a single model. Our approach treats each property or structural element as an individual submodality and introduces an information-theoretic objective that balances informativeness across arbitrary subsets of modalities. The resulting Multimodal Masked Polymer Autoencoder (MMPAE) serves as an end-to-end foundation model, supporting both cross-modal generation and retrieval. Extensive experiments on large polymer datasets show that MMPAE not only surpasses strong task-specific baselines under realistic missing-value conditions, but also provides a flexible platform for diverse downstream applications within a unified architecture. | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=IFsvqHlMPq | 2025-09-20T15:20:29 | 4 | [
{
"id": "KU9uUdjusD",
"forum": "IFsvqHlMPq",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24109/Reviewer_VuYc",
"reviewer_name": "Reviewer_VuYc",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The authors present MMPAE, which is an autoencoder that allows to perform property prediction as well as structure generation\nby unifying diverse polymer informatics. They also perform extensive experiments on large polymer datasets, showing the superiority of MMPAE. \n\nThe paper is well written and addresses important research problems on Polymer with novel technical details.\nAdditionally, the importance of MMPAE is empirically validated with several tasks including property prediction and polymer inverse design.\n\nTwo drawbacks are that no standard deviations are shown in the experimental results. Also, I am unsure of the importance of the metrics (i.e., validity, similarity and RMSE) used for the inverse design task, because these metrics do not always mean that MMPAE successfully design polymers of practical importance and because polymers generated by MMPAE are not shown at all.\n\nConsidering the pros and cons of the current paper, I recommend for a weak acceptance.",
"strengths": "Technical novelty of MMPAE\n\nEmpirical results showing the promise",
"weaknesses": "Analysis that does not include standard deviations\n\nUnclarity of the significance of the metrics and the results on inverse design from a viewpoint of practice (e.g., actual usefulness of the polymer structures generated by MMPAE)",
"questions": "What is a rational of the evaluation protocol of masking PSMILES for property prediction? Do you have any practical scenario of materials discovery where this protocol is useful?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T19:37:52",
"modification_date": "2025-11-12T18:22:20",
"review_url": "https://openreview.net/forum?id=IFsvqHlMPq¬eId=KU9uUdjusD",
"license": "CC BY 4.0"
},
{
"id": "4FCEjfH0Eq",
"forum": "IFsvqHlMPq",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24109/Reviewer_QqP4",
"reviewer_name": "Reviewer_QqP4",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "In this work, the authors propose MMPAE, a multimodal autoencoder for polymer property prediction and inverse design. MMPAE is built on masked token prediction with regularization to balance information across different modalities. In particular, MMPAE includes an Transformer-based encoder, an autoregressive decoder for PSMILES generation, and an MLP for property prediction. The work also includes empirical study on property prediction and inverse design. It also shows advantage in cases with missing input compared with baseline models.",
"strengths": "The proposed method attempts to build a unified model for both property prediction and inverse design, which is innovative.",
"weaknesses": "1. Some details of empirical study is unclear. \n2. The performance of proposed method doesn't show significant benefit over other baselines. For instance, in inverse design, MMPAE is on par with inverse Transformer as shown in Figure 4. \n3. The model is trained and evaluated on PolyOne. However PolyOne is a synthetic dataset where properties are predicted by machine learning model, which makes it unclear about how the proposed method works on real-world tasks.",
"questions": "1. In Figure 3, the missing input experiments may not faithfully reflect the scenario in reality. Though random masking PSMILES tokens may show the robustness of proposed method. In practice, it's usually not certain tokens that are missing but rather some chemical fragments that are totally missed. \n2. $\\beta$ in Eq.9 is set to 1000 which is very large. Does it mean that infoNCE loss dominates the training for MMPAE-InfoNCE? \n3. Increasing training data size beyond 5M doesn't give much gain, does it mean the model is constraint by its capacity? Will a larger model be further improved with more data?\n4. Can HMoE and InfoNCE be combined in training MMPAE?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T03:52:07",
"modification_date": "2025-11-12T18:22:20",
"review_url": "https://openreview.net/forum?id=IFsvqHlMPq¬eId=4FCEjfH0Eq",
"license": "CC BY 4.0"
},
{
"id": "xxjwiC50lY",
"forum": "IFsvqHlMPq",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24109/Reviewer_yNFG",
"reviewer_name": "Reviewer_yNFG",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "This work proposes the MMPAE that can unify cross-modal generation/ retrieval of polymer structures and their properties. This model uses a hierarchical MOE to weight unimodal and joint contributions to overcome the flaws from straightforward masked reconstruction training. They mainly have two novelties. One is the hierarchical MOE encoder with a mutual information objective to encourage balanced informativeness across inputs. The other one is the framework that treats each feature as an individual submodality, then uses a transformer for cross-model generation and retrieval. Experiment results show MMPAE outperforms existing multimodal approaches, as well as the strong task-specific baselines in some tasks.",
"strengths": "1. The theoretical proofs in the work are comprehensive. In Section 3, they have proved the limitation in optimizing with the reconstruction objective using mutual information-related methods. Also, they have inferred the final training objective in equation 9 step by step. I admire the author's mathematical knowledge.\n2. This work explored the combination of PSMILES and properties and gained good performance compared with baselines.",
"weaknesses": "1. I am throwing out a question here: the submodality defined in this paper corresponds to the polymor patch or each property. It is like tokens in the transformer models; it is a sequential data format that represents their modality altogether. In my opinion, this cannot be claimed as a multimodal model, as it does not handle two types of input for the same entity; there is no explicit fusion or alignment design for modalities.\n2. For the model structure, it uses a typical auto-encoder transformer model, and uses the CLS token for the two downstream tasks. The novelty may only be slight changes to the loss design based on the MI, which are not significant enough.\n3. In the Figure 2 training approach, the upper CLS is from unmasked embeddings from both X and Y, the lower CLS is from masked ones (after encoding by the transformer). The contrastive learning is conducted between these two CLS tokens, which will lead the model to learn an objective -- how to predict CLS better with less information. In this way, the more input missing, the larger the margin should appear between MMPAE and baselines; however, we did not see this pattern in the results, Figures 3 and 4. \n4. There are no ablation experiments in this paper, which can not prove the methods and novelties proposed in Section 3. \n5. The writing of this paper can be improved. The font in the figure is not consistent with the main text. The motivation claim in the introduction is not well organized and clear, which takes a longer time for readers to follow.",
"questions": "1. Is it a common way to define one property as one modality in the polymer field? What's the rationale for defining in this way?\n2. The method in Figure 2 shows using only the CLS to reconstruct the PSMILES, instead of using a list of tokens in Figure 1. How to decode the whole PSMILES with only one CLS token?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-24T12:48:30",
"modification_date": "2025-11-12T18:22:20",
"review_url": "https://openreview.net/forum?id=IFsvqHlMPq¬eId=xxjwiC50lY",
"license": "CC BY 4.0"
},
{
"id": "8j1JACXKD9",
"forum": "IFsvqHlMPq",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24109/Reviewer_QhSn",
"reviewer_name": "Reviewer_QhSn",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 1,
"summary": "This paper proposes a multimodal representation framework that treats polymer structures and numerical properties within the shared latent space.\nIn particular, each attribute is treated as an individual submodality, while hierarchical mixture-of-experts reweighting and the InfoNCE alignment are further incorporated to improve performance.\nExtensive experiments demonstrate that the proposed method achieves strong performance across various tasks.",
"strengths": "1. Through bridging structure and property within a single end-to-end model, MMPAE provides a flexible foundation for polymer informatics.\n2. To overcome the limitations of the straightforward masked reconstruction objective, this paper further incorporates adaptive unimodal weighting and explicit cross-modal alignment, thereby yielding more balanced representations and improving cross-modal task performance.\n3. Extensive experiments on various tasks demonstrate the effectiveness of the proposed method.",
"weaknesses": "1. The claimed “multimodal” setup is very unconvincing. \n * MMPAE only uses polymer structures and numerical properties as input, closer to multi‑view learning than multimodality. \n * The multimodal methods mentioned in the paper integrate complementary sources of information, such as 2D graphs and 3D geometries, which differ fundamentally from the setup in this work. Therefore, referencing these studies to justify MMPAE is inappropriate.\n\n2. The experimental section is very problematic.\n * For the dataset used in this work, polymers within this dataset are generated by enumeratively combining chemical fragments extracted from synthesized polymers, and their properties are predicted by PolyBERT [1] rather than experimental measurements or high‑fidelity simulations. Such a dataset may be suitable for pretraining purposes (as in PolyBERT itself), but should not be used for benchmarking or evaluating model performance, as it cannot provide reliable or meaningful validation of the proposed method.\n * For the baselines used in this work, it's necessary to include more recent and competitive methods, such as MMPolymer [2], Uni-Poly [3], PolyNC [4], and MCP [5]. The current baselines (e.g., Transpolymer and PolyBERT) are outdated, making the comparisons unconvincing and the claimed performance improvements questionable.\n * For the settings used in this work, randomly masking the input is quite questionable and scientifically unsound. For example, when PSMILES tokens are randomly masked, the ground‑truth property of the original polymer may no longer hold, as the modified polymer no longer corresponds to the same chemical structure. In this case, the experimental results lose their validity and persuasiveness.\n * In addition, it is necessary to provide the original numerical experimental results rather than only presenting visualized figures, as the latter cannot fully support quantitative.\n\n[1] Kuenneth C, Ramprasad R. 
polyBERT: a chemical language model to enable fully machine-driven ultrafast polymer informatics[J]. Nature communications, 2023, 14(1): 4099.\n\n[2] Wang F, Guo W, Cheng M, et al. Mmpolymer: A multimodal multitask pretraining framework for polymer property prediction[C]//Proceedings of the 33rd ACM International Conference on Information and Knowledge Management. 2024: 2336-2346.\n\n[3] Huang Q, Li Y, Zhu L, et al. Unified multimodal multidomain polymer representation for property prediction[J]. npj Computational Materials, 2025, 11(1): 153.\n\n[4] Qiu H, Sun Z Y. On-demand reverse design of polymers with PolyTAO[J]. npj Computational Materials, 2024, 10(1): 273.\n\n[5] Zhang, Yipeng, Cong Shen, and Kelin Xia. \"Multi-Cover Persistence (MCP)-based machine learning for polymer property prediction.\" Briefings in Bioinformatics 25.6 (2024): bbae465.",
"questions": "Could you provide specific algorithms illustrating the training and inference process of MMPAE, MMPAE+HMoE, and MMPAE+InfoNCE? \nProviding these algorithms would help better illustrate the distinctions among the three variants.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-21T20:06:06",
"modification_date": "2025-11-12T18:22:21",
"review_url": "https://openreview.net/forum?id=IFsvqHlMPq¬eId=8j1JACXKD9",
"license": "CC BY 4.0"
}
] | |
sfe6KFGRlD | https://openreview.net/forum?id=sfe6KFGRlD | Dual-Stage Frequency-based Denoising for Generative Recommendation | 4 | 4 | [
2,
4,
6,
4
] | [
4,
5,
3,
4
] | 4 | [
"Generative Recommendation",
"Frequency-Domain Modeling",
"Denoising",
"Attention Mechanism"
] | Generative recommendation has emerged as a promising frontier in modeling the complex and continuously evolving nature of user preferences. However, its practical effectiveness is often undermined by a fundamental yet overlooked vulnerability: its sensitivity to the pervasive high-frequency sequential noise inherent in raw user interaction data from accidental clicks or transient interests. This paper introduces a paradigm shift that explicitly performs frequency-domain modeling to effectively isolate and suppress sequential noise, while further addressing the challenge of frequency-domain sparsity. Specifically, we propose TONE (Two-stage Optimized deNoising for gEnerative recommendation), a generative framework built around a principled two-stage denoising strategy. In the first stage of item codebook construction, we apply ResGMM (Residual Gaussian Mixture Model) to better fit clustering boundaries, thereby alleviating semantic noise and establishing a robust foundation. In the second stage, on the generative model side, we employ a learnable Gaussian kernel to filter context-specific noise. Furthermore, we redesign the residual frequency-domain attention mechanism with explicit separation of real and imaginary components, and introduce a learnable matrix to counteract attention collapse induced by Fourier energy concentration, while preserving expressiveness. Empirical results demonstrate that TONE achieves the new state-of-the-art performance over strong baselines on three widely used benchmarks, achieving notable improvements on the Amazon Beauty dataset, with gains of 8.93\% in Recall@20 and 8.33\% in NDCG@20. Extensive experiments confirm that explicit frequency-domain denoising is key to unlocking a new level of performance and robustness in generative recommendation. The source code is available at \url{https://anonymous.4open.science/r/TONE-9E07/}. | generative models | https://openreview.net/pdf?id=sfe6KFGRlD | 2025-09-19T00:41:57 | 4 | [
{
"id": "SW8kOeLN56",
"forum": "sfe6KFGRlD",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13114/Reviewer_2L5c",
"reviewer_name": "Reviewer_2L5c",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper proposes TONE, a two-stage framework for generative retrieval that addresses noise issues through frequency-domain modeling. The approach tackles two types of noise: (1) semantic noise in item representations caused by incomplete/misleading metadata, addressed via ResGMM clustering in the codebook construction stage, and (2) high-frequency sequential noise from accidental clicks or transient interests, filtered using frequency-domain attention mechanisms. The authors introduce several technical components including a Complex Residual Frequency Attention (CRFA) module with separated real/imaginary components and a rank-preserving matrix to prevent attention collapse. Experimental results on three benchmarks show improvements over existing methods.",
"strengths": "S1: The paper clearly identifies and distinguishes between semantic noise and high-frequency sequential noise, providing concrete examples that effectively motivate the proposed approach.\n\nS2: The two-stage approach comprehensively addresses both semantic and sequential noise with specific technical components for each challenge.\n\nS3: The paper includes comprehensive ablation studies showing the contribution of each component and further visualization of attention patterns.\n\nS4: The paper is generally well-written with good visual aids to explain the technical approach.",
"weaknesses": "W1. Misleading framing and/or overclaims. Problem definition (Section 3) is a standard sequential recommendation formulation (see e.g., GRU4Rec). Given identical formulation, \"Generative Recommendation\" in the title appears to be a pure overclaim to attract attention. Even TIGER, which uses a similar SemanticID formulation, more accurately titled their work as \"Generative Retrieval\" recognizing the limitations of the paradigm.\n\nW2. Questionable experimental validity and missing baselines. TONE can be separated into codebook improvements and attention module improvements. However:\n\n- TONE's codebook construction method doesn't clearly improve upon baselines. Residual K-means is used by multiple generative retrieval papers building on top of TIGER in 2025, and per Table 2, the proposed ResGMM leads to marginal, likely non s.s. gains (0.0250 vs 0.0249) over this popular baseline on the *single* dataset the authors evaluated. \n\n- TONE omitted multiple recent papers when comparing sequential modeling approaches in its 2nd stage (Section 4.2). eg just checking ICML'25 accepted papers (https://arxiv.org/abs/2502.13581), on the commonly used Beauty dataset, HSTU (ICML'24) achieves 0.0389 NDCG@10, SPM-SID (2024) 0.0399 NDCG@10, and ActionPiece (ICML'25) 0.0424 NDCG@10 -- all three outperforming TIGER and TONE. \n\n- The ActionPiece paper reports SASRec achieving 0.0318 NDCG@10 on Beauty, much higher than the 0.0205 reported here, suggesting potential issues with baseline implementations.\n\nW3. No original theoretical contributions: All theoretical analyses are directly quoted from other papers including Wang et al., 2022a and Yue et al., 2025. The paper provides no original theoretical analysis of why ResGMM specifically helps with semantic noise or formal guarantees about the frequency-domain filtering.\n\nW4. 
Insufficient justification for complexity: The CRFA module is extremely complex with multiple stages (DFT, separation, independent processing, IDFT, residual connections) but lacks clear justification for each component's necessity. CRFA's marginal improvements (see W2) don't seem to justify this complexity.",
"questions": "- Could you explain the discrepancy between your reported baseline results and those in recent papers eg ActionPiece (ICML'25)?\n\n- What is the statistical significance of the 0.0250 vs 0.0249 NDCG@5 improvement of ResGMM over Residual K-means?\n\n- Why were recent 2024/2025 baselines like ActionPiece, HSTU, SPM-SID, Residual-Kmeans for SID, etc not included in the comparison for the Beauty dataset?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-06T16:03:48",
"modification_date": "2025-11-12T13:03:58",
"review_url": "https://openreview.net/forum?id=sfe6KFGRlD¬eId=SW8kOeLN56",
"license": "CC BY 4.0"
},
{
"id": "XZbCep76dt",
"forum": "sfe6KFGRlD",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13114/Reviewer_5qjW",
"reviewer_name": "Reviewer_5qjW",
"rating": 4,
"confidence": 5,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper proposes a TONE method, aiming to solve the problem of effectively isolating and suppressing the high-frequency sequence noise and semantic noise generally existing in user interaction data within the generative recommendation framework. By introducing a Residual Gaussian Mixture Model and a Residual Frequency-Domain Attention mechanism, the authors design a model capable of better fitting cluster boundaries and filtering high-frequency noise. Experiments conducted on three datasets evaluate the impact of different components, and the results show that TONE is superior to existing state-of-the-art models.",
"strengths": "S1. The paper is well-written, with clear articulation. \n\nS2. The paper outlines the drawbacks of the traditional generative recommendation framework and mitigates its limitations by designing different adaptive modules. \n\nS3. The paper provides a theoretical analysis of the TONE module, proving the rationality of introducing the corresponding module in the paper.",
"weaknesses": "W1. There are many garbled characters in the figures of the paper, such as the sequence number in Figure 1 and Figure 2.\n\nW2. The paper mentions many frequency-domain based models in the related work, but none of them are used as baseline models for comparison.\n\nW3. The authors mentioned the existence of semantic noise in the project and attempted to alleviate this problem through the Residual Gaussian Mixture Model. However, they did not use specific experiments to demonstrate that the semantic noise was truly suppressed.\n\nW4. Although the authors provided an anonymous link to the open-source code, there is no specific implementation code inside. This cannot indicate that the paper's method has good reproducibility.\n\nW5. I am very curious why the performance of the baseline methods used in the paper from the past two years are all worse than the performance of the TIGER method, which is the main comparison subject in the paper, such as TokenRec[1] and ContRec[2]. However, the performance shown in their papers is better than TIGER's performance. \n\n[1] Qu H, Fan W, Zhao Z, et al. TokenRec: Learning to Tokenize ID for LLM-Based Generative Recommendations[J]. IEEE Transactions on Knowledge and Data Engineering, 2025.\n\n[2] Qu H, Fan W, Lin S. Generative Recommendation with Continuous-Token Diffusion[J]. arXiv preprint arXiv:2504.12007, 2025.",
"questions": "All raised questions and suggestions have been pointed out in the \"Weaknesses\" section of our paper. These questions are for reference only:\n\nQ1. Why were the frequency-domain based models mentioned in the related work not used as baseline models for comparison?\n\nQ2. How to prove that the semantic noise existing in the project is truly alleviated by the Residual Gaussian Mixture Model?\n\nQ3. Why is the performance of the baseline methods used in the paper from the past two years all worse than the performance of the TIGER method, which is the main comparison subject in the paper, but the performance shown in their papers is better than TIGER's performance?\n\nQ4. Since the code link was attached in the paper, why is the content empty?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T16:37:17",
"modification_date": "2025-11-12T13:03:58",
"review_url": "https://openreview.net/forum?id=sfe6KFGRlD¬eId=XZbCep76dt",
"license": "CC BY 4.0"
},
{
"id": "03Itrv6a0g",
"forum": "sfe6KFGRlD",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13114/Reviewer_85NY",
"reviewer_name": "Reviewer_85NY",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper addresses the sensitivity of generative recommendation models to high-frequency noise in user interaction sequences by proposing TONE, a dual-stage denoising framework. Stage I employs a Residual Gaussian Mixture Model (ResGMM) for codebook construction to mitigate item semantic noise. Stage II introduces a frequency-enhanced module comprising Adaptive Gaussian Filtering, Complex Residual Frequency Attention, and Rank-Preserving Matrix Learning to explicitly filter high-frequency sequential noise. Experiments on three benchmarks show that TONE achieves state-of-the-art performance, with notable gains of over 8% in Recall@20 and NDCG@20 on the Amazon Beauty dataset.",
"strengths": "1. Originality: Proposes a novel dual-stage frequency-based denoising framework (TONE), which for the first time explicitly and systematically addresses both semantic noise and high-frequency sequential noise in generative recommendation. \n2. Quality: The methodology is rigorously designed, integrating multiple techniques like ResGMM, adaptive filtering, and complex attention. The experimental section is comprehensive, including comparisons with various baselines, detailed ablation studies, and parameter analysis. \n3. Clarity: The overall structure of the paper is logical. The abstract and introduction clearly state the motivations and contributions. The framework diagram (Figure 2) effectively illustrates the method's pipeline. \n4. Significance: If the results are fully reliable, the method provides a powerful new perspective for enhancing the robustness of generative recommendation and demonstrates significant performance improvements on multiple benchmarks, showing practical potential.",
"weaknesses": "1. Credibility of Experimental Results: The magnitude of performance improvement (e.g., +23.83% in NDCG@10 on Software) is exceptionally high, far exceeding typical improvements observed in the field. This strongly suggests the need for extremely rigorous scrutiny of every detail of the experimental setup, including data preprocessing, train/val/test splits, implementation and hyperparameter tuning of baselines. The authors need to provide more convincing evidence to rule out any potential experimental bias. \n2. Method Complexity: TONE introduces multiple complex components (ResGMM, AGF, CRFA, RPML). While ablation studies validate their effectiveness, the overall framework appears heavy, with high computational cost and model complexity, potentially hindering its ease of deployment in practical systems. The computational efficiency analysis (appendix) shows a ~25-30% increase in training time, which is non-trivial. \n3. Interpretability of Frequency-Domain Methods: The paper lacks in-depth analysis or visualization of how the frequency-domain operations concretely affect the item sequence representations and model decisions. For instance, which behaviors are identified as \"high-frequency noise\" and filtered out? What patterns does the frequency-domain attention learn? This limits the reader's understanding of the method's internal mechanisms.",
"questions": "1. To strengthen the credibility of the experimental results, could the authors provide more detailed evidence to ensure that all baseline models (especially TIGER) were compared fairly under identical experimental conditions (including identical dataset splits, preprocessing pipelines, evaluation scripts, and their own thoroughly tuned hyperparameters)? Have you considered running multiple experiments with different random seeds to report the mean and variance of performance? \n2. What are the computational and memory complexities of the Complex Residual Frequency Attention (CRFA) compared to standard self-attention? Is the method still feasible for processing very long user sequences? \n3. The initialization of the Rank-Preserving Matrix (RPML) with Gaussian noise seems simple. Have the authors tried other initialization strategies? Is there any experimental evidence regarding its sensitivity to the initialization method or the hyperparameter α? How does it evolve during training? \n4. Could you provide a concrete case study or visualization showing the changes in a real user sequence before and after processing by AGF and CRFA, for instance, indicating which interactions were identified as \"noise\" and effectively suppressed by the model?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T16:49:24",
"modification_date": "2025-11-12T13:03:59",
"review_url": "https://openreview.net/forum?id=sfe6KFGRlD¬eId=03Itrv6a0g",
"license": "CC BY 4.0"
},
{
"id": "VooGUDUwoX",
"forum": "sfe6KFGRlD",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13114/Reviewer_odwb",
"reviewer_name": "Reviewer_odwb",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper reinterprets the sequential recommendation problem from a frequency-domain perspective and introduces a model named TONE. The authors argue that conventional self-attention implicitly acts as a low-pass filter that mainly preserves DC components, reducing its ability to model high-frequency sequential variations. To mitigate this limitation, they design three modules: (1) Adapted Gaussian Filtering (AGF), which functions as a learnable band-pass filter to suppress high-frequency noise; (2) Complex Residual Frequency Attention (CRFA), which incorporates phase information in the complex domain for time–frequency fusion; and (3) Rank-Preserving Matrix Learning (RPML), which compensates for the low-rank nature of frequency-domain attention through a learnable full-rank correction matrix.\n\nWhile the paper presents an interesting attempt to formalize self-attention from a frequency-domain viewpoint, its theoretical justification contains simplifications that limit the rigor of the analysis. The proposed modules are based on well-established filtering and regularization techniques, offering limited novelty beyond their integration. Experiments on three benchmark datasets (Beauty, Software, and LastFM) show moderate improvements over prior frequency-aware baselines such as FMLP-Rec, FEARec, and FamouSRec, though the evaluation scope remains narrow. Overall, the paper provides a clear formulation and readable presentation but contributes only incremental insights in terms of theory and model design.",
"strengths": "1. Conceptual originality – The paper introduces a novel perspective by interpreting self-attention in the frequency domain, bringing signal-processing concepts into sequential and recommendation modeling.\n2. Simple and clear design – The three proposed modules (AGF, CRFA, RPML) are built with straightforward and intuitive operations, making the overall model easy to follow and understand.\n3. Consistent empirical improvements – Across the three datasets presented by the authors (Beauty, Software, and LastFM), the method demonstrates steady performance gains over prior frequency-based models, supporting its basic effectiveness.",
"weaknesses": "1. Bias in Experimental Design – The paper evaluates performance only on three datasets: Beauty, Software, and LastFM. In particular, Beauty focuses on skincare and cosmetic products, where users tend to continue purchasing items suited to their skin type once identified. As a result, this dataset is dominated by long-term (low-frequency) behavioral patterns, making it especially favorable to the proposed approach of suppressing short-term (high-frequency) noise. However, short-term (high-frequency) variations are not necessarily noise in all domains. In domains where users’ transient interests or responses to trends play an essential role in prediction, such high-frequency fluctuations can represent key behavioral signals. Therefore, while the proposed approach may be effective for domains like Beauty, it could be disadvantageous in settings where such dynamic changes should be actively modeled rather than suppressed. Moreover, two datasets (Sports and Toys) used in TIGER are omitted, leaving the evaluation insufficient to verify generality across diverse behavioral patterns.\n\n2. Simplicity and Lack of Originality in Module Design – The paper proposes a frequency-aware architecture to alleviate the limitations of self-attention, but its three components—AGF, CRFA, and RPML—are straightforward combinations of techniques already well established in prior research. The design remains at a compositional level rather than offering structural or theoretical innovation. Furthermore, applying these existing methods does not involve notable technical challenges or domain-specific constraints, making it difficult to recognize this as a meaningful contribution.\n\n3. Insufficient Theoretical Analysis – The theoretical justification provided throughout the paper lacks sufficient mathematical grounding and rigor. 
For example, Lemma 1 demonstrates that the high-frequency component converges to zero, but it does not provide any basis for claiming that the low-frequency component is preserved. To substantiate such a claim, one must either show that the low-frequency component does not converge to zero, or mathematically prove that it decays much more slowly than the high-frequency component. If both converge to zero, the relative attenuation rate between the two must be quantitatively compared to establish the dominance of low-frequency information. However, no such quantitative verification or empirical analysis is provided, leaving the conclusion of low-frequency preservation insufficiently supported.\n\n4. Reproducibility Concerns – Although the paper states that all code has been released, the anonymous repository contains only a README file and no executable code. This discrepancy between the reproducibility statement and the actual repository content undermines the credibility of the results. Moreover, such omission can be perceived as an intentional workaround, exploiting the fact that few reviewers check the code directly, and thus cannot be viewed favorably.",
"questions": "The main questions directly correspond to the weaknesses discussed above.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T19:30:21",
"modification_date": "2025-11-12T13:03:59",
"review_url": "https://openreview.net/forum?id=sfe6KFGRlD¬eId=VooGUDUwoX",
"license": "CC BY 4.0"
}
] | |
nFTdyfz4fC | https://openreview.net/forum?id=nFTdyfz4fC | Exploring Aleatoric Uncertainty in Object Detection via Vision Foundation Models | 3.333333 | 3 | [
4,
4,
2
] | [
3,
3,
3
] | 3 | [
"Aleatoric uncertainty",
"Data uncertainty",
"Object detection",
"Data-centric learning"
] | Datasets collected from the open world unavoidably suffer from various forms of randomness or noiseness, leading to the ubiquity of aleatoric (data) uncertainty. Quantifying such uncertainty is particularly pivotal for object detection, where images contain multi-scale objects with occlusion, obscureness, and even noisy annotations, in contrast to images with centric and similar-scale objects in classification. This paper suggests modeling and exploiting the uncertainty inherent in object detection data with vision foundation models and develops a data-centric reliable training paradigm. Technically, we propose to estimate the data uncertainty of each object instance based on the feature space of vision foundation models, which are trained on ultra-large-scale datasets and able to exhibit universal data representation. In particular, we assume a mixture-of-Gaussian structure of the object features and devise Mahalanobis distance-based measures to quantify the data uncertainty. Furthermore, we suggest two curial and practical usages of the estimated uncertainty: 1) for defining uncertainty-aware sample filter to abandon noisy and redundant instances to avoid over-fitting, and 2) for defining sample adaptive regularizer to balance easy/hard samples for adaptive training. The estimated aleatoric uncertainty serves as an extra level of annotations of the dataset, so it can be utilized in a plug-and-play manner with any model. Extensive empirical studies verify the effectiveness of the proposed aleatoric uncertainty measure on various advanced detection models and challenging benchmarks. | probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.) | https://openreview.net/pdf?id=nFTdyfz4fC | 2025-09-18T23:27:22 | 3 | [
{
"id": "LNR2MC82PW",
"forum": "nFTdyfz4fC",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12647/Reviewer_ahvk",
"reviewer_name": "Reviewer_ahvk",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes using vision foundation models (specifically SAM) to estimate aleatoric uncertainty in object detection datasets. The authors fit class-conditional Gaussian distributions in SAM's feature space and compute Mahalanobis distance-based uncertainty scores. These scores are then used for: (1) filtering noisy/redundant training samples, and (2) uncertainty-aware entropy regularisation during training. Experiments on MS-COCO and BDD100K show modest improvements across several detectors.",
"strengths": "1. This is an important problem. Characterising data uncertainty in object detection is a valuable and under-explored problem, particularly given the prevalence of noisy annotations and occluded objects.\n2. The approach is practical. The plug-and-play nature of the method is appealing - uncertainty scores can be computed offline and used with any detector.\n3. Experimental evaluation is comprehensive. The paper includes experiments across multiple detectors (YOLOX, Deformable DETR, FCOS, DINO) and datasets, showing consistent improvements.\n4. Figure 1 provides compelling visual evidence that the uncertainty scores align with human intuition about sample difficulty.",
"weaknesses": "The core technical contribution is applying existing Mahalanobis distance-based uncertainty estimation (Van Amersfoort et al. 2020, Mukhoti et al. 2023) to SAM features for object detection. This is primarily an application rather than a methodological contribution. The technique of modelling feature distributions with Gaussians and computing Mahalanobis distances is well-established in OOD detection and uncertainty quantification literature.\n\nSeveral aspects of the theoretical justification are weak. The paper does not convincingly argue why Mahalanobis distance in SAM's feature space specifically measures aleatoric rather than epistemic uncertainty. Many \"hard\" examples (e.g. occluded objects) could be considered epistemic from a model's perspective. SAM was trained for class-agnostic segmentation, not uncertainty estimation. Why should its feature space be the right representation for quantifying data uncertainty in object detection? The claim of \"implicit semantic knowledge\" needs stronger empirical validation. Using a shared covariance matrix across all classes (Eq. 2) is a strong assumption that is not well justified. Different object classes likely have different feature variances and correlations.\n\nThe paper treats \"hard samples\" and \"uncertain samples\" as equivalent, but these are distinct concepts. A hard but correctly labeled occluded object is challenging but not necessarily uncertain. Filtering such samples (Table 3-4) may remove valuable training data. The three-way categorisation into \"easy/hard/noisy\" is subjective and not rigorously defined.\n\nImprovements are modest and in some cases unvalidated. Performance gains are small (e.g., +0.42% AP for YOLOX-S in Table 2) and no statistical significance testing is provided. The \"noisy sample filtering\" experiments (Table 3) don't verify that filtered samples are actually noisy. The improvements could simply result from removing hard samples that the model overfits to. 
Manual verification of filtered samples is needed.\n\nBaselines are weak and some comparisons are missing. The \"constant entropy\" baseline (Table 2) is weak. More thorough comparison with focal loss variants is needed. There is no comparison with other uncertainty quantification methods for object detection or learning-based uncertainty estimation approaches and there is limited comparison with other vision foundation models (only DINOv2 briefly in ablation).\n\nThere are also some more minor issues:\n- The process of extracting features from bounding boxes in SAM's feature maps lacks detail. How are features aggregated for objects at different scales?\n- While mentioned as \"negligible,\" the computational cost of extracting features for all training objects is not quantified.\n- The log transformation and normalisation in Eq. 4 appear arbitrary. Why logarithm specifically? The quantile threshold p and regularisation coefficient beta need more thorough sensitivity analysis.\n- Section 4.1's claim that samples with similar uncertainty are \"redundant\" lacks justification. Similar uncertainty doesn't imply similar features or redundancy.\n- Which SAM encoder layer is used? Does this choice matter?\n- There could be an ablation on class-specific vs. shared covariance matrices",
"questions": "Besides responding to the above listed weaknesses the following additional questions could be responded to:\n\n1. Can you provide evidence that filtered samples in Table 3 are actually mislabeled rather than just hard?\n2. How does uncertainty score correlate with actual detection errors on validation data?\n3. Why is shared covariance better than class-specific covariances?\n4. Have you compared with other pre-trained feature spaces (e.g., supervised ImageNet features)?\n5. What is the computational overhead of feature extraction for large-scale datasets?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T21:35:06",
"modification_date": "2025-11-12T12:58:07",
"review_url": "https://openreview.net/forum?id=nFTdyfz4fC¬eId=LNR2MC82PW",
"license": "CC BY 4.0"
},
{
"id": "MCbKrs0rpB",
"forum": "nFTdyfz4fC",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12647/Reviewer_SzDy",
"reviewer_name": "Reviewer_SzDy",
"rating": 4,
"confidence": 3,
"soundness": 1,
"contribution": 1,
"presentation": 3,
"summary": "This paper focuses on exploring aleatoric uncertainty in object detection using vision foundation models. It proposes a method that leverages SAM for feature extraction, models uncertainty with a mixture of Gaussian distributions and Mahalanobis distance, and applies the uncertainty scores for sample filtering and loss regularization. Evaluated on datasets like MS-COCO and BDD100K with detectors such as YOLOX and Deformable DETR, it demonstrates some effectiveness but lacks significant innovation, as the core uncertainty quantification and application strategies are mostly adaptations of existing techniques. The performance improvements are marginal, and the experimental comparisons and theoretical analyses are insufficient.",
"strengths": "1. This paper addresses a practical need in object detection by focusing on aleatoric uncertainty in complex scenarios and follows a clear and reproducible technical path using mature models like SAM and common detectors.\n2. This paper conducts relatively comprehensive experiments across datasets, detector architectures, and backbones.",
"weaknesses": "1. It lacks significant innovation as core methods are adaptations of existing techniques, which can be found in “Mahalanobis Distance for OOD Detection (Lee et al., NIPS 2018)”.\n\n2. It has marginal performance improvements, and insufficient experimental comparisons and theoretical analyses. Specifically, although the authors experimented extensively, the key performance indicator (AP) shows only marginal gains, typically about 0.5% AP over the baseline. This small improvement does not provide sufficient evidence to justify the value and necessity of introducing a new, purportedly innovative method.",
"questions": "1. Why does it lack comparisons with existing object detection uncertainty quantification methods? In my opinion, the paper claims to address Aleatoric Uncertainty, yet fails to compare its approach against commonly used uncertainty methods applicable to object detection, such as the classic Monte Carlo Dropout (MC-Dropout) or Deep Ensembles. This comparison is necessary to demonstrate the unique advantages of the proposed method in quantifying uncertainty for object detection.\n\n2. The paper fails to provide in-depth analysis or ablation studies concerning object scale variance, which is a critical challenge in object detection. The authors must quantify and report whether their uncertainty scores accurately reflect the intrinsic uncertainty of small and ambiguous objects. The absence of this scale-based robustness analysis significantly undermines the method's practical persuasiveness in real-world detection scenarios.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T23:00:52",
"modification_date": "2025-11-12T12:58:07",
"review_url": "https://openreview.net/forum?id=nFTdyfz4fC¬eId=MCbKrs0rpB",
"license": "CC BY 4.0"
},
{
"id": "9wDzTK2Mkt",
"forum": "nFTdyfz4fC",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12647/Reviewer_2soU",
"reviewer_name": "Reviewer_2soU",
"rating": 2,
"confidence": 3,
"soundness": 1,
"contribution": 1,
"presentation": 2,
"summary": "This paper addresses the aleatoric uncertainty that naturally arises in open-world datasets, which often contain randomness and noise. Focusing on object detection which involves occlusion, scale variation, and noisy labels, the authors propose a data-centric reliable training paradigm using vision foundation models (VFMs). The authors model object features from VFMs with a mixture-of-Gaussians and computing Mahalanobis distance-based measures, showing the possible practical usages: uncertainty-aware sample filtering and sample-adaptive regularization.",
"strengths": "- The authors test their algorithm in various scenarios using two base models, two datasets, and three VFMs to demonstrate its robustness.\n- The authors clearly shows the effectiveness of uncertainty-aware use cases in the evaluation section.",
"weaknesses": "1. Lack of novelty in motivation and solution: Aleatoric uncertainty in object detection has already been explored for quite some time, and for essentially the same reasons. Since what to solve has already been addressed in various ways, this paper instead emphasizes how to solve the problem (by leveraging vision foundation models) which also does not sound particularly novel.\n\n2. Overly verbose and bottom-up writing style (below are two examples):\n\t•\tLines 73–80: SAM is not the main contribution. Including details such as “11 million” or “1 billion” in the Introduction is unnecessary, as this section should focus on introducing the main storyline. You could simply mention that you aim to leverage SAM’s strong understanding capability through its high-resolution feature maps.\n\t•\tLines 80–90: The high-level solution description is missing. What exactly enables you to leverage SAM’s capability? What motivates the observation in Line 84? How do you derive the proposed distance metrics? There is no clear high-level explanation of how you conceptualize the problem or what your core intuition or hypothesis is.\n\n\n3. The authors assume a mixture-of-Gaussians structure for the object features. Is this assumption valid in object detection scenarios (unlike image classification, which involves more centric and similar-scale objects)?\n\n4. Citation format: Use ~\\citep or ~\\citet instead of ~\\cite to improve readability.\n\n5. I am not convinced by the author's design choices: Using large VFMs within Deformable DETR and YOLO architectures seems computationally expensive (VFM size >> DETR/YOLO size). The authors essentially borrow the capabilities of VFMs in the training logic without addressing the fundamental limitations of object detection models. It is unclear whether this approach is truly practical or beneficial.",
"questions": "1. Section 4.1: Regarding the filtering of noisy and redundant objects: could you provide failure cases (e.g., false positives and false negatives in filtering)? If possible, please also include qualitative examples of both successful and failed cases.\n2. Computation Overhead: Please discuss the overhead introduced by per-object uncertainty score computation.\n3. Did you observe any notable differences when applying this uncertainty-aware algorithm to both anchor-based object detection models (e.g., Deformable DETR) and anchor-free models (e.g., YOLO)?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T23:09:02",
"modification_date": "2025-11-12T12:58:07",
"review_url": "https://openreview.net/forum?id=nFTdyfz4fC¬eId=9wDzTK2Mkt",
"license": "CC BY 4.0"
}
] | |
ZOV3697bZZ | https://openreview.net/forum?id=ZOV3697bZZ | Towards Generalizable Implicit In-Context Learning with Attention Routing | 5 | 3.25 | [
6,
4,
4,
6
] | [
3,
3,
3,
4
] | 4 | [
"In-context Learning",
"Large Language Model",
"Transfer Learning"
] | Implicit in-context learning (ICL) has newly emerged as a promising paradigm that simulates ICL behaviors in the representation space of Large Language Models (LLMs), aiming to attain few-shot performance at zero-shot cost. However, existing approaches largely rely on injecting shift vectors into residual flows, which are typically constructed from labeled demonstrations or task-specific alignment. Such designs fall short of utilizing the structural mechanisms underlying ICL and suffer from limited generalizability. To address this, we propose In-Context Routing (ICR), a novel implicit ICL method that internalizes generalizable ICL patterns at the attention logits level. It extracts reusable structural directions that emerge during ICL and employs a learnable input-conditioned router to modulate attention logits accordingly, enabling a train-once-and-reuse framework. We evaluate ICR on 12 real-world datasets spanning diverse domains and multiple LLMs. The results show that ICR consistently outperforms prior implicit ICL methods that require task-specific retrieval or training, while demonstrating robust generalization to out-of-domain tasks where existing methods struggle. These findings position ICR to push the boundary of ICL’s practical value. | We propose In-Context Routing, an implicit ICL method that steers attention logits for robust, generalizable few-shot performance at zero-shot cost. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=ZOV3697bZZ | 2025-09-18T05:29:13 | 4 | [
{
"id": "eikM4XmkYy",
"forum": "ZOV3697bZZ",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9862/Reviewer_sqZY",
"reviewer_name": "Reviewer_sqZY",
"rating": 6,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes In Context Routing (ICR) for implicit in context learning. Instead of adding demonstration tokens to the prompt or adding shift vectors in the residual stream, the method extracts Principal ICL Directions from multi domain explicit ICL runs by doing PCA at each layer. A small router then maps a new input to layer wise weights and per head gates. During inference the method adds a low rank input conditioned bias to the attention logits. The authors suggest a kernel view interpretation plus low rank reparameterization. Experiments on diverse datasets with several open models show consistent gains over other implicit ICL and few shot ICL and stronger OOD robustness.",
"strengths": "- Practical efficiency: No prompt length increase and no weight updates in the base model (compared to vanilla ICL). The added compute is low rank and local to the attention logits, which is friendly to deployment compared with long demonstrations or broad fine tuning.\n- Clear design shift in implicit ICL: The key novelty is the move from post hoc residual steering to structural routing at the attention logits. This places the intervention exactly where ICL mechanisms operate and turns implicit ICL into a problem of routing attention paths. This is a fresh axis that is different from LoRA that edits weights and from activation steering that edits residuals.\n- Empirical breadth and stability: The method wins against several implicit ICL baselines on both in domain and out of domain sets and shows fewer collapses below zero shot. It sometimes matches or beats few shot prompting while keeping zero shot latency and memory.",
"weaknesses": "- Data and supervision needs: Router training uses labeled data from several domains. The limits of generalization to tasks with new label spaces or to settings without labels are not fully explored. It remains unclear how far the train once and reuse promise extends.\n- Information usage in PID extraction: Using only the last token Q and K may underuse the rich structure inside demonstrations. The paper argues it is sufficient as an integration point, but alternative choices like pooling across several tokens or using attention rollouts could strengthen the claim.",
"questions": "- Why restrict PID extraction to the last token only. Have you tested using several recent tokens or a learned pooling over the demonstration region, and how would that affect out of domain robustness and interpretability?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T00:52:06",
"modification_date": "2025-11-12T12:22:13",
"review_url": "https://openreview.net/forum?id=ZOV3697bZZ¬eId=eikM4XmkYy",
"license": "CC BY 4.0"
},
{
"id": "S7acJetIc0",
"forum": "ZOV3697bZZ",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9862/Reviewer_7Mtp",
"reviewer_name": "Reviewer_7Mtp",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper introduces In-Context Routing (ICR), a novel approach to improve large language models' in-context learning capabilities without using explicit demonstration examples. ICR extracts generalizable patterns from multi-domain in-context learning by identifying Principal ICL Directions (PIDs) through PCA on attention representations. These patterns are applied via a learnable router that modulates attention logits based on input queries, enabling effective zero-shot inference. Unlike existing vector-based implicit ICL methods that inject task-specific vectors into residual streams, ICR operates at the attention mechanism level, providing better generalization. Experiments on 12 datasets show ICR consistently outperforms baselines, particularly excelling on out-of-domain tasks while maintaining computational efficiency comparable to zero-shot inference.",
"strengths": "1. ICR operates at the attention logits level rather than post-hoc residual stream injection, which is more aligned with how ICL fundamentally works through attention mechanisms\n2. The paper provides rigorous theoretical grounding using the Spiked Covariance Model and Davis-Kahan theorem to explain why PCA on multi-domain ICL bases can extract generalizable patterns. \n3. Instead of additive vector interventions, ICR modulates attention through low-rank modifications to query-key interactions. \n4. Novel use of PCA to extract reusable structural directions from cross-domain attention representations",
"weaknesses": "1. OOD Design Issues:\n The division into \"near-OOD\" and \"far-OOD\" seems subjective. For example, why is MRPC (paraphrase detection) considered \"near\" while CB (NLI) is \"far\"? Both involve sentence-pair understanding. The \"OOD\" tasks are still mostly classification/QA tasks from standard NLP benchmarks. True OOD would include fundamentally different task types (e.g., structured prediction, generation, mathematical reasoning). The paper trains on 5 diverse datasets (AGNews, SST-2, TREC, CSQA, PIQA) which already cover sentiment, QA, and classification. This makes the \"generalization\" less impressive since the model has seen similar task types during training.\n2. The technical contributions are relatively incremental: The core idea of routing attention through PCA-extracted directions is reasonable, but the execution lacks the technical depth and innovation expected for a top-tier venue. A stronger contribution would involve more sophisticated pattern extraction, adaptive routing mechanisms, or novel theoretical insights about ICL.\n3. The quality of PIDs heavily depends on the diversity and quality of initial ICL prompts, but no guidelines are provided for this critical step",
"questions": "1. The experiments only test on 7B-8B models. How does ICR scale to larger models (70B+) where ICL behavior might be fundamentally different?\n2. No analysis of how PID dimensionality (r) should scale with model size or task complexity\n3. Computational cost of extracting PIDs grows with the number of domains, but this overhead isn't thoroughly analyzed when compared with few-shot learning which require zero training but may cost at inference.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T14:24:50",
"modification_date": "2025-11-12T12:22:13",
"review_url": "https://openreview.net/forum?id=ZOV3697bZZ¬eId=S7acJetIc0",
"license": "CC BY 4.0"
},
{
"id": "UE6zIX0kYG",
"forum": "ZOV3697bZZ",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9862/Reviewer_J4bJ",
"reviewer_name": "Reviewer_J4bJ",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces ICR, which extracts Principal ICL Directions from attention and adaptively injects them into logits via a lightweight router. Experiments show ICR outperforms prior implicit ICL, remains stable on out-of-distribution tasks, and achieves strong efficiency. It offers a new paradigm with few parameters, zero-shot generalization, and cross-task reusability.",
"strengths": "- This paper introduces the new paradigm of attention routing, shifting implicit ICL from residual injection to low-rank bias at the logits level, demonstrating clear novelty.\n\n- It achieves consistent gains on open-source models such as Llama2, Qwen2.5, and Llama3.1, showing strong generality and reusability.",
"weaknesses": "1. The evaluation is limited to classification and reasoning tasks, lacking assessment on open-ended QA and long-context reasoning.\n\n2. Experiments are only conducted on 7B/8B models, without validation on larger-scale LLMs.\n\n3. The router relies solely on a fixed MiniLM encoder for query representations, without examining whether alternative encoders could affect routing quality and generalization.",
"questions": "see weaknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T23:05:40",
"modification_date": "2025-11-12T12:22:13",
"review_url": "https://openreview.net/forum?id=ZOV3697bZZ¬eId=UE6zIX0kYG",
"license": "CC BY 4.0"
},
{
"id": "Mk34n1Xbqp",
"forum": "ZOV3697bZZ",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9862/Reviewer_LNqB",
"reviewer_name": "Reviewer_LNqB",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This paper focuses on reconstructing the attention patterns of few-shot inputs on zero-shot settings. Specifically, this paper propose In-context Routing method, which adds a bias term to the attention logits generated in zero-shot inputs to reconstruct the attention patterns observed in few-shot scenarios. Extensive quantitative experiments demonstrate that the proposed method outperforms a wide range of modern baselines, especially on out-of-domain tasks.",
"strengths": "1. The authors propose a novel framework called attention routing, which enables automatic additive steering of attention logits, which can be broadly applicable across various scenarios. The attempt to explicitly control LLMs' behavior through a mechanistic understanding is highly revolutionary for the field of interpretability. This is my primary reason for recommending the acceptance of this paper.\n\n2. Based on the above attention routing framework, the authors further introduce the In-context Routing (ICR) method. Through extensive quantitative experiments on sufficiently diverse datasets and model types, the authors demonstrate that ICR outperforms multiple baselines.\n\n3. The analysis section provides insightful details about the proposed method, strengthening their claim that ICR provides generalizable attention shaping. In particular, they find that the reshaped attention scores can capture reasoning-oriented tokens, thereby confirming the soundness of the original motivation to ICR.",
"weaknesses": "1. ICR utilizes gradient-based training on relatively large datasets with several tricks to facilitate attention routing. Also, ICR introduces an external text encoder to calculate the two key control gates in the method, and optimizes on a complex loss function. This design may contradict the low-resource spirit of ICL and undermine the overall usability. Furthermore, to my knowledge, the authors did not discuss how the performance of this additional text encoder affects ICR, nor did they provide sufficiently convincing ablation results to confirm the effectiveness of each loss component (e.g., in line 2-4 of Table 4, ablating some loss terms does not harm the accuracy). I consider attention routing to be an elegant framework, but relying on a bulky auxiliary module seems less than ideal.\n \n At the same time, this raises concerns regarding the paper’s main results (Tables 1 and 2): many of the provided baselines (such as TV and FV) involve substantially lower computational costs than ICR, making the comparison somewhat unfair. Although the authors claim that ICR exhibits good generalization and reusability, I would like to see at least a comparison in terms of calculation cost to enhance the credibility of their results.\n \n Moreover, from another perspective, since ICR already uses gradient-based training, it can be reasonable to directly train $\\Delta \\mathbf{A}$. I hope the authors can include such an experiment to demonstrate that their manual selection of the $\\Delta \\mathbf{A}$ basis is not redundant.\n \n2. Mechanistically, the authors employ an external text encoder ($E(\\cdot)$) to predict two key gating units within the ICR framework. These gating units are closely related to the internal structure of the LLM (e.g., selecting the important heads, as the authors mentioned in Section 5.3). Therefore, a crucial question arises: do the $E(x)$ actually contain information about the LLM’s internal structure? 
Or is $E(x)$ merely irrelevant variables? A simple experiment could address this by ablating $E(x)$ into random vectors. If the former is the case, how is this information then extracted by the two parameters $\\theta$? This should be an interesting analysis, yet the authors skipped it.\n \n3. There are several writing issues that make the paper somewhat difficult to follow, but I believe that this does not significantly affect my overall judgment of the paper.\n \n 1. Line 52. “out-of-domain (OOD)” is ambiguous. You seem to mean that the query lies outside the distribution of the demonstrations, but another possible interpretation is that “the query lies outside the pre-training distribution”. Understanding this is crucial to getting your motivation, so I recommend clarifying it to eliminate ambiguity. Also, the specific experimental setup of Fig. 1 should be described (perhaps in the appendix).\n \n 2. Line 115. This paragraph is somewhat unclear. I don’t fully understand the causal link in “Such additive interventions cannot structurally control how information flows, and thus often remain tied to task-specific representations.” I can understand that using task vectors for steering cannot *explicitly* control the information flow (i.e., attention scores, although not absolutely, since injecting certain components into the attention query could indirectly alter the attention scores), but I don’t see how this leads to being “tied to task-specific representations.” If I have missed something, I apologize.\n \n 3. I suggest that the authors explain how each introduced symbol is grounded. For example, the symbol $\\alpha$ in Equation (3) is confusing, it isn’t clear until Sec. 3.2 introduce it as a parameter to be trained.",
"questions": "1. The authors seem to attribute all the benefits of ICL demonstrations to local attention effects within the query’s tokens (i.e., dynamically filtering task-relevant signals through attention scores). However, as far as I know, additional attention behaviors such as induction heads perform global attention operations from the demonstrations to the query. ICR clearly cannot reconstruct such attention patterns, since it is conducted under zero-shot inputs, yet their method still outperforms vanilla few-shot. This might prompt a new perspective on the mechanism of ICL. I would like to ask how the authors interpret this phenomenon, and whether they could expand their discussion of such mechanisms in the paper.\n \n2. The analysis of layer/head importance (Fig. 4, left and middle) appears to include only the later layers. Could you release the results for all layers? This seems to suggest that certain specific attention heads induce the 0-shot inference, which can thus be improved by attention routing, therefore, it is interesting to get the detailed distribution of such heads.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-16T10:07:06",
"modification_date": "2025-11-12T12:22:14",
"review_url": "https://openreview.net/forum?id=ZOV3697bZZ¬eId=Mk34n1Xbqp",
"license": "CC BY 4.0"
}
] |
8r3oMjN06W | https://openreview.net/forum?id=8r3oMjN06W | A COLLUSION ATTACK ON STABLE SIGNATURE AND A DEFENSE USING DOMAIN-BASED SIGNATURE ASSIGNMENT | 2.5 | 4.25 | [
2,
2,
2,
4
] | [
4,
5,
4,
4
] | 4 | [
"Image watermarking",
"Stable Signature",
"Collusion Attack",
"domain-based signature assignment"
] | Stable Signature is a recent watermarking framework based on latent diffusion models, which generates images with embedded signatures by fine-tuning the decoder. While prior work has shown that watermarks can be removed while maintaining visual quality by retraining the watermarked decoder with clean images, we demonstrate that collusion among multiple users poses a practical and severe threat. Our attack begins by averaging watermarked decoders, which already provides a strong initialization for watermark removal. With encoder access, this initialization can be further fine-tuned to significantly suppress the watermark signal. Even when the encoder is not available, colluders can expand their group size to achieve comparable effectiveness, highlighting the scalability of the attack. To defend against this threat, we propose a domain-based signature assignment mechanism. In this strategy, the watermarking service provider (e.g., one using Stable Signature) partitions the signature space into domains, requiring all users in the same domain to share a fixed set of domain-index bits in their signatures. Experiments show that the domain-index bits remain robust under the collusion attack when the encoder is not available. Our studies suggest that adopting the domain-based signature assignment and keeping the encoder confidential will be good practices when Stable Signature is used as a watermarking solution. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=8r3oMjN06W | 2025-09-12T08:07:05 | 5 | [
{
"id": "hbKFurAspx",
"forum": "8r3oMjN06W",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4208/Reviewer_Z4NL",
"reviewer_name": "Reviewer_Z4NL",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 1,
"presentation": 3,
"summary": "The paper proposes a collusion attack scenario in which multiple users have access to watermarked images (or watermarked models). It introduces a two-stage strategy for removing the Stable Signature watermark: (a) Averaging the decoder parameters across users to initialize the model, and (b) Fine-tuning the averaged model using the collected watermarked images. To defend against this attack, the paper further proposes a domain-based signature assignment mechanism. In this approach, certain reserved server bits are made consistent across all users, thereby mitigating the effectiveness of the averaging-based attack.",
"strengths": "1. The writing is clear and easy to follow, and the overall structure of the paper is well-organized.\n\n2. The proposed collusion attack is novel. Previous defense methods may fail when multiple users collude to perform an attack, making this an important and valuable research direction for the community.",
"weaknesses": "1. The scenario considered in this paper is quite restricted. For the encoder-agnostic case, it is not realistic, see Question 1. For the encoder-aware case, watermark providers can simply protect their systems by black-boxing the encoder and decoder, making the proposed attack inaccessible. This limitation significantly constrains the practical impact of the method.\n\n2. The paper does not sufficiently engage with recent studies on watermark removal, including but not limited to [1–4]. \n\n3. The experiments in this paper are not convincing:\n\n a. The authors use only 100 images for testing, which makes the results statistically unreliable. Metrics such as FID are highly sensitive to the number and diversity of evaluation images.\n\n b. The comparison with other attack methods is limited. The paper only considers three traditional attacks and model purification, while many other relevant attacks exist, such as Rotation, Crop, Erase, Blur, Gaussian Noise, Diffusion Purification [1], Diffusion Regeneration [2], Rinsing Regeneration [3], and Averaging Attack [4].\n\n4. The defense strategy of fixing an n-bit key comes at the cost of reducing the robustness of the watermark itself. The number of valid bits for detection decreases to 48 – n. Although this design improves robustness to the specific attack proposed in the paper, it should make the watermark less robust against other types of attacks.\n\n[1] Weili Nie, Brandon Guo, Yujia Huang, Chaowei Xiao, Arash Vahdat, and Anima Anandkumar. Diffusion models for adversarial purification. In International Conference on Machine Learning (ICML), 2022.\n\n[2] Xuandong Zhao, Kexun Zhang, Yu-Xiang Wang, and Lei Li. Generative autoencoders as watermark attackers: Analyses of vulnerabilities and threats. 2023.\n\n[3] Bang An, Mucong Ding, Tahseen Rabbani, Aakriti Agrawal, Yuancheng Xu, Chenghao Deng, Sicheng Zhu, Abdirisak Mohamed, Yuxin Wen, Tom Goldstein, and Furong Huang. Benchmarking the robustness of image watermarks, 2024.\n\n[4] Pei Yang, Hai Ci, Yiren Song, and Mike Zheng Shou. Can simple averaging defeat modern watermarks? Advances in Neural Information Processing Systems, 37:56644–56673, 2024.",
"questions": "1. I find the encoder-agnostic scenario unrealistic. It is unclear under what circumstances one would have access to the latent vectors and decoder weights but not the encoder. In other words, why would the watermark provider share both the latent representations and the decoder with an external party who could potentially launch an attack? If the provider intends to protect the model, they could simply make it black-box, returning only the watermarked images.\n2. Are Table 1 and a portion of Table 3 identical? If so, it would be better to remove Table 1 to avoid redundancy.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T12:29:20",
"modification_date": "2025-11-12T11:13:49",
"review_url": "https://openreview.net/forum?id=8r3oMjN06W&noteId=hbKFurAspx",
"license": "CC BY 4.0"
},
{
"id": "uKbyN80nlp",
"forum": "8r3oMjN06W",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4208/Reviewer_WgTs",
"reviewer_name": "Reviewer_WgTs",
"rating": 2,
"confidence": 5,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper introduces a two-stage collusion attack to remove the watermark signal from the watermarked decoder produced by Stable Signature. The proposed method first averages the weights across multiple watermarked decoders and then finetunes the averaged decoder to further remove the watermark signal. The authors also propose a defense using domain-based signature assignment to mitigate this attack. The experimental results show that the proposed method outperforms multiple baseline methods and can effectively remove the watermark from Stable Signature.",
"strengths": "1. The proposed defense, Domain-based Signature Assignment, is straightforward to implement and conceptually simple. It effectively targets the core assumption of the averaging-based attack by introducing correlations.\n2. The paper is well-written and structured. The attack methodology and defense mechanism are explained clearly, and the figures effectively illustrate the core concepts.\n3. The authors evaluate their attack under both encoder-aware and encoder-agnostic settings, analyze the effect of the number of colluders, and compare it against basic image manipulation baselines.",
"weaknesses": "1. The central premise of the threat model is questionable. It assumes a service provider issues a unique watermark to each user. In practice, a provider aiming to identify their own generated content would likely use a single, secret watermark for all users. This would be more robust and would render the proposed collusion attack impossible, as there would be no different watermarks to average out. The motivation for per-user watermarking in a way that enables this attack is not well-justified.\n2. The effectiveness of the model averaging relies on the assumption in Equation 3 that the watermark perturbations $\\Delta^{(i)}$ are small, symmetrically distributed, and uncorrelated, causing them to cancel out. This is presented as an intuition without theoretical proof or strong empirical validation. It is not clear if the fine-tuning process of Stable Signature necessarily produces such well-behaved perturbations.\n3. The paper compares the collusion attack only against very basic image-level attacks (brightness, contrast, JPEG) and briefly mentions model purification. The field of watermark removal is much broader, with more advanced and relevant attacks available. For instance, methods like CtrlRegen and other optimization-based removal techniques should have been included to provide a more meaningful comparison of the attack's potency and the defense's robustness.\n4. The work is focused exclusively on the Stable Signature framework. While interesting, this makes the contribution very narrow.",
"questions": "1. Could you elaborate on the practical scenario that motivates the threat model? Why would a service provider choose to deploy unique, user-specific watermarks in a manner that exposes them to this collusion attack, rather than using a single provider-level watermark? And are there any real-world applications of this threat model?\n2. Can you provide theoretical or empirical evidence to support the assumption that the watermark perturbations $\\Delta^{(i)}$ are approximately zero-mean and uncorrelated across users, as stated in Section 4? How sensitive is the attack's success to this assumption?\n3. To better situate the paper's contribution, would you consider comparing your attack and defense against stronger, more recent watermark removal baselines (e.g., CtrlRegen)?\n4. How could the proposed collusion attack and the domain-based defense be adapted or generalized to other watermarking methods that modify model weights? Does the core idea hold for frameworks other than Stable Signature?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T02:01:53",
"modification_date": "2025-11-12T11:13:49",
"review_url": "https://openreview.net/forum?id=8r3oMjN06W&noteId=uKbyN80nlp",
"license": "CC BY 4.0"
},
{
"id": "MpEvdojb7e",
"forum": "8r3oMjN06W",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4208/Reviewer_7KKq",
"reviewer_name": "Reviewer_7KKq",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 1,
"presentation": 1,
"summary": "This paper investigates a new and realistic threat to stable signature, a watermarking framework designed for latent diffusion models. It introduces the model-level collusion attack. Colluders can collaborate to remove watermarks by combining their models and optionally fine-tuning the result. To defend against this, the paper proposes a domain-based signature assignment mechanism that makes watermark keys partially shared across users, preventing effective cancellation during averaging.",
"strengths": "1. The paper systematically studies model-level collusion attacks on stable signature.\n2. The proposed collusion attack is conceptually simple yet effective.",
"weaknesses": "1. The entire study focuses only on stable signature. Results may not extend to newer or structurally different watermarking schemes.\n2. The threat model depends on assumptions about user access and linear watermark encoding.\n3. Limited ablation depth and missing baselines.",
"questions": "1. How do results scale when colluder models come from different training checkpoints or noise schedules?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-22T21:07:18",
"modification_date": "2025-11-12T11:13:50",
"review_url": "https://openreview.net/forum?id=8r3oMjN06W&noteId=MpEvdojb7e",
"license": "CC BY 4.0"
},
{
"id": "RSeNEUjIed",
"forum": "8r3oMjN06W",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4208/Reviewer_Ya4P",
"reviewer_name": "Reviewer_Ya4P",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The paper proposes an approach to attack Stable Signature-based watermarking systems in which a small number of users collude to update the decoder. The authors also propose a defense for this attack. They show that the attack works well in practice when access to the decoder is assumed. The defense also shows promising results in all settings except the encoder-aware one.",
"strengths": "- The attack is simple and intuitive. \n- The paper is also well written and easy to follow. \n- I appreciate that the paper introduces an attack as well as a technique to avert the threat posed by the attack. \n- The attack seems to be practical as with only 3 colluders they are able to achieve a significant performance drop of the watermarking system.",
"weaknesses": "- The major weakness is that the authors assume that the attacker will have access to the decoder weights as well as the z vector during generation. This in my opinion doesn’t represent a real world setting wherein the model owner would control the entire generation pipeline. A stronger attack would assume knowledge of the decoder architecture and/or use a proxy decoder from another model to remove the watermark post-hoc.\n- The authors also give a lot of emphasis on encoder aware or agnostic but for me this distinction is not as important since the authors are already making a white-box access assumption on the decoder and latent variable, thus assuming access to the encoder is not a big leap. This is especially important since the proposed defense does not work well in the encoder-aware attack. \n- PSNR values below 30 seem to be overly large, especially for the encoder-aware setting. \n- Baseline comparisons are lacking. The authors have cited multiple papers on watermark removal that exist but have not compared with them. \n- It would be nice to see how Bit Acc translates into attack success rates based on the thresholds.",
"questions": "- How does the attack generalize beyond the users that were used for attacking. For example if I average the weight for 3 users, will I be able to use the final decoder to attack a 4th unseen user?\n- How does this attack generalize beyond Stable Signature or is it only applicable for stable signature? Can it generalize to other decoder specific watermarking systems?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-21T05:54:13",
"modification_date": "2025-11-12T11:13:50",
"review_url": "https://openreview.net/forum?id=8r3oMjN06W&noteId=RSeNEUjIed",
"license": "CC BY 4.0"
}
] | |
QAwGkFD8ES | https://openreview.net/forum?id=QAwGkFD8ES | SpectrumKD: Dynamic Dataset Curation for Distribution-Aware Knowledge Distillation of Large Language Models | 2.666667 | 3.666667 | [
2,
4,
2
] | [
4,
3,
4
] | 3 | [
"Large Language Models",
"Knowledge Distillation",
"Data Curation"
] | Knowledge Distillation (KD) is a critical technique for compressing large language models (LLMs) into efficient student models while preserving performance, yet its efficacy remains highly sensitive to training data quality. Current dataset curation approaches mainly focus on quality and information at the instance level, neglecting the global distribution characteristics of the entire training dataset. This oversight often results in suboptimal data selection that degrades distillation outcomes. To address this limitation, we propose SpectrumKD, a principled data curation framework that dynamically refines training datasets across epochs by leveraging the global distribution of instance difficulty. SpectrumKD constructs a difficulty spectrum over the training corpus by ranking instances based on student model evaluation, partitioning them into four distinct learning phases: Early Learning, Continuous Learning, Late Learning, and No Learning. A sliding window segmentation strategy then selects epoch-specific subsets by adaptively shifting a fixed window across the spectrum from low to high difficulty, to ensure an uniform increase in subset difficulty across training epochs. As a plug-and-play module, SpectrumKD enhances diverse white-box KD methods and model architectures with minor computational cost. Extensive experiments across multiple language model benchmarks demonstrate consistent performance gains in distilled models, with improvements observed under varied KD approaches and model families. Crucially, SpectrumKD achieves these gains without modifying core distillation algorithms, highlighting the pivotal role of dataset distribution features and data compatibility in effective LLM distillation. Our work establishes a data-centric paradigm for KD, providing both insights and tools to advance the efficiency and capability of compressed language models. 
| foundation or frontier models, including LLMs | https://openreview.net/pdf?id=QAwGkFD8ES | 2025-09-17T21:51:14 | 3 | [
{
"id": "djJ1u6Hfkj",
"forum": "QAwGkFD8ES",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9274/Reviewer_UAZg",
"reviewer_name": "Reviewer_UAZg",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper proposes SpectrumKD, a curriculum learning-based data utilization strategy for traditional off-policy distillation. The method measures the difficulty of data based on the cross-entropy loss of the untrained student model, using thresholding to categorize instance difficulty. The most difficult fraction of samples is discarded during training, while the remaining samples are learned in a progressively increasing order of difficulty through a sliding window approach. The approach achieves improvements on general instruction evaluation with GPT-2 (0.1B) and on math/code tasks with Qwen2.5-1.5B.",
"strengths": "The motivation and problem are practical. The plug-and-play module is very useful.",
"weaknesses": "● Lack of originality: Using loss to measure instance difficulty and then applying curriculum learning is not a novel idea, as similar approaches have been widely explored in many curriculum learning-related papers (e.g., Self-paced Learning).\n● Model and evaluation benchmarks: The model used (GPT-2) is relatively outdated, and for the general instruction-following evaluation, the paper does not adopt currently standard benchmarks such as IFEval, which are commonly used in the community.\n● Writing: Some variables are unexplained, such as w_j in line 272. The variable definitions in the sliding window algorithm are unclear and can be confusing.",
"questions": "The data introduced at each stage is predetermined based on the loss of the untrained student model. Is it reasonable to use the pre-defined loss to reflect the difficulty for the current training model?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T11:51:36",
"modification_date": "2025-11-12T12:15:20",
"review_url": "https://openreview.net/forum?id=QAwGkFD8ES&noteId=djJ1u6Hfkj",
"license": "CC BY 4.0"
},
{
"id": "RSMGS34rVu",
"forum": "QAwGkFD8ES",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9274/Reviewer_Q38k",
"reviewer_name": "Reviewer_Q38k",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The paper introduces SpectrumKD, a dynamic dataset curation framework designed to enhance knowledge distillation for large language models by prioritizing global distribution-aware data selection. SpectrumKD constructs a difficulty spectrum by ranking all training instances based on their cross-entropy loss as evaluated by an initial student model. This spectrum is then partitioned into four distinct zones. Based on this partitioning, the framework employs a sliding window curriculum scheduler that progressively shifts across the spectrum from easier to harder instances over the course of training epochs. The method is evaluated across multiple benchmarks, demonstrating consistent performance gains.",
"strengths": "The partitioning of data into distinct learning phases based on difficulty, coupled with adaptive curriculum scheduling, is well-motivated by empirical and theoretical insights. The plug-and-play design, which integrates seamlessly with existing KD methods without modifying core algorithms, is a practical strength.",
"weaknesses": "1. As the authors mention in the limitations, SpectrumKD is primarily based on the assumption that the distribution of instance difficulty follows a log-normal pattern, which depends on the distribution of the dataset. For datasets whose difficulty spectrum deviates significantly from log-normality (e.g., those that are extremely easy or extremely difficult), the values of λa and λb may be severely skewed, and the effectiveness of SpectrumKD has not been validated in such cases.\n2. In Section 3.2, the difficulty metric is defined as the cross-entropy loss Li = −log qθ(yi|xi). It is unclear whether this refers to the total sequence loss or the length-normalized (average) loss. Using the total loss would introduce a significant bias, conflating sequence length with intrinsic difficulty. \n3. There are several presentation issues that merit correction: duplicate section titles (“Comparison with Traditional Curriculum Learning” appears twice in Section 5.4 and Section 5.5), minor article/capitalization errors (e.g., “an uniform” should be “a uniform”), and inconsistent table cross-references (e.g., line 320 referring to “Table 3” for main instruction-following results, which are in Table 1 here).",
"questions": "1. Since the learning ability of the student model increases during training, the initial estimation of instance difficulty may be biased. Have you tried periodically re-estimating the difficulty spectrum (e.g., every few epochs) to adapt to the evolving student model, thereby enabling a more dynamic approach to dataset curation?\n2. Is the performance where curriculum scheduler (1), (2), and (3) are used simultaneously missing in Table 4?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T10:11:48",
"modification_date": "2025-11-12T12:15:20",
"review_url": "https://openreview.net/forum?id=QAwGkFD8ES&noteId=RSMGS34rVu",
"license": "CC BY 4.0"
},
{
"id": "IdGYBkezIU",
"forum": "QAwGkFD8ES",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9274/Reviewer_Trz4",
"reviewer_name": "Reviewer_Trz4",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "SpectrumKD is a pragmatic data-curation layer for white-box knowledge distillation. The authors first compute per-example cross-entropy with an untrained student to build a global “difficulty spectrum,” partitioned into four zones (Early/Continuous/Late/No Learning). Training then uses a fixed-size sliding window that moves from easy to hard across epochs so that the aggregate difficulty of the active subset increases roughly uniformly. A linear temperature ramp softens teacher logits in sync with this progression. The module is drop‑in—it does not change model architectures or KD losses—and is evaluated with several objectives (KLD, JSD, SRKL, on‑policy GKD) across instruction following, math reasoning, and code generation. Empirically, SpectrumKD delivers consistent, modest gains with small overhead, and the paper includes ablations on difficulty metrics, zone thresholds, and scheduling choices.",
"strengths": "- Practical and plug‑and‑play: one offline scoring pass and a lightweight scheduler; compatible with many KD losses and model pairs.\n- Clear, coherent design: difficulty spectrum → four-zone partition → sliding window → temperature ramp; figures make the workflow easy to follow.\n- Broad empirical sweep: multiple tasks, families (GPT‑2/OpenLLaMA2/Qwen2.5), and losses; component ablations and sensitivity studies are provided.\n- Sensible motivation: emphasizes dataset distribution and student–data compatibility rather than only instance‑level “informativeness,” which is often overlooked in KD.\n- Low engineering cost: improvements achieved without touching core KD objectives or model code; overhead appears minor.",
"weaknesses": "- Limited conceptual novelty: CE‑based difficulty ranking, curriculum‑style progression, and temperature ramping are established ideas; the contribution reads as a careful integration rather than a new principle. The distinction from competence‑based CL, uncertainty/divergence sampling (e.g., teacher–student KL, SKD/DDS), and budgeted on‑policy data selection is not sufficiently sharp.\n- Confounded difficulty metric: cross‑entropy correlates with sequence length, domain, and templating. The paper does not report partial‑correlation or length‑controlled analyses, nor task‑specific difficulty signals (e.g., executability for code, logical step correctness for math). This leaves open whether the spectrum primarily captures length/style rather than genuine hardness.\n- Budget fairness is under-specified: filtering out extreme hard samples may change effective update density. It’s unclear whether baselines are matched on tokens, steps, and wall‑clock time; stronger recent sampling baselines under strict budget parity are missing.\n- Generalization gaps: results focus on white‑box KD; applicability to black‑box settings, larger modern teachers (e.g., Llama‑3/Mixtral), and stronger students (≥7B) is not demonstrated.\n- Reporting and reproducibility: significance testing is inconsistent across tables; implementation details are scattered. Releasing scoring caches, subset indices, and configs would materially improve reproducibility.",
"questions": "1) Positioning and novelty\n- In one sentence, what is the genuinely new principle beyond integrating CE-based difficulty + curriculum sliding + temperature ramp? What can SpectrumKD do that competence-based CL or divergence/uncertainty sampling cannot?\n- Which prior methods is SpectrumKD most likely to be confused with? Please spell out the decisive differences and why those matter empirically.\n\n2) Difficulty metric and confounds\n- How correlated is your CE-based difficulty with sequence length and domain/source? Could you share a simple correlation table or length-bucket analysis to show the spectrum isn’t just a length proxy?\n- For math/code, CE can miss “one critical mistake” semantics. Have you tried task-specific difficulty signals (e.g., executability for code, step-consistency for math)? Do they change the spectrum or results meaningfully?\n- What happens with noisy or mislabeled data—does the method simply banish them to “No Learning,” and could that hurt robustness?\n\n3) Budget fairness\n- Across all main tables, are tokens, steps, and wall-clock strictly matched to baselines? If not all three, which two are matched, and where could mismatches inflate gains?\n- Filtering out extreme hard samples can increase effective update density. Can you run a control that keeps the full dataset but adjusts steps/learning rate so “effective updates” are comparable?\n\n4) Scheduler behavior\n- Please describe, in concrete terms, how the window moves when the spectrum has cliffs or multiple modes. How do you prevent oscillation or big jumps? Any smoothing or max-step constraints?\n\n5) Temperature vs. difficulty\n- Are the gains from the temperature ramp independent of the sliding window? A small 2D ablation (several ramps × several sliding schemes) would clarify whether one carries most of the lift.\n\n6) Thresholds and adaptivity\n- Are the four-zone cutoffs fixed across tasks, or tuned? 
Would a simple adaptive rule (e.g., quantiles chosen to stabilize validation loss) work as well or better?\n\n7) Treatment of very hard examples\n- Instead of excluding them forever, did you try bringing them back late with a small weight (hard replay/contrastive replay)? Any effect on robustness or long‑tail generalization?\n\n8) Scope and scalability\n- Do you expect similar benefits in black-box KD (teacher logits only)? What’s the simplest way to approximate your spectrum there?\n- Have you tried larger modern teachers (e.g., Llama‑3/Mixtral) or stronger students (≥7B)? Do optimal window sizes/thresholds shift with scale?\n\n9) Metrics and significance\n- Can you report variance and significance consistently across the main results, and—where feasible—add stronger task-relevant metrics (e.g., executable pass@k for code, better LLM-as-a-judge or limited human eval for instruction)?\n\n10) Reproducibility and data transparency\n- Will you release the scoring cache, subset indices, and exact configs to let others reproduce the curves? Also, a brief note on dataset licensing/filters and any basic bias checks would be helpful.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-25T17:16:00",
"modification_date": "2025-11-12T12:15:21",
"review_url": "https://openreview.net/forum?id=QAwGkFD8ES&noteId=IdGYBkezIU",
"license": "CC BY 4.0"
}
] | |
ER7zDJXtRI | https://openreview.net/forum?id=ER7zDJXtRI | ComPhy: Composing Physical Models with end-to-end Alignment | 4.5 | 3.25 | [
6,
4,
6,
2
] | [
3,
3,
3,
4
] | 4 | [
"Learning physics",
"Physical systems",
"Partial differential equations",
"Systems of PDEs"
] | Real-world phenomena typically involve multiple, interwoven dynamics that can be elegantly captured by systems of Partial Differential Equations (PDEs). However, accurately solving such systems remains a challenge. In this paper, we introduce ComPhy (CP), a novel modular framework designed to leverage the inherent physical structure of the problem to solve systems of PDEs. CP assigns each PDE to a dedicated learning module, each capable of incorporating state-of-the-art methodologies such as Physics-Informed Neural Networks or Neural Conservation Laws.
Crucially, CP introduces an end-to-end alignment mechanism, explicitly designed around the physical interplay of shared variables, enabling knowledge transfer between modules, and promoting solutions that are the result of the collective effort of all modules.
CP is the first approach specifically designed to tackle systems of PDEs, and our results show that it outperforms state-of-the-art approaches where a single model is trained on all PDEs at once. | We introduce ComPhy, a multi-module approach to learn systems of PDEs by assigning one equation to each module. An alignment mechanism ensures the networks share information to solve the system together. | applications to physical sciences (physics, chemistry, biology, etc.) | https://openreview.net/pdf?id=ER7zDJXtRI | 2025-09-17T23:59:15 | 4 | [
{
"id": "L8fe3MWOgP",
"forum": "ER7zDJXtRI",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9495/Reviewer_amcN",
"reviewer_name": "Reviewer_amcN",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces ComPhy (CP), a modular framework for solving systems of Partial Differential Equations (PDEs) using machine learning. The core innovation involves assigning each PDE in a system to a dedicated learning module (such as PINNs or Neural Conservation Laws) and connecting these modules through an end-to-end alignment mechanism. The alignment process enforces consistency between modules that predict the same physical variables, with particular emphasis on derivative-based alignment losses (Sobolev norm and derivative-only alignment). The authors demonstrate that this compositional approach outperforms standard methods where a single model learns all PDEs simultaneously. Experiments span systems ranging from two to five equations, including Navier-Stokes, acoustic wave equations, and magnetohydrodynamics, showing consistent improvements in accuracy. The paper also provides gradient analysis suggesting that CP's modular structure leads to more balanced gradient distributions during training, which may explain its superior performance.",
"strengths": "Strengths\n\nThe paper demonstrates several notable strengths that support its contribution to the community.\nClear motivation and intuitive approach: The paper effectively motivates the problem of solving coupled PDE systems and presents an intuitive solution. The compositional structure mirrors the mathematical structure of the underlying physical system, making the approach both theoretically appealing and practically sensible. The gradual build-up from methodology to the concrete Navier-Stokes example in Section 2.3 aids understanding.\nStrong and consistent empirical results: The experimental evaluation is comprehensive, covering multiple physical systems with increasing complexity. \n\nThorough experimental methodology: The authors compare against multiple relevant baselines including PINN with gradient reweighting and adaptive point resampling. The experiments are well-documented with detailed problem setups, boundary conditions, and reference solutions in the appendices. \n\nValuable gradient analysis: Section 3.4's gradient histogram analysis provides meaningful insight into why the modular approach succeeds. The observation that CP produces more balanced gradient distributions across layers compared to standard PINNs offers an empirical explanation for the performance gains and could inform future research.\nGeneralization beyond divergence-free equations: Unlike NCL which is specifically designed for divergence-free fields, ComPhy's framework applies to general PDE systems, demonstrating particular value in experiments like acoustics and MHD where NCL-only approaches would be insufficient.",
"weaknesses": "Weaknesses and Concerns\n\n Insufficient analysis of hyperparameter selection and sensitivity\nThe paper does not provide clear guidance on choosing the critical hyperparameter λ_align. While Table 2 and Table 3 show results with fixed hyperparameters, there is no ablation study examining sensitivity to this choice or methodology for setting it. \n\n Module assignment strategy not systematically addressed\nThe paper does not provide principled guidance on how to assign PDEs to modules. For the NS-Euler experiment (Section 4.1), the authors test multiple configurations (2xPINN, PINN+NCL, 2xNCL, 3xPINN) but offer no systematic approach for making this choice. Different assignments can lead to different architectures, but the selection process appears ad-hoc.\nThe paper would benefit from either developing heuristics for module assignment (e.g., based on PDE type, coupling strength, or variable sharing patterns) or demonstrating that the method is robust across reasonable assignment choices.\n\n Comparison fairness and architecture choices\nThe baseline single PINN appears to use the same architecture size as individual CP modules (Table 5), meaning the total CP model has substantially more parameters across all modules. It is unclear whether a larger single PINN with comparable total parameter count would close the performance gap. I wonder whether the observed gains arise specifically from the modular structure or merely from the increased model capacity.",
"questions": "Statistical significance and error bars\nThe results tables (Tables 2 and 3) report point estimates without error bars or confidence intervals. Given that neural network training involves stochastic elements (random initialization, batch sampling), reporting means and standard deviations across multiple runs would strengthen the claims. \n\nAblation on alignment losses\nThe authors should include ablation studies showing performance across a range of λ_align values, demonstrate the method's robustness (or lack thereof) to hyperparameter choices\n\nComputational Efficiency and Cost–Performance Analysis \nThe training time overhead of CP models is mentioned but not quantitatively justified. It would be useful to discuss whether the additional training cost is proportionate to the observed performance improvement. Presenting these metrics side by side would improve the completeness and transparency of the experimental analysis.\n\nParameter-matched baseline \nCompare against a single PINN baseline that is parameter-matched to the full CP system (same total parameter count). Alternatively, show scaling curves where single PINN and CP models are compared across increasing total parameter budgets. This will clarify whether gains are due to modularity or simply model capacity.\n\nAlignment learning\nRecent studies [1][2] have incorporated alignment learning into PINN-like frameworks, demonstrating its potential to enhance physical consistency and optimization efficiency. Therefore, the authors should discuss how their proposed method relates to these works, highlighting the key differences or advantages.\n\n[1]Gradient Alignment in Physics-informed Neural Networks: A Second-Order Optimization Perspective\n\n[2]Physics-informed Temporal Alignment for Auto-regressive PDE Foundation Models",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T13:14:14",
"modification_date": "2025-11-12T12:17:51",
"review_url": "https://openreview.net/forum?id=ER7zDJXtRI¬eId=L8fe3MWOgP",
"license": "CC BY 4.0"
},
{
"id": "XcdLvfeOCa",
"forum": "ER7zDJXtRI",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9495/Reviewer_AyTi",
"reviewer_name": "Reviewer_AyTi",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper presents ComPhy, a novel modular framework designed to leverage the inherent physical structure of the problem to solve systems of PDEs. ComPhy assigns each equation a dedicated learning module, like PINN or NCL, and introduces an end-to-end alignment mechanism to enable knowledge transfer between different modules. The results show that it outperforms state-of-the-art approaches where a single model is trained on all PDEs at once.",
"strengths": "- From my perspective, this is a novel method designed for solving systems of PDEs. It is an interesting research field which have not been explored.\n- The proposed method is novel and seems elegant for solving systems of PDEs. The experiment results also demonstrate that it outperforms plain PINNs.\n- The paper is well written and easy to follow.",
"weaknesses": "- It may be hard, but some theoretical understandings, even intuitive ones, could make the method more convincing.\n- After training, we can use a subset of the trained networks to predict all physical variables. Can some networks sharing the same variables have conflicts?\n- The paper lacks analysis on the efficiency of the model. For a system of N PDEs, each network requires (N+2) loss terms, and the whole system requires N*(N+2) loss terms; how does it influence the training time compared to plain PINNs.\n- Can we replace the current PINNs and NCLs with neural operators? The current framework does not seem to support such modules.",
"questions": "See weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T22:24:52",
"modification_date": "2025-11-12T12:17:52",
"review_url": "https://openreview.net/forum?id=ER7zDJXtRI¬eId=XcdLvfeOCa",
"license": "CC BY 4.0"
},
{
"id": "v7bL9HbhIy",
"forum": "ER7zDJXtRI",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9495/Reviewer_Jj5v",
"reviewer_name": "Reviewer_Jj5v",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "The paper introduces ComPhy, a framework for solving systems of PDEs. Instead of training a single monolithic model on all equations, ComPhy assigns each PDE to a dedicated learning module (e.g., a PINN or NCL) and enforces alignment losses between modules that share physical variables. These alignment losses encourage consistency across modules and improve convergence.",
"strengths": "1. The paper introduces a novel modular design. The decomposition of multi-PDE systems into specialized modules is elegant and well-motivated both computationally and physically.\n\n2. Introducing Sobolev-inspired alignment effectively transfers physical information between modules and leads to empirical gains.\n\n3. Gradient distribution studies convincingly explain why ComPhy’s modular approach stabilizes training compared to conventional PINNs.",
"weaknesses": "See questions below.",
"questions": "1. Managing multiple interacting modules may increase computational and memory overhead, particularly for systems with many PDEs. It would be helpful if the authors could clarify how they address this issue.\n\n2. Since the overall objective combines both alignment losses and module-specific PDE/BC/IC losses, it would be useful to report how sensitive the method is to the relative weighting of these terms. Does performance degrade significantly if the alignment coefficient is varied?\n\n3. Can the modular design generalize to systems where PDEs share only partial or implicit variables?\n\n4. The figures, particularly Figure 1, could be made more intuitive and easier to interpret.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T07:03:31",
"modification_date": "2025-11-12T12:17:52",
"review_url": "https://openreview.net/forum?id=ER7zDJXtRI¬eId=v7bL9HbhIy",
"license": "CC BY 4.0"
},
{
"id": "RS05mBq7VG",
"forum": "ER7zDJXtRI",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9495/Reviewer_M37D",
"reviewer_name": "Reviewer_M37D",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 2,
"presentation": 1,
"summary": "This paper presents a modular framework to tackle the optimization problem of PINNs caused by multiple loss constraints, which is named ComPhy. Specifically, ComPhy proposes to split multiple loss functions into several subsets with shared physical quantities. Then, several modules are configured as PINN or NCL models, and each module will be optimized based on one subset of loss functions. Besides, an alignment loss is newly proposed to align the shared physical quantities in different modules. As for inference, ComPhy only needs to infer the module with complete quantities once. Such a modular framework is expected to ease the difficulty in joint optimization of multiple losses and ensure the physical alignment in the final results. The authors provide sufficient experiments to verify the effectiveness of the proposed ComPhy.",
"strengths": "(1)\tI think the proposed idea is interesting and novel, especially in assigning different loss subsets to different modules.\n\n(2)\tThe experiments are sufficient to deliver a comprehensive evaluation of the proposed method.\n\n(3)\tRich implementation details are included.",
"weaknesses": "### (1) The motivation of ComPhy. Why does this design work?\n\nI think the authors fail to elaborate on why ComPhy performs well. The only statement is in Lines 99-101, that is, “State-of-the-art models like PINNs may suffer from optimization problems when multiple PDEs are involved. ComPhy avoids this issue by using different modules to optimize the different PDEs of the system separately.” However, suppose one of the modules in ComPhy contains complete physics quantities. In that case, it will also be optimized by the newly added alignment losses, which is still a multiple-loss optimization problem. Why is the module optimized in this way better than the model optimized from multiple PDE losses? I think an intuitive understanding is required.\n\nBesides, let us consider the example in Section 2.3. If ComPhy employs two PINN models as modules, at the beginning of training, it is really hard for the second module optimized with Eq.~(8) to generate a reliable solution. Why can the alignment loss help the first module be optimized better?\n\nTherefore, I cannot understand why ComPhy can help with the training. **Maybe the visualization of training curves (training, alignment and test losses) can be a good choice for elaboration. In addition, some theoretical analyses or thought experiments are expected.**\n\n### (2) Too many unjustified or vague claims.\n\nI think this paper contains many unsupported claims or statements, which seriously damage the scientific rigor. Here are some examples:\n\n-\tAbstract: “CP is the first approach specifically designed to tackle systems of PDEs”. Suppose the authors refer to “systems of PDEs” as the combination of multiple PDE equations. In that case, I think there are many related works that tackle the optimization problem of balancing multiple PINN losses, such as [1].\n\n-\tLine 99: “State-of-the-art models like PINNs may suffer from optimization problems when multiple PDEs are involved”. 
What kind of “optimization problems” do you mean? I think if the authors cannot detail this statement, the motivation of this paper is unclear.\n\n-\tLine 111: “The solution to a PDE system is unique only if all the PDEs are satisfied at once (Evans, 2022).” Although this is not a core statement, it should be noted that many PDEs contain multiple solutions.\n\n-\tEq. (3): The authors do not provide a clear definition for the L_{align}, since in Eq. (3), each row defines one type of L_{align}. I do not know what the final version used in Eq. (4) is.\n\n-\tTable 1: It is really hard to understand the last column. After a long time thinking, I understand that each row of the last column represents one configuration in ComPhy. I think a more direct description is required. \n\n-\tAll the numbers, like 100.000 or 600.000, should be 100,000 or 600,000.\n\n-\tLine 356: “The authors show that when PINNs are optimized correctly, the gradients tend to be evenly distributed across all layers.” This claim is not correct, since in Wang et al. (2021), the main focus is on the imbalance among different losses. Thus, this statement should be “the gradients of multiple PDEs”.\n\n[1] Wang et al. When and why pinns fail to train: A neural tangent kernel perspective. Journal of Computational Physics, 2022.\n\n### (3) How to decide the configuration of ComPhy, such as pure PINNs, pure NCLs, or a combination of PINN and NCL? \n\nAs presented in Table 2, different configurations lead to quite different results. When using ComPhy, it may take a long time to tune the concrete configuration of the modular framework.\n\n### (4) About related work. \n\nI think this paper is not related to the neural operator, which is purely data-driven. The authors should spend more time reviewing papers about PINN optimization, such as [1,2,3].\n\n[1] Wang et al. When and why pinns fail to train: A neural tangent kernel perspective. Journal of Computational Physics, 2022.\n\n[2] Daw et al. 
Mitigating propagation failures in physics-informed neural networks using retain-resample-release (r3) sampling, ICML 2023.\n\n[3] Wu et al. RoPINN: Region Optimized Physics-Informed Neural Networks, NeurIPS 2024.",
"questions": "Please see Weaknesses. To highlight, I think the authors should answer the following questions carefully:\n\n-\tWhy does ComPhy work well?\n\n-\tHow to decide the configuration of ComPhy?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-26T22:54:10",
"modification_date": "2025-11-12T12:17:53",
"review_url": "https://openreview.net/forum?id=ER7zDJXtRI¬eId=RS05mBq7VG",
"license": "CC BY 4.0"
}
] |
gqkayvdfM7 | https://openreview.net/forum?id=gqkayvdfM7 | Power of Sign: High Probability Bounds Under $(L_0, L_1)$-smoothness and Heavy-Tailed Noise | 3 | 4.25 | [
4,
2,
4,
2
] | [
3,
5,
5,
4
] | 4 | [
"Heavy-tailed noise",
"SignSGD",
"High Probability bounds",
"Generalized Smoothness"
] | In recent years, non-convex optimization problems are more often described by generalized $(L_0, L_1)$-smoothness assumption rather than standard one. Meanwhile, severely corrupted data used in these problems has increased the demand for methods capable of handling heavy-tailed noises, i.e., noises with bounded $\kappa$-th moment. Motivated by these real-world trends and challenges, we explore sign-based methods in this setup and demonstrate their effectiveness in comparison with other popular solutions like clipping or normalization. In theory, we prove the first-known high probability convergence bounds under $(L_0, L_1)$-smoothness and heavy-tailed noises with mild parameter dependencies. In the case of standard smoothness, these bounds are novel for sign-based methods as well. In particular, $\texttt{SignSGD}$ with batching achieves sample complexity $\tilde{O}\left(\left(\frac{\Delta L_0}{\varepsilon^2} + \frac{\Delta L_1}{\varepsilon}\right)\left[1 + \left(\frac{\sigma}{\varepsilon}\right)^\frac{\kappa}{\kappa-1}\right]\right), \kappa \in (1,2]$. Under the assumption of symmetric noises, $\texttt{SignSGD}$ with Majority Voting can robustly work on the whole range of $\kappa \in (0,2]$ with complexity $\tilde{O}\left(\left(\frac{\Delta L_0}{\varepsilon^2} + \frac{\Delta L_1}{\varepsilon}\right)\left[\frac{1}{\kappa^2} + \frac{\sigma^2}{\varepsilon^2}\right]\right)$. We also obtain results for parameter-free methods, Polyak-Lojasiewicz functions and momentum-based methods (in expectation). Our theoretical findings are supported by the superior performance of sign-based methods in training Large Language Models compared to clipping and normalization. | optimization | https://openreview.net/pdf?id=gqkayvdfM7 | 2025-09-20T01:49:35 | 4 | [
{
"id": "W2xSrjfEcU",
"forum": "gqkayvdfM7",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20251/Reviewer_e67G",
"reviewer_name": "Reviewer_e67G",
"rating": 4,
"confidence": 3,
"soundness": 4,
"contribution": 3,
"presentation": 4,
"summary": "This paper studies stochastic optimization of non-convex and $(L_0, L_1)$ smooth functions under heavy-tailed gradient noise. Two sign based methods are analyzed. In addition, two special cases of PL functions, and symmetric noise are also studied.",
"strengths": "1. This paper presents the first high-probability bound for the nonconvex generalized smooth functions. \n2. In the case of symmetric and unimodal noise, high probability bound is also derived when generalized smoothness is assumed.\n3. Large-scale experiments are conducted to validate the performance of M-SignSGD.",
"weaknesses": "1. For Theorem 1, large-batch is required. Is this necessary or a proof artifact? Very large batch is typically not achievable due to hardware limit. \n2. The technical challenges in dealing with additional generalized smoothess condition using sign based method needs more discussions.",
"questions": "1. Are the same hyper-parameters in Table 6 used for all sized models?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-11T18:38:36",
"modification_date": "2025-11-12T15:48:57",
"review_url": "https://openreview.net/forum?id=gqkayvdfM7¬eId=W2xSrjfEcU",
"license": "CC BY 4.0"
},
{
"id": "AI6bEmnAMN",
"forum": "gqkayvdfM7",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20251/Reviewer_1kZD",
"reviewer_name": "Reviewer_1kZD",
"rating": 2,
"confidence": 5,
"soundness": 1,
"contribution": 1,
"presentation": 2,
"summary": "This paper mainly delves into the high-probability convergence of signSGD under $(L_0,L_1)$-smoothness and heavy-tailed noise. Both centralized and distributed settings are studied. The authors also present the in-expectation convergence of signSGD with momentum. For almost all algorithms, their parameter-free versions are also analyzed, at the cost of polynomially weaker constants. Experiments are conducted to validate certain theoretical claims (noise and smoothness dependence). The authors also include pretraining LLMs to validate the effectiveness of sign-based methods.",
"strengths": "1. The paper is relatively well-written and easy to follow.\n2. The experiments are comprehensive, and I am happy to see Figures 1 and 2, which support theoretical claims to some degree.\n3. As a common practice, using various optimizers to pretrain LLaMA models on C4 meets the standards of an optimization paper in ICLR.\n4. The results of MOE training for high data-to-model ratio are reported, which is quite rare and valuable for optimization papers. Since in real-world scenarios, we often go far beyond the Chinchilla optimal ratio.",
"weaknesses": "I am convinced that this version of the paper cannot be accepted for ICLR, with most theoretical results seemingly incorrect and potential problems in the empirical results. I will go over them in detail.\n\n---\n\n**1 Theoretical aspect**\n\n**SignSGD GENERAL CONVERGENCE LEMMA 1 is not correct**. The authors have a major misunderstanding about the concentration inequality Lemma 3. In _Line 1101_: The authors erroneously set $\\lambda$ to be a random variable depending on the algorithmic trajectory. This is clearly wrong, suggesting that all high-probability bounds in this paper are not correct. The authors may refer to Appendix A.1 in [1], and it can be easily seen that such $\\lambda$ would invalidate the whole lemma.\n\n**I am also skeptical about the in-expecation convergence of M-SignSGD**. In _Line 1579-1588_, the authors used Lemma 4 and Assumption 4 to bound the heavy-tailed noise. However, Lemma 4 is stated under $l_2$-norm, but the one used here is the $l_1$-norm version, which makes Lemma 4 not directly applicable. Extra clarifications are needed for this matter. Perhaps one may consider the coordinate-wise version of Lemma 4 and apply it here. This issue might be addressed, but the current derivations are problematic.\n\n---\n\n**2 Empirical aspects**\n\n**Baselines (AdamW) seem to be undertuned**. In Table 6, the optimal lr for AdamW is smaller than M-SignSGD, which makes me very confused. It is well-known that M-SignSGD/Muon could outperform with AdamW if the RMS-norms of their updates are aligned. Generally speaking, this will result in $lr_{sign}\\approx 5lr_{AdamW}$, since the RMS-norm of the AdamW update is roughly 0.2 [2]. I would strongly suggest that the authors evaluate other hyperparameter choices.\n\n**Another serious problem is to turn off Nesterov momentum for AdamW**. 
This is fine if you keep it down for other optimizers, but the current setup is just unfair, since Nesterov acceleration is widely acknowledged and empirically validated to strongly boost the performance [1-4].\n\nLastly, the authors does not justify why the weight-decay of M-ClippedSGD and M-NSGD are set to zero. Also, the 0.01 weight decay for M-SignSGD and AdamW seems to be smaller than the commonly used 0.1.\n\n**References**\n\n[1] A High Probability Analysis of Adaptive SGD with Momentum.\n\n[2] Muon is scalable for LLM training.\n\n[3] MARS-M: When Variance Reduction Meets Matrices.\n\n[4] MARS: Unleashing the Power of Variance Reduction for Training Large Models. \n\n[5] Fantastic Pretraining Optimizers and Where to Find Them. \n\n---\n\n**3 Motivations**\n\nThis is the least significant point here. I don't think this paper is very well-motivated. First, there are no justifications for why high-probability convergence is important and valuable to the broader ML community. Although, as a theory guy myself, I admit that to establish theoretical convergence for a known algorithm is meaningful, in the main body of the paper, the authors still need to justify the reason to study it. To make things worse, given the current evidence indicating that most of the high-probability bounds are erroneous, this paper seems to become meaningless. Besides this part, it would also be better to empirically validate $(L_0,L_1)$-smoothness and heavy-tailed noise for signSGD, otherwise the paper will become a \"math flexing\" (not to mention that such flex is wrong!).\n\n---\n\n**4 Novelty**\n\nI checked most of the proof in detail and did not find any novel technical insights. The tools are all relatively standard and quite well-known in signSGD/normalized momentum literature. For the signSGD part, the methods come mostly from Sun et al. (2023). For the generalized smoothness part, the analysis stems from Chen et al. (2023); Gorbunov et al. (2024). 
For the heavy-tailed part, the HT batching tool comes from Kornilov et al. (2024), while the more advanced one in Liu & Zhou (2024) is not utilized. For the high-probability convergence, the type of Bernstein inequality/Freedman's inequality (or its variants) is well-known, but the paper did not invoke them correctly. For the parameter-free part, the type of inequalities to bound unknown parameters is due to Hübler et al. (2024b).\n\n---\n\n**5 Others**\n\n1. I would strongly advise the authors to properly use the \\citet{} and \\citep{} commands.\n2. Line 1145: $\\epsilon^k$ should be in the $l_1$-norm.\n3. Line 460: The citations of Zhang et al. (2020b) and Liu et al. (2023a) are inappropriate. The former does not include pretraining tasks on the Transformer architecture, which does not fit into the empirical settings in this paper. The latter is irrelevant and does not discuss optimization characteristics like generalized smoothness. \n4. Line 2253: I would suggest reporting the exact value of the Chinchilla optimal ratio.\n5. Line 2252: This is not necessarily a problem, but a sequence length of 512 seems to be smaller than common practice. See [1]. I understand the computational resources might be a burden, but many open-sourced implementations I encounter adopt the medium choice of 1024.\n\n**Reference**\n\n[1] The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale.",
"questions": "Please see the __Weakness__ part.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T12:15:14",
"modification_date": "2025-11-12T15:48:57",
"review_url": "https://openreview.net/forum?id=gqkayvdfM7¬eId=AI6bEmnAMN",
"license": "CC BY 4.0"
},
{
"id": "UlJ4CIpdIs",
"forum": "gqkayvdfM7",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20251/Reviewer_q5wA",
"reviewer_name": "Reviewer_q5wA",
"rating": 4,
"confidence": 5,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "The paper studies sign-based stochastic optimization under generalized $(L_0,L_1)$-smoothness and heavy-tailed noise. It provides new high-probability convergence bounds for minibatch SignSGD and Majority-Vote SignSGD. The paper also provides empirical validation through pre-training experiments on large language models, where sign-based methods other baselines in terms of perplexity and training stability.",
"strengths": "1. This paper gives high-probability bounds for sign-based methods under both $(L_0, L_1)$-smoothness and heavy-tailed noise. \n\n2. The proposed M-SignSGD achieves the convergence guarantee without growing batch sizes, which is attractive for resource-constrained training. The authors also include practical extensions like parameter-free tuning and momentum.\n\n3. The paper is also supported by experiments on large-scale LLMs, showing superior performance of the proposed methods over baselines in perplexity and robustness.",
"weaknesses": "1. Minibatch SignSGD and Majority-Vote SignSGD both need batch sizes (or worker counts) that grow with $1/\\epsilon$, which can be costly in memory. Although M-SignSGD avoids large batches, it is no longer bound by the high-probability bound, which differs from the previous analysis. It is not clear why we can not avoid batches for the high-probability bound.\n\n2. The Majority-Vote SignSGD requires symmetric and unimodal noise assumption, which is obviously a very strong assumption.\n\n3. Theorem 4 provides the parameter-free tuning; however, it still requires $\\gamma_0 \\leq \\frac{1}{90 L_1 d}$. In this sense, it seems not truly parameter-free.\n\n4. The main contribution seems to lie in high probability bounds, but the experiments are only conducted on M-SignSGD, which is not a high probability bound. It is quite strange that the authors do not report results on Minibatch SignSGD and Majority-Vote SignSGD, which are the main contribution algorithms in this paper.",
"questions": "See the Weakness part.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T18:26:32",
"modification_date": "2025-11-12T15:48:57",
"review_url": "https://openreview.net/forum?id=gqkayvdfM7¬eId=UlJ4CIpdIs",
"license": "CC BY 4.0"
},
{
"id": "msAy2f9zOF",
"forum": "gqkayvdfM7",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20251/Reviewer_5J95",
"reviewer_name": "Reviewer_5J95",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 1,
"presentation": 2,
"summary": "This work studies sign-based algorithms for non-convex optimization under generalized smoothness and heavy-tailed noise. The authors studied three methods: minibatch-SignSGD, MajorityVote-SignSGD, and M-SignSGD (the first two, with a restarted scheme, can deal with functions satisfying the PL condition). For the first two algorithms, the authors provide high-probability convergence results. For the last one, the authors prove convergence in expectation. Lastly, numerical experiments are conducted to demonstrate the effectiveness of theories.",
"strengths": "1. The writing is reader-friendly.\n1. The motivation is well-explained.",
"weaknesses": "1. Line 047, the work of (Nemirovski et al., 2009) didn't consider non-convex objectives. Moreover, for this sentence, I don't think it is necessary to distinguish between sub-Gaussian and bounded-variance noise, since the authors are discussing convergence in expectation.\n\n1. Line 056, the first relaxation of $(L_0, L_1)$-smoothness to once differentiable functions is due to (Zhang et al., 2020a) cited in the current paper, but not (Chen et al., 2023).\n\n1. Line 065, I cannot see how the work (Davis et al., 2021) reflects the expensive training of large deep learning models. This work is of course important in the literature of high-probability convergence, but the current position is clearly not a fit.\n\n1. Line 111, when discussing the normalization technique, it is better to add the seminal work by Nestrov [1].\n\n1. Line 194, missing a space after \"...LLaMA\".\n\n1. Line 218, Assumption 2 is due to (Chen et al., 2023) cited in the current paper, but not (Gorbunov et al., 2024). Moreover, though I understand the meaning of $u\\in[x,y]$, it is better to provide a formal definition for readers not seen such a condition before.\n\n1. Line 243, Lemma 1 is wrong due to wrong/inaccurate steps in the proof.\n\n 1. Line 1053, the inequality \"...$\\leq \\frac{1}{4}$\" does not hold for $\\frac{1}{4}$.\n\n 1. Line 1062, the definition of $\\psi_k$ is inaccurate. It should condition all randomness up to $x^k$.\n\n 1. Line 1082, $2L_0\\sqrt{d}\\gamma_k$ is wrong, it should be $\\exp(\\frac{1}{48\\sqrt{d}\\log\\frac{1}{\\delta}})L_0\\sqrt{d}\\gamma_k$, which can be arbitrairly large as $\\delta\\to 1$. A similar issue holds for the term $\\frac{\\\\\\|\\nabla f(x^{k-1})\\\\\\|_2}{48\\sqrt{d}\\log\\frac{1}{\\delta}}$.\n\n 1. Line 1085, this step is both right and wrong. It can be implied by bounding $\\\\\\|\\nabla f(x^k)\\\\\\|_2$ directly, but not the way used in the current proof.\n\n 1. Line 1100, the choice of $\\lambda$ collapses the whole proof. 
Note that $\\lambda$ in Lemma 3 can only be a real constant. However, the current choice is a random variable that depends on the randomness of the entire optimization process.\n\n1. Due to the above point, any result related to Lemma 1 (i.e., all high-probability convergence theorems) does not hold anymore.\n\n1. For the left two in-expectation results, Theorems 3 and 4 are not surprising. As far as I can check, all the proofs are standard and similar to prior works, without any new technical insights. Therefore, I cannot recognize them as very meaningful. In addition, their proofs also contain many errors. Here, I list some:\n\n 1. Line 1568, everything should be in $2$-norm.\n\n 1. Line 1587, missing a constant related to $d$ in the first step.\n\n 1. Similar issues in the proof of Theorem 4.\n\n1. Line 322, it should be \"Clip\".\n\n1. For experiments, I also have two questions:\n\n 1. Did the authors also run minibatch-SignSGD and MajorityVote-SignSGD? If yes, please report the results, as these two algorithms are two major methods studied in the paper. If not, I think it is reasonable to run new experiments.\n\n 1. Please also report the confidence interval for 350M and 1.3B models.\n\n1. Line 923, the statement of Markov's inequality (Proposition 3) seems not correct.\n\n1. Line 932, $\\mid k$ should be $\\mid D_{k-1},\\dots,D_1$.\n\n1. Line 950, Lemma 5 only holds for $a_i\\geq 0,\\forall i\\in[d]$. Moreover, inequality (15) is wrong since $\\\\\\|A\\nabla f(x^k)\\\\\\|_2$\n should be $\\\\\\|\\nabla f(x)\\\\\\|_2$. In addition, please either use $\\\\\\|\\cdot\\\\\\|$ to denote $\\\\\\|\\cdot\\\\\\|_2$ or stick to $\\\\\\|\\cdot\\\\\\|_2$, but do not use both like in the proof.\n\n**References**\n\n[1] Nesterov, Yurii E. \"Minimization methods for nonsmooth convex and quasiconvex functions.\" Matekon 29.3 (1984): 519-531.",
"questions": "See **Weaknesses**.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-17T06:44:28",
"modification_date": "2025-11-12T15:48:57",
"review_url": "https://openreview.net/forum?id=gqkayvdfM7¬eId=msAy2f9zOF",
"license": "CC BY 4.0"
}
] | |
WxTlAbRUE6 | https://openreview.net/forum?id=WxTlAbRUE6 | Benchmarking Compositional generalisation for Learning Inter-atomic Potentials | 2.5 | 4.25 | [
2,
2,
4,
2
] | [
4,
5,
3,
5
] | 4 | [
"neural networks",
"Graph Neural Networks",
"Transformers",
"compositional generalization",
"benchmark tasks"
] | Inter-atomic potentials play an important role for modelling molecular dynamics. Unfortunately, traditional methods for computing such potentials are computationally heavy. In recent years, the idea of using neural networks to approximate these computations has gained in popularity, and a variety of Graph Neural Networks and Transformer based methods have been proposed for this purpose. Recent approaches provide highly accurate estimates, but they are typically trained and tested on the same molecules. It thus remains unclear whether these models mostly learn to interpolate the training labels, or whether their physically-informed designs actually allow them to capture the underlying principles. To address this gap, we propose a benchmark consisting of four tasks that each require some form of compositional generalisation. Training and testing involves separate molecules, but the training data is chosen such that generalisation to the test examples should be feasible for models that learn the physical principles. Our empirical analysis shows that the considered tasks are highly challenging for state-of-the-art models, with errors for out-of-distribution examples often being orders of magnitude higher than for in-distribution examples. | datasets and benchmarks | https://openreview.net/pdf?id=WxTlAbRUE6 | 2025-09-19T17:09:52 | 4 | [
{
"id": "9BGJrcogz5",
"forum": "WxTlAbRUE6",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17168/Reviewer_rZiy",
"reviewer_name": "Reviewer_rZiy",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper introduces a new benchmark for evaluating compositional generalization in machine learning force fields (MLFFs). It defines four types of tasks, each targeting different aspects of out-of-distribution (OOD) behavior and evaluates five representative models (GNNs and Transformers) on force and energy prediction errors for both in-distribution (ID) and OOD settings. The study finds that all models experience significant degradation in performance when tested on OOD data, underscoring the difficulty of achieving robust generalization in MLFFs.",
"strengths": "The OOD generalization of MLFFs is a crucial area of research.\nDecomposing generalising in composition to well structured 4 tasks is commendable.",
"weaknesses": "Limitations and suggestions:\n\na) Scope of generalization: While compositional generalization is addressed, extending the benchmark to temperature variations, allotropic forms, and non-polymeric systems would improve its coverage. Limiting the dataset to small organic molecules is restrictive.\n\nb) Missing state-of-the-art (SOTA) models: The benchmark omits newer high-performing models listed on resources such as the MatBench Discovery Leaderboard(https://matbench-discovery.materialsproject.org/). Including a few top-performing models would make the comparison more comprehensive.\n\nc)Architectural bias analysis: Although the conclusion claims that the benchmark can reveal architectural biases, the paper lacks clear analysis or discussion explaining why certain architectures perform differently.\n\nd)In Figure 4g, GemNet’s energy error appears similar for ID and OOD? I would expect considerably better performance on ID as for other models.\n\ne)In Figure 2, SchNet performs poorly on force MAE but achieves the lowest energy MAE. Please explain this divergence.",
"questions": "See weakness",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-07T11:37:58",
"modification_date": "2025-11-12T13:59:07",
"review_url": "https://openreview.net/forum?id=WxTlAbRUE6¬eId=9BGJrcogz5",
"license": "CC BY 4.0"
},
{
"id": "zrtyfmyR1d",
"forum": "WxTlAbRUE6",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17168/Reviewer_y7VM",
"reviewer_name": "Reviewer_y7VM",
"rating": 2,
"confidence": 5,
"soundness": 2,
"contribution": 1,
"presentation": 3,
"summary": "This paper introduces GMD-25, a benchmark for testing compositional generalisation of machine-learning force fields. It defines four out-of-distribution (OOD) tasks where models are trained on molecules of one type and tested on related but disjoint molecules. The authors generate molecular dynamics trajectories for a set of small organic molecules, then compute reference energies and forces using the GFN2-xTB semi-empirical method. The key finding is that all models achieve low error on in-distribution data but suffer a dramatic accuracy drop on OOD test sets. The paper concludes that current MLFFs may primarily interpolate training data and highlights the need for more physically-driven models with better transferability",
"strengths": "1. The paper targets a relevant gap: existing MLFF benchmarks typically train and test on the same molecules, leaving generalisation untested.\n2. The authors emphasize reproducibility: the full dataset, splits, and training framework will be released upon acceptance\n3. The paper is easy to read",
"weaknesses": "1. The benchmark uses GFN2-xTB to label energies and forces. GFN2-xTB is a semi-empirical tight-binding method (not a high-level ab initio method). While the authors describe it as “more accurate” and “robust”, it is well known that GFN2 is significantly less accurate than DFT or higher-level quantum calculations. In standard MLFF benchmarks, one typically uses DFT to obtain reference forces. Using a semi-empirical method likely introduces non-negligible error/noise into the labels, which may confound the evaluation of generalisation. The paper does not quantify the error of GFN2-xTB itself nor justify that it is “accurate enough” for this purpose.\n2. Although a few classical MLFF architectures is included, the model selection omits several important recent advances. In particular, foundation or large pre-trained models, e.g. UMA, JMP, MACE, are explicitly excluded. The authors argue this is to avoid “memorisation” effects, but excluding such models greatly limits the relevance of the results to the current state of the field. Many state-of-the-art force fields now use massive pre-training then finetuning to improve generalisation. By not evaluating any pre-trained MLFFs, the paper’s conclusions apply only to a narrow slice of older models. In practice, practitioners would likely use a pre-trained model for OOD tasks, so the benchmark’s insight into realistic performance is limited. \n3. The paper is essentially a dataset and benchmark rather than a new modeling method. The idea of splitting training/test molecules to test extrapolation is natural and has been explored. While GMD-25 is carefully constructed, it mostly evaluates known phenomena (models overfit to training molecules) and does not introduce fundamentally new theory or techniques. In its current form, the contribution is mainly empirical. Given this, the benchmark may be more appropriate for a dataset/benchmark track. 
Furthermore, the tasks focus on fairly simple organic molecules (linear alkanes and functionalized variants). It is not clear how well the conclusions would extend to more complex chemistries (e.g. heteroatom-rich systems, inorganic materials, 3D conformers, etc.).",
"questions": "1. Why was GFN2-xTB chosen for generating the ground-truth energies and forces? Can the authors provide evidence that GFN2-xTB labels are sufficiently accurate (e.g. by comparison to DFT on a subset)? How might any inaccuracies in GFN2-xTB affect the benchmark results?\n2. The authors excluded “foundation” or pre-trained MLFF models from evaluation . Could the authors discuss how a top pre-trained model (e.g. UMA) would be expected to perform on these tasks? Are there plans to include such models to more fully assess state-of-the-art generalisation?\n3. The proposed benchmark is well organized, but it essentially constitutes a new dataset/experimental protocol. Can the authors clarify what novel insights or techniques this work offers beyond the dataset itself? In particular, how does GMD-25 advance our understanding of MLFF generalisation compared to existing datasets like MD17 or ANI-1? Why is it presented as a main-conference contribution rather than a dataset track?\n4. The tasks involve specific chemical classes (e.g. alkanes, alcohols, acids). How sensitive are the results to these choices? Would the authors expect similar findings if the benchmark included, say, aromatic systems or biomolecules? In other words, how broadly do the authors expect the large generalisation gaps to extend?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T18:41:44",
"modification_date": "2025-11-12T13:59:08",
"review_url": "https://openreview.net/forum?id=WxTlAbRUE6¬eId=zrtyfmyR1d",
"license": "CC BY 4.0"
},
{
"id": "Wt0z0xLK2v",
"forum": "WxTlAbRUE6",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17168/Reviewer_QGMr",
"reviewer_name": "Reviewer_QGMr",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes a new benchmark dataset for interatomic potentials, testing the generalizability on the chemical space.",
"strengths": "- The problem addressed is topical, and the field of machine learning potential model development would benefit from an innovative benchmark dataset that specifically focuses on the generalization ability.\n- The paper is well structured, with the focuses and emphasis of the introduced dataset clearly conveyed.\n- The benchmark of the state-of-the-art architectures is extensive.",
"weaknesses": "- MAE for forces is not a great measure of force discrepancy as it is not rotationally invariant.\n- The authors could use slightly more introduction to the idea of functional groups and chemical diversity to the ICLR readers who are not experts in chemistry.\n- I feel like the technical results introduced to the machine learning community represented by the ICLR readership is perhaps limited. This paper might be more suitable for publication in a field-specific journal.",
"questions": "- Could you explain a bit more why you chose the somewhat new GFN2-xTB method, as opposed to more popular methods?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T09:25:18",
"modification_date": "2025-11-12T13:59:08",
"review_url": "https://openreview.net/forum?id=WxTlAbRUE6¬eId=Wt0z0xLK2v",
"license": "CC BY 4.0"
},
{
"id": "83XhRlgqSM",
"forum": "WxTlAbRUE6",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17168/Reviewer_bm4x",
"reviewer_name": "Reviewer_bm4x",
"rating": 2,
"confidence": 5,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces GMD-25, a benchmark designed to assess compositional generalisation in machine-learning interatomic potentials (MLIPs). The authors propose four carefully structured tasks—Length Extrapolation, Functional Group Composition, Duplication, and Combination—to examine whether models can generalise to unseen molecules by recombining known structural motifs.",
"strengths": "The four tasks are conceptually clear and interpretable, each corresponding to a concrete physical generalisation challenge (length, duplication, functional-group recombination).",
"weaknesses": "The benchmark evaluation relies almost entirely on models from 2018–2022 (SchNet, PaiNN, DimeNet++, GemNet). These models are now well-known baselines but no longer representative of the current frontier in MLIPs. The inclusion of EquiFormer-V2 is appreciated, but as a relatively unstable and non-conservative Transformer variant, it cannot represent the practical performance envelope of modern MLIPs.\n\nRecent architectures such as MACE (Batatia et al., NeurIPS 2022), NequIP (Batzner et al., Nature Comm 2022), eSCN (2024), and ViSNet (2023) have become de facto standards for equivariant force fields and would provide a much stronger reference point. As a result, the current experimental section cannot convincingly support claims about state-of-the-art generalisation behavior.\n\nWhile the four tasks are conceptually appealing, they remain relatively simple from the perspective of modern models, such as MACE and eSCN. For example, the Length Extrapolation and Functional Group Duplication tasks involve only small organic chains with linear motifs; these are unlikely to challenge advanced equivariant models that already generalise well across chain lengths and simple functional groups.\n\nWithout testing stronger models on more demanding tasks, it is hard to judge whether the observed generalisation gaps are fundamental or simply reflect underpowered baselines.\n\nThe manuscript lacks a clear justification of whether all models were tuned to their best configuration, and whether they were trained with equal computational budgets. Including stronger baselines makes this aspect even more important.",
"questions": "See Section Weakness.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-16T00:05:24",
"modification_date": "2025-11-12T13:59:09",
"review_url": "https://openreview.net/forum?id=WxTlAbRUE6¬eId=83XhRlgqSM",
"license": "CC BY 4.0"
}
] | |
fArR5qngYw | https://openreview.net/forum?id=fArR5qngYw | From Moments to Models: Graphon Mixture-Aware Mixup and Contrastive Learning | 4 | 3 | [
2,
6,
4
] | [
3,
4,
2
] | 3 | [
"Graphon",
"Graphon mixture",
"Moment",
"Graph Contrastive Learning",
"Graph Mixup"
] | Real-world graph datasets often consist of mixtures of populations, where graphs are generated from multiple distinct underlying distributions. However, modern representation learning approaches, such as graph contrastive learning (GCL) and augmentation methods like Mixup, typically overlook this mixture structure. In this work, we propose a unified framework that explicitly models data as a mixture of underlying probabilistic graph generative models represented by graphons. To characterize these graphons, we leverage graph moments (motif densities) to cluster graphs arising from the same model. This enables us to disentangle the mixture components and identify their distinct generative mechanisms. This model-aware partitioning benefits two key graph learning tasks: 1) It enables a graphon-mixture-aware mixup (GMAM), a data augmentation technique that interpolates in a semantically valid space guided by the estimated graphons, instead of assuming a single graphon per class. 2) For GCL, it enables model-adaptive and principled augmentations. Additionally, by introducing a new model-aware objective, our proposed approach (termed MGCL) improves negative sampling by restricting negatives to graphs from other models. We establish a key theoretical guarantee: a novel, tighter bound showing that graphs sampled from graphons with small cut distance will have similar motif densities with high probability. Extensive experiments on benchmark datasets demonstrate strong empirical performance. In unsupervised learning, MGCL achieves state-of-the-art results, obtaining the top average rank across eight datasets. In supervised learning, GMAM consistently outperforms existing strategies, achieving new state-of-the-art accuracy in 6 out of 7 datasets. | We model graph datasets as a mixture of underlying generative graphons, identified via motif-based clustering, to create superior data augmentation and contrastive learning frameworks. 
| learning on graphs and other geometries & topologies | https://openreview.net/pdf?id=fArR5qngYw | 2025-09-20T05:26:29 | 3 | [
{
"id": "iiUL2lMRwx",
"forum": "fArR5qngYw",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission21404/Reviewer_z9jy",
"reviewer_name": "Reviewer_z9jy",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This submission presents a framework for graph representation learning that models data as a mixture of graphons, using motif density-based clustering to disentangle generative models. It introduces two methods: GMAM (for supervised mixup augmentation) and MGCL (for unsupervised contrastive learning with model-aware sampling). \nA theoretical result provides a bound linking the cut distance between graphons and differences in empirical motif densities. Experiments show improved performance over existing mixup and contrastive learning methods.",
"strengths": "1. The motivation to address graph heterogeneity via graphon mixtures is reasonable and intuitively appealing.\n2. The paper is clearly written and well-organized, with good visual aids (e.g., Figure 1) explaining the workflow.\n3. Empirical results are generally positive, demonstrating improvements on standard benchmark datasets.",
"weaknesses": "1. The theoretical component (Theorem 1) is incremental and largely reuses existing concepts from graph theory (e.g., motif density concentration). \nThe bound provided, although claimed to be tighter, does not appear to yield any substantial new theoretical insight or algorithmic design.\n2. Both GMAM and MGCL are relatively straightforward extensions of existing approaches such as G-Mixup, SIGL, and GraphCL. \nThe modifications mainly add a clustering step based on motif statistics, followed by standard mixup or contrastive loss. \nThis design is incremental and lacks conceptual depth.\n3. The paper only briefly mentions computational complexity in Appendix A.1, without any comparison to baselines or quantitative analysis (e.g., runtime, GPU hours, or scaling with graph size). \nSince the proposed methods require motif counting and multiple graphon estimations, the computational overhead is likely significant.\nWithout this analysis, it is unclear whether the performance gains stem from higher computational cost rather than algorithmic improvement.\n4. Further experimental evaluations are needed. 1) No ablation on the number of mixture components or motif types. 2) No sensitivity study to clustering quality or graphon estimation accuracy. 3) The datasets used are relatively small and may not sufficiently stress-test scalability. 4) Missing discussion on training efficiency and memory requirements.",
"questions": "Could the authors provide an ablation study for GMAM that compares it against a baseline that uses SIGL to estimate a single graphon per class (instead of a mixture)? This would help quantify the specific contribution of the mixture model idea.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T20:50:30",
"modification_date": "2025-11-12T18:02:29",
"review_url": "https://openreview.net/forum?id=fArR5qngYw¬eId=iiUL2lMRwx",
"license": "CC BY 4.0"
},
{
"id": "q1riPUEjBP",
"forum": "fArR5qngYw",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission21404/Reviewer_MyDk",
"reviewer_name": "Reviewer_MyDk",
"rating": 6,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "This work introduces a framework for graph representation learning that models datasets as mixtures of underlying generative processes represented by graphons. The key idea is to represent each latent generative mechanism by a graphon, a continuous function that defines connection probabilities between nodes. To uncover these mechanisms, the authors propose to characterize graphs using motif densities (graph moments), which serve as structural fingerprints. Graphs with similar motif statistics are clustered together, and a distinct graphon is estimated for each cluster. Building on this mixture model, the authors propose two applications: Graphon Mixture-Aware Mixup (GMAM) for semantically consistent data augmentation, and Model-aware Graph Contrastive Learning (MGCL) for reducing false negatives in unsupervised learning. The approach is supported by theoretical analysis and achieves competitive results across several benchmark datasets.",
"strengths": "**Conceptual novelty** Clearly identifies and formalizes the overlooked “mixture of graphons” problem, which challenges the single-distribution assumption in existing graph learning frameworks.\n\n**Strong theoretical contribution** Introduces a novel, tighter motif concentration bound and provides complete proofs.\n\n**Empirical validation** Demonstrates improvements on both synthetic and real datasets, with extensive ablation and visualization.\n\n**Interpretability** Motif-based clustering yields interpretable “graph fingerprints” and meaningful estimated graphons.\n\n**Clarity and reproducibility** The presentation is very clear, and the appendices provide all implementation details.",
"weaknesses": "W1: The framework is only evaluated within two settings — Mixup augmentation and contrastive learning. There is no discussion or experiment on extending the mixture-aware framework to other learning paradigms, such as semi-supervised node classification, which essentially corresponds to a subgraph classification task over ego-networks across different hops.\n\nW2: While the proposed methods achieve the best overall results, the performance gains over strong baselines are small, often below 1%, raising concerns about the practical significance of the improvement.\n\nW3:There are minor typos, such as “Equation equation 10” in line 208.\n\nW4: Experiments are confined to small- and medium-scale TUDatasets. It remains unclear how the proposed methods perform on large graphs.\n\nW5: The paper sets the number of mixture components as log of the number of graphs, but the ablation in Appendix shows that performance is quite sensitive to the choice of $K$, suggesting that this prior strategy requires further investigation. In addition, no such ablation is reported for the Mixup setting, where similar sensitivity may arise.\n\nW6: While Appendix presents an ablation on the number of motifs, the paper does not explore how different combinations or types of motifs affect clustering or downstream performance. This leaves open whether the proposed results are robust to motif choice.",
"questions": "See weakness",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T16:40:31",
"modification_date": "2025-11-12T18:02:29",
"review_url": "https://openreview.net/forum?id=fArR5qngYw¬eId=q1riPUEjBP",
"license": "CC BY 4.0"
},
{
"id": "Br03cWTC48",
"forum": "fArR5qngYw",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission21404/Reviewer_zrKm",
"reviewer_name": "Reviewer_zrKm",
"rating": 4,
"confidence": 2,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes a unified framework for inferring multiple underlying generative models (i.e., graphon mixtures) from observed graph data and leverages this structure to enhance downstream tasks such as graph mixup augmentation and graph contrastive learning.",
"strengths": "The work elevates \"graph augmentation\" from the observation space to the generative space, which is logically self-consistent. Once the graphon estimation is completed, the per-unit training cost is weakly coupled with K, making the computational overhead appear manageable and facilitating easy integration into existing contrastive learning pipelines.",
"weaknesses": "1. A core idea of this paper is modeling dataset heterogeneity via multiple latent generative factors, which closely resembles the concept of latent factors in disentangled graph representation learning [1, 2]. However, the article lacks comparisons with baselines from this related line of work.\n\n 2. The paper claims to obtain a more disentangled representation but lacks corresponding visualizations or experiments using quantitative disentanglement metrics. For example, visualizations like feature correlation matrices or comparative analyses are missing.\n\n 3. Several choices in the pre-modeling stage (e.g., the selection of K, potential bias in graphon estimation) likely influence the results, yet the paper lacks ablation studies examining these aspects.\n\n\n[1] Disentangled Graph Contrastive Learning. NeurIPS 2021\n\n\n[2] Disentangled Graph Convolution Networks. ICML 2019",
"questions": "See Weaknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T20:52:18",
"modification_date": "2025-11-12T18:02:30",
"review_url": "https://openreview.net/forum?id=fArR5qngYw¬eId=Br03cWTC48",
"license": "CC BY 4.0"
}
] |
hJvcbkf2nO | https://openreview.net/forum?id=hJvcbkf2nO | Model Stitching by Invariance-aware Functional Latent Alignment | 3.5 | 3.75 | [
2,
2,
4,
6
] | [
3,
4,
3,
5
] | 4 | [
"Functional Similarity",
"Representation Learning",
"Model stitching"
] | In deep learning, functional similarity evaluation quantifies the extent to which independently trained models learn similar input-output relationships. A related concept, representation compatibility, is investigated via model stitching, where an affine transformation aligns two models to solve a task. However, recent studies highlight a critical limitation: models trained on different information cues can still produce compatible representations, making them appear functionally similar \cite{smithfunctional}. To address this, we pose two requirements for similarity under model stitching, probing both forward and backward compatibility. To realize this, we introduce invariance-aware Functional Latent Alignment (I-FuLA), a novel model stitching setting. Experiments across convolutional and transformer architectures demonstrate that invariance-aware stitching settings provide a more meaningful measure of functional similarity, with the combination of invariance-aware stitching and FuLA (i.e., I-FuLA) emerging as the optimal setting for convolution-based models. | Invariance-aware functional latent alignment can make for a reliable functional similarity metric. | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=hJvcbkf2nO | 2025-09-18T22:52:43 | 4 | [
{
"id": "SKZ46o30wd",
"forum": "hJvcbkf2nO",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12340/Reviewer_hyyX",
"reviewer_name": "Reviewer_hyyX",
"rating": 2,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This work proposes a new approach to model stitching to measure functional alignment to two different models. The proposed approach directly matches the representations for every layer after the stitching point. The authors perform a wide range of experiments that show this stitching approach is more sensitive that alternative methods for distinguishing models. The authors also look at the effect of stitching on a different dataset with slightly perturbed representations at the stitching layer.",
"strengths": "The authors present an interesting conceptual argument for this new model stitching approach, motivated by previously identified weaknesses of existing stitching methods. They then go on to perform a series of experiments based on tests conducted in prior work.",
"weaknesses": "While conceptually interesting, the results appear to show that direct matching (DM) gives effectively the same interpretation as the proposed FuLA method. This makes sense since FuLA would only differ from DM when the match is poor. It would seem that DM would then be the preferable method for its simplicity. FuLA also requires the two networks being compared to have the same architecture, which DM does not require.\n\nMoreover, the results of the paper are poorly presented. Broadly speaking, the figures are confusing to interpret due to poor labeling, minimal captions, and the size of the text, which makes most of the results almost impossible to read on paper. For example, the titles of the plots in Fig. 4 are never defined or referenced elsewhere in the paper or in the caption. Fig. 6 is even more confusing, where the x-axis label seems to be important but is never explained.",
"questions": "1. For the Identically Represented Inputs (IRI) datasets, why is it necessary to generate a completely new dataset if your goal is simply to perturb the output of the front model at the stitching layer? Why not avoid the whole optimization problem and just directly perturb/add noise to the representation at the stitching layer?\n2. On line 328, why is the conclusion that the forward compatibility notion doesn't differentiate among different models when it clearly shows different behavior (the referenced \"dip\") in the plot? It seems to clearly differentiate it in this case.\n3. Why use a completely different front model in section 3.2?\n4. Why report rAuA only for the robustness examples and not the other examples? The way this metric is reported, via a label on a plot, is also poor and would be much easier to read in a table.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T12:57:45",
"modification_date": "2025-11-12T12:54:19",
"review_url": "https://openreview.net/forum?id=hJvcbkf2nO¬eId=SKZ46o30wd",
"license": "CC BY 4.0"
},
{
"id": "tT0xRJr4ud",
"forum": "hJvcbkf2nO",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12340/Reviewer_MHaa",
"reviewer_name": "Reviewer_MHaa",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 2,
"presentation": 1,
"summary": "This paper introduces I-FuLA, a new model stitching method to better measure functional similarity. It combines a novel objective, Functional Latent Alignment (FuLA), with an \"invariance-aware\" setting that learns the alignment on inputs with identical internal representations (IRIs). Experiments show this method provides a more meaningful similarity score, as it can distinguish between models trained on different visual cues and avoids exploiting spurious shortcuts. This leads the authors to conclude that robust and non-robust networks are less functionally similar than previously believed.",
"strengths": "* **S1**: The paper tackle an interesting problem of stitching different neural networks and evaluating latent similarities.\n* **S2**: The paper introduce the backward compatibility, as a new and useful rule for measuring latent similarity.",
"weaknesses": "* **W1. Clarity and Presentation**: The major weakness is that the paper is dense and can be difficult to follow. It doesn't follow a clear and linear story. Additional, the core concepts of \"forward\" and \"backward\" compatibility, while central to the paper, are not introduced with sufficient clarity early on. The notation, though systematic, adds to the cognitive load. The figures, particularly the \"stitching plots,\" are small and contain a lot of information, making them hard to decipher without extensive cross-referencing with the text. A more guided walkthrough of one of the plots in the main text would have been beneficial.\n\n* **W2. Limited Experimental Scope**: The experimental validation is conducted on relatively small-scale datasets (CIFAR-10 and a 10-class subset of ImageNet) and primarily with one architecture (ResNet-18). While VGG-16 and ViT-Tiny are included in the appendix, the main claims are built on the ResNet-18 results. The findings would be much more compelling if demonstrated on larger-scale benchmarks (e.g., the full ImageNet-1k) and with a more diverse set of modern architectures, especially larger Transformers.\n\n* **W3. Novelty and Contribution Statement**: The paper's primary novelty lies in formalizing the \"backward compatibility\" requirement and using IRIs to test it. However, this could be stated more directly in the introduction and contributions list. The introduction of I-FuLA, while new, appears to be a secondary contribution, as the \"invariance-aware\" setting is what drives most of the significant results. The paper could be improved by more clearly delineating the impact of each of these two contributions.\n\n* **W4. Comparison to Other Metrics and Related Work**: The paper does not compare its similarity findings to other representation similarity metrics like Centered Kernel Alignment (CKA) [1] or to other more recent works such as [2]. 
Such a comparison would help contextualize their results and clarify what unique insights the notion of \"functional similarity\" provides over geometric or statistical similarity of representations. Additionally, the authors could consider including in the related work section the following model-stitching works [4,5,6].\n\n* **W5. Reproducibility**: Providing the code would be essential for the community to verify the results and build upon this work.\n\n* **W6. Subjectivity of \"Meaningful Similarity\"**: A core claim of the paper is that it provides a more \"meaningful\" measure of similarity. However, \"meaningful\" is never formally defined and is instead based on intuitive sanity checks. While the experiments are convincing, the paper would be stronger if it could connect its measure to a more concrete, objective property, or discuss the philosophical underpinnings of what makes a similarity measure meaningful.\n\n\n---\n[1] Kornblith, Simon, et al. \"Similarity of neural network representations revisited.\" International conference on machine learning. PMLR, 2019.\n\n[2] Fumero, Marco, et al. \"Latent functional maps.\" ICML 2024 Workshop on Geometry-grounded Representation Learning and Generative Modeling. 2024.\n\n[4] Maiorca, Valentino, et al. \"Latent space translation via semantic alignment.\" Advances in Neural Information Processing Systems 36 (2023): 55394-55414.\n\n[5] Cannistraci, Irene, et al. \"Bootstrapping parallel anchors for relative representations.\" ICLR Tiny Paper (2023).\n\n[6] Lähner, Zorah, and Michael Moeller. \"On the direct alignment of latent spaces.\" Proceedings of UniReps: the First Workshop on Unifying Representations in Neural Models. PMLR, 2024.",
"questions": "* **Q1. Generality for Transformers**: The results indicate that for ViT-Tiny, the proposed I-FuLA is not the optimal setting, and I-SLM is preferred. Does this imply that the definition of \"meaningful functional similarity\" is architecture-dependent, and that different model families may require different criteria?\n* **Q2. Scalability to Larger Datasets**: How would the authors expect these findings to translate to more complex, large-scale benchmarks like the full ImageNet-1k dataset? The generation of the DIRIs dataset seems computationally intensive; is this approach feasible at that scale?\n* **Q3. Scalability to Larger Models**: How would the authors expect these findings to translate to larger networks, such as larger Vision Transformers (e.g., ViT-Small/Base/Large) or models like DINO?\n* **Q4. Framing of Similarity as a Limitation**: In the abstract, the fact that models trained on different information cues can produce compatible representations is framed as a \"critical limitation.\" Could the authors elaborate on why this is a limitation, rather than an interesting property of neural networks (e.g., demonstrating that different paths can lead to functionally similar solutions)?\n* **Q5. On the Role of the Stitching Layer's Capacity**: The experiments use a 1x1 convolutional layer for stitching. Could the results be sensitive to the capacity of this transformation layer? Is it possible that models appear dissimilar simply because a simple affine transformation is insufficient to align their representations, and a more powerful non-linear \"stitching function\" might reveal deeper similarities?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T18:22:38",
"modification_date": "2025-11-12T12:54:19",
"review_url": "https://openreview.net/forum?id=hJvcbkf2nO&noteId=tT0xRJr4ud",
"license": "CC BY 4.0"
},
{
"id": "i47Y55YSAP",
"forum": "hJvcbkf2nO",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12340/Reviewer_HTsy",
"reviewer_name": "Reviewer_HTsy",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "This paper investigates the problem of functional similarity in deep neural networks using the model stitching paradigm. The authors argue that existing stitching methods, which primarily focus on \"forward compatibility\" (i.e., maintaining task performance), can be misleading. They show that these methods often find high similarity even between models trained on different \"information cues\" (e.g., color vs. texture). To address this, the paper introduces two requirements for a more meaningful similarity measure: (1) latent-level forward compatibility, ensuring internal representations transition similarly after stitching, and (2) \"backward compatibility\", ensuring inputs that are invariant to the first model are also treated similarly by the second.\n\nThe paper proposes a new stitching objective, Functional Latent Alignment (FuLA), to enforce forward compatibility, and an \"invariance-aware\" training setup using Identically Represented Inputs (IRIs) to probe for backward compatibility.",
"strengths": "- The paper addresses the critical problem of understanding and quantifying the similarity between neural network representations, which is fundamental to interpretability and model understanding.\n\n- The paper's primary conceptual contribution is the introduction of \"backward compatibility\" as a necessary condition for meaningful similarity.\n\n- The authors conduct a comprehensive set of experiments across multiple architectures (ResNet, VGG, ViT) and under various conditions, including different data cues and model robustness settings.",
"weaknesses": "1. The paper is difficult to follow. The writing is often dense, and key concepts like \"information cues\" are used without a precise definition. Most importantly, Figures 4, 5 and 6 are not well-explained in the captions, are hard to interpret, and it is extremely hard to match the trace to the correct item in the legend. \n\n2. The interpretation of some results is questionable. For example, in the cross-data stitching experiment (CIFAR-RGB vs. CIFAR-grayscale), the sharp performance drop for I-FuLA is presented as a success. However, one could argue that since the underlying image content is the same, with different augmentations, a good similarity metric should yield high similarity. The paper does not defend *why* this sharp drop is a desirable property.\n\n\n3. The paper's core premise, that models trained on different \"information cues\" should be considered functionally dissimilar, is not sufficiently motivated. This stance appears to contradict a growing body of work suggesting that models can and should learn compatible or geometrically aligned representations if the underlying data semantics are the same (e.g., the Platonic Hypothesis). The paper fails to adequately position itself against this literature; for example, the works on relative representations are cited but slightly misinterpreted (Moschella et al., 2022, Cannistraci et al., 2023): they have already been used as invariance-aware similarity measures between representations (e.g., Section 4.1 in Moschella et al) and not only for model stitching.\n\n\n4. The analysis is missing crucial ablations. The expressivity of the stitching transformation S is fixed to a linear layer. However, the capacity of this transformation is a critical factor that could heavily influence the stitching outcome. An analysis with different capacities (e.g., identity, a small MLP) is needed to disentangle the effects of the stitching objective from the effects of the transformation's capacity.",
"questions": "- Could the authors clarify their position with respect to the emerging similarity literature? These works suggest that compatible representations should emerge from data with shared semantics, even if the inputs differ (e.g., images and captions). Why should models trained on grayscale vs. RGB images be considered functionally dissimilar?\n\n- Regarding the cross-data stitching experiment (Fig. 4, \"Cross-data\"), the authors present the sharp decline in performance for I-FuLA as a positive outcome. Further elaboration is needed on why this is a desirable result. An alternative viewpoint is that the models should be able to find common ground, as the semantic content is largely preserved between colored and grayscale images.\n\n- How do the results and conclusions change when varying the expressivity of the stitching transformation S (e.g., using a multi-layer perceptron instead of a single linear layer)? It seems possible that a more expressive stitch layer could overcome the dissimilarities that I-FuLA is designed to detect, which would challenge the paper's conclusions.\n \n- The paper would benefit from a more high-level, intuitive explanation of the core concepts before diving into the formal notation. This would make the work more accessible.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T22:14:14",
"modification_date": "2025-11-12T12:54:20",
"review_url": "https://openreview.net/forum?id=hJvcbkf2nO&noteId=i47Y55YSAP",
"license": "CC BY 4.0"
},
{
"id": "Vgp7vetgeg",
"forum": "hJvcbkf2nO",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12340/Reviewer_ZFEj",
"reviewer_name": "Reviewer_ZFEj",
"rating": 6,
"confidence": 5,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "The paper considers a few recently discovered issues with representation similarity characterizations, which indicate that they are either overly permissive (like stitching with task-loss matching, where the stitching layer can get \"too creative\" in achieving a good task loss) or too penalizing (e.g., stitching with direct matching, where there is too much focus on fitting a single layer as closely as possible, without regard for the dynamics of the network as a whole).\n\nThe paper proposes two techniques. One is a new loss for stitching called functional latent alignment (FuLA) that involves matching all the layers after the stitching layer, and the other is the utilisation of identically represented inputs as a means to examine whether the inputs that are represented similarly indeed are processed similarly.\n\nThe paper then presents an empirical evaluation of the method and compares it with known baselines, showing that the identically represented inputs are indeed a good tool for separating real difference from \"cheating\" stitchings.",
"strengths": "The paper introduces very interesting ideas about fixing the current stitching approaches in order to get a cleaner insight into representation similarity. The main idea seems to be that one needs to look at both the representations in each layer as well as the expected invariances while propagating the representations.\n\nThe empirical evaluation is interesting and indeed supports the claims that the proposed techniques add a new, useful perspective.",
"weaknesses": "The main problem is with the presentation. The paper is very hard to read, even for someone who is familiar with the area. The paper is extremely dense; there is a lot of content packed in a limited space, and so things are not sufficiently motivated, explained, and discussed. The plots are extremely small, and even in color, they are difficult to read. It requires a lot of concentration to understand what the plots show and how the experiments were conducted. At the same time, Figures 1 and 2 take up a lot of space, while I found them a lot more confusing than helpful. Even after understanding the text, I still had a hard time making sense of these plots. I personally don't think they are necessary, at least in the main text, and then you could have larger plots and more words to explain what is going on.\n\nAs for the method, it is evident that req B does the heavy lifting, while FULA is not that different from SLM. So I was not entirely convinced that FULA is even necessary. I think the idea of input invariance is the key here. However, I found the motivation of req B less clear, I think it would need some more support and explanation. Even sec 2.3.1 is quite confusing because it is not clear how we take care of req B exactly. (Later becomes somewhat clearer, but essentially just from the way you construct the plots.)",
"questions": "Are there any cases when FULA is clearly necessary and \"better\" (in the sense of some sanity checks) than SLM?\n\nYou promise at some point that some sanity checks are being used (lacking any formal \"oracle\"), which is fine, but then you do not seem to state your sanity checks clearly and up front. What are these?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T01:42:09",
"modification_date": "2025-11-12T12:54:20",
"review_url": "https://openreview.net/forum?id=hJvcbkf2nO&noteId=Vgp7vetgeg",
"license": "CC BY 4.0"
}
] |
oijKOpfSmX | https://openreview.net/forum?id=oijKOpfSmX | KeyVID: Keyframe-Aware Video Diffusion for Audio-Synchronized Visual Animation | 5.5 | 3.5 | [
6,
4,
6,
6
] | [
3,
4,
4,
3
] | 4 | [
"Audio to Video Generation",
"Keyframe Generation",
"Video Generation"
] | Generating video from various conditions, such as text, image, and audio, enables precise spatial and temporal control, leading to high-quality generation results. Most existing audio-to-visual animation models rely on uniformly sampled frames from video clips. Such a uniform sampling strategy often fails to capture key audio-visual moments in videos with dramatic motions, causing unsmooth motion transitions and audio-visual misalignment. To address these limitations, we introduce KeyVID, a keyframe-aware audio-to-visual animation framework that adaptively prioritizes the generation of keyframes in audio signals to improve the generation quality. Guided by the input audio signals, KeyVID first localizes and generates the corresponding visual keyframes that contain highly dynamic motions. The remaining frames are then synthesized using a motion interpolation module, effectively reconstructing the full video sequence. This design enables the generation of high frame-rate videos that faithfully align with audio dynamics, while avoiding the cost of directly training with all frames at a high frame rate. Through extensive experiments, we demonstrate that KeyVID significantly improves audio-video synchronization and video quality across multiple datasets, particularly for highly dynamic motions | generative models | https://openreview.net/pdf?id=oijKOpfSmX | 2025-09-11T09:43:22 | 4 | [
{
"id": "ITEL8ceibs",
"forum": "oijKOpfSmX",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3854/Reviewer_aTg2",
"reviewer_name": "Reviewer_aTg2",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper presents KeyVID, a keyframe-aware diffusion framework for generating videos that are temporally synchronized with input audio. The core idea is to exploit the correlation between peaks in the motion signal (optical flow intensity) and peaks in the audio signal to determine key moments of action. The system decomposes the task into three modules: a Keyframe Localizer that predicts motion peaks from audio, a Keyframe Generator that synthesizes visual frames conditioned on audio, text, and the first image, and a Motion Interpolator that fills intermediate frames for smooth transitions. While the underlying assumption “strong sounds correspond to large motions” is conceptually simple, the paper demonstrates that modular design and diffusion-based conditioning yield high-quality, audio-synchronized animations, outperforming prior methods (e.g., AVSyncD) in both quantitative metrics and human preference.",
"strengths": "The paper’s strength lies in its clear conceptual simplicity combined with strong engineering design. Instead of introducing a novel generative paradigm, it isolates key factors affecting audio-visual synchronization and builds an effective three-stage system around them. The modular structure (localization–generation–interpolation) makes the overall process interpretable and flexible. The idea of learning motion saliency from audio peaks via optical-flow supervision is intuitive yet elegantly implemented, enabling temporal precision without requiring explicit motion labels. Moreover, the integration of first-frame conditioning and frame index embeddings ensures temporal consistency and visual coherence across non-uniformly sampled keyframes—an aspect that many prior diffusion-based approaches fail to achieve. Experimental results are convincing, showing SOTA performance on both synchronization and visual quality metrics. The paper is also well-written, with clear motivation and comprehensive ablations that help readers understand the contribution of each module. The proposed framework feels robust, scalable, and generalizable beyond its training distribution.",
"weaknesses": "Despite its strong empirical results, the conceptual novelty is somewhat limited. The paper’s main assumption—that audio peaks align with motion peaks—is simple and well-known in the audio-visual literature. The novelty mainly comes from a careful engineering decomposition rather than a new theoretical insight. The keyframe selection mechanism remains heuristic (based on fixed thresholds and local extrema), which, while effective, feels ad hoc and could limit robustness for more complex or subtle motion types. For instance, the model performs less consistently on “subtle-motion” videos (e.g., violin, trumpet) or single-event sequences (e.g., frog croaking), where perceptual synchronization is harder to judge and the heuristic peak detection may fail. Furthermore, the 2-second clip length used in both training and user studies constrains the evaluation of long-term consistency and overall narrative quality. The model’s dependence on the first frame also raises concerns about appearance drift or overfitting to static conditions when generating longer sequences.",
"questions": "In addition to the weaknesses, it would be great if the authors could respond to the following minor comments.\n- The paper would benefit from more discussion of failure cases, especially where KeyVID underperforms in the user study (e.g., low-motion or single-event clips).\n- Figure 5 and Appendix F could be expanded to show visual differences in subtle-motion scenarios, not just high-intensity ones.\n- The authors might consider exploring learnable or probabilistic keyframe selection instead of the fixed heuristic used in Section 3.1.\n- The limitation of using short 2-second videos for subjective evaluation should be explicitly acknowledged; looping or extended clips could help reduce perceptual bias.\n- It would be interesting to see comparisons against pose-based or structure-aware baselines such as TANGO, to assess generalization to human-centric motion.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T11:07:08",
"modification_date": "2025-11-12T11:11:07",
"review_url": "https://openreview.net/forum?id=oijKOpfSmX&noteId=ITEL8ceibs",
"license": "CC BY 4.0"
},
{
"id": "GTPC3awQYS",
"forum": "oijKOpfSmX",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3854/Reviewer_Lpze",
"reviewer_name": "Reviewer_Lpze",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper adds an audio condition to an existing text-image-to-video (TI2V) model. \n1. The backbone is DynamiCrafter, and the dataset is the open-source audio-visual generation dataset AVSyncD. The generated videos are around 2s (48 frames). \n2. The target is to solve audio-visual misalignment. The idea is to first select audio keyframes, then generate keyframes using the selected audio, and finally do video interpolation. \n - The authors train an audio-to-optical-flow network to predict optical flow and select audio keyframes based on local minima/maxima.\n - Using these keyframe audio features, the image and text, and the target frame indices, the model generates the keyframes.\n - Video interpolation is done by finetuning DynamiCrafter with Wan 2.2-style image mask conditioning. \n3. The objective scores beat SoTA, and 7 video results are attached.",
"strengths": "1. The paper is well written and easy to follow.\n2. The evaluation contains both objective and subjective metrics/samples, and it shows better results than previous methods.\n3. The authors included the details of each module in the appendix.",
"weaknesses": "1. The high-level idea sounds rule-based, and there is not enough evidence why it is better than generating all frames at once. \n - Limitation of the rule-based design: using optical flow and picking local minima/maxima may not be suitable for some smooth audio, e.g., a river or a plane taking off. The idea may not be general enough to push the boundary of current ATI2V models; it may require a more general mapping model, for example based on contrastive learning as with text and image.\n - How should the threshold on the number of keyframes be set? For the hammer case, if the hitting speed is very fast, e.g., 10 times in 2 seconds, should we have at least 20 keyframes?\n2. The implementation, using a video model to generate discontinuous frames via a learned frame embedding while keeping the original RoPE, does not sound straightforward. \n - First, will only using the selected audio keyframe features be enough? Considering the hammer case, only the sounds of hitting are captured.\n - For adding the frame-index condition to the network, is it possible to directly modify the existing position embedding?",
"questions": "Overall this is a paper that is clearly written and has comprehensive experiments. My concern is that the idea itself sounds rule-based and not general. I wonder whether, for 2s audio-video generation, a length for which there is enough GPU memory to train directly, end-to-end modeling could get good results after filtering out the misaligned audio-visual data from the dataset. The details of my questions are in the weaknesses part.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T06:46:30",
"modification_date": "2025-11-12T11:11:07",
"review_url": "https://openreview.net/forum?id=oijKOpfSmX&noteId=GTPC3awQYS",
"license": "CC BY 4.0"
},
{
"id": "mUJIYK1MXB",
"forum": "oijKOpfSmX",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3854/Reviewer_J6Af",
"reviewer_name": "Reviewer_J6Af",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes an approach for audio-driven image animation, where static images are animated into videos synchronized with input audio both semantically and temporally. The method decomposes the animation process into two stages: the first generates keyframes corresponding to key actions derived from the audio, and the second interpolates between these keyframes to produce continuous motion. Both stages use a video inbetweening model to generate frames.",
"strengths": "I appreciate the idea of generating keyframes or key actions first, which need not be uniformly distributed. This design effectively mitigates the potential mismatch between audio and generated video arising from differences in their sampling frequencies.",
"weaknesses": "1. I am skeptical about the definition of keyframes as frames with peak motion scores. The authors should discuss the applicability and limitations of this definition. For instance, in dance videos, key movements often occur on musical beats, where the motion velocity is near zero—these moments would not correspond to frames with the highest motion scores.\n2. I would like the authors to provide further justification for this keyframe definition.\n3. Based on the provided video result, the method appears to be applicable primarily to sound events. Moreover, the paper presents too few video examples to convincingly demonstrate the effectiveness of the proposed approach.",
"questions": "See the above weakness section",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T07:23:36",
"modification_date": "2025-11-12T11:11:07",
"review_url": "https://openreview.net/forum?id=oijKOpfSmX&noteId=mUJIYK1MXB",
"license": "CC BY 4.0"
},
{
"id": "BL6bdEdX12",
"forum": "oijKOpfSmX",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3854/Reviewer_vQ68",
"reviewer_name": "Reviewer_vQ68",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper argues that most existing ASVA (Audio-to-Visual Animation) models adopt the strategy of uniformly sampling video frames, which leads to two core problems in high-dynamic motion scenarios: (1) failure to capture key audio-visual moments, resulting in unsmooth motion transitions; (2) audio-visual temporal misalignment, especially for low-frame-rate models, which struggle to match the fine-grained temporal information of audio.\nTherefore, this paper proposes a keyframe-aware audio-to-visual animation framework that first localizes keyframe positions from the input audio and then generates the corresponding video keyframes using a diffusion model. It designs a keyframe generator network that selectively produces sparse keyframes from the input image and audio, effectively capturing crucial motion dynamics.",
"strengths": "1.\tThe comparison of uniform-frame vs. keyframe generation and the keyframe-oriented pipeline in Figure 1 are interesting and beneficial to the research community.\n2.\tThe design of the multi-condition cross-attention fusion is delicate.\n3.\tThe quantitative comparison results and demos show the effectiveness of the proposed method, which is convincing to me.",
"weaknesses": "1.\tThe ablation studies are not very convincing since the results in Table 2 are similar. Especially for the “w.o. Frame Index” setting, the FVD improvement is 1.7% and the degradations of the synchronization metrics are 2.1% ~ 2.4%. So it is not clear to me why the Frame Index is necessary. \n2.\tThere is no computational efficiency analysis, which is essential for real-world applications. I am wondering whether it is heavy to conduct the multi-condition CA in the U-Net blocks. \n3.\tThe paper does not analyze the performance differences of the proposed method across different scenarios. The paper claims that its method is particularly advantageous in \"intensive motion\" scenarios (Line 485), but this lacks quantitative analysis and verification.",
"questions": "1.\tDiscuss and explain the effectiveness of the techniques proposed in this paper, especially the “Frame Index”.\n2.\tCompare the time efficiency of the proposed method with that of the baselines. For example, Real-Time Factor (RTF) and GFLOPs should be taken into consideration.\n3.\tAdd more comparisons with baselines on intensive-motion and non-intensive-motion scenarios, and discuss the differences.\n4.\tWill the code and pretrained model be released?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T02:16:09",
"modification_date": "2025-11-12T11:11:07",
"review_url": "https://openreview.net/forum?id=oijKOpfSmX¬eId=BL6bdEdX12",
"license": "CC BY 4.0"
}
] | |
8HH9dBOxwu | https://openreview.net/forum?id=8HH9dBOxwu | Unified Biomolecular Trajectory Generation via Pretrained Variational Bridge | 6 | 4 | [
4,
8,
4,
8
] | [
4,
4,
4,
4
] | 4 | [
"deep generative model",
"molecular dynamics",
"trajectory generation",
"augmented bridge matching",
"adjoint matching"
] | Molecular Dynamics (MD) simulations provide a fundamental tool for characterizing molecular behavior at full atomic resolution, but their applicability is severely constrained by computational inefficiency. To address this, a surge of deep generative models has recently emerged to learn dynamics at coarsened timesteps for efficient trajectory generation. Nevertheless, most of these methods suffer from two main issues: (i) Non-pretrained models are limited to single-domain simulation; (ii) Pretrained approaches, while tailored for cross-domain scenarios, fail to leverage the structural information learned during pretraining in the generative process due to misaligned training objectives. Here, we propose the Pretrained Variational Bridge (PVB), which first maps the initial state into a noised latent space and then projects it to stage-specific target states using a decoder based on augmented bridge matching. This unifies training for both single-structure and paired trajectory data, ensuring the consistent utilization of extensive cross-domain structural knowledge across stages. Moreover, we incorporate RL optimization for protein-ligand complexes using adjoint matching, which enables the model to rapidly evolve toward the holo state within short simulations, showcasing the potential for efficient post-optimization of docking poses. Experiments on proteins and protein-ligand complexes demonstrate that PVB accurately reproduces thermodynamic and kinetic observables measured in MD simulations, while achieving remarkable generative stability compared with baselines. | applications to physical sciences (physics, chemistry, biology, etc.) | https://openreview.net/pdf?id=8HH9dBOxwu | 2025-09-18T13:32:43 | 4 | [
{
"id": "rqowkUu439",
"forum": "8HH9dBOxwu",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10490/Reviewer_i298",
"reviewer_name": "Reviewer_i298",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "The paper proposes a strategy for developing next-step MD emulators by first pretraining on a bridge that maps between distributions of static structures. This allows models trained on vast amounts of rich static data to be easily tuned on dynamics prediction. The authors show that models pretrained on the PDB and PDBBind can be fine-tuned on ATLAS (protein simulations) and MISATO (protein-ligand simulations) to replicate observables. The authors also develop a RL training strategy for the bridge to steer rollouts towards the holo state of a protein-ligand complex.",
"strengths": "The idea of pretraining a bridge to recapitulate the initial state, and then fine-tuning it to produce the evolved state, is quite interesting. The work also touches upon protein-ligand simulations, which have been somewhat neglected in the ML for MD literature, despite their significant practical importance.",
"weaknesses": "**Method**\n* The RL formulation of the holo complex finetuning task seems gratuitous. In particular, if the reward is the RMSD to the holo state, why can't the holo state be used in a supervised fine-tuning fashion? It would seem that if the reward is simply the similarity to an explicit, known state, that is the setting of supervised learning, not reinforcement learning.\n\n**Experiments**\n* There are missing controls that make the value of the pretraining bridge hard to interpret. What if we pretrain without a bridge, such as AlphaFlow (with templates)? What if we don't pretrain at all, but use the same architecture? (I assume the retrained ITO baseline is using the ITO architecture).\n* Although I am willing to judge these as not the focus of the paper, the protein-ligand docking evaluations are extremely sparse - the single baseline is AutoDock Vina, despite vast amounts of recent literature.",
"questions": "What is the state matrix in Figure 3, right? The caption says \"Probability differences between PVB\nand MD across the 10 metastable states estimated by MSM.\" --- this shouldn't be a matrix, then.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T08:38:32",
"modification_date": "2025-11-12T12:29:38",
"review_url": "https://openreview.net/forum?id=8HH9dBOxwu&noteId=rqowkUu439",
"license": "CC BY 4.0"
},
{
"id": "dZDRlXy6Uh",
"forum": "8HH9dBOxwu",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10490/Reviewer_JWxB",
"reviewer_name": "Reviewer_JWxB",
"rating": 8,
"confidence": 4,
"soundness": 4,
"contribution": 3,
"presentation": 4,
"summary": "This paper introduces pretrained variational bridge (PVB). The core contribution is a unified framework that first pretrains an encoder-decoder model on a large and diverse dataset of single, static molecular structures to learn generalizable structural features. This pretrained model is then finetuned on paired molecular dynamics (MD) trajectory data to learn system-specific dynamics. Furthermore, the paper presents RL finetuning procedure using adjoint matching, which efficiently optimizes the model to guide trajectories toward specific target states, such as the holo conformation in protein-ligand docking.",
"strengths": "- The paper proposes a novel methodology to include pretraining on datasets with static structures but diverse chemical space, and then finetuning on dynamical data. This enables the model to achieve better chemical transferability despite the limited chemical space coverage of the dynamical data.\n- The integration of RL with adjoint matching for pose-optimization in docking is a novel application. And the authors have shown the improvement in the ligand pose after the finetuning.\n- The model's performance is thoroughly benchmarked across multiple demanding tasks, including protein dynamics, protein-ligand complex dynamics, and holo state exploration. The comparison against several relevant baselines on different datasets demonstrates the effectiveness of the method\n- PVB shows outperformance over baselines across most metrics.\n- Ablation studies have been performed to show the benefit of pretraining and finetuning procedure",
"weaknesses": "- While the paper evaluates against other trajectory-based models, it assesses performance on free energy landscapes. While most of the metrics compared in the paper are actually thermodynamic properties, they can be evaluated with i.i.d. (time-agnostic) sampling models. It will be helpful to benchmark against those methods as well.\n- In the meanwhile, although the time-dependent model describes dynamics, it's not obvious from the benchmarks and applications shown in the paper why the time-dependence is needed, what is its advantage over i.i.d. sampling model. It will help to justify the motivation if the authors can clarify that (time-dependence makes the model to describe thermodynamics better than i.i.d. sampling model) or show some cases when kinetics/dynamics are of practical interests in applications.\n- The rationale behind using two separately finetuned models for the protein (ATLAS) and protein-ligand (MISATO) tasks is not explained. One might expect that a single model finetuned on both could offer better transferability, especially for the protein component of the dynamics.",
"questions": "- Have the authors checked the physicality of the ligands (and proteins as well)? Not only the bond break or clashes, but also stereochemical errors. Does that get better or worse with RL finetuning?\n- The paper claims cross-domain generalization, but it is not specified whether the train/test splits of datasets were performed based on sequence similarity or other metrics to prevent data leakage and rigorously test generalization to unseen protein folds.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T01:29:25",
"modification_date": "2025-11-12T12:29:38",
"review_url": "https://openreview.net/forum?id=8HH9dBOxwu¬eId=dZDRlXy6Uh",
"license": "CC BY 4.0"
},
{
"id": "LdsUVLUqyf",
"forum": "8HH9dBOxwu",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10490/Reviewer_WXYF",
"reviewer_name": "Reviewer_WXYF",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "The paper proposes a method for coarse-grained molecular dynamics simulation, using Brownian bridges. More specifically, they first go to a latent state, and then to the target state. This allows generalization over both single-structure and paired trajectory data, especially in terms of pretraining. Method is evaluated on relevant datasets.",
"strengths": "- The paper works on a relevant problem.\n- Optimal control methods are leveraged for more efficient training.\n- The method is evaluated on relevant benchmarks.",
"weaknesses": "- The explanation of the theory and the notation is quite confusing. I sympathize that this is not trivial, especially with having in a mind a relatively broad target audience from diverse research backgrounds. But considerable effort should be made to improve the writing. I try to make some concrete suggestions below. I'm certainly willing to raise my score if readability is improved!",
"questions": "- Fig. 1. This can be a very informative figure, but the elements are quite small (the arrows and black dot). Consider indicating on the figure what the meaning is of the three small modes on the left and the big one on the right.\n- Why do you use the rmsd, and not the (Gaussian) log likelihood?\n- At the start of 3. Method, Z and C are defined, but are not used in the remaining sections. Do you also model these? If so, how exactly?\n- Why exactly do you use \\mu vs. p? They both indicate probability measures I assume?\n- Please define more clearly what \\mu, and X are on line 41. In general, please take some time to rethink where you define the different mathematical concepts and objects. Now it's a bit all over the place and does not seem to follow a structured explanation, or logical build up in your story.\n- eq. 11, why is there a gradient stop on u when sampling Y_0:1?\n- footnote on line 225. Please don't put this in a footnote, it is very confusing! The bridge is from \\tilde X to X_1, correct? Why not call it Y from the start, please take some time to think about this, I'm sure there's ways to make this paper much more readable if you decide on certain notations from the start and don't start changing/adding things in the middle of your explanation.\n- Prop. 3.2: You are solving an optimal control problem analytically, correct? Is solving the ODE for \\tilde a related to solving the HJB equation? Can you please compare your approach to [1], where this is done for the same kind of ELBO, for linear SDEs?\n\n[1] https://arxiv.org/abs/2505.17150\n\n\nMinor suggestions:\n- line 92. RL is not defined.\n- line 102: 'applying'\n- I would not use (so much) abbreviations in the abstract, it does not improve readability, and RL is not defined. The abstract is quite wordy which makes it also harder to read (e.g. 'inefficiency' -> cost, 'nevertheless', 'remarkable', ..)\n- please use larger brackets and ||, e.g. \\left[ \\right], see eqs. 
5, 7, ..\n- line 57: 'domain' (singular)\n- line 234: 'prove', this sentence is also not clear?\n- line 278: Y=Y, very confusing. I suppose one is a stochastic variable and the other one is a realization. You could e.g. use small letters for the realizations.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T19:39:55",
"modification_date": "2025-11-12T12:29:39",
"review_url": "https://openreview.net/forum?id=8HH9dBOxwu¬eId=LdsUVLUqyf",
"license": "CC BY 4.0"
},
{
"id": "CmeWJba9vt",
"forum": "8HH9dBOxwu",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10490/Reviewer_BUEn",
"reviewer_name": "Reviewer_BUEn",
"rating": 8,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces the Pretrained Variational Bridge (PVB), a unified generative framework for biomolecular trajectory generation that leverages pretraining on static 3D structures and finetuning on coarse-grained MD trajectory data.\nThe central idea is to bridge the gap between single-structure pretraining and trajectory-conditioned finetuning with a unified objective: $\\mu(X_1 \\mid X_0)$. \n\nDuring pretraining, the model learns from diverse molecular structures to capture cross-domain structural knowledge.\nDuring finetuning, the model is finetuned on paired transition data $(X_t, X_{t + \\Delta t})$ from dynamic datasets (e.g., ATLAS and MISATO).\n\nAn additional innovation is the RL-based adjoint finetuning using stochastic optimal control, enabling direct optimization for holo state generation in protein–ligand systems.\n\nEmpirical results show that PVB reproduces thermodynamic and kinetic observables (Rg, torsion, TIC projections, MSM occupancy) with stability comparable to MD and substantial improvement over baselines (ITO, MDGen, UniSim) in validity (VAL-CA = 0.97) and decorrelation metrics.\nIn protein–ligand docking, PVB with RL finetuning outperforms AutoDock Vina and non-RL variants.",
"strengths": "* PVB elegantly integrates structural pretraining and trajectory learning through a shared encoder–decoder bridge, aligning objectives across domains. Pretraining on heterogeneous biomolecular structures allows transfer to proteins, small molecules, and protein–ligand complexes without retraining.\n\n* The adjoint-based stochastic control formulation enables memory-efficient fine-tuning toward functional states (e.g., holo forms) without additional networks.\n\n* PVB consistently achieves better or comparable results to classical MD and generative baselines in reproducing both kinetic and thermodynamic observables.\n\n* The RL variant shows meaningful progress toward real drug-design applications, improving ligand placement beyond traditional docking and static generative methods, which can serve as an alternative method for docking.",
"weaknesses": "* While conceptually elegant, the experimental scope is somewhat narrow and the demonstrated benefits are modest under realistic scales. The datasets (ATLAS, MISATO) are relatively small, and the observed improvements, though consistent, are incremental, especially given that baseline MDGen and UniSim already yield physically valid trajectories. Including results on the recently released MDCATH dataset will be make the manuscript stronger.\n\n* As mentioned by the authors, the generation remains sequential, limiting scalability to long timescales or high-throughput ensemble generation. The paper does not analyze the runtime of the PVB for trajectory generation, which is also a concerning fact. While coarse timesteps improve efficiency conceptually, inference speed, wall-clock cost, and scaling to larger systems (e.g., >10⁴ atoms) remain unreported.\n\n* The claimed cross-domain transferability is supported only by protein and protein–ligand tasks; other molecular domains (RNA, materials, polymers) are underexplored.",
"questions": "1. I am particularly interested in the experimental results on the large-scale MDCath dataset, as well as the runtime analysis of the proposed method. Could the authors provide more details or quantitative comparisons to illustrate the computational efficiency and scalability of PVB?\n\n2. Is it possible to extend the proposed framework for parallel trajectory generation, rather than sequential sampling? This could further improve scalability, especially for long biomolecular simulations.\n\n3. Lastly, could the authors elaborate on the role of the latent variable X0? Specifically, in the statement “The latent variable X0 is introduced to avoid the collapse of the conditional probability µ(X1|X0) from degenerating into a Dirac delta measure,\" it would be helpful to clarify why this latent variable is necessary and what would happen if one directly generated X1 conditioned on X0 without it.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-21T14:16:51",
"modification_date": "2025-11-12T12:29:40",
"review_url": "https://openreview.net/forum?id=8HH9dBOxwu¬eId=CmeWJba9vt",
"license": "CC BY 4.0"
}
] | |
neTgHJlQch | https://openreview.net/forum?id=neTgHJlQch | Mind the gap: A method for evaluating and comparing regional knowledge in LLMs | 2.666667 | 3 | [
2,
4,
2
] | [
3,
3,
3
] | 3 | [
"benchmark",
"nlp",
"LLMs",
"evaluation",
"entity linking",
"knowledge graph",
"cultural entities"
] | Large Language Models (LLMs) achieve strong results on general knowledge benchmarks, yet their coverage of region-specific entities—particularly from Latin America—remains limited. To address this gap, we propose CHOCLO, an entity-centric methodology for evaluating LLM knowledge of culturally relevant entities in Latin America. The methodology extracts structured facts from domain-specific resources and organizes them into knowledge graphs spanning nine categories, resulting in more than 44,000 entities and 130,000 questions. Evaluation is carried out through two complementary strategies. The first computes factual scores using token overlap, embedding similarity, LLM-as-a-judge, and multiple-choice accuracy. The second trains probing models that predict these scores directly from LLM embeddings, enabling generation-free evaluation. Results consistently show a regional disparity: GPT-5 and GPT-3.5 score markedly lower on Latin American entities compared to the U.S. and Europe, while models such as Mistral, DeepSeek, and QWEN underperform across all regions. Category-level analysis further reveals that fauna, flora, and traditions are comparatively better represented, whereas public figures and objects show the largest deficits. CHOCLO thus exposes systematic disparities in how LLMs encode Latin American knowledge and provides a step toward culturally inclusive benchmarks that support fairer global evaluation. | This work introduces a benchmark and evaluation framework to measure how well LLMs understand Latin American entities using knowledge graphs and probing methods, revealing consistent performance gaps compared to other regions. | datasets and benchmarks | https://openreview.net/pdf?id=neTgHJlQch | 2025-09-05T22:34:14 | 3 | [
{
"id": "Ve1oPYtQlt",
"forum": "neTgHJlQch",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2426/Reviewer_1sck",
"reviewer_name": "Reviewer_1sck",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper introduces CHOCLO, a framework to evaluate regional and culturally grounded knowledge about underrepresented regions in Large Language Models (LLMs). To do so, the authors curated a dataset with ~44k entities and ~130k questions, spanning across different categories adapted from CVQA: dish, flora, fauna, geography, object, public figure, tradition; ensuring broad thematic coverage while capturing cultural patterns. The authors argue that existing mainstream datasets are skewed, hence LLMs lack cultural knowledge, and therefore focus the analysis on the coverage of entities related to Latin America. CHOCLO uses structured knowledge graphs (KGs) to evaluate factual knowledge at the entity level via four complementary scoring methods, followed by a probing model to predict factual knowledge scores. Experiments show that GPT-3.5, GPT-5, Mistral, DeepSeek, and Qwen demonstrate performance disparities specifically with entities related to LATAM compared to the USA and Europe.",
"strengths": "1. The paper tackles an important aspect of LLMs - information inclusivity. \n2. The evaluation pipeline, containing structured KG-based QA and probing with 4 scoring methods, offers different aspects of understanding of factuality. \n3. The paper presents a detailed quantitative analysis at - cross-region and category level. The results confirm the disparities in information content in LLMs.",
"weaknesses": "1. The dataset curated for this evaluation relies entirely on Wikidata as the primary source of information. However, there is inherent coverage bias in Wikidata on region-specific knowledge. No analysis has been provided on that.\n2. The proposed framework is not technically novel. It combines a couple of existing, well-established methods to evaluate the region-specific LLM knowledge. Moreover, the semantic meaning of the predicted scores is not clear. It is missing statistical significance tests or NLI tests for a better understanding of predicted scores. \n3. The paper emphasises cultural knowledge inclusion in the LLMs, but considers LATAM as a homogenous region, hence also increasing the risk of over generalisation based on languages/linguistic features. The work would have benefited from some analysis based on that.\n4. It would be nice to have the framework tested out for CultureBench",
"questions": "1. What is the impact of Wikidata coverage bias on your framework, and how to deal with it? \n2. How do you ensure the quality of the extracted triple? \n3. How do you find the agreement between the different scoring functions?\n4. Could the probing scores increase biases instead of mitigating?\n5. How do you avoid the overgeneralisation of the analysis done based on the assumption that LATAM is a homogeneous region?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T01:21:31",
"modification_date": "2025-11-12T10:57:13",
"review_url": "https://openreview.net/forum?id=neTgHJlQch¬eId=Ve1oPYtQlt",
"license": "CC BY 4.0"
},
{
"id": "ttskrApZX9",
"forum": "neTgHJlQch",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2426/Reviewer_PyBt",
"reviewer_name": "Reviewer_PyBt",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper analyzes regional knowledge of Latin America in LLMs. Specifically, this paper first extracts structured facts from domain-specific resources and constructs a knowledge graph containing 44,000 entities spanning 9 categories. Using this knowledge resource, this paper proposes CHOCLO, an entity-centric methodology for evaluating LLM knowledge of culturally relevant entities in Latin America. It evaluates the regional knowledge in LLMs using several techniques, including token overlap, embedding similarity, LLM-as-a-judge, and multiple-choice accuracy. This paper also trains a probing model to evaluate the factual score directly from LLM representations. This paper finds several interesting conclusions, such as most LLMs underperform in Latin American knowledge.",
"strengths": "1. The topic is interesting and meaningful to the community. Studying LLMs’ coverage of different regional knowledge is important for the broad applications of LLMs.\n2. The work presents a systematic analysis and comprehensive experiments. The experimental results reveal that current LLMs underperform on Latin American knowledge. This provides some guidance and insights for improving LLM knowledge coverage and supports the development of more diverse LLMs and broader applications for people all over the world.",
"weaknesses": "1. The authors construct a knowledge graph, but there are existing resources (e.g., Wikidata). The paper should analyze whether the constructed knowledge graph adequately captures Latin American knowledge. And what is the advantage compared to existing resources? Is this knowledge graph covering more Latin American knowledge?\n2. The methods used for experimental analysis are mostly existing techniques, which limits the paper’s technical novelty.\n3. The authors should evaluate the reliability of their evaluation approaches. For example, they can analyze the correlation between each evaluation method and human judgments, to validate the reliability of their evaluation methods.\n4. A more fine-grained analysis specific characteristics of Latin American knowledge is needed. The authors should discuss how Latin American knowledge differs from other regional knowledge and why LLMs underperform, such as insufficient training data or other factors, to guide further LLM development.",
"questions": "See Weaknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T18:13:27",
"modification_date": "2025-11-12T10:57:13",
"review_url": "https://openreview.net/forum?id=neTgHJlQch¬eId=ttskrApZX9",
"license": "CC BY 4.0"
},
{
"id": "1EPjnrxr0H",
"forum": "neTgHJlQch",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2426/Reviewer_VdSb",
"reviewer_name": "Reviewer_VdSb",
"rating": 2,
"confidence": 3,
"soundness": 1,
"contribution": 2,
"presentation": 2,
"summary": "Benchmarks often measure factual knowledge of LLMs in high resource languages or regions or related to high frequency entities. The paper proposes a method called CHOLCO for evaluating the knowledge of LLMs across entities related to traditions, public figures, food and geography. The paper uses Wikidata to extract triples across three regions: Latin America, Europe and United States and converts it into a question. They evaluate LLMs on these questions along with a probe based evaluation technique to understand what the LLM knows about these rare entities.",
"strengths": "1. The paper provides a scalable approach to building a benchmark for less known entities across different regional contexts by using Wikidata to source triples and converting them to templated questions. It provides a comparison of different models' performance across the three regions: United States, Europe and Latin America. \n\n2. The paper compares the performance of different models on factual information across regions and highlights that models perform worse on information related to Latin America.",
"weaknesses": "1. There is no clear basis for the evaluation technique used in Sec 3.2.1 where the authors compute the LLM performance on their benchmark questions based on embedding similarity, lexical overlap, LLM as a judge and multiple choice accuracy. Using LLM as a judge would suffice in this scenario and it is not clear what value the other methods add. Methods such as lexical overlap are potentially noisy as LLMs tend to be verbose and the expected answer is usually a single location. \n\n2. Sec 3.1 talks about the properties used for building the dataset: \"country of origin (property P495), country of citizenship (P27), place of birth (P19), territorial location (P131), and geographic coordinates (P625).\" This seems to be a very limited set of properties which would always cause the label to be a location. This limits the diversity of the answers in the benchmark. \n\n\nPresentation: \n1. In Table 2, QWEN should be replaced with the entire model name.",
"questions": "1. For the evaluation setup, the authors evaluate GPT 3.5 Turbo and GPT-5 Mini, but not GPT-4 or GPT4o. Is there any specific reason for this?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T11:27:58",
"modification_date": "2025-11-12T10:57:13",
"review_url": "https://openreview.net/forum?id=neTgHJlQch¬eId=1EPjnrxr0H",
"license": "CC BY 4.0"
}
] |
GiItKTlJIB | https://openreview.net/forum?id=GiItKTlJIB | How Much Chain-of-Thought Do LLMs Really Need for Physics? | 3 | 3.5 | [
4,
2,
4,
2
] | [
2,
4,
4,
4
] | 4 | [
"chain-of-thought",
"reasoning",
"evaluation"
] | Reasoning-focused language models are increasingly applied to AI for science, but evaluation has not kept pace: benchmarks largely measure end-task accuracy while ignoring whether models genuinely depend on their own reasoning traces. This gap is critical in domains like physics problem solving, where equations, units, and structured terminology make reasoning reliability both essential and testable. We introduce a systematic deletion framework that intercepts chain-of-thought (CoT) mid-generation, removes tokens, and measures downstream effects. Applied to three open-source models—Magistral, Phi-4, and Qwen-A3B—across multiple physics benchmarks, our method shows that models remain accurate under heavy deletions (40–60\%) by “cramming” reconstructed steps into final answers. Overlap analyses reveal that deleted equations and facts often reappear, but inconsistently across strategies, exposing shallow and opportunistic reliance on CoT. These findings underscore that current accuracy-based evaluations are insufficient for scientific domains, and point toward the need for methods that assess reasoning faithfulness as a core requirement for advancing AI for science. | LLMs can solve physics problems by patching gaps in heavily deleted CoT reasoning traces, but without true faithfulness. | applications to physical sciences (physics, chemistry, biology, etc.) | https://openreview.net/pdf?id=GiItKTlJIB | 2025-09-13T01:29:38 | 4 | [
{
"id": "1ZfO851Ikv",
"forum": "GiItKTlJIB",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4526/Reviewer_iLnu",
"reviewer_name": "Reviewer_iLnu",
"rating": 4,
"confidence": 2,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper proposes a deletion‑based framework to probe how much LLMs depend on their CoT when solving physics problems. By systematically removing portions of the generated reasoning and measuring changes in answer accuracy, final answer length, and information overlap, the authors study three open‑source models (Magistral, Phi‑4 and Qwen‑A3B) on three physics benchmarks. Experiments reveal that explicit reasoning prompts improve performance but the CoT can be removed without dramatically hurting accuracy, as models \"cram\" reconstructed steps into the final answer. They conclude that current accuracy‑only evaluations are insufficient and calls for metrics that assess the faithfulness of reasoning.",
"strengths": "- The work tackles an important question about whether CoT explanations genuinely reflect model reasoning, which is crucial for using LLMs in scientific domains.\n\n- The deletion strategy is clearly described and measures multiple downstream effects, such as accuracy, answer length, lexical and frequency overlap. This provides a structured way to examine reliance on intermediate reasoning.\n\n- The experiments cover three different benchmarks of physics domain and multiple LLMs, the authors explore effects of prompt explicitness and different deletion strategies, with the analysis is carefully presented and supported by figures.",
"weaknesses": "- Prior research has already highlighted the gap between answer accuracy and CoT faithfulness and proposed evaluation frameworks. For instance, Nguyen et al. [1] introduce discriminative and generative evaluations that showed LLMs may reach correct answers through incorrect reasoning, and Barez et al. [2] argue that CoT is not, by itself, an adequate explanation. The deletion framework is a more like a straightforward application of such idea to physics domain and does not reveal its novelty or specifity.\n\n- The analysis and experimental obversations are surface‑level, lacking of in-depth exploration of LLMs' internal activation or behavioural pattern. Thus, it cannot wwell explain why models can reconstruct missing steps or whether they use memorised templates versus genuine reasoning.\n\n- The experiments consider end‑of‑scratchpad truncation, random deletion and removal of annotated physics tokens. More nuanced manipulations, such as deleting specific reasoning types, shuffling steps, may yield deeper insight into what information is truly required.\n\n[1] Nguyen et al., Direct Evaluation of Chain-of-Thought in Multi-hop Reasoning with Knowledge Graphs, 2024.\n\n[2] Barez et al., Chain-of-Thought Is Not Explainability, 2025.",
"questions": "1. How does the deletion framework differ from or extend prior CoT‑evaluation methods (e.g., perturbation-based evaluations)? What is novel beyond applying it to physics tasks.\n\n2. Is this framework suitable for other domains, like mathematics or commonsense reasoning?\n\n3. Can you provide more analysis on what kinds of information models \"cram\" into the final answer when reasoning is removed? Are they recalling memorized formulas or recomputing reasoning?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T14:50:55",
"modification_date": "2025-11-12T11:17:03",
"review_url": "https://openreview.net/forum?id=GiItKTlJIB¬eId=1ZfO851Ikv",
"license": "CC BY 4.0"
},
{
"id": "wTTdjqzjHW",
"forum": "GiItKTlJIB",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4526/Reviewer_sbpY",
"reviewer_name": "Reviewer_sbpY",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The authors introduce a systematic deletion framework that intercepts chain-of-thought (CoT) mid-generation, removes tokens, and measures downstream effects. Their method shows that models remain accurate under heavy deletions (40–60%) by “cramming” reconstructed steps into final answers.",
"strengths": "The authors introduce an evaluation paradigm: intercepting CoT mid-generation, deleting intermediate tokens, and measuring their downstream impact on decoded information funneling and final answer quality. The methodology of the paper is clear and the paper is supplemented with numerous schematic and statistical diagrams.",
"weaknesses": "**Methodological Aspects:**\n\nThere are some concerns regarding the evaluation through the deletion of intermediate CoT steps:\n\n1. Has it been considered that some models tend to generate redundant or repetitive content during reasoning? This could lead to the deleted reasoning steps merely being repetitive explanations or step clarifications, potentially causing misjudgment of \"faithfulness in reasoning.\" Based on the prompts provided by the authors, they do not appear to have implemented any measures to prevent models from potentially producing redundancy or repetition in the CoT steps.\n\n2. The authors found that \"models exhibit compensatory cramming behavior—producing longer final answers that attempt to reconstruct missing reasoning.\" This seems to contradict their claimed contribution: if the model's output is based on reconstructing the chain of thought, then the impact of deleting the intermediate steps cannot be truly observed. If the authors aim to explore \"faithfulness in reasoning,\" they should perhaps force the model to reason directly based on the modified chain of thought, rather than allowing it to reconstruct one.\n\n**Experimental Aspects:**\n\n1. The authors' experiments are confined to the physics domain, which is puzzling because their method does not seem directly related to physics. Conducting experiments solely in physics raises doubts about the generalizability of the method.\n\n2. The authors' experiment on information overlap and recovery is confusing. The authors aim to explore \"whether the recovery is faithful,\" but this does not seem synonymous with \"whether the final result is faithful in reasoning.\" Even if the recovery follows a different reasoning trajectory to arrive at the correct answer, the final result could still be faithful in reasoning. 
Therefore, it seems unwarranted to draw the conclusion stated in that section about \"raising questions about the faithfulness of CoT traces as evidence of underlying reasoning.\"",
"questions": "Same as above",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-26T14:06:53",
"modification_date": "2025-11-12T11:17:03",
"review_url": "https://openreview.net/forum?id=GiItKTlJIB¬eId=wTTdjqzjHW",
"license": "CC BY 4.0"
},
{
"id": "jI4ZzrgxHS",
"forum": "GiItKTlJIB",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4526/Reviewer_Qv4v",
"reviewer_name": "Reviewer_Qv4v",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The paper shows that when parts of CoT reasoning are deleted, LLMs can often cram and reconstruct missing steps in the final answer, revealing that CoTs are partially redundant and not always faithful. Thus, accuracy-based evaluation alone is insufficient; faithfulness should also be assessed.",
"strengths": "- The paper is well motivated, encouraging the community to focus on understanding CoT reasoning and assessing its faithfulness. \n- The study is interesting, the writing is clear, and the visualizations effectively support some of their findings.",
"weaknesses": "- I think some of the experimental results are unusual, as I mentioned in the **Questions** section.\n- I believe the title of the paper is somewhat inconsistent with its content. Although the paper performs deletions on CoT reasoning, the LLM often attempts to regenerate the missing steps (referred to as cramming in the paper). As a result, the CoT is not truly deleted. Moreover, the experiments in the paper also examine overlapping information in these regenerated CoTs. Therefore, the title may not be entirely appropriate.",
"questions": "- Q1: In Figure 2, why does Phi-4 obtain lower scores as the prompt reasoning level increases? \n- Q2: In Figure 2, why is the length of Magistral’s final answer in PhysReason much longer than that of reason?\n- Q3: Again, regarding Magistral, it is unusual that in Figure 4, its score increases as more from-the-end deletion is applied, why?\n- Q4: I also find some unusual phenomena in Figure 7 and would appreciate your clarification. For example, at the leftmost point of each subfigure, when the deletion fraction is 0, no deletion should have been applied. In this case, according to Equations (1) and (2), both $Jaccard$ and $D_{Manhattan}$ should be 0. However, the values shown in several subfigures are different and in some cases even quite large. I am not sure whether I am misunderstanding something or if there is another explanation.\n- Q5: Are there specific example cases that illustrate how the LLM attempts to \"cramming\" after certain parts of the reasoning are deleted? Moreover, under the three different deletion settings, are there any further differences in how the LLM tries to compensate for the missing steps?\n- Q6: When parts of the reasoning are deleted and the LLM tries to cram the missing steps, did your experiment attribute the regenerated content to the intermediate reasoning or to the final answer?\n\nIf the authors can provide a clear answer to my questions, I would consider raising my score.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-25T17:25:27",
"modification_date": "2025-11-12T11:17:03",
"review_url": "https://openreview.net/forum?id=GiItKTlJIB&noteId=jI4ZzrgxHS",
"license": "CC BY 4.0"
},
{
"id": "VISCrJOaFh",
"forum": "GiItKTlJIB",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4526/Reviewer_xWFX",
"reviewer_name": "Reviewer_xWFX",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "The paper probes whether three open-source LLMs (Phi-4, Qwen-A3B, Magistral) truly consume their chain-of-thought (CoT) scratchpads when solving physics problems. A mid-generation deletion framework excises 0–100 % of CoT tokens before decoding; accuracy, answer length and token-overlap metrics are tracked on UG-Physics, PhysReason and PhyBench. Models maintain accuracy after 40–60 % deletion, then “cram” missing equations into longer final answers, indicating only superficial reliance on the scratchpad.",
"strengths": "Mid-generation token deletion is formally defined (Sec. 2.2) and enables causal intervention, a clear methodological advance. Three deletion strategies (end, random, physics-aware) cover complementary failure modes.\n\n3 models × 3 benchmarks × 5 deletion fractions × ≥5 repeats give large-scale empirical coverage.",
"weaknesses": "The paper is poorly presented. Bullet-list items are overused. It also suffers from apparent mistakes in writing and structuring.\n\nOnly nucleus sampling (T=0.6–0.7, p=0.95) is used; no greedy, beam-search or temperature ablation (Sec. 2.2). Cramming could be an artefact of high-temperature stochasticity.\n\nAfter Sec. 3.1, all deletion experiments use the “Medium Reasoning” prompt only. No evidence that cramming persists under “Full” or “Low” reasoning conditions.\n\nThe largest model is 30.5 B; no comparison with 7 B or 100 B+ checkpoints, leaving the scaling behavior of faithfulness unknown.\n\nJaccard similarity saturates at ~0.25 under 80 % deletion (Fig. 7); a ceiling effect may hide recovery quality.",
"questions": "Suggestions:\n\nRelated Works -> Related Work\n\nprovide greedy and beam-search deletion curves.\n\ninclude 7 B and 70 B checkpoints to verify scaling trend.\n\nreport p-values for accuracy drops. add exact-match equation F1 to complement Jaccard.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-19T15:03:09",
"modification_date": "2025-11-12T11:17:03",
"review_url": "https://openreview.net/forum?id=GiItKTlJIB&noteId=VISCrJOaFh",
"license": "CC BY 4.0"
}
] |
FKEHiHU4bN | https://openreview.net/forum?id=FKEHiHU4bN | Revisiting Matrix Sketching in Linear Bandits: Achieving Sublinear Regret via Dyadic Block Sketching | 6.5 | 2.75 | [
8,
6,
6,
6
] | [
2,
4,
2,
3
] | 4 | [
"Linear Bandits",
"Matrix Sketching",
"Multi-scale Sketching"
] | Linear bandits have become a cornerstone of online learning and sequential decision-making, providing solid theoretical foundations for balancing exploration and exploitation.
Within this domain, matrix sketching serves as a critical component for achieving computational efficiency, especially when confronting high-dimensional problem instances.
The sketch-based approaches reduce per-round complexity from $\Omega(d^2)$ to $O(d)$, where $d$ is the dimension.
However, this computational efficiency comes with a fundamental pitfall: when the streaming matrix exhibits heavy spectral tails, such algorithms can incur vacuous *linear regret*.
In this paper, we revisit the regret bounds and algorithmic design for sketch-based linear bandits.
Our analysis reveals that inappropriate sketch sizes can lead to substantial spectral error, severely undermining regret guarantees.
To overcome this issue, we propose Dyadic Block Sketching, a novel multi-scale matrix sketching approach that dynamically adjusts the sketch size during the learning process.
We apply this technique to linear bandits and demonstrate that the new algorithm achieves *sublinear regret* bounds without requiring prior knowledge of the streaming matrix properties.
It establishes a general framework for efficient sketch-based linear bandits, which can be integrated with any matrix sketching method that provides covariance guarantees.
Comprehensive experimental evaluation demonstrates the superior utility-efficiency trade-off achieved by our approach. | We propose a framework for efficient sketch-based linear bandits to address the issue of linear regret that may arise with matrix sketching. | reinforcement learning | https://openreview.net/pdf?id=FKEHiHU4bN | 2025-09-19T09:30:09 | 4 | [
{
"id": "k9YOMWQZaP",
"forum": "FKEHiHU4bN",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14906/Reviewer_NX8x",
"reviewer_name": "Reviewer_NX8x",
"rating": 8,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper revisits the computational and regret trade-offs in sketch-based linear bandits, where matrix sketching is employed to reduce the per-round computational cost from quadratic $O(d^2)$---with $d$ denoting the feature dimension---to subquadratic, while maintaining sublinear regret in $T$, the time horizon. Earlier work has shown that for linear bandits regret of order $O(d\\sqrt{T})$ can be achieved with per-round complexity proportional to $O(d^2)$. Subsequent research demonstrated how to reduce this computational burden using dimensionality reduction techniques, particularly matrix sketching, to achieve per-round complexity $O(dl + l^2)$, where $l$ is the sketch size. However, in these approaches, the regret depends on the *spectral error* introduced by dimensionality reduction and can become linear (i.e., vacuous) when the spectral error exceeds $T^{1/3}$.\n\nThis paper makes progress on this front by introducing an algorithm termed *Dyadic Block Sketching (DBS)*---a multi-scale sketching framework that adaptively doubles the sketch size as learning progresses. The key claim is that this adaptive structure bounds the global covariance error by a user-specified parameter $\\varepsilon$, thereby ensuring sublinear regret independent of the spectral error. Empirical results on synthetic and real-world datasets (e.g., MNIST) support the theoretical analysis, showing improved regret–efficiency trade-offs compared to prior methods.",
"strengths": "The computational efficiency of linear bandits is an important and active area. Reducing per-round complexity while retaining sublinear regret is a significant theoretical question. The proposed framework makes a meaningful contribution toward that goal and the method is natural (adapting the sketch size). The paper is well written with detailed proof given in the appendix. The authors complement their theoretical work with experimental validation, which is an added bonus.",
"weaknesses": "While the claimed regret bound is independent of the spectral error, it still depends on other parameters such as the $\\ell_2$-norm of the feature vectors, and the sketch size of the active block $l_B$. This dependence implies that in certain regimes, the regret may still exhibit linear scaling. Providing a clearer exposition or characterization of when such linear growth arises would strengthen the paper. In particular, the paper would benefit from explicitly stating conditions under which their algorithm reverts to linear regret. This will lead to open questions that can be stated explicitly. Regarding exposition, for semi/non-experts, the presentation is dense and occasionally confusing. Terms like “streaming matrix” and “spectral tail” are introduced early without clear explanations. It is not clear what “streaming matrix” means in this context.",
"questions": "The problem you identified, obtaining a sublinear-regret algorithm for linear bandits with subquadratic per-round complexity, is very interesting to me. So it would be very helpful to readers if you could make a table of what is known (including your work) and to what extent this remains an open question. Adding a separate related-work section discussing this would also make the paper much nicer to read.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-04T02:48:55",
"modification_date": "2025-11-12T13:27:30",
"review_url": "https://openreview.net/forum?id=FKEHiHU4bN&noteId=k9YOMWQZaP",
"license": "CC BY 4.0"
},
{
"id": "2xCJqRhWXc",
"forum": "FKEHiHU4bN",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14906/Reviewer_pFPs",
"reviewer_name": "Reviewer_pFPs",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The authors first show that existing sketch-based linear bandit algorithms can suffer from linear regret. To address this issue, they propose a novel matrix sketching framework called Dyadic Block Sketching (DBS), which adaptively adjusts the sketch size in a multi-scale manner. By applying DBS to linear bandits, the authors achieve sublinear regret bounds. They further validate the effectiveness of the proposed method through experiments on both synthetic and real-world datasets.",
"strengths": "1. The motivation of this paper is clearly presented.\n2. The proposed methods are novel and interesting.\n3. Although I did not examine the proofs in the appendices in detail, the theoretical results appear convincing and reasonable.\n4. The writing is generally clear, though some parts require further clarification (see Weaknesses).",
"weaknesses": "1. In lines 372–379, the authors discuss choosing $\\epsilon$ based on the spectral properties of the data matrix. However, in linear bandit problems, it is usually unknown whether the data matrix is low-rank or has a heavy spectral tail. How should $\\epsilon$ be selected in practice? Moreover, in the experiments, how was the value of $\\epsilon$ determined?\n\n2. Some experimental results require further clarification:\n- In Figure 3(a), why does the regret of $\\epsilon=4$ outperform that with $\\epsilon=2$?\n- In Figure 3(d), when the sketch-based methods use the same amount of space as OFUL, their performance is still inferior to OFUL. The authors should provide a more detailed explanation for this result.\n\n3. Some statements in the paper are unclear or inaccurate:\n- In the abstract, the authors claim that “the sketch-based approaches reduce per-round complexity from Ω($d^2$) to O($d$),” which is not accurate. The computational complexity of matrix sketching depends on the sketch size. If the sketch size is $O(d)$, the overall complexity of sketch-based methods remains $O(d^2)$.\n- In line 66, the statement “thereby reducing the order of spectral error $\\Delta_T$ and decoupling it from $d$” is difficult to understand for readers unfamiliar with Chen et al. (2020). It is recommended that the authors present the regret bound from Chen et al. (2020) in the paper for clarity.\n- In the last line of page 3, the phrase “under certain conditions” is too vague. The authors should describe these conditions in more detail.\n- In the last line of page 5, the phrase “$\\tilde{X}$ the subset of rows approximated by inactive blocks” is confusing. Does it mean “$\\tilde{X}$ is the subset of rows consisting of inactive blocks”?\n- Algorithm 3 is commonly referred to as SCFD (Chen et al., 2020) rather than RFD (Luo et al., 2019). 
Note that the regularizer in SCFD sums the total mass of subtracted values during the FD procedure, whereas in RFD, it sums only half of the subtracted mass.",
"questions": "see Weaknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T17:04:09",
"modification_date": "2025-11-12T13:27:31",
"review_url": "https://openreview.net/forum?id=FKEHiHU4bN&noteId=2xCJqRhWXc",
"license": "CC BY 4.0"
},
{
"id": "aMHRJElmGq",
"forum": "FKEHiHU4bN",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14906/Reviewer_v1kF",
"reviewer_name": "Reviewer_v1kF",
"rating": 6,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper shows that existing sketch-based linear bandits can suffer linear regret when the data matrix has heavy spectral tails. The authors propose Dyadic Block Sketching (DBS), which adaptively adjusts sketch sizes to control global error. Applied to linear bandits, their DBSLinUCB algorithm achieves sublinear regret and better efficiency than prior methods.",
"strengths": "1. This paper provides a clear motivation by highlighting the pitfalls of previous studies, and the introduced multi-scale sketching approach is well-grounded. \n\n2. The algorithm and its analysis are presented clearly, with good intuition and easy-to-follow explanations. The numerical experiments use meaningful benchmarks, and the results, particularly in Figure 3(c), convincingly validate the claimed performance improvement.",
"weaknesses": "I did not find any major technical weaknesses in this paper. However, as a reader who is not very familiar with matrix sketching applications in bandits or online learning, I have several questions about the positioning of this work and the choice of benchmarks, as mentioned in the question part.",
"questions": "1. The discussion on the parameter choices in Remark 2 is insightful. I am wondering whether there is any approach to adaptively estimating l0, making the selection of parameters adaptive to the environment. \n\n2. I am not deeply familiar with the literature on matrix sketching for linear bandits, but I noticed that most baselines in this paper are from around five years ago (e.g., SOFUL [Kuzborskij et al., 2019], CBSCFD [Chen et al., 2020]). It would be helpful if the authors could comment on or compare with more recent works, such as Zhang et al. (2023) and Feinberg et al. (2023).\n\n3. In the related work section on multi-scale sketching, the authors explain that previous algorithms were developed for different purposes. I am curious whether there are shared ideas or overlapping design principles between those existing methods and the proposed Dyadic Block Sketching framework.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T19:36:47",
"modification_date": "2025-11-12T13:27:32",
"review_url": "https://openreview.net/forum?id=FKEHiHU4bN&noteId=aMHRJElmGq",
"license": "CC BY 4.0"
},
{
"id": "3AlKkmZkgb",
"forum": "FKEHiHU4bN",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14906/Reviewer_1de1",
"reviewer_name": "Reviewer_1de1",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper revisits efficient linear bandits using matrix sketching. Matrix sketching approximates the streaming matrix X to reduce update costs. Previous methods such as SOFUL lower the per-round complexity from \\Omega(d^2) to O(dl + l^2) by maintaining low-rank sketches, but their regret can become linear when the sketch size is too small or the spectrum decays slowly. The authors propose Dyadic Block Sketching (DBS), a new framework that adaptively adjusts the sketch size at multiple scales without prior knowledge of the data. Applied to linear bandits, this leads to the DBSLinUCB algorithm, which achieves sublinear regret under general conditions. Both theoretical analysis and experiments on synthetic and real-world datasets support its effectiveness.",
"strengths": "1. Theoretical guarantees: When an appropriate value of \\epsilon is chosen in advance, DBSLinUCB achieves the O(\\sqrt{T}) regret of OFUL (Theorem 3), which SOFUL cannot do without knowing spectral information. Even in the worst case, the algorithm attains an O(dk) update complexity (Corollary 1) while maintaining robust regret guarantees.\n\n2. Balanced trade-off: Regret can be controlled via fixed parameters (\\epsilon, l_0), while the update cost depends adaptively on the matrix rank k or Frobenius norm \\|\\tilde{X}\\|^2_F, achieving a balance between theory and efficiency.\n\n3. Flexibility: The DBS framework is modular and compatible with methods such as FD and RFD, supporting wide applications in online and bandit learning.\n\n4. Empirical results: Experiments on synthetic and real datasets show consistent improvements in regret and efficiency, confirming robustness under different settings.",
"weaknesses": "1. Incomplete theoretical coverage: The theoretical guarantees do not fully subsume SOFUL. When l < k and \\|\\tilde{X}\\|^2_F is unknown, the analysis cannot ensure an update cost of O(dl + l^2), and no choice of (\\epsilon, l_0) achieves SOFUL-level efficiency.\n\n2. Efficiency–regret trade-off: Achieving near-optimal O(d\\sqrt{T}) regret may lead to high update costs, especially when the data matrix has large rank or Frobenius norm. It would help to provide examples or evidence showing that in practice, k \\ll d or \\|\\tilde{X}\\|^2_F is small (e.g., constant or o(T^{1/3})).\n\n3. Lack of empirical context: In the MNIST experiments (Figure 5), the rank k of the dataset is not reported. Without this, the complexity comparison is hard to interpret—particularly when k is close to d, where DBSLinUCB may be much slower than SOFUL.",
"questions": "Please address the concerns raised in Weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T23:56:57",
"modification_date": "2025-11-12T13:27:33",
"review_url": "https://openreview.net/forum?id=FKEHiHU4bN&noteId=3AlKkmZkgb",
"license": "CC BY 4.0"
}
] |
LxkfjD81xB | https://openreview.net/forum?id=LxkfjD81xB | Mending synthetic data with MAPS: Model Agnostic Post-hoc Synthetic Data Refinement Framework | 3.5 | 3.75 | [
2,
4,
6,
2
] | [
4,
4,
3,
4
] | 4 | [
"Genertive modeling",
"Synthetic data",
"Post-hoc refinement",
"Privacy-Fidelity tradeoff"
] | Generating high-quality synthetic data with privacy protections remains a challenging ad-hoc process, requiring careful model design and training often tailored to the characteristics of a targeted dataset. We present MAPS, a model-agnostic post-hoc framework that improves synthetic data quality for any pre-trained generative model while ensuring sample-level privacy standards are met. Our two-stage approach first removes synthetic samples that violate privacy by being too close to real data, achieving 0-identifiability guarantees. Second, we employ importance weighting via a binary classifier to resample the remaining synthetic data according to estimated density ratios. We evaluate MAPS across two healthcare datasets (TCGA-metadata, GOSSIS-1-eICU-cardiovascular) and four generative models (TVAE, CTGAN, TabDiffusion, DGD), demonstrating significant improvements in fidelity and utility while maintaining privacy. Notably, MAPS achieves substantial improvements in fidelity metrics, with 40 out of 48 statistical tests demonstrating significant improvements in marginal distributional measures and notable enhancements in correlation structure preservation and joint distribution similarity. For example, Joint Jensen-Shannon Distance reduced from ranges of 0.7888-0.8278 to 0.5434-0.5961 on TCGA-metadata and 0.6192-0.7902 to 0.3633-0.4503 on GOSSIS-1-eICU-cardiovascular. Utility improvements are equally impressive, with classification F1 scores improving from ranges of 0.0866-0.2400 to 0.3043-0.3848 on TCGA-metadata and 0.1287-0.2085 to 0.2104-0.2497 on GOSSIS-1-eICU-cardiovascular across different model-dataset combinations. Additionally, uncertainty quantification analysis via split conformal prediction demonstrates that MAPS considerably improves calibration quality, reducing average prediction set sizes by 55-77\% while maintaining target coverage on TCGA-metadata. The code of this project is available at https://anonymous.4open.science/r/MAPS-EBF8. 
| MAPS refines synthetic data via identifiability filtering and importance-weighted resampling, improving fidelity and utility while ensuring 0-identifiability guarantees. | generative models | https://openreview.net/pdf?id=LxkfjD81xB | 2025-09-20T00:11:46 | 4 | [
{
"id": "IFtuqR5R3D",
"forum": "LxkfjD81xB",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19681/Reviewer_KA2J",
"reviewer_name": "Reviewer_KA2J",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 2,
"presentation": 2,
"summary": "The paper introduces a framework for refining synthetic tabular data to reduce the risk of privacy violation by removing samples that are too similar to real ones and re-sampling with a learned weight-function to improve data fidelity.",
"strengths": "- In general, the paper is well structured.\n- The two steps of first cleaning out potential copies from the original data and then re-balancing the data can be broadly applied across multiple domains.\n- The method is evaluated on synthetic data from a representative set of generative models.",
"weaknesses": "- No related work section comparing to prior work on improving synthetic data using filtering and sampling strategies. The following works might be relevant:\n\t- Alaa, Ahmed, et al. \"How faithful is your synthetic data? sample-level metrics for evaluating and auditing generative models.\" _International conference on machine learning_. PMLR, 2022.\n\t- Wang, Hao, et al. \"Post-processing private synthetic data for improving utility on selected measures.\" _Advances in Neural Information Processing Systems_ 36 (2023): 64139-64154.\n- Privacy is defined in terms of a distance function, where samples within a certain distance of the original samples are discarded. This could open up the risk of attacks where the identity can be recovered, e.g. in cases where a person has multiple entries associated with them. For large synthetic dataset sizes, it might also be possible to discover empty hyper-spheres in the synthetic data. The original data points could then possibly be detected by interpolating the centroid from the synthetic samples on the hyper-spheres surface.",
"questions": "- How does your framework relate to previous frameworks on refining synthetic data?\n- Could you elaborate on what identifiability protection is in the context of your problem statement? More specifically in the following part: \"Our objective is to refine $\\hat{\\mathcal{D}}$ to produce a subset $\\tilde{\\mathcal{D}} \\subset \\hat{\\mathcal{D}}$ of size $N$ that provides (1) identifiability protections with respect to $\\mathcal{D}$, [...]\". \n\t- What is the type of attack the method should protect against?\n\t- At what point do we consider an individual to be identified?\n\t- What if an individual's data is spread across multiple entries in the tables? \n- Is distance-based filtering sufficient to satisfy the privacy requirement?\n- Does the distance-based filtering approach create identifiable empty hyperspheres in the feature space? If so, could an adversary exploit these gaps to infer the approximate locations of original data points by analyzing the boundaries of the synthetic data distribution?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T18:26:38",
"modification_date": "2025-11-12T15:12:03",
"review_url": "https://openreview.net/forum?id=LxkfjD81xB¬eId=IFtuqR5R3D",
"license": "CC BY 4.0"
},
{
"id": "wlytACUpRu",
"forum": "LxkfjD81xB",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19681/Reviewer_i8fJ",
"reviewer_name": "Reviewer_i8fJ",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The authors propose a generator-agnostic, post-hoc refinement method to improve tabular synthetic data utility while helping to comply with a record-level privacy constraint. The method's first stage employs a privacy filter that enforces 0-identifiability by discarding synthetic samples that are closer to any real record than that record’s nearest real neighbor; the second stage is a fidelity sampler that trains a real-vs-synthetic binary classifier to estimate the density ratio. \n\nThis is a very interesting approach, especially in pairing a hard, nearest-neighbor privacy screen with a discriminative density-ratio surrogate followed by sampling-importance-resampling to curate a refined synthetic set without retraining the generator. The idea helps to solve a relevant problem where teams often cannot retrain heterogeneous tabular generators but do need a way to improve outputs for better downstream performance.",
"strengths": "I liked the idea of a generator-agnostic and the refine-after-generate pipeline. While each component is not new in the literature, the combination and the operational framing for tabular synthetic data are very useful. However, the authors should improve the discussion about novelty in the paper.\n\nThe authors offer an extensive evaluation across distribution metrics, correlation structure, utility (clustering/classification), uncertainty quantification, and privacy. The results show gains across four diverse generators.\n\nThe addressed problem is a real deployment gap because practitioners often inherit synthetic data from heterogeneous models and need a post-hoc way to improve them without retraining.",
"weaknesses": "Despite the results, to rise from “useful engineering” to “field-shaping”, the paper needs stronger evidence isolating adaptation versus decoding effects and broader comparisons against alternative post-hoc curation strategies.\n\nI am not sure I understand correctly, but if the refinement classifier and distance thresholds are fitted on the full real dataset while downstream models are evaluated on a split of that same dataset, the selection step has indirectly “seen” the test distribution, biasing utility metrics upward. Please make that point clear for the reader.\n\nIn my view, the experiments include insufficient baselines. For example, there is no head-to-head comparison against simple k-nearest-neighbor deduplication, discriminator rejection sampling, KDE/flow-based reweighting, or off-the-shelf curation in popular synthetic-data toolkits, making it hard to isolate where the gains actually come from. \n\nOne interesting addition to the paper would be reporting probability calibration or effective sample size under different flattening exponents.\n\nI think that a single global metric for mixed-type data risks over-penalizing legitimate rare patterns or under-penalizing high-variance numeric columns, which can distort both privacy and utility in ways the current analyses do not expose.\n\nPlease discuss the computational costs of generating 30N samples and training a sizable classifier. For example, runtime/memory vs. dimension and N.",
"questions": "I am concerned about leakage control. To avoid problems, the authors can re-run with a three-way split of real data: training (for the downstream oracle), calibration for MAPS (both the Stage-1 r_i and the Stage-2 c_ϕ), and a held-out test set used only for downstream evaluation.\n\nI was wondering about distance metrics. What about comparing Stage-1 using Gower distance for mixed types, Mahalanobis, DCR/NNDR thresholds, and a learned metric (e.g., via an autoencoder latent space)? Maybe that can help with the privacy–utility trade-off.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T05:37:31",
"modification_date": "2025-11-12T15:12:04",
"review_url": "https://openreview.net/forum?id=LxkfjD81xB¬eId=wlytACUpRu",
"license": "CC BY 4.0"
},
{
"id": "NHFvszo6wf",
"forum": "LxkfjD81xB",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19681/Reviewer_PzXw",
"reviewer_name": "Reviewer_PzXw",
"rating": 6,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper proposes MAPS, a two stage, model agnostic post hoc refinement for tabular synthetic data. Stage 1 enforces a sample level 0 identifiability privacy constraint by removing synthetic samples closer to a real point than that point’s nearest real neighbor, implemented with an unweighted feature metric, w = 1. Stage 2 improves fidelity through classifier based density ratio estimation and Sampling Importance Resampling, with importance weights derived from a discriminator, followed by a dataset specific flattening power $\\alpha$ to stabilize resampling. Experiments across TCGA metadata and GOSSIS eICU cardiovascular with four generators show large gains in marginal and joint fidelity, correlation structure, and downstream utility, plus strong reductions in Identifiability Score to zero. Privacy results include mixed changes in membership inference recall on TCGA for some generators, which the paper discusses as a privacy fidelity trade off.",
"strengths": "- A modular refinement pipeline that decouples privacy filtering from fidelity enhancement, applicable to diverse generators without retraining.\n- Solid empirical study with multiple fidelity metrics, utility under train on synthetic and test on real, uncertainty quantification with split conformal prediction, and several privacy probes.\n- Problem setup, equations for density ratio based weighting, and the SIR procedure are clearly laid out, with an algorithmic summary and reproducibility information.\n- Demonstrates a practical path to rescue weak synthetic tabular outputs, which is valuable for real world pipelines in health data and beyond.",
"weaknesses": "1. Stage 1 metric choice lacks statistical justification for mixed type data. The 0 identifiability guarantee is defined using a distance with w = 1 across features. For heterogeneous tabular data, unweighted norms can be dominated by scaling and encoding choices. A simple fix would be to evaluate at least one mixed data appropriate metric such as Gower distance and to report sensitivity of the Identifiability Score and the number of removed samples to the metric choice [2]. The paper should also discuss relative density based criteria such as nearest neighbor distance ratio or distance to closest record, which can better reflect risk in sparse regions than a single absolute threshold [4].\n2. Stage 2 relies on classifier based DRE, yet stability and bias are controlled with an ad hoc flattening exponent $\\alpha$ chosen per dataset, $\\alpha$ = 1.4 on TCGA and $\\alpha$ = 0.8 on GOSSIS, without a principled selection rule or sensitivity analysis. Classifier based DRE is known to suffer high variance when class separation is large. At minimum, provide an ablation of $\\alpha$ over a grid and report its effect on joint Jensen-Shannon distance, downstream F1, and conformal set size. Also justify CDRE versus alternatives like KLIEP or uLSIF that directly fit density ratios and can be more stable under shift [5, 6]. If CDRE is retained, consider early stopping and calibrated probabilities for the discriminator, and report diagnostics such as effective sample size under importance weights.\n3. Privacy interpretation requires more nuance. While Identifiability Score reaches zero across settings, some models on TCGA exhibit higher membership inference recall after refinement. This indicates that geometric privacy via nearest neighbor distance is not sufficient for statistical attacks that exploit higher fidelity to p(x). 
The paper should reconcile this by: clarifying the formal scope of 0 identifiability, adding density based privacy probes, and, if possible, tuning Stage 1 thresholds jointly with Stage 2 to chart a true utility privacy Pareto frontier [4].\n4. Relation to adjacent lines of work is thin. The selection discriminator and resampling resembles adversarial filtering and train reject reweight ideas. Position MAPS against such filtering frameworks and two sample testing with discriminators to make the contribution line crisper [7, 8]. The paper would also benefit from a discussion contrasting its focus on fidelity and privacy with fairness centric synthetic data work. For instance, SynthFair constructs semi synthetic imaging datasets with controllable confounders to study bias, which is complementary to MAPS. MAPS could be a pre step to high utility data on which post hoc fairness methods operate, but this interaction should be articulated and, if possible, briefly tested on a fairness proxy [1, 9].\n5. Statistical testing and calibration details could be strengthened. The paper reports paired t tests across runs but does not state checks for normality or effect sizes. Reporting confidence intervals and standardized effect sizes would make claims more robust. For conformal prediction, include the nominal coverage $\\alpha$ and show calibration curves or coverage versus set size plots to demonstrate that improvements are not due to parameter choices alone [10].\n\nReferences\n\n[1] Ribeiro FD, Claucich E, Stanley EA, Dimitrakopoulos P, Tsaftaris SA, Ferrante E, Glocker B, Echeveste R. SynthFair: A Semi-Synthetic Medical Imaging Dataset to Propel Research on Bias Detection & Mitigation. InNeurIPS 2025 AI for Science Workshop.\n\n[2] Gower JC. A general coefficient of similarity and some of its properties. Biometrics. 1971 Dec 1:857-71.\n\n[4] Elliot M, Mackey E, O'Hara K, Tudor C. 
The Anonymisation Decision Making Framework.\n\n[5] Sugiyama M, Nakajima S, Kashima H, Buenau P, Kawanabe M. Direct importance estimation with model selection and its application to covariate shift adaptation. Advances in neural information processing systems. 2007;20.\n\n[6] Sugiyama M, Yamada M, Von Buenau P, Suzuki T, Kanamori T, Kawanabe M. Direct density-ratio estimation with dimensionality reduction via least-squares hetero-distributional subspace search. Neural Networks. 2011 Mar 1;24(2):183-98.\n\n[7] Zellers R, Bisk Y, Schwartz R, Choi Y. Swag: A large-scale adversarial dataset for grounded commonsense inference. arXiv preprint arXiv:1808.05326. 2018 Aug 16.\n\n[8] Lopez-Paz D, Oquab M. Revisiting classifier two-sample tests. arXiv preprint arXiv:1610.06545. 2016 Oct 20. ICLR'17\n\n[9] Bellamy RK, Dey K, Hind M, Hoffman SC, Houde S, Kannan K, Lohia P, Martino J, Mehta S, Mojsilović A, Nagar S. AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias. IBM Journal of Research and Development. 2019 Sep 18;63(4/5):4-1.\n\n[10] Angelopoulos AN, Barber RF, Bates S. Theoretical foundations of conformal prediction. arXiv preprint arXiv:2411.11824. 2024 Nov 18.",
"questions": "1. What exact preprocessing and encoding are used to compute distances for mixed numerical and categorical variables, and why is w = 1 appropriate for both datasets? Please provide an ablation with a mixed data metric and report its impact on the number of filtered samples and Identifiability Score. \n2. Please sweep $\\alpha$ over a broad grid on both datasets and report JSD, F1, and average conformal set size, to demonstrate robustness and to justify the chosen values. \n3. Why CDRE over KLIEP or uLSIF for these regimes, especially when raw synthetic and real are highly separable?\n4. Can you quantify a Pareto curve between MIA recall and JSD by jointly varying Stage 1 strictness and Stage 2 $\\alpha$, to show trade offs rather than single operating points? \n5. Could you demonstrate that applying a simple post hoc fairness constraint to MAPS refined data leads to improved fairness utility trade offs compared to operating on raw synthetic or real data?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-26T02:35:54",
"modification_date": "2025-11-12T15:12:04",
"review_url": "https://openreview.net/forum?id=LxkfjD81xB&noteId=NHFvszo6wf",
"license": "CC BY 4.0"
},
{
"id": "IOblnWyeJW",
"forum": "LxkfjD81xB",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19681/Reviewer_VsXX",
"reviewer_name": "Reviewer_VsXX",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper introduces MAPS, a model-agnostic post hoc framework designed to enhance the quality of synthetic data while ensuring per-sample 0-identifiability of private samples through a two-step sample selection process. The authors present extensive comparisons between the raw generated synthetic data and the refined dataset, demonstrating improved dataset distribution and downstream task fine-tuning utility. They also evaluate resistance to membership inference attacks to assess the extent of privacy protection. However, several important baseline methods are missing, and the strength of the claimed privacy guarantees remains uncertain.",
"strengths": "1.\tExtensive experiments are included for comparing the refined dataset and the raw dataset from both distribution perspective and model training utility perspective.\n2.\tMIA is performed as a direct demonstration of the privacy protection extent.",
"weaknesses": "1.\tSince real data privacy is repeatedly mentioned in this paper, I am curious: why do you choose the 0-identifiability metric instead of the currently widely applied differential privacy as the privacy guarantee criterion for this work? What differential privacy guarantee does the proposed MAPS provide? My concern mainly arises from the experiments in section 4.4, where in some cases MIA gains better success using 0-identifiability (refined dataset) compared with using no protection at all (raw dataset).\n2.\tLack of baseline comparisons. Privacy-first methods are only mentioned in the introduction with a short assessment that they produce “synthetic data with notably degraded utility” without any statistical support in this paper or citation of previous papers. These methods are not compared in the experiments. How bad are these methods? If only the second selection step is applied to the generated samples of these methods, what level of utility does the refined dataset achieve? Will this dataset be more robust to MIAs or will it be less robust?\n3.\tThe latest generative model used in this paper is DGD, a model introduced in 2023, which is somewhat out of date. In recent years, another line of work for private dataset synthesis has emerged, namely Private Evolution (PE) (Differentially Private Synthetic Data via Foundation Model APIs 1: Images, ICLR 2024). I think this line of work should at least be mentioned. I would like to see the outcome of the combination of MAPS with PE or some reason why this should not be considered.",
"questions": "1.\tWhat’s the logic of bolding in Table 4? Some rows (e.g. Accuracy for DOMIAS) do not have a bolded number at all.\n2.\tWill the final refined dataset $\\tilde{\\mathcal{D}}$ contain repeated samples? The selection method given in lines 35 to 39 of Algorithm 1 is sampling with replacement. Why do you choose to do this? Would sampling without replacement be better, as it contains more distinct samples?\n3.\tSome citations are not used in a correct format, e.g. “(Grover et al. 2019)” should be something like “Grover et al. (2019)” in line 130.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-16T16:51:12",
"modification_date": "2025-11-12T15:12:05",
"review_url": "https://openreview.net/forum?id=LxkfjD81xB&noteId=IOblnWyeJW",
"license": "CC BY 4.0"
}
] |
dYaIotpCiK | https://openreview.net/forum?id=dYaIotpCiK | Self-Guided Plan Extraction for Instruction-Following Tasks with Goal-Conditional Reinforcement Learning | 4 | 3.5 | [
2,
6,
4,
4
] | [
4,
4,
3,
3
] | 4 | [
"Instruction Following; Reinforcement Learning; Multimodal RL"
] | We introduce a framework for instruction-following tasks. Unlike prior methods that rely on predefined subtasks, our approach enables a language model to generate and refine high-level plans through a self-learning mechanism, reducing the need for manual dataset annotation. The method involves iterative co-training: an RL agent is trained to follow the generated plans, while the language model adapts and modifies these plans based on RL feedback and preferences. This creates a feedback loop where both the agent and the planner improve jointly. We validate the framework in environments with rich dynamics and stochasticity. Results show that our agents adhere to instructions more strictly than baseline methods, while also demonstrating strong generalization to previously unseen instructions. | A self-improving framework couples language-model plan generation with reinforcement learning feedback to achieve robust, generalizable instruction following without predefined subtasks. | applications to robotics, autonomy, planning | https://openreview.net/pdf?id=dYaIotpCiK | 2025-09-20T17:50:58 | 4 | [
{
"id": "eaK9zHutdA",
"forum": "dYaIotpCiK",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24913/Reviewer_6zoq",
"reviewer_name": "Reviewer_6zoq",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper introduces \"SuperIgor,\" a framework for instruction-following in complex, partially observable environments. The method proposes an iterative co-training loop between a LLM planner and a RL agent. The LLM generates high-level plans , which the RL agent, trained with PPO, attempts to execute. The agent's execution success rate is then used to create a preference dataset , which fine-tunes the LLM planner via DPO. The authors claim this self-guided mechanism reduces the need for manual annotation. To handle sparse rewards, the paper also introduces the Skill Curriculum Learning method. Experiments on the CrafText benchmark show the method outperforms baselines and generalizes to unseen instructions.",
"strengths": "The paper tackles the challenging and highly relevant problem of instruction following in dynamic, sparse-reward environments where agents must execute long, complex plans. To solve this problem, the paper clearly identifies the sparse reward problem as a critical bottleneck. The proposed SCL is well-motivated. The ablation study in Figure 4 provides compelling evidence that this curriculum is not just helpful but essential for learning, as even an agent with Oracle plans fails to master more than a few basic skills without it.",
"weaknesses": "The paper's central contribution claim is critically undermined by its own methodology. The abstract and introduction explicitly frame the contribution \"in contrast to prior methods that depend on a fixed set of predefined subtasks.\" However, Sec. 4.1 describes a process that does exactly this. The method starts by \"build a subtask base by extracting and canonicalizing possible subtasks from the instruction dataset\" to create a \"unified vocabulary\" in a \"strict normalized format\". The LLM then generates plans \"in terms of the established subtask base\". This is a fixed set of predefined subtasks. The fact that it is generated from the training dataset rather than manually specified is a minor implementation detail, not the fundamental shift in approach that the paper claims. This contradiction is a major misrepresentation of the work's core contribution.\n\nBesides, the Core DPO Contribution Shows No Empirical Benefit. The paper's primary thesis is that the iterative alignment of the LLM planner via DPO (i.e., the \"self-guided\" feedback loop) is \"highly effective\". This claim is directly and conclusively contradicted by the paper's own results in Figure 3.\n- To isolate the effect of the DPO loop, one must compare SI-SFT (agent trained on SFT-tuned LLM plans) against SI-DPO (agent trained on DPO-tuned LLM plans) in the final \"Cycle 2.\"\n- On Combo CrafText Tasks (Fig 3b): SI-DPO achieves a 0.21 Success Rate. SI-SFT also achieves a 0.21 Success Rate. The DPO loop provides zero benefit.\n- On New Object CrafText Tasks (Fig 3c): SI-DPO achieves a ~0.17 Success Rate. SI-SFT also achieves a ~0.17 Success Rate. \n\nGiven the above points, the paper's strong performance over baselines is almost entirely explained by (a) using plan-based supervision and (b) the Skill Curriculum Learning. The ablation in Figure 4 is the strongest result in the paper, showing SCL is the key enabler. 
The paper should have been framed around this curriculum, which is critical, rather than the DPO loop, which is empirically useless.\n\nThe method uses the RL agent's overall success rate as a preference signal for DPO. This is an exceptionally noisy and unreliable signal. The paper even admits this in its own limitations (Section I), stating, \"it is difficult to determine whether the failure stems from a flawed plan... or from inadequately trained policy\". This is not a minor limitation; it is the central research challenge of this paradigm, and the paper offers no solution. Using DPO on such a high-variance, ambiguous signal is unsound. The fact that it didn't work (per Weakness #2) is therefore unsurprising.",
"questions": "Please refer to the weakness part above.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-09T21:28:27",
"modification_date": "2025-11-12T18:27:22",
"review_url": "https://openreview.net/forum?id=dYaIotpCiK&noteId=eaK9zHutdA",
"license": "CC BY 4.0"
},
{
"id": "ZfOegsWUN9",
"forum": "dYaIotpCiK",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24913/Reviewer_WwGd",
"reviewer_name": "Reviewer_WwGd",
"rating": 6,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 4,
"summary": "This paper presents a hierarchical framework for language-guided agents. A planning module (a VLM) first produces a high-level plan, which is then executed by an RL agent. The key idea is that the system does not require annotated plans or a predefined skill library: instead, it generates multiple candidate plans zero-shot at the start of training and then evaluates and refines them during training. The agent’s success provides a preference signal for refining the plan generator using DPO. They also propose a curriculum learning method for skill learning, only training with plans that contain at most one skill that has not already been mastered.",
"strengths": "This paper is clearly written and compares against strong baselines (e.g., goal-conditioned PPO, plan-conditioned PPO). They demonstrate that this hierarchical approach allows their method to generalize combinatorially to unseen goals. I found the experiments detailing the benefits of the skill curriculum very clear (Figure 4).",
"weaknesses": "My main concerns are as follows:\n* The paper argues that requiring a predefined set of skills is restrictive. However, the proposed approach still fixes a set of skills at the start of training, derived via prompting, and does not modify this set during training. I think a comparison with previous work mentioned in the paper, like SayCan, which has fixed sets of skills, could therefore be apt (by using the same skills derived via prompting). \n* The paper claims robustness to stochastic environments, but it is not clearly demonstrated in the experimental section how CrafText is stochastic, or how the effects of stochasticity manifest.\n* Figure 3 lacks confidence intervals or the number of seeds, which makes it difficult to assess statistical significance.\n* The prompt for plan generation seems to contain an in-context example that has 3 distinct skills that would be enough to accomplish most of the CrafText tasks. What happens if, instead of defining 3 skills for your task, you use an example from a different environment with skills that are not directly applicable to your environment?",
"questions": "* How sensitive during policy extraction is the hyperparameter for the success rate to count a skill as learned? Would this bias the reward for DPO to have as few skills as possible, as the more skills in a plan, the longer it takes during training to be fully trained on?\n* What is the difference between SI-DPO and SI-SFT? There doesn’t seem to be that large a gap within the same cycle\n* What do you think is the main reason that you are not able to match the performance of the Oracle plans? The oracle performance increases from cycle 1 to cycle 2. How many steps would it take for it to converge?\n* Can you provide examples of the paraphrasing for the OOD evaluations?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-07T04:52:23",
"modification_date": "2025-11-12T18:27:22",
"review_url": "https://openreview.net/forum?id=dYaIotpCiK&noteId=ZfOegsWUN9",
"license": "CC BY 4.0"
},
{
"id": "frfVMqtn8z",
"forum": "dYaIotpCiK",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24913/Reviewer_1mJd",
"reviewer_name": "Reviewer_1mJd",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper addresses instruction-following tasks by integrating plan generation with instruction decomposition. The proposed framework enables iterative plan refinement through co-evolution between plan generation and execution modules without manual annotation. Experimental results demonstrate the effectiveness and generalizability of the method.",
"strengths": "Originality\n\nThe paper introduces a self-supervised learning paradigm for instruction-following tasks that reduces dependency on manually annotated plan datasets. While LLM-RL integration is prevalent in the field, the paper makes a contribution by articulating the plan generation process with sufficient technical depth and providing an analysis of the iterative refinement between language models and RL agents.\n\nClarity\n\nThe method and experiment setups are well-structured. The research questions are explicitly stated, and the experimental design addresses distinct aspects, including effectiveness, generalization, training dynamics under sparse feedback, and performance evolution across iterative cycles. The appendices provide algorithmic specifications and implementation details that enhance reproducibility.\n\nSignificance\n\nThe experimental results are convincing, demonstrating quantifiable improvements over baselines and meaningful ablation studies that substantiate the necessity of key components.",
"weaknesses": "Despite the paper's contributions, several aspects require clarification to strengthen the scientific rigor and reproducibility.\n\n- The research questions contain vague descriptions and lack operational definitions in the paper. For RQ1, the paper evaluates \"generalization\" by testing on compositionally novel instructions (New Objects) and paraphrased formulations, while \"effectiveness\" is not explicitly operationalized. For RQ2, does \"well\" pertain to final performance or learning efficiency? Provide concrete metrics and align the terminology in the RQs to establish coherent connections between questions and experimental protocols.\n\n- The paper describes building a \"subtask base by extracting and canonicalizing possible subtasks from the instruction dataset\" to create \"a unified vocabulary.\" Your method extracts subtasks from instructions, while prior work defines them directly. Is there any difference between previous works and yours?\n\n- The model settings and data representation in this paper are somewhat confusing. How do you parse the LLM-generated plan in natural language into PPO? What is the format of the feedback used for fine-tuning? What are the actual inputs and outputs of the policy? These technical details could be elaborated further.",
"questions": "- The paper only demonstrates solving EASY dataset. Have you considered solving MEDIUM and HARD datasets? And how is the result?\n\n- The paper mentions in the introduction that “a language model first decomposes an instruction into a structured sequence of actions,” but I did not find any further discussion of this.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T02:25:31",
"modification_date": "2025-11-12T18:27:22",
"review_url": "https://openreview.net/forum?id=dYaIotpCiK&noteId=frfVMqtn8z",
"license": "CC BY 4.0"
},
{
"id": "qlj1MKQvec",
"forum": "dYaIotpCiK",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24913/Reviewer_UGFb",
"reviewer_name": "Reviewer_UGFb",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The authors introduce SuperIgor, a framework designed for instruction-following tasks. Prior research has addressed complex instructions by predefining subtasks that agents can execute and then decomposing language instructions at the subtask level to solve them. In contrast, SuperIgor employs iterative co-training, where the RL agent follows generated plans, and the LLM adapts and refines those plans based on feedback from the RL agent. Experiments demonstrate superior performance relative to baselines.",
"strengths": "Unlike the majority of studies that treat LLMs as APIs detached from action-executing agents, SuperIgor's approach of optimizing the LLM through feedback from the RL agent represents a key differentiator from existing work.",
"weaknesses": "While the proposed method appears innovative, the experiments fall short in substantiating its novelty. The authors did not incorporate baselines [1] and [2], which require a predefined \"set of possible subtasks,\" as comparisons. Instead, the baselines seem to rely on raw instructions or plans generated by GPT-4. This setup suggests that the primary distinction from baselines may lie not in the claimed benefits of modifying the LLM via RL feedback, but rather in the use of predefined possible subtasks. Although the appendix illustrates how plans evolve during LLM finetuning, the marginal difference in success rates between SI-DPO and SI-SFT raises questions about the true impact of LLM finetuning. \n\n[1] Zhang, Jingwei, et al. \"Game On: Towards Language Models as RL Experimenters.\" *arXiv preprint arXiv:2409.03402* (2024).\n\n[2] Ahn, Michael, et al. \"Do as i can, not as i say: Grounding language in robotic affordances.\" *arXiv preprint arXiv:2204.01691* (2022).",
"questions": "1. The choice of the PPO algorithm for the RL agent is intriguing. What motivated this selection? Additionally, unlike SayCan, which trains individual policies for each skill, the framework appears to enable a single policy to handle multiple skills. Were there any limitations encountered when training with PPO to perform diverse skills?\n2. Success rate was used to assess 'skill mastery' and trigger 'LLM finetuning.' Relying solely on success rate might result in suboptimal skills being learned. Is there a specific reason for not incorporating metrics like reward or value?\n3. As highlighted in the Weakness section, including baselines similar to [1] and [2] would more robustly support the paper's claims.\n4. The paper specifies the use of Qwen2.5-14B-Instruct for the LLM. What was the rationale behind this choice, and does the selection of different LLMs influence the results? \n\n[1] Zhang, Jingwei, et al. \"Game On: Towards Language Models as RL Experimenters.\" *arXiv preprint arXiv:2409.03402* (2024).\n\n[2] Ahn, Michael, et al. \"Do as i can, not as i say: Grounding language in robotic affordances.\" *arXiv preprint arXiv:2204.01691* (2022).",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T18:25:40",
"modification_date": "2025-11-12T18:27:22",
"review_url": "https://openreview.net/forum?id=dYaIotpCiK&noteId=qlj1MKQvec",
"license": "CC BY 4.0"
}
] |
Fn2rSOnpNf | https://openreview.net/forum?id=Fn2rSOnpNf | SlotGCG: Exploiting the Positional Vulnerability in LLMs for Jailbreak Attacks | 5 | 3.5 | [
4,
4,
6,
6
] | [
4,
3,
4,
3
] | 4 | [
"LLM",
"Jailbreak",
"Adversarial Attack",
"Safe AI"
] | As large language models (LLMs) are widely deployed, identifying their vulnerability through jailbreak attacks becomes increasingly critical. Optimization-based attacks like Greedy Coordinate Gradient (GCG) have focused on inserting adversarial tokens to the end of prompts. However, GCG restricts adversarial tokens to a fixed insertion point (typically the prompt suffix), leaving the effect of inserting tokens at other positions unexplored. In this paper, we empirically investigate slots, i.e., candidate positions within a prompt where tokens can be inserted. We find that vulnerability to jailbreaking is highly related to the selection of the slots. Based on these findings, we introduce the Vulnerable Slot Score (VSS) to quantify the positional vulnerability to jailbreaking. We then propose SlotGCG, which evaluates all slots with VSS, selects the most vulnerable slots for insertion, and runs a targeted optimization attack at those slots. Our approach provides a position-search mechanism that is attack-agnostic and can be plugged into any optimization-based attack, adding only 200ms of preprocessing time. Experiments across multiple models demonstrate that SlotGCG significantly outperforms existing methods. Specifically, it achieves 14% higher Attack Success Rates (ASR) over GCG-based attacks, converges faster, and shows superior robustness against defense methods with 42% higher ASR than baseline approaches. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=Fn2rSOnpNf | 2025-09-18T14:58:38 | 4 | [
{
"id": "Ge5elwQBYJ",
"forum": "Fn2rSOnpNf",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10655/Reviewer_ELoX",
"reviewer_name": "Reviewer_ELoX",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper introduces SlotGCG, a novel extension of gradient-based jailbreak optimization that explicitly models positional vulnerability in prompts. The key idea is to identify slots—token-level positions that are more susceptible to adversarial perturbation—using an attention-derived Vulnerable Slot Score (VSS). The method first probes each slot’s sensitivity, then assigns probabilistic insertion weights and integrates them into the GCG optimization loop. Experiments across multiple open-weight LLMs and defenses demonstrate that SlotGCG improves attack success rate, convergence efficiency, and robustness against defense mechanisms, while remaining lightweight and compatible with existing frameworks.",
"strengths": "The paper connects positional token vulnerability with optimization-based jailbreaks, introducing VSS as a quantifiable measure of slot sensitivity. The slot-probing stage is lightweight and can be easily integrated into other attack pipelines, enhancing general applicability.",
"weaknesses": "1. Tokenizer dependence – As slots are token-based, specify the tokenizer used and discuss whether different tokenizers could affect slot boundaries or results.\n\n2. Optimality of Step 3 formula – It is unclear whether the slot-selection formula is optimal. Would selecting top-k slots and renormalizing yield different outcomes? Clarify if this is a tunable hyperparameter and analyze its effect on ASR and prompt coherence.\n\n3. Defense and baselines – The defense side lacks diversity and novelty. The chosen baselines and target models are relatively standard and dated. Including stronger or more recent defense baselines (e.g., [1] [2]) would strengthen the experimental credibility.\n\n4. Limited contribution – The method builds on optimization-based jailbreak attacks (e.g., GCG), yet its improvements appear easily neutralized by simple defense strategies. This raises the question of why such an optimization-based formulation is chosen in the first place. If the approach can be trivially mitigated, the paper should clarify what fundamental insight or practical benefit this “slot vulnerability” perspective contributes beyond existing optimization-based jailbreak methods.\n\n5. Target model and PPL results – Sec. 5.3 does not specify the target model, and the statement that “PPL mitigation is moderate” seems inconsistent with near-zero results. Please clarify both.\n\n6. Unclear notation – In Step 3 of Sec. 4, the variables fsi and S* are undefined. Add explicit notation or a brief symbol explanation for clarity.\n\n7. Minor textual error – Line 213 should mention three prompts instead of four. Please verify and correct.\n\n8. Slot normalization – In Sec. 3.1, slot indices are normalized by the longest prompt in the batch, which likely prevents values near 1.0. 
The motivation and comparison with per-prompt normalization should be clarified.\n\n[1] Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks\n\n[2] SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding",
"questions": "see above",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-04T00:20:02",
"modification_date": "2025-11-12T12:31:47",
"review_url": "https://openreview.net/forum?id=Fn2rSOnpNf&noteId=Ge5elwQBYJ",
"license": "CC BY 4.0"
},
{
"id": "0oomGIWqeP",
"forum": "Fn2rSOnpNf",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10655/Reviewer_rRJL",
"reviewer_name": "Reviewer_rRJL",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "The paper introduces SlotGCG, a positional variant of GCG that exploits positional vulnerability in LLMs. Instead of appending adversarial tokens as a suffix, SlotGCG identifies vulnerable token slots within the prompt using a lightweight Vulnerable Slot Score (VSS) derived from attention patterns, then inserts and optimizes attack tokens at those positions. The method is attack-agnostic and can be used as a plug-in front end to multiple GCG-style optimizers with minimal extra overhead. Experiments on several open-source models report higher ASR, faster convergence, and improved robustness under certain defenses, with success judged via automatic and human checks.",
"strengths": "1. Novelty\n\nReframes jailbreak optimization from “suffix-only” to positional attacks by identifying vulnerable token slots via a lightweight attention-derived score (VSS) and inserting/optimizing adversarial tokens at those positions.\n\n\n2. Method is general, plug-and-play, and more efficient\n\n\nAttack-agnostic front end that can be attached to multiple GCG-based optimizers with minimal overhead.\nResults show faster convergence/fewer steps and higher ASR than standard suffix-only pipelines under comparable budgets.\n\n3. Good experimental coverage\n\nEvaluates across several commonly used open-source instruction models (e.g., Llama, Mistral, Vicuna, Qwen).\nAdapts to multiple GCG-based attack variants and compares under several defenses, demonstrating consistent gains.",
"weaknesses": "1. Threat model and usability boundaries\n\nThe core VSS metric depends on attention weights (upper-half layers from the after-chat template to adversarial tokens), which are typically unavailable in black-box/closed models. The paper does not clarify applicability in strict black-box settings or provide a surrogate attack choice.\n\n2. Transferability is underexplored\n - Cross-model transfer: Do attack prompts found on one model transfer to other models without further optimization (zero-shot transfer)?\n - Seed sensitivity: How does optimization vary with different random seeds (initial tokens, sampling orders)? \n - Context/system-prompt robustness: For the same target model, does changing the system prompt or different context affect ASR?\n\n3. Recency of attack targets\n\nExperiments focus on open-source instruction models (the newest being Qwen-2.5). There is no demonstration on newer/stronger/closed-source LLMs, limiting external validity.\n\n4. Hyperparameter choices lack justification\n\nThe effects of temperature in VSS, the precise definition of “upper-half layers,” and the impact of different after-chat template tokens are not detailed and analyzed. It remains unclear how sensitive VSS and final ASR are to these design choices.\n\n5. Confusion in section THE ROBUSTNESS OF SLOTGCG UNDER DEFENSE METHODS\n\nPerplexity Filter yields 0 ASR for all attack variants, yet the paper claims “Erase-and-Check yields the largest reduction in ASR.” These two statements appear inconsistent.\n\nThe paper attributes some failures to GPT-4 misclassification due to biases in the GPT-based filtering mechanism, but overall ASR is still measured by the same GPT-based judge. This creates a tension: if the judge is unreliable for filtering, why is it reliable for final success labels?\n\n6. 
Motivation and definition of VSS are hard to follow\n\nFigure 4 is used to motivate “developing a metric,” but VSS has not yet been defined at that point, making the figure difficult to interpret on first read.",
"questions": "1. White-box assumptions and transferability\n\n- Is SlotGCG a pure white-box attack (requiring attention weights) during both scoring and optimization?\n\n- If so, can the resulting adversarial prompts transfer to other models without further optimization (zero-shot cross-model transfer)? \n\n2. Effectiveness against deployed guardrails\n\nCan you please evaluate SlotGCG against current guardrails (e.g., Llama Guard or similar safety classifiers/filters)?\n\n3. Which VSS is shown in Figures 4 and 8?\n\nIn Figures 4 and 8, $\\text{VSS}^{\\text{final}}$ represents the VSS of which slot?\n\n4. Random Multi-Position Insertion in Figure 5\n\nWhat is the exact algorithm for **Random Multi-Position Insertion**? Why does random slot insertion without token optimization achieve faster convergence than GCG?\n\n5. Ablation on only insertion and token allocation via VSS\n\nCan you provide results for **VSS-based slot insertion only** (no token optimization), and compare them with **GCG-only token optimization** (no VSS-based slotting)? An ablation contrasting these two against the full SlotGCG would clarify each component’s contribution.\n\n6. Effect of the **token budget (m)**\n\nHow does the **token budget \\(m\\)** affect SlotGCG’s **ASR**, convergence speed, and stability? Please include curves or tables showing performance as \\(m\\) varies.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T18:19:30",
"modification_date": "2025-11-12T12:31:47",
"review_url": "https://openreview.net/forum?id=Fn2rSOnpNf&noteId=0oomGIWqeP",
"license": "CC BY 4.0"
},
{
"id": "CnwvSwJFnv",
"forum": "Fn2rSOnpNf",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10655/Reviewer_99aE",
"reviewer_name": "Reviewer_99aE",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "This paper argues that jailbreaking susceptibility depends strongly on where adversarial tokens are inserted. It introduces a Vulnerable Slot Score (VSS) to rank token positions by “positional vulnerability,” and proposes SlotGCG, which allocates/optimizes adversarial tokens at high-VSS slots rather than only at the suffix. Across Llama-2/3-8B, Mistral-7B, Vicuna-7B, and Qwen-2.5, SlotGCG reportedly improves attack success rate (ASR) over several GCG-family baselines, converges in fewer iterations, and remains more effective against several input-filtering defenses.",
"strengths": "* The paper presents a clear and original problem framing, defining insertion slots, formalizing the Vulnerable Slot Score (VSS), and linking it to attention patterns to show that positional vulnerability is largely prompt dependent.\n* The method is attack-agnostic and simple, with a clear step-by-step presentation. The analysis is insightful, using random multi-position insertion and attention heatmaps that convincingly support the positional hypothesis.\n* The results show significant empirical gains, with large performance improvements and meaningful reductions in the number of optimization steps needed for success.\n* The attack appears more robust than prior methods against several defenses.\n* The writing is clear, well-structured, and easy to follow.",
"weaknesses": "1. The paper lacks an analysis of transferability across models. It remains unclear whether positional vulnerabilities are model-specific or primarily prompt-dependent. Evaluating SlotGCG as a black-box attack would provide valuable insight into this question.\n2. The evaluation is limited to AdvBench, while several newer jailbreak or safety datasets now exist [1-3]. Including additional benchmarks would strengthen the empirical claims and demonstrate broader robustness.\n3. The method is only tested within the GCG family. Other optimization-based attacks exist, and it is unclear whether the proposed position-finding process generalizes to them. Since the abstract claims applicability to “any” attack, evidence from beyond GCG is needed.\n4. The defense selection is weak. The Erase-and-Check (suffix) version is expected to perform poorly when there is no suffix, as it effectively just deletes the response. Evaluating only the SmoothLLM swap defense is also insufficient; since the attack produces more uniform attention maps, token swapping may be less effective. Other SmoothLLM variants (insert, patch) and stronger recent defenses [4] should be tested for a fair assessment. \n5. The reported preprocessing cost of “+200 ms” is not empirically demonstrated or discussed in detail. The paper should clarify how this value was obtained.\n6. Although the calculation of the Vulnerable Slot Score (VSS) is novel, the general idea that attack performance depends on token position has been explored to some extent in prior work [5-7]. These earlier studies should be acknowledged and discussed.\n\n***Minor remarks:***\n1. Table 5 presents very strong and important results, so it would be better placed in the main text rather than the appendix to highlight its contribution more clearly.\n2. Line 472 refers to “Table 3” for the VSS distribution, but this table is not related. 
This reference should be corrected.\n\n[1] Zeng, Y., Shen, T., Ding, Y., Zheng, L., Sun, Y., & Chen, H. (2024). JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models\n\n[2] Mazeika, M., Wei, A., Casper, S., Rafailov, R., Dragan, A. D., Finn, C., & Hadfield-Menell, D. (2024). HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal\n\n[3] Xu, W., Wang, X., Zhang, Z., & Li, M. (2023). ToxicChat: Unveiling Hidden Challenges of Toxicity Detection in Real-World User-AI Conversation\n\n[4] Yi, S., Liu, Y., Sun, Z., Cong, T., He, X., Song, J., Xu, K., & Li, Q. (2024). Jailbreak Attacks and Defenses Against Large Language Models: A Survey\n\n[5] Wang, J., Li, H., Peng, H., Zeng, Z., Wang, Z., Du, H., & Yu, Z. (2025). Activation-Guided Local Editing for Jailbreaking Attacks.\n\n[6] Mu, J., Ying, Z., Fan, Z., Jing, Z., Zhang, Y., Yu, Z., & Zhang, X. (2025). Mask-GCG: Are All Tokens in Adversarial Suffixes Necessary for Jailbreak Attacks?\n\n[7] Rocamora, E., Dubey, A., Jauhri, A., Pandey, A., Letman, A., Mathur, A., & Vaughan, A. (2024). Revisiting Character-Level Adversarial Attacks for Large Language Models.",
"questions": "1. Are token budgets (total adversarial tokens) matched across baselines in Table 1?\n2. How sensitive are results to VSS temperature, and number of slots selected?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T06:48:28",
"modification_date": "2025-11-12T12:31:48",
"review_url": "https://openreview.net/forum?id=Fn2rSOnpNf&noteId=CnwvSwJFnv",
"license": "CC BY 4.0"
},
{
"id": "hvcbneD3Ff",
"forum": "Fn2rSOnpNf",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10655/Reviewer_dH2T",
"reviewer_name": "Reviewer_dH2T",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes SlotGCG, which extends GCG jailbreak attacks by inserting adversarial tokens at multiple vulnerable positions throughout prompts rather than only at the suffix. The method uses a Vulnerable Slot Score based on attention patterns to identify optimal insertion positions. Experiments on 6 LLMs show average 14% ASR improvement, faster convergence, and 42% higher robustness against defenses.",
"strengths": "1. Well-motivated problem: The systematic exploration of positional vulnerability is underexplored. \n\n2. Comprehensive empirical validation: Testing across 6 models × 4 attack variants × 4 defenses with consistent improvements demonstrates robustness of the approach.\n\n3. Practical efficiency: The method adds only 200ms preprocessing but achieves up to 10× faster convergence, making it immediately deployable as a drop-in enhancement to existing GCG-based methods.",
"weaknesses": "1. SlotGCG shows no improvement, or even degradation, on Mistral-7B and Vicuna-7B in Table 1, but the paper provides no analysis of why positional vulnerability varies across architectures. This limits understanding of when the method applies.\n\n2. The observation that defenses can increase ASR due to GPT-4 filtering during optimization suggests the evaluation methodology itself may be problematic, undermining confidence in the reported improvements.\n\n3. Some hyperparameters lack justification, e.g., why temperature T=8? What happens with other layer selections or temperatures?",
"questions": "1. Can you characterize what architectural or training differences cause SlotGCG to fail on Mistral/Vicuna?\n\n2. What is the performance with (1) only lower layers, (2) only upper layers, (3) random layer selection?\n\n3. AutoDAN also uses flexible token placement. How does SlotGCG compare in effectiveness and efficiency?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T19:53:26",
"modification_date": "2025-11-12T12:31:48",
"review_url": "https://openreview.net/forum?id=Fn2rSOnpNf&noteId=hvcbneD3Ff",
"license": "CC BY 4.0"
}
] | |
0TkMmzQdwd | https://openreview.net/forum?id=0TkMmzQdwd | RETIMECAUSAL: A CONSISTENT EM FRAMEWORK FOR CAUSAL DISCOVERY IN IRREGULAR TIME SERIES | 5.333333 | 3.666667 | [
6,
8,
2
] | [
3,
4,
4
] | 3 | [
"Causal Discovery",
"Expectation-Maximization (EM)",
"Additive Noise Model (ANM)",
"Irregular Sampling",
"Time Series"
] | This paper studies causal discovery in irregularly sampled time series—a pivotal challenge in high-stakes domains like finance, healthcare, and climate science, where missing data and inconsistent sampling frequencies distort causal mechanisms. The core challenge arises from the interdependence between missing data imputation and causal structure recovery: an error in either component can cascade into the other, ultimately distorting the inferred causal graph. Existing methods either impute first and then discover, or jointly optimize both via neural representation learning, but lack explicit mechanisms to ensure mutual consistency of imputation and structure learning. We address this challenge with ReTimeCausal, an EM-based framework that alternates between imputation and structure learning, promoting structural consistency throughout the optimization process. Our framework emphasizes theoretical consistency guarantees for structure recovery, extending classical results to settings with irregular sampling and high missingness. Through kernelized sparse regression and structural constraints, ReTimeCausal iteratively refines missing values (E-step) and causal graphs (M-step), resolving cross-frequency dependencies and missing data issues. Extensive experiments on synthetic and real-world datasets demonstrate that ReTimeCausal outperforms existing state-of-the-art methods under challenging irregular sampling and missing data conditions. | ReTimeCausal is a robust method for causal discovery in multivariate time series with missing and irregular data, employing an EM-style framework grounded in Additive Noise Models to ensure accurate structure recovery across varying conditions. | causal reasoning | https://openreview.net/pdf?id=0TkMmzQdwd | 2025-09-12T18:46:58 | 3 | [
{
"id": "P8qLdi8QKU",
"forum": "0TkMmzQdwd",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4401/Reviewer_eYxy",
"reviewer_name": "Reviewer_eYxy",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces ReTimeCausal, an EM-based framework for causal discovery in multivariate time series with irregular sampling and high data missingness. By alternating between smart imputation and causal graph estimation, the method aims to ensure mutual consistency between the imputed values and the learned causal structure. The authors provide theoretical guarantees of consistency for structure recovery under a set of explicit assumptions. Experiments on synthetic and real-world datasets demonstrate improved performance over classical and recent baselines for both structural accuracy and robustness.",
"strengths": "1. The EM-based approach is an intuitive solution to the often glossed over problem of missing data in prior work.\n2. The theoretical results on structure consistency are justified and rigorous.\n3. The kernelized sparse reg approach allows for nonlinear dynamics as well as linear.\n4. Good recent baselines in experiments.",
"weaknesses": "1. Although multiple real-world datasets are tested, the primary evaluation still leans heavily on synthetic data with relatively small- to medium-scale graphs (10–50 variables). The main real-world benchmark (CausalRivers) comprises a 10-node subgraph, and other datasets (NetSim, CausalTime) are also moderate in scale. Absence of large-scale, high-noise, and highly nonstationary real-world benchmarks undercuts the claimed generalizability.\n2. Theoretical results hinge on a strong modeling assumption: MCAR/MAR missingness (Assumption 3.2). Many domains of practical interest, especially healthcare and finance, often involve MNAR data, latent confounding, or nonstationary relationships. While the authors acknowledge this (Appendix A.5, A.10), the framework offers no empirical or algorithmic pathway for relaxing these beyond a brief mention as future work. \n\n### Minor comments:\n3. You mention Assumption 3.1 in lines 145 and 357 but there is no such assumption in the text.",
"questions": "1. Can the EM pipeline be extended to non-ignorable missingness (e.g., selection/Heckman-style or propensity-aware E-steps)?\n2. What is the time complexity of ReTimeCausal?\n3. How robust is the approach to severe nonstationarity and rapidly shifting lag structures?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T17:39:04",
"modification_date": "2025-11-12T11:15:42",
"review_url": "https://openreview.net/forum?id=0TkMmzQdwd&noteId=P8qLdi8QKU",
"license": "CC BY 4.0"
},
{
"id": "9WcS8JmHA9",
"forum": "0TkMmzQdwd",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4401/Reviewer_CgG6",
"reviewer_name": "Reviewer_CgG6",
"rating": 8,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper addresses causal discovery when multivariate time series are irregularly sampled and contain substantial missingness. Standard interpolation or alignment steps often distort causal order and create spurious dependencies.\nThe authors propose ReTimeCausal, an Expectation–Maximization (EM) style framework that alternates between\n\n1. a structure-aware imputation step that estimates missing values based on conditional expectations under additive-noise models, and\n\n2. a structure-learning step that performs kernelized sparse regression with projection to input space and thresholding to infer lagged causal graphs.\n\n\nA noise-aware imputation mechanism injects residual noise to preserve independence assumptions used for pruning. Theoretical analysis (Proposition 1) establishes structural consistency under standard assumptions — MCAR/MAR missingness, finite-order Markovity, causal sufficiency, and faithfulness — given appropriate smoothing and threshold schedules.\nExperiments on synthetic linear and nonlinear systems and the CausalRivers dataset show that ReTimeCausal maintains high F1-scores even with 60–80 % missingness and outperforms baselines such as PCMCI, DYNOTEARS, and CUTS+ (with TimeMixer preprocessing). On CausalRivers, it achieves the best reported F1 = 0.463 (MR = 0.2) and 0.414 (MR = 0.6).",
"strengths": "- Principled EM alternation: directly enforces consistency between imputation and structure learning.\n- Kernelized sparse regression: captures nonlinear lagged effects and projects to interpretable input-space graphs.\n- Noise-aware imputation: preserves independence assumptions for accurate CAM-style pruning.\n- Theoretical support: provides asymptotic structural consistency (Proposition 1) under clear conditions.\n- Empirical robustness: strong performance across missingness levels; best F1 on CausalRivers with competitive SID/SHD.",
"weaknesses": "- Assumption scope: relies on MCAR/MAR and causal sufficiency; MNAR and latent confounding remain open.\n- Hyperparameter sensitivity: thresholds (γ, β) and smoothing (α) impact performance but lack systematic tuning guidance.\n- Computational profile: runtime and memory costs for kernel features plus EM iterations are not detailed.\n- Evaluation scale: real-world validation limited to a 10-node CausalRivers subgraph; larger datasets would improve external validity.",
"questions": "1. How sensitive are the results to thresholds (γ, β) and smoothing (α)? Any principled selection strategy?\n2. Can you provide empirical convergence diagnostics (E/M objective traces or iteration counts)?\n3. How does the method behave under mild MNAR violations where missingness depends weakly on unobserved values?\n4. What are the main computational bottlenecks, and can kernel features be approximated (e.g., random features)?\n5. How does CAM-style pruning compare with simpler coefficient-magnitude pruning in nonlinear regimes?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T12:24:55",
"modification_date": "2025-11-12T11:15:43",
"review_url": "https://openreview.net/forum?id=0TkMmzQdwd&noteId=9WcS8JmHA9",
"license": "CC BY 4.0"
},
{
"id": "HgmiSKX9WA",
"forum": "0TkMmzQdwd",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4401/Reviewer_UPVv",
"reviewer_name": "Reviewer_UPVv",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper addresses a critical challenge in temporal causal discovery: the presence of missing values and inconsistent sampling frequencies in multivariate time series data. These data issues commonly lead to distortions that severely weaken the performance of subsequent causal inference methods. To overcome this limitation, the authors propose a novel, unified framework that tackles the problems of alignment and imputation concurrently with causal structure learning. The core of their solution is an Expectation-Maximization (EM) style method that allows the two traditionally separate processes to be mutually promoted, thereby improving the robustness and accuracy of the resulting causal graph.",
"strengths": "1. The paper presents a well-motivated approach and proposes a framework that is both simple and effective.\n2. The authors provide a compelling argument challenging the traditional two-step approach (imputation followed by causal discovery), emphasizing instead that these two processes can and should be mutually promoted. This insight forms the basis of their innovative approach.",
"weaknesses": "1. The authors need to clarify the definitions of numerous mathematical symbols. For instance, the variable $k$ appears to denote the number of iterations. Overall, the mathematical notation and methodology presentation in the paper require significant improvement for clarity and rigor.\n2. The empirical evaluation lacks sufficient breadth to fully validate the proposed framework's generalizability and robustness. Specifically, the authors should include comparisons against a wider range of competitive state-of-the-art methods and ideally use more diverse real-world datasets to demonstrate the method's effectiveness across different domains and data characteristics.",
"questions": "1. (Page 3, Equation 1): Regarding the ANM framework employed, could the authors please clarify whether the proposed approach explicitly accounts for and models contemporaneous effects (instantaneous effects) between variables?\n\n2. (Page 5, Equation 3): When dealing with consecutive missing values (as depicted in Figure 2), does the calculation rely solely on directly observed data, or does it utilize autoregressively imputed values from the previous timestep as input?\n\n3. (Page 5, Equation 4): The paper needs to clarify the functional form of $f_i$. The authors state that black-box neural networks are not used; however, it remains ambiguous whether the inherent interpolation or modeling of $f_i$ is strictly linear, a form of low-order non-linearity, or another mechanism.\n\n4. Clarification on the RKHS and dimensional handling is essential. While we infer the input space dimension to be $p$ (based on the $p \\times p$ matrix $\\mathbf{W}$), please confirm this and explain the conceptual role and effective dimensionality of the RKHS. Furthermore, a major inconsistency exists in the regularization output: if $\\mathbf{W}$ is $p \\times p$, and $\\mathbf{G}$ is derived via LASSO, how is the dimension of $\\mathbf{G}$ subsequently reduced or transformed from $p \\times p$ to $d \\times d$?\n\n5. The selection of the crucial threshold parameter should be clarified. Please detail the methodology used to determine this parameter in the experiments, and provide recommended strategies or principles for practitioners seeking to tune it for real-world applications.\n\n6. The current set of comparison methods is too limited. The authors should include a wider range of modern and relevant time series causal discovery baselines. Specifically, please evaluate powerful two-step processes, i.e., combining the most advanced imputation methods with established discovery techniques. 
Please find a list of relevant references provided below for your consideration.\n\n7. The superior performance of PCMCI + TimeMixer in Table 3 requires a detailed analysis to clarify its mechanism. Does this superior performance primarily stem from the high quality of the TimeMixer imputation? If so, does this imply that combining TimeMixer with other temporal causal discovery methods would also achieve similar high performance? If the strength lies uniquely in the PCMCI method itself, the authors could provide a clear justification for why PCMCI is uniquely well-suited for the TimeMixer-imputed outputs in this missing data scenario.\n\n8. The paper lacks an ablation study to substantiate the central claim of mutual promotion between imputation and discovery. Could the authors provide a visualization (a plot over iterations $k$) showing how both the causal recovery accuracy and the imputation quality evolve, demonstrating how they promote each other?\n\n9. While the paper addresses inconsistent sampling, clarification is needed on how the alignment mechanism handles time distortions (e.g., issues typically addressed by dynamic time warping). How is this addressed, and how does the model differentiate between a true missing block and a time warp?\n\nReferences: \n\na. Han et al. \"Root Cause Analysis of Anomalies in Multivariate Time Series through Granger Causal Discovery.\" ICLR 2025.\n\nb. Gong et al. \"Rhino: Deep causal temporal relationship learning with history-dependent noise.\" ICLR 2023.\n\nc. Gao et al. \"IDYNO: Learning nonparametric DAGs from interventional dynamic data.\" ICML 2022.\n\nd. Marcinkevičs et al. \"Interpretable models for granger causality using self-explaining neural networks.\" ICLR 2021.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T05:35:09",
"modification_date": "2025-11-12T11:15:43",
"review_url": "https://openreview.net/forum?id=0TkMmzQdwd&noteId=HgmiSKX9WA",
"license": "CC BY 4.0"
}
] |
Ozh7G5h7Ce | https://openreview.net/forum?id=Ozh7G5h7Ce | Shared Modular Recurrence for Universal Morphology Control | 4 | 3.333333 | [
2,
6,
4
] | [
4,
3,
3
] | 3 | [
"Deep Reinforcement Learning",
"Robotic Control",
"Generalization"
] | A universal controller for any robot morphology would greatly improve computational and data efficiency. By utilizing contextual information about the properties of individual robots and exploiting their modular structure in the architecture of deep reinforcement learning agents, steps have been made towards multi-robot control. When the robots have highly dissimilar morphologies, this becomes a challenging problem, especially when the agent must generalize to new, unseen robots. In this paper, we hypothesize that the relevant contextual information can be partially observable, but that it can be inferred through interactions for better multi-robot control and generalization to contexts that are not seen during training. To this extent, we implement a modular recurrent transformer-based architecture and evaluate its (generalization) performance on a large set of MuJoCo robots. The results show a substantial improved performance on robots with unseen dynamics, kinematics, and topologies, in four different environments. | Introduction of modular recurrence in the architecture of deep reinforcement learning agents for improved (zero-shot generalization) performance in robotic control. | reinforcement learning | https://openreview.net/pdf?id=Ozh7G5h7Ce | 2025-09-19T22:33:18 | 3 | [
{
"id": "kKtm2Gnlq3",
"forum": "Ozh7G5h7Ce",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18962/Reviewer_yQpS",
"reviewer_name": "Reviewer_yQpS",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper builds upon previous transformer-based universal control methods (MetaMorph and ModuMorph) by introducing a recurrent model to handle partially observable robot contexts. These robot contexts include properties like robot limb mass, shape, gear ratio for each joint, etc, as listed in the appendix. The authors hypothesize that the relevant contextual information can be partially observable, and in such cases, the added recurrent module allows the agent to infer hidden contextual information, achieving better multi-robot control and better generalization to unseen robot contexts. The experiments on MuJoCo show consistent improvements in zero-shot generalization across unseen robot morphologies, dynamics, and kinematics, demonstrating that integrating recurrence helps the controller adapt more effectively to diverse and unfamiliar robots.",
"strengths": "- The performance gain is very consistent, showing promising benefits of the proposed shared modular recurrence.\n- Though I find simple and incremental, this work presents a very clear motivation and well-defined problem setting.",
"weaknesses": "- The scope of the experiments seems very limited to me considering the large diversity of possible robot morphologies. The paper does not analyze how the recurrent mechanism scales with larger or more diverse sets of robots, nor does it investigate the relationship between the amount of training data and the observed generalization gains. Without experiments on different dataset sizes or more complex morphological distributions, I'm not convinced that the proposed recurrent module truly improves generalization in a scalable and robust manner.\n- The proposed approach mainly extends existing transformer-based frameworks MetaMorph and ModuMorph by introducing a recurrent layer to handle partial observability, which is also a hypothesis brought by this work. Although this modification leads to performance improvements, it represents a relatively minor architectural change without introducing new learning principles or theoretical insights. The paper positions the work as addressing partial observability, but the underlying idea of adding recurrence to capture temporal dependencies is conceptually straightforward and has been widely explored in prior reinforcement learning and robotics studies. Therefore, the novelty and contribution may not be sufficient.\n- Several figures use inconsistent evaluation scales, which makes it difficult to visually compare performance across different settings, especially for training/testing comparison. For example, in Figure 4 the returns in Incline reach nearly 3000 during training, whereas in Figure 6 the corresponding test returns drop below 1000.",
"questions": "- Are there any other methods beyond MetaMorph and ModuMorph that can be integrated with recurrence?\n- Why, in Fig. 8 (train and test performance in Flat Terrain), is the variance of ModuMorph when provided with body_mass exceptionally large? Besides, the x-axis labels in Fig. 8 are slightly misaligned.\n- Why do you think making the context of a robot partially observable is important? To my understanding, the context of a control task is usually available.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T13:35:24",
"modification_date": "2025-11-12T15:03:33",
"review_url": "https://openreview.net/forum?id=Ozh7G5h7Ce&noteId=kKtm2Gnlq3",
"license": "CC BY 4.0"
},
{
"id": "1qlnvgPbHg",
"forum": "Ozh7G5h7Ce",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18962/Reviewer_18Nq",
"reviewer_name": "Reviewer_18Nq",
"rating": 6,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper proposes addressing the challenging problem of training a single Reinforcement Learning policy to control any robot morphology. The authors hypothesise that key robot characteristics (like friction or damping) are partially observable and introduce a per-limb Recurrent Neural Network (RNN) into modular transformer-based architectures, demonstrating a substantial and consistent improvement in generalizing to unseen robots.",
"strengths": "* The use of RNN for the partially observable CMDP is well-motivated in Section 2.2. It would be even nicer to provide some quantitative evidence to show the partial observability. \n* Empirically, the work successfully demonstrates a significant and consistent increase in generalisation performance to unseen robot morphologies, dynamics, and kinematics over strong baselines (MetaMorph and ModuMorph).",
"weaknesses": "* While the paper hypothesises that recurrence allows the agent to infer specific unobservable context (like friction or damping), the paper does not include an explicit analysis or visualisation that confirms what the RNN is encoding or how well it correlates with the true (unobservable) physical properties. \n* The authors also comment on the slow training speed of the RNN. It would be helpful to report the training speed of the experiments, and also to include ablation studies on the key hyperparameters, e.g. the RNN's latent state and the shared network.",
"questions": "1. For R-MoMo and R-MeMo, why are the positions of the RNN different? Can they be inserted either before the embedding or before the transformer? \n2. Can you briefly comment on the training and testing robots in Section 5.4? Are they sampled from the same distribution? \n3. We know that RNN is an architecture to solve a Meta RL problem. Are there other advanced Meta RL methods helpful in this case?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T05:58:21",
"modification_date": "2025-11-12T15:03:34",
"review_url": "https://openreview.net/forum?id=Ozh7G5h7Ce&noteId=1qlnvgPbHg",
"license": "CC BY 4.0"
},
{
"id": "Of4HN3nzPk",
"forum": "Ozh7G5h7Ce",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18962/Reviewer_jmJo",
"reviewer_name": "Reviewer_jmJo",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper builds upon previous modular robotic control approaches, and proposes to infer unobservable but relevant contextual information from past interactions using recurrent networks to enhance cross-embodiment generalization. The resulting modular recurrent transformer-based architectures, R-MeMo and R-MoMo, are validated across four commonly adopted MuJoCo environments, yielding notable performance gains compared with the original networks without recurrence.",
"strengths": "1.This paper addresses an important problem in robotic control, i.e., learning universal controllers generalizable to morphologically different agents. The motivation of inferring contextual information from environmental interactions is interesting (though I respectfully believe this motivation is not fully supported by the experiments; Please see Weakness 1). \n\n2.The experimental results are promising, outperforming the latest baselines by a large margin. Four simulation environments with varying difficulty levels are examined, increasing the credibility. \n\n3.The authors provide detailed research background and related works, clearly delineating the relationships between their work and the literature.",
"weaknesses": "1.One of the key claims of this paper is that some unobservable contextual features could be inferred from environmental interactions. However, the notable performance drops seen when some critical features are removed, as reported in Figure 8, indicate that much of the contextual features could not be successfully recovered, which seems contradictory. \n\n2.Since the proposed methods, R-MeMo and R-MoMo, largely build upon MetaMorph and ModuMorph, the authors are suggested to provide a more detailed introduction to their architectures, in order not to cause confusion in readers not familiar with these prior works.",
"questions": "1.The use of RNN for dealing with POMDP seems a common practice. Following Weakness 1, how could one eliminate the possibility that the modular recurrence is merely learning to recover unobservable state transitions (as in a standard POMDP setting) rather than morphological contexts? I would be happy to raise my rating if the authors could by some means disentangle these two, for example, by showing correlation between RNN representations and morphological features. \n\n2.Could the authors explain why, in the R-MoMo architecture, AOH is fed into the base controller (i.e., Transformer) rather than into the context encoder and used for generating network parameters alongside the observable contexts?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T18:11:35",
"modification_date": "2025-11-12T15:03:34",
"review_url": "https://openreview.net/forum?id=Ozh7G5h7Ce¬eId=Of4HN3nzPk",
"license": "CC BY 4.0"
}
] |
9aaiQbIUND | https://openreview.net/forum?id=9aaiQbIUND | Leveraging Generative Trajectory Mismatch for Cross-Domain Policy Adaptation | 4.5 | 3.75 | [
6,
2,
4,
6
] | [
4,
4,
3,
4
] | 4 | [
"Reinforcement Learning; Domain Adaptation; Online Dynamics Adaptation"
] | Transferring policies across domains poses a vital challenge in reinforcement learning, due to the dynamics mismatch between the source and target domains. In this paper, we consider the setting of online dynamics adaptation, where policies are trained in the source domain with sufficient data, while only limited interactions with the target domain are allowed. There are a few existing works that address the dynamics mismatch by employing domain classifiers, value-guided data filtering, or representation learning. Instead, we study the domain adaptation problem from a generative modeling perspective. Specifically, we introduce DADiff, a diffusion-based framework that leverages the discrepancy between source and target domain generative trajectories in the generation process of the next state to estimate the dynamics mismatch. Both reward modification and data selection variants are developed to adapt the policy to the target domain. We also provide a theoretical analysis to show that the performance difference of a given policy between the two domains is bounded by the generative trajectory deviation. More discussions on the applicability of the variants and the connection between our theoretical analysis and the prior work are further provided. We conduct extensive experiments in environments with kinematic and morphology shifts to validate the effectiveness of our method. The results demonstrate that our method provides superior performance compared to existing approaches, effectively addressing the dynamics mismatch. We provide the code of our method at https://anonymous.4open.science/r/DADiff-release-83D5. | We introduce DADiff, a dynamics adaptation method designed from the perspective of diffusion models, and establish a provable performance bound. | reinforcement learning | https://openreview.net/pdf?id=9aaiQbIUND | 2025-09-17T21:43:10 | 4 | [
{
"id": "UklN0o5tvR",
"forum": "9aaiQbIUND",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9260/Reviewer_G9sM",
"reviewer_name": "Reviewer_G9sM",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper addresses the problem of online dynamics adaptation in reinforcement learning, where a policy is pre-trained in a source domain (e.g., a simulator) and must be adapted to a target domain (e.g., the real world) with only limited interactions. The authors propose DADiff, a novel framework that leverages generative models, specifically diffusion models, to capture the dynamics mismatch between domains. The core idea is to interpret the state transition as a conditional generative process and to measure the \"generative trajectory deviation\"—the discrepancy between the latent state trajectories of the source and target domains during the diffusion generation process. The paper provides a theoretical performance bound linking this deviation to the policy's performance gap and proposes two practical variants: DADiff-modify (which penalizes source-domain rewards based on the deviation) and DADiff-select (which filters source-domain data). The method is also extended to the Flow Matching framework. Experiments on MuJoCo environments with kinematic and morphology shifts show that DADiff outperforms several strong baselines, including PAR.",
"strengths": "The paper offers a fresh and principled perspective on dynamics adaptation by framing it as a problem of generative trajectory mismatch. This is a significant conceptual shift from prior work that often relies on domain classifiers or single-step representation learning.",
"weaknesses": "The primary concern is the justification for the added complexity of using a diffusion model for dynamics modeling. While the results are strong, the performance gain over the strongest baseline, PAR, is sometimes marginal (e.g., in `ant(broken hips)` or `walker(broken right foot)`). The paper acknowledges that VGDF, a model-based method, is significantly slower, but it does not provide a detailed analysis of DADiff's own computational cost (e.g., training/inference time, memory footprint) compared to PAR, which is a crucial factor for real-world applicability. The increased complexity needs a more compelling justification in terms of capability.\n\nThe experiments are conducted on standard MuJoCo locomotion tasks, which, while common, have relatively simple and deterministic dynamics. The paper’s core claim is about capturing complex dynamics mismatches via diffusion models. To truly validate the advantage of modeling the full generative trajectory, experiments on tasks with more complex, high-dimensional, or highly stochastic dynamics would be far more convincing. The current experiments, which largely follow the setup of PAR, do not fully showcase the potential benefits of the proposed method in more challenging scenarios.",
"questions": "The paper mentions an extension to Flow Matching (Appendix C). Could the authors elaborate on the specific modifications required in the algorithm? In the diffusion framework, the deviation is calculated using the noise prediction model `ϵ_θ`. What is the direct analogue in the Flow Matching framework? Is it solely based on the vector field prediction `v_θ`, and if so, how does the continuous-time nature of the trajectory in Flow Matching affect the calculation and interpretation of the \"generative trajectory deviation\" compared to the discrete steps in diffusion?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T11:00:00",
"modification_date": "2025-11-12T12:15:09",
"review_url": "https://openreview.net/forum?id=9aaiQbIUND¬eId=UklN0o5tvR",
"license": "CC BY 4.0"
},
{
"id": "PMlkJ4jhC7",
"forum": "9aaiQbIUND",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9260/Reviewer_5rqK",
"reviewer_name": "Reviewer_5rqK",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper provides a diffusion-based method to obtain the domain gap and provides a reward modification and data filtering method for policy learning. They provide a theoretical guarantee of the policy $\\pi$'s performance on the two domain. Theoretical results and empirical results shows performance improvement of the method.",
"strengths": "The paper is well written and easy to follow. They propose a new diffusion-based domain gap measure method similar to DARC and PAR. Similar to DARC and PAR, they identify a performance gap in policy $\\pi$ between the source and target domains, defined by the KL divergence of the latent representation.",
"weaknesses": "1, the odrl benchmark has both gravity shift and friction shift, which are not included in the experiments. Also, what is the shift level of the experiment? Is it easy, medium or hard? \n\n2, the novelty of the paper seems not significant to me. The high-level idea of it is to obtain a more fine-grained shift measurement compared to DARC and PAR, and the theoretical analysis is actully similar to those paper, except with sligtly different notation of the shfit measurement. Also, the performance doesn't have a significant improvement compared to them. \n\n3, DARC and PAR have been shown to be ineffective when the shift is large. What is your performance on a large shift case? \n\n4, The performance of your method seems to rely on an assumption that the domain shift is not that large. If the shift is too large, the KL in Eq. 5 will grow extremely large, or even infinity. The performance is bounded only when the KL of the source and target is bounded. Also, as stated in [1], the KL can be ill defined when the shift is large because there is no support of some target transition in source domain. \n\nIn summary, I am questioning whether the reward modification method is still a valid method to solve the off-dynamcis RL problem as many previous work [1,2] has shown the limitation of it and when the shift is large (the joint distribution is small), the reward modicication method always fails, performing good in the source but poorly in the target. \n\n[1] Composite Flow Matching for Reinforcement Learning with Shifted-Dynamics Data\n[2] Off-Dynamics Reinforcement Learning via Domain Adaptation and Reward Augmented Imitation",
"questions": "See weakness.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T01:51:19",
"modification_date": "2025-11-12T12:15:10",
"review_url": "https://openreview.net/forum?id=9aaiQbIUND¬eId=PMlkJ4jhC7",
"license": "CC BY 4.0"
},
{
"id": "52GWkUDBZX",
"forum": "9aaiQbIUND",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9260/Reviewer_w6iz",
"reviewer_name": "Reviewer_w6iz",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 1,
"summary": "This paper proposes DADiff, an online dynamics adaptation method for RL that measures source–target dynamics shift via generative trajectory deviation from diffusion models. It developed two variants: reward modification and data selection. A performance bound links return gap to KL terms along a shared latent trajectory. Experiments on 4 MuJoCo envs report competitive or superior performance to DARC, VGDF, PAR, SAC‑tune, and SAC‑tar.",
"strengths": "1. Clear theoretical link from generative trajectory discrepancy to performance, with clean proof and recovery of PAR as a special case.\n2. Consistent improved empirical performance on many tasks. DADiff‑modify often leads; DADiff‑select is strong when penalties mis‑shape rewards.\n3. Parallel latent sampling avoids reverse‑chain cost; runtime comparable to model‑free baselines and far below VGDF.",
"weaknesses": "1. Baseline fairness. SAC‑tar is trained for 10^5 target steps, while DADiff and SAC‑tune use 1M source steps + 10^5 target steps. This probes a target‑only‑from‑scratch regime but does not compute‑match total experience. Please add a compute‑matched target‑only control with comparable total environment interactions and gradient updates\n\n2. Insufficient analysis: The text narrates Fig. 2 but provides little analysis in Sec 5.2. Please also quantify the deviation differences between the two generative trajectories, since the paper only covers the computational effeiciency. There should exist a deviation difference between these two trajectories.\n\n3. Writing quality: Multiple typos, symbol switches, and undefined or late‑defined notation reduce clarity. For examples: Fig 4(a) using $\\gamma$ while Eqn 11 using $\\lambda$. $\\phi_i$ is undefined in Eqn 14 until I found out the algorithm is based on SAC. Sec 5.3 states optimal $\\lambda$ is task-dependent while Sec E.2 (line 1019) says $\\lambda$ is task-independent.",
"questions": "Same as weakness\n\nAdditional questions:\n1. Is there a missing square in the Eqn 12 and 13? If not, justify using $E[Q−TQ]$ rather than MSE. If yes, re‑run results with corrected losses and report any deltas.\n2. Can you extend the analysis of why \"directly filtering for transitions with low dynamics mismatch is a more effective strategy than modifying rewards.\" in your Sec 5.2 (line 352). Provide mechanism‑level reasoning and ablations that include filtering only vs reward‑shaping only vs both. Maybe analysis from the perspective of probabilistic trajectory in diffusion model could explain why filtering is better.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T07:04:51",
"modification_date": "2025-11-12T12:15:11",
"review_url": "https://openreview.net/forum?id=9aaiQbIUND¬eId=52GWkUDBZX",
"license": "CC BY 4.0"
},
{
"id": "0JoceklNCh",
"forum": "9aaiQbIUND",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9260/Reviewer_azow",
"reviewer_name": "Reviewer_azow",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces DADiff, a diffusion-based framework that addresses the challenge of transferring reinforcement learning policies across domains with different dynamics. By leveraging the generative trajectory discrepancy between source and target domains, DADiff estimates dynamics mismatch and adapts policies through either reward modification or data selection strategies. Supported by theoretical analysis showing the performance difference is bounded by generative deviation, the method demonstrates superior effectiveness in experiments with kinematic and morphology shifts compared to existing approaches.",
"strengths": "# Strengths\n\n- This paper is well-motivated and mostly well-written\n- This paper is easy to follow, and the studied topic is of importance in the context of the reinforcement learning community. It is always important to develop more general and stronger transfer algorithms in RL, especially considering the fact that online off-dynamics RL papers have rarely appeared in recent years\n- The authors include theoretical analysis to provide better guarantees for the proposed method (despite the fact that some of the theoretical results resemble those in prior works, they are still interesting and bring some insights into the cross-domain reinforcement learning). I appreciate that the authors include a detailed discussion about the connections between the theoretical bounds of their method and those of PAR\n- The presentation is good, and I like the way the authors tell the whole story\n- The parameter study is extensive, covering numerous tasks in the main text and the appendix.",
"weaknesses": "# Weaknesses\n\n- The authors propose to address the online policy adaptation problem from the perspective of generative modeling; however, the downstream methods still rely on reward modification or data filtering, which resembles DARC, PAR, and VGDF\n- The evaluations are limited to kinematic shift and morphology shift. As far as the reviewer can tell, ODRL provides other dynamics shifts like gravity shift, friction shift, etc. This paper can benefit greatly from extending its experimental scope\n- The authors mention flow matching in the main text. This raises questions that there are numerous generative modeling methods other than diffusion models. This paper lacks a comparison between different generative modeling methods.\n\nOverall, I would recommend a \"weak accept\" of this paper.",
"questions": "# Questions\n\n1. As a generative modeling method, diffusion can also be used for data augmentation, e.g., generating samples that lie in the scope of the target domain. What is the insight in using diffusion model for *Generative Trajectory Mismatch* rather than target domain data augmentation?\n2. How diffusion models compare against other generative modeling methods like flow matching, VAE?\n3. The diffusion steps seem to have a significant impact on DADiff. Can authors provide more insights on how to select this parameter and why different diffusion steps can have such significant impacts on DADiff?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-15T16:24:31",
"modification_date": "2025-11-12T12:15:11",
"review_url": "https://openreview.net/forum?id=9aaiQbIUND¬eId=0JoceklNCh",
"license": "CC BY 4.0"
}
] |
3cM4CCoFpe | https://openreview.net/forum?id=3cM4CCoFpe | MSAVQ: Multi-dimensional Sensitivity-Aware Vector Quantization for Ultra-Low-Bit Vision-Language Models | 4 | 3.666667 | [
4,
2,
6
] | [
5,
3,
3
] | 3 | [
"vector quantization",
"llm",
"vlm"
] | Vision-Language Models (VLMs) have achieved remarkable progress, but their massive scale severely limits deployment in resource-constrained settings.
Among existing compression strategies, vector quantization (VQ) stands out for its strong representational power under ultra-low bitwidths.
VQ achieves this by constructing a compact codebook, where weight vectors are mapped to their closest discrete codewords, thereby reducing storage and memory bandwidth requirements while retaining expressive capacity.
However, applying VQ directly to VLMs faces two fundamental challenges:
(1) Modality-induced weight heterogeneity.
In VLMs, image and text inputs induce divergent weight distributions, which a unified codebook fails to capture.
(2) Error compensation mismatch from ignoring first-order gradients.
In VLMs, first-order gradients significantly contribute to quantization error, yet conventional VQ methods neglect them, causing biased compensation and accuracy loss.
To this end, we propose \textbf{MSAVQ} (Multi-dimensional Sensitivity-Aware Vector Quantization), a framework that addresses these issues with two key components:
(1) Sensitivity-driven structured mixed-precision quantization, a scheme that allocates bit-widths based on channel sensitivity, combining global and local saliency metrics for fine-grained and interpretable resource distribution.
(2) Gradient-aware error compensation, a compensation method that explicitly incorporates first-order gradients to address their non-negligible role in VLM quantization errors, with efficient computation enabled by Kronecker and Block-LDL decompositions.
We evaluate MSAVQ on representative VLMs, including LLaVA-onevision, InternVL2, and Qwen2-VL. In 2-bit settings, it consistently surpasses state-of-the-art PTQ methods, achieving up to \textbf{+4.9} higher accuracy (71.4\% vs. 67.0\% on InternVL2-26B).
These results demonstrate that MSAVQ provides a simple and effective solution for ultra-low-bit quantization of multimodal foundation models, enabling practical deployment under strict resource budgets. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=3cM4CCoFpe | 2025-09-02T20:40:42 | 3 | [
{
"id": "YrxfGohs1v",
"forum": "3cM4CCoFpe",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission751/Reviewer_ZaY5",
"reviewer_name": "Reviewer_ZaY5",
"rating": 4,
"confidence": 5,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "Authors propose a novel vector quantization (VQ) method for vision-language models (VLMs). Specifically, authors first analyze two fundamental challenges in applying VQ directly to VLMs. Ones is modality-induced weight heterogeneiity, another is error compensation mismatch from ignoring first-order gradients. To address these two challenges, authors propose their MSAVQ, which contains two main contributions: sensitivity-driven structure mixed-precision quantization strategy and gradient-aware error compensation. The proposed methods reveal their effectiveness on various popular VLMs and show appealing compression ratio.",
"strengths": "1. Experiments are extensive.\n2. The compression ratio is appealing, i.e., 2-bit quantization. \n3. The experimental results show the effectiveness of the proposed methods.",
"weaknesses": "1. In Line 50-51, authors claim that the memory usage of Qwen2-VL-72B exceeds the capacity of most edge devices during inference stage. However, the different model size of VLMs have already defined their deployment conditions. In other words, why should we apply such a huge model, like 72B VLMs, on edge devices? In my opinion, huge models deployed on cloud services, while tiny model, like qwen-0.6b (maybe with some distillation from huge models) can be deployed on edge devices for easy but fast inference. \n2. The weight and token from which layer of which model in Figure 2 are plotted? Also, how is the similarity computed, like the attention score after softmax? Lack necessary description for clarity.\n3. In Figure 3, the red line is hard to distinguish. And is there similar phenomenon happened in other layers of LLaVA-OV or other models?\n4. Figure 6 is too naive to get enough information about how are the \"CSA/MRSBP/OBA\" worked. Authors need to redesign and enrich the figure about overview framework. \n5. In Appendix A.4, authors claim that the task loss is depends on data and layer, however, the KL divergence between the output of the quantized models and full-precision counterparts is the also depends on data and layer. \n6. Why SSMQ can solve the first challenge in VQ of VLMS, i.e. the \" modality-induced weight heterogeneiity\". Authors first claim two challenges in VQ of VLMS in the section of abstract and introduction, then claim that the crucial challenge is \"how to allocate limited bit budgets\" in section 4.1, which is conflict in writing.",
"questions": "see weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-04T15:11:44",
"modification_date": "2025-11-12T10:46:37",
"review_url": "https://openreview.net/forum?id=3cM4CCoFpe¬eId=YrxfGohs1v",
"license": "CC BY 4.0"
},
{
"id": "KOPsSwY6S8",
"forum": "3cM4CCoFpe",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission751/Reviewer_upkj",
"reviewer_name": "Reviewer_upkj",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper proposes MSAVQ, a post-training vector-quantization framework for VLMs that (i) computes multi-dimensional channel sensitivity, (ii) performs structured mixed-precision bit allocation via a closed-form rule, and (iii) applies a gradient-aware error compensation step using a Kronecker-factored Hessian with a first-order surrogate of the gradient and a damped fixed-point projection under quantization constraints.",
"strengths": "- Clear formulation of the VQ/PTQ setup with straightforward notation.\n\n- The closed-form bit-allocation subproblem is convex and simple to implement; the resulting square-root–style allocation is likely numerically stable.\n\n- An attempt to combine first- and second-order information (Kronecker structure) in a single compensation procedure.",
"weaknesses": "- The key step \\(\\nabla L \\approx \\beta E\\) (using residual \\(E\\) as a proxy gradient) lacks alignment evidence. There is no measurements of \\(\\cos(\\nabla L, E)\\), no bounds on \\(\\|\\nabla L-\\beta E\\|\\), and no layer-wise robustness to \\(\\beta\\). With anisotropic curvature, \\(E\\) can point in low-salience directions, making compensation misaligned.\n- Bit-allocation novelty is incremental. The closed-form rule reduces to classic sqrt/water-filling under convex sensitivity models; optimality is not shown jointly with codebook assignment and projection, so end-to-end optimality is unclear.\n- Calibration regime likely underdetermined. Using \\(\\sim\\)O(10^2) pairs for curvature/Kronecker stats in large VLMs is fragile; no curves vs. calibration size, no seed variance, and no distribution-shift tests.",
"questions": "Do projection–compensation iterations show monotone decrease of a defined surrogate or residual norms? Any non-convergent layers?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T20:42:18",
"modification_date": "2025-11-12T10:46:37",
"review_url": "https://openreview.net/forum?id=3cM4CCoFpe¬eId=KOPsSwY6S8",
"license": "CC BY 4.0"
},
{
"id": "hb1pvRKmV0",
"forum": "3cM4CCoFpe",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission751/Reviewer_iNQF",
"reviewer_name": "Reviewer_iNQF",
"rating": 6,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "MSAVQ proposes a multi-dimensional sensitivity-aware vector quantization framework for Vision-Language Models (VLMs). It integrates two key modules—channel-sensitivity-driven structured mixed-precision quantization (SSMQ) and gradient-aware error compensation (GAEC)—to significantly improve quantization accuracy under ultra-low bit settings (2–3 bits). The method consistently outperforms existing state-of-the-art PTQ approaches across multiple representative VLMs, including LLaVA, InternVL2, and Qwen2-VL.",
"strengths": "MSAVQ has a well-founded optimization theory, jointly modeling first- and second-order error terms and deriving a closed-form update rule that theoretically guarantees convergence and numerical stability. It enables efficient implementation, requiring only a small calibration set and simple K-means clustering without any retraining, making it practical and easily reproducible.",
"weaknesses": "The paper lacks hardware-level validation, failing to evaluate the practical deployability of SSMQ and GAEC on real hardware, while the proposed channel-wise adaptive bit allocation may introduce additional storage or computation overhead that remains unaddressed. Meanwhile, its baseline selection is limited—though the method surpasses traditional PTQ approaches such as QuIP, it lacks comparison with more recent quantization methods, which weakens the experimental rigor.",
"questions": "1. Have you conducted validation on real hardware to assess the practical deployability? If so, please provide details on storage/computation overhead introduced by the channel-wise adaptive bit allocation; if not, please explain the reasons and supplement relevant evaluations.\n2. It is necessary to provide performance comparisons with more recent quantization methods. Please supplement these comparative experiments and analyze the performance differences between the proposed method and these methods.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T11:18:44",
"modification_date": "2025-11-12T10:46:37",
"review_url": "https://openreview.net/forum?id=3cM4CCoFpe¬eId=hb1pvRKmV0",
"license": "CC BY 4.0"
}
] | |
7EYYMXYDkL | https://openreview.net/forum?id=7EYYMXYDkL | PLaID++: A Preference Aligned Language Model for Targeted Inorganic Materials Design | 4 | 3.666667 | [
4,
4,
4
] | [
4,
3,
4
] | 3 | [
"Generative models",
"Large Language Models",
"Materials Generation",
"Symmetry",
"Space Group"
] | Reinforcement Learning from Verifiable Rewards (RLVR) has emerged as a promising approach to improve correctness in LLMs, however, in many scientific problems, the objective is not necessarily to produce *the* correct answer, but instead to produce a diverse array of candidates which satisfy a set of constraints. We study this challenge in the context of materials generation. To this end, we introduce PLaID++, an LLM post-trained for stable and property-guided crystal generation. We find that performance hinges on our crystallographic representation and reward formulation. First, we introduce a compact, symmetry-informed Wyckoff text representation which improves computational efficiency and encourages generalization from physical priors. Second, we demonstrate that temperature scaling acts as an entropy regularizer which counteracts mode collapse and encourages exploration. By encoding symmetry constraints directly into text and guiding model outputs towards desirable chemical space, PLaID++ generates structures that are thermodynamically stable, unique, and novel at a $\sim$ 50\% greater rate than prior methods and conditionally generates structures with desired space group properties. Our work demonstrates the potential of adapting post-training techniques from natural language processing to materials design, paving the way for targeted and efficient discovery of novel materials. | We demonstrate the generalizability of a novel symmetry encoding scheme and iterative preference alignment for crystal generation | applications to physical sciences (physics, chemistry, biology, etc.) | https://openreview.net/pdf?id=7EYYMXYDkL | 2025-09-20T11:39:23 | 3 | [
{
"id": "gn1YQJQmv2",
"forum": "7EYYMXYDkL",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23098/Reviewer_W4wS",
"reviewer_name": "Reviewer_W4wS",
"rating": 4,
"confidence": 4,
"soundness": 4,
"contribution": 3,
"presentation": 4,
"summary": "The paper presents PLaID++ which is a language model-based approach for generating stable inorganic crystal structures by combining a Wyckoff position-based text representation with Direct Preference Optimization (DPO) training. The work uses Reinforcement Learning from Interatomic Potentials (RLIP). The model uses the eqV2 MMLIP for structure relaxation and evaluation. Then they use a tiered preference pairing system to create preference pairs for DPO optimization. The authors report sota performance on the MP-20 benchmark with a 22.27% stability rate and a 7.74% S.U.N. rate.",
"strengths": "1. Multiple MLIPs are employed for rigorous validation. They use separate MLIPs for training (eqV2) and evaluation (eSEN) prevents reward hacking, and the inclusion of DFT validation on a 1000-sample subset provides ground-truth verification. \n\n2. The dual strategy for diversity is an innovative practice. The combination of a symmetry-aware representation with Wyckoff positions and dynamic temperature scheduling during DPO training is innovative for encouraging diversity and preventing potential mode collapse. \n\n3. The reported sampling speed shows advantage over diffusion-based methods.",
"weaknesses": "1. The tiered preference pairing scheme is a key contribution of the work and crucial design choice. The design are claimed valid but. rather intuitive and lacks rationale. The work would benefit from a thorough ablation study for e.g. the tiering threshold, pairing strategy, sample portions, etc.\n\n2. The dynamic temperature schedule is described as critical for maintaining diversity. But the work lacks the analysis of sensitivity to temperature e.g. tracking how temperature changes affect exploration patterns or causally lead to the design of S.U.N. materials. The research gap remains to understand the improvements reported in the paper.",
"questions": "1. How was the metastable threshold (0.08 eV/atom) determined? Why you choose the specific tiered preference pairing scheme? Have you performed ablation experiments, e.g. use only the (metastable, unstable) pairs, considering the often noised predictions for the 'stable' crystal structures with MLIPs? In addition, how does the sampling ratio contributes to the final performance gain?\n2. How sensitive is the final S.U.N. rate to the hyperparameters of the temperature schedule? \n3. I believe some samples would fail the relaxation, are they regarded as unstable or disgarded?\n4. It is mentioned that the exploration \"expands into areas underrepresented in the base model\" and shows increased generation of P-block elements. Have you validated whether this represents valid exploration or bias? Have you compared generation patterns between the base pre-trained Qwen before SFT and PLaID++ to determine if the P-block favoritism comes from pre-training bias, SFT, or DPO learning?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T08:26:04",
"modification_date": "2025-11-12T18:15:08",
"review_url": "https://openreview.net/forum?id=7EYYMXYDkL¬eId=gn1YQJQmv2",
"license": "CC BY 4.0"
},
{
"id": "slkU616VXL",
"forum": "7EYYMXYDkL",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23098/Reviewer_y1qQ",
"reviewer_name": "Reviewer_y1qQ",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes PLaID++, a preference-aligned LLM framework for inorganic crystal structure generation. PLalD++ first cast the crystal design into a text generation problem using a symmetry-aware Wyckoff-based representation, trained on MP-20 with SFT on top of Qwen2.5-7B. Then it introduces RLIP (Reinforcement Learning from Interatomic Potentials): the model samples candidate crystals, an ML interatomic potential assigns stability-related scores (stable/metastable/unstable), these scores are converted into preference pairs, and the model is iteratively updated with DPO. A key engineering choice is a temperature-increasing sampling schedule during iterative DPO to prevent diversity collapse.",
"strengths": "1. Treating crystal design as conditional/unconditional text generation and then doing preference-based post-training is a clean and extensible design that the broader LLM-for-science community can reuse.\n\n2. Using ML interatomic potentials to create automatic preference pairs and feeding them to DPO is a nice instantiation of “RLHF without humans” for material science domains. It avoids training a separate value model and keeps the pipeline relatively simple.\n\n3. The paper gives evidence that doing iterative preference optimization on a naive coordinate representation leads to mode/dictionary collapse, while the Wyckoff-based representation maintains S.U.N. This is a useful empirical observation for future work on structure-generators.",
"weaknesses": "1. The reward of preferences is narrow and model-based. All large-scale preference pairs come from MLIPs (eqV2 for training, eSEN for evaluation). Even though they separate train-time and eval-time models to reduce reward hacking, the model may still be aligned to this family of potentials rather than to actual DFT or experimental reality. The 1k DFT sanity check helps, but is too small to fully validate the main claim.\n\n\n2. Targeted design is only partially demonstrated. The title says “targeted inorganic materials design,” but the actual conditional task is mainly space-group conditioning on 7 groups. That is narrower than what “targeted” typically implies in materials (e.g., band gap, stability window, mechanical property). The method may extend, but the paper does not show it.\n\n\n3. Some pipeline details are underspecified. For iterative DPO, the paper does not clearly tabulate per-iteration: number of samples, number of preference pairs, positive/negative distribution across space groups, and exact temperature schedule. That makes it harder to reproduce and to judge whether the gains come from DPO itself or just from resampling at a higher temperature.\n\n\n4. The main ablation shows that the coordinate representation collapses under DPO, but we do not see ablations that separately test: (i) a more structured prompt without Wyckoff; (ii) Wyckoff but no DPO; (iii) entropy regularization without temperature scheduling. So we cannot fully isolate what contributes most to the S.U.N. gain.\n\n\n5. Given that this is a design paper, an example of “top-N generated structures sent for expert or database validation” would have made the story more convincing.",
"questions": "1. Are all preference pairs derived purely from model-generated samples, or do you also form pairs that include real MP-20 structures as positives? If yes, what is the ratio of synthetic–synthetic vs. real–synthetic pairs per iteration?\n\n\n2. You use eqV2 for training and eSEN for evaluation to mitigate reward hacking. Have you tried a third MLIP/evaluator (even a weaker one) to test whether the performance gains generalize across potentials, or do you observe overfitting to the two chosen models?\n\n3. Please provide the exact per-iteration temperature schedule. Also, can you compare “iterative DPO + temperature schedule” against “iterative SFT (no DPO) + the same temperature schedule”? This would clarify whether the diversity preservation comes from DPO or simply from sampling at higher T.\n\n\n4. Your conditional experiments focus on 7 space groups. How would the pipeline change if the target were an observable (e.g., band gap > 2 eV, or formation energy below some threshold) instead of a space group? Can MLIP scores be turned into preferences for such targets without retooling the whole pipeline?\n\n\n5. For the 1k DFT relaxations, were the candidates sampled only from the final aligned model, or proportionally from all DPO iterations? If only from the last one, could this selection bias exaggerate the agreement between MLIP and DFT?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T15:52:17",
"modification_date": "2025-11-12T18:15:08",
"review_url": "https://openreview.net/forum?id=7EYYMXYDkL¬eId=slkU616VXL",
"license": "CC BY 4.0"
},
{
"id": "i3jEZxmPqg",
"forum": "7EYYMXYDkL",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23098/Reviewer_13wd",
"reviewer_name": "Reviewer_13wd",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This paper presents PLaID++, a preference-aligned large language model for inorganic crystal generation. The model fine-tunes a pretrained Qwen-2.5 7B LLM using a new Wyckoff-based textual representation that encodes symmetry and lattice parameters compactly, then post-trains it via Direct Preference Optimization (DPO).\n\nThe authors adapt DPO into a scientific alignment framework called RLIP (Reinforcement Learning from Interatomic Potentials): generated crystals are evaluated with machine-learning interatomic potentials (eqV2, eSEN) for stability, novelty, and space-group correctness, and ranked into preference pairs (e.g., stable > metastable > unstable). Iterative DPO updates align the model toward physically stable and diverse structures, while a dynamic-temperature schedule prevents mode collapse.\n\nOn the MP-20 dataset, PLaID++ achieves 97.3 % stability and a 7.7 % S.U.N. (Stable-Unique-Novel) rate, roughly 50 % higher than prior methods such as FlowLLM and ADiT. Joint training on unconditional and space-group-conditioned prompts also improves targeted generation (↑ 47 % S.S.U.N.). Ablations show that the Wyckoff representation, tiered stability rewards, and dynamic temperature are key to these gains.",
"strengths": "- Clear problem framing and relevance. The paper targets a high-impact task: generating novel, thermodynamically stable inorganic crystals, potentially conditioned on structural symmetry. This is central to materials discovery and of broad scientific interest.\n\n- They check their reward model.\nThey do attempt to show that MLIP-based energy ranking is not totally hallucinated. They correlate MLIP-based energy above hull with true DFT energy above hull on 1,000 samples, and report R² ~0.84 for eSEN near the stability threshold. They also report classification precision/recall for “stable vs metastable vs unstable,” which matters because the entire RLIP pipeline depends on the surrogate reward being aligned with physics.\n\n- Computational throughput / practicality.\nThey emphasize that PLaID++ can sample 10,000 crystals in ~23 minutes on a single H100 and achieves ~5× higher throughput than FlowLLM. High-throughput suggests this may actually be usable for screening, not just a pretty table.",
"weaknesses": "- No new neural architecture; novelty is mostly procedural.\nThe backbone is Qwen-2.5 7B with LoRA. There is no new model class, no new attention mechanism, no new equivariant layer, no new geometry-aware transformer. The “innovation” is:\n- - a structured text format (Wyckoff),\n- - a preference-alignment / DPO loop using physics-derived rewards,\n- - and some engineering heuristics (tiered rewards, temperature ramp).\nThis is valuable, but reviewers will ask: is this ICLR-level novelty, or is it an application of known LLM post-training techniques to a new domain?\n\n- All training/evaluation is on MP-20 only.\nThe model never leaves this relatively standard ~45k-material dataset with ≤20 atoms. There is no demonstration on larger or more diverse datasets (Alexandria, LeMat-Bulk, etc.) and no zero-shot transfer experiment. So we don’t know if PLaID++ is general, or just very tuned to MP-20 chemistry space.\n\n\n- Prompt robustness is not explored.\nThe model is always prompted with essentially the same English template (“Below is a description of a bulk material… The spacegroup number is X… Generate…”). There is no ablation on prompt phrasing, no test of whether the generation is brittle to different wording, no demonstration of controllability beyond “give me space group N.”\nFor a claimed “LLM interface,” this is under-explored. We don’t learn how instruction-like this really is vs how template-locked it is.\n\n- No multi-objective targeting beyond symmetry.\nThey only condition on space group (as a proxy for symmetry class). They mention other properties (band gap, etc.) in the prompt template during SFT, but the reported experiments don’t actually demonstrate controllable generation for properties like band gap or conductivity. For discovery workflows, that’s critical. 
Right now, controllability is still narrow.\n\n\n\n- Figures could be more readable.\nSeveral figures (e.g., Figure 3, Figure 4, Figure 8) have very small font on axes and legends. For a paper that leans heavily on ablation plots to support its claims (temperature schedule, S.U.N. over iterations, MLIP-vs-DFT correlation), readability matters. Increasing font size / contrast in these plots would make the empirical story much easier to audit.",
"questions": "- How sensitive is performance to the exact wording / structure of the prompt template? Can the model handle natural variations in phrasing, or is it essentially bound to the provided template?\n\n- You frame PLaID++ as “preference aligned” via RLIP. In normal RLHF, preferences come from humans and implicitly encode high-level goals. Here, preferences come from MLIPs and symmetry filters. Do you observe any reward hacking behavior (i.e., crystals that look unphysical to a human crystallographer but score as “stable” under eqV2/eSEN)?\n\n- Can you report transfer? For example, train on MP-20 but prompt for a space group that is rare in MP-20, or generate crystals with >20 atoms, or test on a different dataset split. Right now it’s unclear how generalizable the method is beyond MP-20.\n\n- The font size / legend readability in key figures (3, 4) is quite small. Please enlarge axis labels and legends so that stability / S.U.N. trends and correlation plots can be evaluated without zooming.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T13:07:24",
"modification_date": "2025-11-12T18:15:08",
"review_url": "https://openreview.net/forum?id=7EYYMXYDkL¬eId=i3jEZxmPqg",
"license": "CC BY 4.0"
}
] |
ndrUH7IF3L | https://openreview.net/forum?id=ndrUH7IF3L | Optimizing Mixture of Block Attention | 4.666667 | 3 | [
4,
4,
6
] | [
3,
4,
2
] | 3 | [
"LLM",
"Efficiency",
"Attention"
] | Mixture of Block Attention (MoBA) is a promising building block for efficiently processing long contexts in LLMs by enabling queries to sparsely attend to a small subset of key-value blocks, drastically reducing computational cost.
However, the design principles governing MoBA's performance are poorly understood, and it lacks an efficient GPU implementation, hindering its practical adoption.
In this paper, we first develop a statistical model to analyze MoBA's underlying mechanics. Our model reveals that performance critically depends on the router's ability to accurately distinguish relevant from irrelevant blocks based on query-key affinities. We derive a signal-to-noise ratio that formally connects architectural parameters to this retrieval accuracy. Guided by our analysis, we identify three key pathways for improvement: using smaller block sizes, increasing head dimensions, and applying a short convolution on keys to cluster relevant signals, which enhances routing accuracy.
While theoretically better, small block sizes are inefficient on GPUs. To bridge this gap, we introduce FlashMoBA, a hardware-aware CUDA kernel that enables efficient MoBA execution even with the small block sizes our theory recommends. We validate our insights by training LLMs from scratch, showing that our improved MoBA models match the performance of dense attention baselines. FlashMoBA achieves up to 9× speedup over FlashAttention-2 for small blocks, making our theoretically-grounded improvements practical. Code will be released upon publication. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=ndrUH7IF3L | 2025-09-18T10:42:36 | 3 | [
{
"id": "okk1t7B4NS",
"forum": "ndrUH7IF3L",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10187/Reviewer_yXa6",
"reviewer_name": "Reviewer_yXa6",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 2,
"summary": "1. The paper presents a statistical analysis of the Mixture of Block attention (MoBA) to motivate the parameters that lead to better performance. Concretely, they propose that the SNR for the MoBA architecture is proportional to the sqrt of the ratio of the head size and the block size: with larger head sizes and smaller block sizes yielding better performance. Additionally, they motivate that semantic clustering in a block also helps improve block retrieval accuracy, leveraging a convolutional layer to demonstrate the point.\n\n2. They provide an efficient implementation for the MoBA architecture in the spirit of FA that computes the top k sparse selection mask without materializing the entire mask, uses it to index into the KV cache to subsequently compute dense attention with the FA methodology , and then finally scatter back the results.",
"strengths": "1. The paper motivates a better understanding of what aspects contribute to the improvement of the MoBA architecture. The characterization of the SNR as a function of the head size and block size is, to the best of my knowledge, novel and provides a good basic approximation on the framework to tune the performance of MoBA attention.\n\n2. Their kernel implementation is particularly helpful for encouraging the broarder adaptation of the architecture. Given that attention tends to be a bottleneck for a number of new tasks, this is very helpful.",
"weaknesses": "The following are my concerns with the paper:\n\n1. The core contribution of the paper is relegated to the Appendix. This makes the paper a bit hard to follow, and given that the results motivate the majority of the paper, I do think that at least a part of it should be featured in the main paper. \n\n2. There are a number of assumptions made in the statistical analysis that may not hold true and at the very least merit some grounding with experimental results: L805 makes the assumption that q^Tk are independant dot products, however [1] argue that the dot product are correlated, accounting for a factor of O(d) and not O(sqrt(d)) as the authors propose. Likewise, the authors use the argument for CLT on large B to motivate characterising the delta between the informative and non informative dot products as a normal distribution, however, we subsequently move to argue that B should be small for improved performance. Thus there seems to be a tension between the proposed theory and subsequent proposed improvements.\n\n3. It would be good to have experiments that directly validate the proposed theory. Concretely, for an S-NIAH task, one can compute the estimator for the score difference directly (the block containing the needle vs all other blocks, aggregated across all attention masks). One way to show the validity of the proposed SNR metric would be to plot the estimator as a function of d and B, and then subsequently show that it does follow the proposed trajectory.\n\n4. The models that the authors investigate have a pretty high degree of global attention (50%). This itself can potentially act as a confounder and mask an pitfalls of the proposed algorithm. 
It would be more informative to have both an analysis of the proposed algorithm scales compared to vanilla dense and vanilla MoBA (i,e comparing to Section 3.1 in [2]) and how the difference in loss changes with introducing additional blocks of MoBA with lower block size / higher attention heads.\n\n5. The experimental results are somewhat counterintuitive: according to the authors' proposal, the performance should keep improving with reduced block sizes. While that is true for Table 1, on Table 3 the trends do not hold. I would have expected the high SNR should be additionally more effective for the longer context evaluations, but that does not seem to be the case ?\n\n6. For the experiments with higher d, the authors keep the number of heads fixed. Thus, the number of parameters (and consequently compute) for the models with higher d is more, since num parameters scales with O(d^2). This makes the ablation experiment with higher d values hard to compare, since they are not IsoFlops, and introduces an additional confounder: do the models with higher d values improve because of more parameters, or because of the better SNR value.\n\n7. The results on the NIAH benchmark seem quite low: for a 32k - 64k context length, this task usually is taken as a necessary but not sufficient task for long context modeling: in fact even the MoBA authors demonstrate a 100% accuracy upto a 1M context window. However, in Table 2, even on the subset of 200 examples, the accuracy seems to be very low for 32k and 64k context lengths. This seems a bit off: it might be because the authors tried zero shot with RoPE for length extrapolation, which is known to have poor performance. It might be better if the authors adapted the vanilla context length extension strategy of training on longer contexts a bit more with RoPE interpolation, and the using the subsequent models for the evaluations. 
With the current results, it's hard to understand if the proposed method hurts the long context performance or not.\n\n[1] Yang, Greg, et al. \"Tensor programs v: Tuning large neural networks via zero-shot hyperparameter transfer.\" arXiv preprint arXiv:2203.03466 (2022).\n\n[2] Lu, Enzhe, et al. \"Moba: Mixture of block attention for long-context llms.\" arXiv preprint arXiv:2502.13189 (2025).",
"questions": "For the reduction in the block size experiments, I was not sure of the differences between the proposed experiments by the authors and the experiments on sparsity / granularity tradeoffs of MoBA presented in [1] (section Ablation Study on Fine-Grained Block Segmentation). Would it be possible to clarify the same ? \n\n\n[2] Lu, Enzhe, et al. \"Moba: Mixture of block attention for long-context llms.\" arXiv preprint arXiv:2502.13189 (2025).",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-05T00:31:41",
"modification_date": "2025-11-12T12:25:52",
"review_url": "https://openreview.net/forum?id=ndrUH7IF3L¬eId=okk1t7B4NS",
"license": "CC BY 4.0"
},
{
"id": "W1gkEHXFVA",
"forum": "ndrUH7IF3L",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10187/Reviewer_okij",
"reviewer_name": "Reviewer_okij",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper improves upon the previous sparse attention method, MoBA, through both theoretical analysis and kernel optimization. It analyzes MoBA’s performance from a signal-to-noise ratio (SNR) perspective and identifies that smaller block sizes and larger head dimensions yield performance benefits. To support these findings, the paper implements FlashAttention-style tiling optimizations in the kernel, making MoBA efficient under these settings.",
"strengths": "1. The paper offers a statistical view that links MoBA hyperparameters to the SNR of attention computation. Although the connection between SNR and end-to-end model performance is not formally derived, the analysis provides a useful proxy for selecting better MoBA hyperparameter configurations.\n2. The implementation of a FlashAttention-style MoBA kernel makes the approach practical even with small block sizes. The kernel achieves comparable speed to FlashAttention on short sequences and delivers speedups for long sequence inputs.",
"weaknesses": "1. Experimental setup: The model architecture setup introduces confounding factors. While the paper claims to focus on optimizing MoBA performance, the model architecture employs sliding window attention (SWA) in half of the layers and involves dense attention in others, limiting the proportion of true MoBA layers. This mixture complicates the attribution of performance improvements and makes it unclear how much gain comes from MoBA, instead of SWA or dense attention components.\n2. Key convolution description: The role of the short convolution on keys is not clearly presented. It is unclear how convolution enhances $\\Delta_{\\mu_{\\text{eff}}}$, and additional explanation or intuition is needed. Moreover, implementation details are missing in the main paper. According to Table 1, the performance benefits from adding convolution appear inconsistent.",
"questions": "1. Dense attention can be viewed as MoBA with block size (1) and top-(K = N). Based on the SNR analysis, smaller block sizes should yield better performance. Why then do most MoBA implementations outperform dense attention across benchmarks in Table 1? Could this discrepancy reflect other dominant factors, such as insufficient training data or incomplete convergence?\n2. Could you provide more details on the rationale and implementation of key convolution in the main paper?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T04:37:57",
"modification_date": "2025-11-12T12:25:53",
"review_url": "https://openreview.net/forum?id=ndrUH7IF3L¬eId=W1gkEHXFVA",
"license": "CC BY 4.0"
},
{
"id": "7WjLacUFJX",
"forum": "ndrUH7IF3L",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10187/Reviewer_nD1f",
"reviewer_name": "Reviewer_nD1f",
"rating": 6,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper addresses the limitations of Mixture of Block Attention (MoBA). The authors develop a statistical model to derive a key signal-to-noise ratio (SNR) related to head dimension and block size, for better router retrieval accuracy. Based on the SNR formula, the paper uses smaller block sizes and larger head dimensions. Other contributions include key-convolution and efficient implementation.",
"strengths": "1. Theoretical Framework: The paper introduces a novel signal-to-noise ratio (SNR) model that provides clear and actionable design principles. This provides guidelines for the selection of head dimension and block size.\n2. High-Performance CUDA Kernel: FlashMoBA is a well-engineered, hardware-aware CUDA kernel. The Tiled-Topk is especially useful.\n3. Strong Benchmark Results: The optimized MoBA models are shown to match or even outperform dense attention on challenging long-context benchmarks like LongBench and RULER.",
"weaknesses": "1. Limited Generalizability Due to Small Model Scale: All experiments are conducted on a 340M parameter model. This raises significant questions about whether the paper's core findings would scale to the much larger models.\n2. Unsubstantiated Link Between SNR and Experiments: The key experiment in Table 4, designed to validate the SNR theory's dependency on head dimension d, fails to control for model size (line 289). It is unclear if the performance improvements in Table 4 are due to the claimed increase in SNR or simply due to the larger model capacity. This methodological flaw means the empirical evidence for the paper's central theoretical claim is not as conclusive as presented.",
"questions": "There are still rooms in the main text. You should put some algorithm into the main text instead of the appendix, such as the algorithm of Tiled-Topk.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-16T15:34:11",
"modification_date": "2025-11-12T12:25:53",
"review_url": "https://openreview.net/forum?id=ndrUH7IF3L¬eId=7WjLacUFJX",
"license": "CC BY 4.0"
}
] | |
IrGJvFKuX2 | https://openreview.net/forum?id=IrGJvFKuX2 | Multi-Agent Game Generation and Evaluation via Audio-Visual Recordings | 4 | 2.666667 | [
4,
2,
6
] | [
3,
2,
3
] | 3 | [
"video-game",
"llms",
"multi-agent",
"agent",
"animations"
] | Generating novel video games is a challenging problem. Large Language Models (LLMs) can generate games and animations, but lack automated evaluation metrics and struggle with complex content. To tackle these issues, we built a new metric and multi-agent system. First, we propose AVR-Eval, a metric for multimedia content where a model compares the Audio-Visual Recordings (AVRs) of two contents and determines which one is better. We show that AVR-Eval properly identifies good from broken or mismatched content. Second, we built AVR-Agent, a multi-agent system to generate JavaScript code from a bank of multimedia assets (audio, images, 3D models) and using AVR feedback. We show higher AVR-Eval with AVR-Agent than one-shot prompt. However, while humans benefit from high-quality assets and audio-visual feedback, they do not significantly increase AVR-Eval for LLMs. This reveals a gap between humans and AI content creation. | New metric for multimedia evaluation and multi-agent framework for video game generation | generative models | https://openreview.net/pdf?id=IrGJvFKuX2 | 2025-09-19T23:38:09 | 3 | [
{
"id": "o6kBJrUadO",
"forum": "IrGJvFKuX2",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19440/Reviewer_uKvp",
"reviewer_name": "Reviewer_uKvp",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper presents two primary contributions: 1) AVR-Eval, an automated, relative evaluation metric for interactive multimedia content, such as JavaScript games. This metric works by generating audio-visual recordings (AVRs) of two competing pieces of content and feeding them to an omni-modal model (Qwen2.5-Omni-7B) to perform a pairwise comparison. 2) AVR-Agent, a multi-agent framework for generating this JavaScript content. The system uses a text-based coding LLM, guided by feedback from an omni-modal model and console logs, to iteratively refine code. It also has access to a bank of human-made multimedia assets.",
"strengths": "1. Addresses a Critical Bottleneck: The paper tackles a core challenge in generative AI for interactive content: the lack of scalable, automated evaluation. Human-in-the-loop evaluation (like WebDev Arena) is a major bottleneck, and the idea of using an omni-modal model to \"watch and listen\" to content is a novel and important research direction.\n\n2. Novelty of the Metric Concept: The AVR-Eval metric moves beyond static code analysis or simple screenshot evaluation. By incorporating audio and video (temporal dynamics), it attempts to capture a more holistic sense of the user experience, which is the correct approach for evaluating games.\n\n3. Interesting (and Honest) Negative Result: The paper's most compelling finding is the failure of the agent to benefit from high-quality assets and AV feedback. This is a valuable, non-trivial result that challenges the community's assumptions about agent capabilities and is worthy of discussion.",
"weaknesses": "1. Experimental Circularity: The entire experimental setup is critically circular. The AVR-Agent uses an omni-modal model (Qwen2.5-Omni-7B) to provide feedback for improvement. The AVR-Eval metric then uses the exact same model (Qwen2.5-Omni-7B) to judge the final quality. The agent is, therefore, being optimized to satisfy the biases of its own evaluator. The paper does not demonstrate that the agent is producing objectively better games; it only demonstrates that it is getting better at pleasing the Qwen-Omni model. This is a fundamental conflict of interest that invalidates the main results.\n\n2. The AVR-Agent is not a complex multi-agent system. It is a two-model pipeline: a coding LLM that writes code and an omni-modal model that provides text descriptions. This is a standard tool-augmented agent or a simple feedback loop, not a \"multi-agent system\" in the sense of complex roles, negotiation, or collaborative planning.\n\n3. Misleading Interpretation of the \"Gap\": The paper claims to have found a \"gap between humans and AI\" because the agent does not benefit from assets or AV feedback. This is a major misinterpretation of a flaw in their own agent's design.\n\n4. Asset Flaw: The agent does not see or hear the assets. It only selects them based on text metadata (e.g., filenames, dimensions, BPM). A human looks at the art and listens to the music. The agent is choosing blind. It is no surprise it cannot leverage assets it has no direct perceptual access to. This is an agent design flaw, not a fundamental AI limitation.\n\n5. Feedback Flaw: The \"audio-visual feedback\" is just a high-level text description from the omni-model (e.g., \"describe the content,\" \"provide subjective feedback\"). This feedback is abstract and non-actionable. How is a coding model supposed to translate \"the visual design is not harmonious\" into a specific JavaScript code change? 
The technical challenge is bridging this modality-to-code gap, and the paper's solution (just passing text) fails.\n\n6. No Human Correlation for AVR-Eval: The paper proposes AVR-Eval as an automated substitute for human evaluation. To validate such a metric, it is essential to conduct a human correlation study. The authors must show that AVR-Eval's pairwise preferences (A vs. B) strongly correlate with the preferences of human evaluators on the same content pairs. The paper provides zero such data.\n\n7. Weak Metric Validation: The validation in Table 1 is unconvincing. Detecting \"broken\" (crash/black screen) or \"mislabeled\" (fireworks vs. bouncing ball) content is a trivial bar for any modern multimodal model. The only subjective test is against \"human-made\" content, where the generated content \"won\" 32.22% of the time. This 1/3 failure rate to identify high-quality human content is deeply concerning and suggests the metric is not a reliable judge of \"quality.\"\n\n8. Benchmark is a Toy Problem: The benchmark of 5 simple animations and 5 simple games (e.g., \"Bouncing Ball,\" \"Solitaire\") is a toy problem. These tasks do not involve complex game logic, state management, or novel mechanics. It is impossible to generalize any findings from this benchmark to the \"challenging problem\" of novel video game generation.\n\n9. Inconsistent Experimental Protocol: The authors admit that due to API costs (\"paid out-of-pocket\"), stronger models were run with fewer iterations than weaker models. This inconsistent protocol makes the model-to-model comparisons in Figure 4c and Table 5 unreliable, as the results are confounded by the different iteration counts.",
"questions": "1. On Metric Validity: The central claim of this paper is the utility of AVR-Eval. Why did you not conduct a human correlation study to prove that your metric's judgments align with human preferences? Without this, how can we trust any of the results that depend on this metric?\n\n2. On Circularity: How do you defend the experimental design where the agent's feedback mechanism (Qwen2.5-Omni-7B) is the same model as the final evaluator (Qwen2.5-Omni-7B)? How can you demonstrate that your AVR-Agent is learning to make better games and not just learning to game the judge?\n\n3. On the \"Asset Gap\": You claim to have found a \"gap\" because the AI does not benefit from assets. Since your agent only sees text metadata and not the visual or audio content of the assets, is this not a simple failure of your agent's design rather than a fundamental finding about AI?\n\n4. On the \"Feedback Gap\": You claim the agent does not benefit from AV feedback. Your omni-model provides only high-level, non-actionable text descriptions. Did you attempt to provide more structured, code-oriented feedback (e.g., \"The enemy hitbox at (x,y) is too large,\" \"The jump sound is not triggering on line 87\")?\n\n5. On Generalizability: Given that your benchmark is limited to 10 extremely simple, well-defined problems, how can you be confident that your findings, especially the surprising lack of benefit from assets and feedback, will generalize to the generation of even moderately complex, novel video games?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T19:44:30",
"modification_date": "2025-11-12T15:09:21",
"review_url": "https://openreview.net/forum?id=IrGJvFKuX2&noteId=o6kBJrUadO",
"license": "CC BY 4.0"
},
{
"id": "lNTIqZ5fw8",
"forum": "IrGJvFKuX2",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19440/Reviewer_bTx2",
"reviewer_name": "Reviewer_bTx2",
"rating": 2,
"confidence": 2,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "The paper aims to address two issues in game generation: the lack of automated evaluation metrics and difficulty with complex content. Specifically, this paper proposes AVR‑Eval, a relative, pairwise metric that compares two pieces of web‑based multimedia (games or animations) using audio‑visual recordings (AVRs) as inputs to an omni‑modal judge, followed by a text‑model review step. An ablation shows that multi‑round description → comparison and a text‑model review notably reduce failure modes. Besides, this paper proposes AVR‑Agent. In the first stage, the coding model selects which assets to use to produce the desired content given the original description. In the second stage, the coding model is asked to generate the content based on the original description, chosen assets, general guidelines, and evaluation criteria. In the third stage, the content is improved over multiple steps including content description and feedback for the content. Empirical results demonstrate the effectiveness of the Agent and Eval.",
"strengths": "1. Target an interesting goal in achieving automated game design.\n2. Each component in the AVR-Eval or the AVR-Agent is evaluated carefully to demonstrate its effectiveness.",
"weaknesses": "1. Overall the paper is heavily engineering-oriented, focused on designing pipelines and prompts for AVR-Eval and AVR-Agent, and lacks a main technical contribution.\n2. Not much related work is discussed in the paper, so it is hard to place the paper in the existing literature.\n3. While AVR‑Eval is intuitive and the ablation is convincing, there is no study of alignment with human raters.\n4. The benchmark uses five games and five animations. Many results may not carry to richer game loops, content pipelines, or larger engine use (e.g., asset streaming, physics edge cases, level generation at scale).\n5. The writing should be improved to better separate the discussion about approach and results.",
"questions": "N/A",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T16:13:42",
"modification_date": "2025-11-12T15:09:22",
"review_url": "https://openreview.net/forum?id=IrGJvFKuX2&noteId=lNTIqZ5fw8",
"license": "CC BY 4.0"
},
{
"id": "ocplpumE1C",
"forum": "IrGJvFKuX2",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19440/Reviewer_7AWV",
"reviewer_name": "Reviewer_7AWV",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces AVR-Eval, a new evaluation metric for multimedia and interactive content, and AVR-Agent, a multi-agent system to generate JavaScript code from a bank of multimedia assets (audio, images, 3D models) and using AVR feedback. Experiments across 5 games and 5 animations demonstrate that AVR-Agent improves generation quality compared to one-shot LLM generation",
"strengths": "1.\tThe presentation is clear and easy to follow.\n\n2.\tThe paper bridges game generation, multimodal evaluation, and agent-based code improvement, which is a novel combination not widely explored.\n\n3.\tThe experiments and visualizations are reasonable and well done.\n\n4.\tThe proposed pairwise audio-visual comparison metric is a meaningful step toward automated evaluation of multimedia and interactive content.",
"weaknesses": "1.\tAVR-Agent primarily combines existing LLMs and evaluation loops rather than introducing fundamentally new generation algorithms or architectures.\n\n2.\tThe AVR-Eval metric was tested only on simple 2D JavaScript games and animations; its generalizability to complex, commercial-quality or 3D content is unclear.\n\n3.\tWhile open-sourced, the framework depends on large proprietary models (Gemini, Grok, etc.) and external asset libraries, which may limit full replication.\n\n4.\tThe iterative AVR-Agent pipeline (multiple generations, AV recordings, model feedback) appears computationally expensive, while runtime and cost analyses are missing.\n\n5.\tAVR-Eval lacks direct validation against human preference scores.",
"questions": "1.\tWhat is the time and computational cost of one full AVR-Agent generation cycle (including k-initial candidates and multi-round improvements)?\n\n2.\tCan AVR-Agent be applied to generate larger or multi-level games beyond short demos? How does performance scale with content complexity?\n\n3.\tGiven reliance on commercial APIs and proprietary LLMs, how can other researchers replicate the reported results or extend AVR-Agent using open-source models?\n\n4.\tTo what extent is the framework autonomous? Does it still require human intervention for debugging or asset curation?\n\n5.\tAre the two agents (coding and omni-modal) strictly sequential, or can they interact iteratively (e.g., co-reflection or negotiation)?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T17:49:29",
"modification_date": "2025-11-12T15:09:22",
"review_url": "https://openreview.net/forum?id=IrGJvFKuX2&noteId=ocplpumE1C",
"license": "CC BY 4.0"
}
] |
bgdTK6cniJ | https://openreview.net/forum?id=bgdTK6cniJ | CaNDiCE: Causal Discovery of Nonlinear Dynamics Through Counterfactual Explanations | 4 | 4 | [
6,
2,
4,
4
] | [
4,
5,
4,
3
] | 4 | [
"governing equations",
"nonlinear dynamics",
"causality",
"counterfactuals"
] | The problem of discovering governing equations from noisy observational data has broad applications in scientific discovery, control, and prediction of complex systems. However, existing approaches that infer dynamics directly from data—whether symbolic regression (e.g., tree-based methods) or sparse identification with pre-defined basis functions—often suffer from poor generalizability, sensitivity to noise, and the inclusion of spurious terms. In this work, we present a causality-preserving counterfactual explanations framework for discovering governing equations in dynamical systems. Counterfactuals in this setting are hypothetical governing equations obtained by minimally perturbing basis function coefficients to induce out-of-distribution trajectories. By penalizing counterfactuals that deviate from the observed topological causality, a measure of directed effective influence between state variables, the resulting trajectories remain consistent with the causal structure of the true dynamics inferred from observed data. As such, resulting counterfactuals are obtained only by perturbing causal terms in the governing equation, while spurious terms are naturally suppressed since their perturbations violate causal consistency. We evaluate our approach across a range of dynamical system benchmarks and show that it outperforms state-of-the-art methods, including symbolic regression, library-based sparse regression, and deep learning models, in identifying robust and parsimonious governing equations. | learning on time series and dynamical systems | https://openreview.net/pdf?id=bgdTK6cniJ | 2025-09-20T14:23:10 | 4 | [
{
"id": "2ueZVyc81s",
"forum": "bgdTK6cniJ",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23857/Reviewer_CSxc",
"reviewer_name": "Reviewer_CSxc",
"rating": 6,
"confidence": 4,
"soundness": 4,
"contribution": 3,
"presentation": 4,
"summary": "This work proposes a new approach to select relevant basis functions for sparse identification of governing equations based on topological causality. The authors propose to train a GAN to sample counterfactual trajectories that preserve the causal structure as measured by a topological causality metric. Examining the distribution of sampled counterfactuals then allows for the construction of a minimal library of basis functions by eliminating terms that do not generate causally consistent counterfactuals.",
"strengths": "This paper is well-written and uses a recently developed theoretical tool for causal analysis of dynamical systems to improve system identification. The experiments show that this approach has a significant advantage over standard non-causal system identification, especially for noisy data where standard SINDy fails.",
"weaknesses": "My main concern is regarding the scalability of the counterfactual generation. It seems to be at least quadratic in the number of initial library terms to even evaluate the loss for the GAN. Furthermore, the GAN must effectively sample all sparse combinations to truly find all good counterfactuals. As mentioned in the paper, this is NP-hard. The world model construction may also run into scalability issues due to the need to estimate a stochastic inverse.",
"questions": "1. How computationally expensive is constructing the world model and computing the topological causality metric? How does it scale?\n2. How stable and reproducible is the GAN training, and are there failure modes that you observe? Does failure to sample enough counterfactuals result in a library that is too small to fully capture the dynamics?\n3. Can you run a test on a much higher-dimensional system to give a sense of the scaling behavior?\n4. I'm a bit confused by the diversity-promoting loss in equation 11. If gamma > 0, wouldn't the loss encourage the new counterfactual candidate to be similar to pre-existing counterfactuals (so reducing diversity)? Also, what does it mean to subtract a coefficient from a set of coefficients (are you averaging here)?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T05:18:04",
"modification_date": "2025-11-12T18:20:32",
"review_url": "https://openreview.net/forum?id=bgdTK6cniJ&noteId=2ueZVyc81s",
"license": "CC BY 4.0"
},
{
"id": "doQA82cT8A",
"forum": "bgdTK6cniJ",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23857/Reviewer_xPLj",
"reviewer_name": "Reviewer_xPLj",
"rating": 2,
"confidence": 5,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The authors introduce a counterfactual penalty term to model discovery methods, specifically the sparse identification of nonlinear dynamics algorithm. They then show the performance of this method against some of the variants of SINDy.",
"strengths": "The ideas of the paper are actually quite nice. It seems a rather nice innovation to include in the regression architecture. I would strongly encourage the authors to continue pursuing this line of work as it has great potential.",
"weaknesses": "Unfortunately, the method seems rather immature to me at this point. Specifically, the two models demonstrated in the paper are the Lotka-Volterra and Lorenz systems, both of which are very basic models. It is certainly fine to demonstrate initially on these models, but it would be expected for an ICLR paper to have much more challenging models to explore. \n\nAdditionally, the comparisons to SINDy, eSINDy, SPL, while good, are certainly not state-of-the-art methods. In fact, these methods are not really aimed at causal inference. Much more serious comparisons should be made against what are considered causal learning methods. So the comparisons are simply not up to what would be expected.",
"questions": "Only two main questions:\n\nHow does this work for more challenging models than Lorenz/Lotka-Volterra? Especially with noise?\n\nHow does the method actually hold up in comparison with leading causality inference methods?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T01:30:25",
"modification_date": "2025-11-12T18:20:32",
"review_url": "https://openreview.net/forum?id=bgdTK6cniJ&noteId=doQA82cT8A",
"license": "CC BY 4.0"
},
{
"id": "NNhc66vxTu",
"forum": "bgdTK6cniJ",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23857/Reviewer_hwVS",
"reviewer_name": "Reviewer_hwVS",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper addresses the problem of equation discovery from noisy and limited data, which is a long-standing challenge in scientific machine learning. The authors introduce CaNDiCE (Causal Discovery of Nonlinear Dynamics through Counterfactual Explanations), a framework that integrates counterfactual reasoning and topological causality to recover parsimonious, causally meaningful governing equations.\nThe method builds a world model over the coefficients of a predefined basis library, generates counterfactual coefficients via a conditional GAN that satisfy sparsity and causal consistency constraints, and then identifies the minimal causal set of terms to refit a sparse regression model.\nEmpirical evaluations span five benchmark dynamical systems: Lotka–Volterra, Lorenz, Van der Pol, Rössler, and a ball-drop experiment with air resistance, under varying signal-to-noise ratios (SNR) and data regimes. Results show large improvements compared to classical and recent baselines such as SINDy, ESINDy, and SPL, especially in low-data and high-noise conditions.",
"strengths": "1. **Conceptual novelty.** The paper introduces a fresh causal perspective on equation discovery by embedding counterfactual generation within a causal consistency regularization. This adversarial counterfactual setup, where coefficient perturbations are constrained by topological causality, is an original and very interesting idea.\n\n2. **New inspiration for physics discovery using causal constraints.** The incorporation of topological causality (via cross-mapping on reconstructed manifolds) provides a principled way to ensure causal consistency in dynamical systems where classical DAG-based causality notions fail. This is a meaningful step toward causal interpretability in scientific ML.\n\n3. **Clear writing and methodological exposition.** The paper is well structured and mathematically precise. Algorithm 1 and the breakdown into “world model”, “counterfactual model”, and “minimum set discovery” make the pipeline understandable.",
"weaknesses": "1. **Missing key baselines and contextualization.** The paper omits several recent transformer- or diffusion-based approaches to symbolic regression and equation discovery, notably \"ODEFormer: Symbolic Regression of Dynamical Systems with Transformers (ICLR 2024)\", which introduces the ODE-Bench dataset. Including such models would better position CaNDiCE within the current landscape of neural-symbolic discovery.\n\n2. **Theoretical grounding and identifiability.** While the paper discusses causal constraints qualitatively, it lacks a formal analysis of parameter identifiability or conditions under which the causal coefficients are recoverable. For example, under what assumptions does topological causality regularization guarantee recovery of the correct sparse structure?\n\n3. **Computational complexity and scalability.**\nThe combination of stochastic inversion, bootstrapping, and GAN training raises concerns about computational efficiency. The paper would benefit from a runtime or memory comparison with baselines, especially for higher-dimensional systems.\n\n4. **Interpretability and intuition gaps.**\nThe notion of topological causality may be unfamiliar to much of the ICLR audience. A concise **visual** example—e.g., a 2-variable dynamical system with the corresponding manifold mappings—would greatly help build intuition. \n\n5. **Library dependence and limitations.**\nAs shown in the ball-drop case study, CaNDiCE’s success depends heavily on whether the causal functional forms are representable within the predefined basis library. The paper should discuss possible extensions to mitigate this limitation.",
"questions": "1. **Identifiability and guarantees**\nUnder what assumptions does CaNDiCE provably recover the correct causal terms? Is there an identifiable mapping between topological-causality preservation and coefficient consistency?\n\n2. **Complexity and scalability**\nWhat is the asymptotic or empirical computational cost relative to SINDy/ESINDy/SPL? Could the GAN-based counterfactual generation become a bottleneck for high-dimensional systems?\n\n3. **Robustness to library misspecification**\nCan the model adaptively extend or refine its basis set when key functional forms are missing? \n\n4. **Ablation or sensitivity analysis**\nHow sensitive are results to the hyperparameters λ_TC and λ_sp? Does the balance between sparsity and causality penalties substantially affect which terms are identified as causal?\n\n\nI am happy to raise my score if the concerns in questions and weaknesses are addressed.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T19:19:35",
"modification_date": "2025-11-12T18:20:32",
"review_url": "https://openreview.net/forum?id=bgdTK6cniJ&noteId=NNhc66vxTu",
"license": "CC BY 4.0"
},
{
"id": "Oveh7HZ8pV",
"forum": "bgdTK6cniJ",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23857/Reviewer_H6Lp",
"reviewer_name": "Reviewer_H6Lp",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "The paper proposes an equation discovery framework for dynamical systems based on topological causality. It first defines a world model as the posterior of the sparse regression parameters given the bootstrapped data samples and updates the posterior using a stochastic inverse approach. Then, it trains a generator that generates perturbations to the sampled parameters from the world model using a combination of GAN loss and several regularization terms that promote sparsity and causal consistency of the perturbations. The terms whose coefficients are often perturbed by the trained generator are deemed causal and retained in the function library for sparse regression.",
"strengths": "* The paper provides extensive context about related topics, such as topological causality and counterfactuals.\n* The experimental results are impressive on the Lotka-Volterra equation and the Lorenz equation. The proposed method did very well when limited data were available.",
"weaknesses": "* I find the writing in the methodology section unclear in general. For example, in Section 3.1, the kernel density estimator and stochastic inverse approach are crucial for estimating and updating the posterior $\\pi(\\Theta\\Lambda|\\mathbf y)$. These techniques should be (at least briefly) introduced in this problem context. Also, the notation $\\pi(\\Theta\\Lambda|\\mathbf y)$ is slightly confusing. I suppose $\\Theta\\Lambda$ here in fact refers to the simulated trajectory from $\\dot {\\mathbf x} = \\Theta\\Lambda$, but this notation alone might suggest the RHS expression itself.\n* The presentational issues become more severe when it comes to the counterfactual model (Section 3.2 and 3.3). I have a lot of questions regarding these sections. In the CGAN formulation, what is the difference between the role of the discriminator and the classifier? They seem to both classify between the unperturbed (real) parameters and the perturbed (fake) parameters. Also, from the algorithm, it seems that the update of D is decoupled from that of G, which is different from the original GAN formulation. Why is that? I cannot understand eq. (10) either. The classification label $z$ is never explained or defined in the text. And the $\\mathbf y$ inside the expectation does not make sense, since this is a scalar equation.\n* The writing should be improved in general. For example, the second point of the contribution list says that \"... generating counterfactual instances that lead to *out-of-distribution* trajectories... Counterfactuals are obtained by... *in-distribution* trajectories.\" Before reading the later sections, I could not figure out whether the counterfactual model should lead to in-distribution or out-of-distribution trajectories. On second thought, I can understand this statement where \"minimally perturbing\" seems to serve as a negative, but the unnecessary complexity in writing has made the paper difficult to understand.\n* It would be great to include a figure explaining the entire pipeline, in addition to Algorithm 1.\n* The experiments only considered two simple dynamical systems. While the results on these two systems are impressive, it remains to be seen whether this can be generalized to other datasets.\n* The experiments did not compare with weak SINDy, which is specifically designed for noisy data.",
"questions": "* L60: missing cross-reference\n* distinguish between \\citet and \\citep\n* L219: Can you elaborate on why you need the causal consistency constraint?\n* How many samples are needed to train the CGAN counterfactual model? Since it involves some neural networks, does it do well with the small sample sizes in the experiments?\n* How time-efficient is the proposed method compared to baseline SINDy?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-24T08:45:30",
"modification_date": "2025-11-12T18:20:32",
"review_url": "https://openreview.net/forum?id=bgdTK6cniJ&noteId=Oveh7HZ8pV",
"license": "CC BY 4.0"
}
] | |
tAM9SGoEmD | https://openreview.net/forum?id=tAM9SGoEmD | SafeMVDrive: Multi-view Safety-Critical Driving Video Generation in the Real World Domain | 6 | 3.75 | [
4,
6,
8,
6
] | [
3,
4,
4,
4
] | 4 | [
"Autonomous driving testing",
"safety-critical scenario",
"video generation",
"safety"
] | Safety-critical scenarios are essential for evaluating autonomous driving (AD) systems, yet they are rare in practice. Existing generators produce trajectories, simulations, or single-view videos—but they don’t meet what modern AD systems actually consume: realistic multi-view video. We present SafeMVDrive, the first framework for generating multi-view safety-critical driving videos in the real-world domain.
SafeMVDrive couples a safety-critical trajectory engine with a diffusion-based multi-view video generator through three design choices. First, we pick the right adversary: a GRPO-fine-tuned vision-language model (VLM) that understands multi-camera context and selects vehicles most likely to induce hazards. Second, we generate the right motion: a two-stage trajectory process that (i) produces collisions, then (ii) transforms them into natural evasion trajectories—preserving risk while staying within what current video generators can faithfully render. Third, we synthesize the right data: a diffusion model that turns these trajectories into multi-view videos suitable for end-to-end planners. On a strong end-to-end planner, our videos substantially increase collision rate, exposing brittle behavior and providing targeted stress tests for planning modules. Our code and video examples are available at: https://iclr-1.github.io/SMD/. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=tAM9SGoEmD | 2025-09-18T23:01:12 | 4 | [
{
"id": "zP8cbBqs24",
"forum": "tAM9SGoEmD",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12421/Reviewer_izyK",
"reviewer_name": "Reviewer_izyK",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "This paper addresses the critical challenge of generating realistic, safety-critical data for the evaluation of modern autonomous driving (AD) systems. The authors identify a key gap in existing data generation methods: while they can produce trajectories, simulations, or single-view videos, they fail to generate the realistic multi-view video feeds that contemporary end-to-end AD systems actually consume. To solve this, the authors propose SafeMVDrive, a novel framework designed to be the first to synthesize multi-view, safety-critical driving videos in the real-world domain.\nThe core of their method involves coupling a safety-critical trajectory engine with a diffusion-based video generator, guided by three main contributions. First, they employ a fine-tuned Vision-Language Model (VLM) as an intelligent \"adversary\" to select the vehicle most likely to create a hazardous situation based on multi-camera context. Second, they propose a two-stage motion generation process that initially models a direct collision and then transforms it into a plausible near-miss or evasion trajectory, preserving the scenario's risk while ensuring it can be faithfully rendered by the video model. Third, a diffusion model is used to convert these trajectories into the final multi-view video output.",
"strengths": "1. The designed safety-critical video generation pipeline is intuitive and important for training robust E2E model.\n\n2. The proposed two-stage evasion trajectory generator provides diverse collision scenes.\n\n3. The video results look consistent and dynamically plausible.",
"weaknesses": "1. SafeMVDrive works as a data engine to provide more diverse driving scenarios for better training of E2E AD systems. However, the experiment section lacks related experiments on how the generated data can improve the E2E AD model's performance under long-tailed driving scenarios.\n\n2. In Section 3.2, the authors claim that VLM selection outperforms selection methods that rely on non-visual annotations in identifying physically feasible collisions. However, this capability is not shown in the experiment section. It would be better to have experiments showing this ability qualitatively or quantitatively.\n\n3. The motivation for using a VLM is not clear. The authors mention that the VLM allows fast selection during inference. However, the proposed pipeline mainly serves as a data engine, so runtime efficiency is not a critical problem. Also, the paper does not compare the running time between simulation-based selection and VLM inference.",
"questions": "1. In line 452, the author states that they use automated annotation to identify all vehicles that can collide with the ego vehicle. How is automated annotation performed? If this process can be automated, why not just find all these vehicles at test time and randomly pick one from the set.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T07:00:20",
"modification_date": "2025-11-12T12:55:10",
"review_url": "https://openreview.net/forum?id=tAM9SGoEmD&noteId=zP8cbBqs24",
"license": "CC BY 4.0"
},
{
"id": "BjnYWIpGQp",
"forum": "tAM9SGoEmD",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12421/Reviewer_GrpU",
"reviewer_name": "Reviewer_GrpU",
"rating": 6,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "This work's main contribution is to propose using a VLM as an adversarial vehicle selector for multi-view video generation: the pipeline starts with VLM selection, uses trajectory diffusion models for adversarial scenario generation, and then generates multi-view videos for E2E planner evaluation. Unlike most works that synthesize vehicle trajectories, this work starts from visual cues in multi-view videos. The main limitation is that the generated videos are evaluated in an open-loop setting, where the E2E planner’s behavior may diverge from the input videos, limiting the realism and closed-loop consistency of the evaluation.",
"strengths": "- This work focuses on an important problem: generating safety-critical multi-view videos, especially as much of research and industry has now turned to the end-to-end paradigm.\n- Using video to select the critical object is an interesting direction, but it may also neglect safety-critical scenarios that are not in the perceptual field (collision or far-away vehicles)",
"weaknesses": "- The main limitation and weakness is that this work is not closed-loop: \nThe multi-view videos used as input to the evaluated planner may diverge from the planner’s actual behavior, limiting the realism and validity of the evaluation. The two-stage evasion refinement in Table 4 is not the evaluated planner's actual behavior.\n- **Table 2 may not be a fair comparison.** The proposed annotation strategy only identifies vehicles that can collide with the ego vehicle, which may miss adversarial agents farther away. In contrast, a random lane-based neighbor selection could reveal a wider range of challenging interactions. Moreover, some nearest-vehicle collisions (as shown in the qualitative videos) appear uninteresting, such as simple rear-end cases.\n- Why is the evaluation limited to 3 seconds in Table 4, rather than testing longer horizons for more realistic interactions?\n- Minor weakness: for fine-tuning the VLM with CTG++ results, this work proposes to auto-label feasible collision vehicles by checking whether collisions can happen before going off-road or colliding with other vehicles; this could potentially miss scenarios due to CTG++’s performance issues.",
"questions": "- Can the Proposed framework handle Closed-loop simulation? For example, can we have planning algorithms 1 and 2, where the final scenarios may be different depending on the interactions?\n- How well does the VLM handle **multi-view visual inputs**? In particular, does the framework tend to select vehicles primarily from the **front and rear views**, or are there also cases where **side-view vehicles** are chosen as adversarial agents?\n- Report the inference speed of the proposed Pipelines",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T11:30:47",
"modification_date": "2025-11-12T12:55:11",
"review_url": "https://openreview.net/forum?id=tAM9SGoEmD&noteId=BjnYWIpGQp",
"license": "CC BY 4.0"
},
{
"id": "7ZA3EhpV60",
"forum": "tAM9SGoEmD",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12421/Reviewer_ffnN",
"reviewer_name": "Reviewer_ffnN",
"rating": 8,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "This paper proposes a method for synthesizing safety-critical scenario data, for which the authors design a VLM-based selector and a two-stage trajectory generator. By evaluating the quality of the generated videos and the validity rate of the trajectories, this paper demonstrates the effectiveness of the proposed pipeline. Long-tail scenarios are a critical challenge that autonomous driving must address. This paper represents an attempt to improve the performance of autonomous driving models on such long-tail scenarios.",
"strengths": "One notable strength of this work is its clear articulation of a critical gap in existing generative models: despite recent advances in video synthesis, they fail to produce safety-critical driving scenarios—especially those involving potential collisions—that are vital for robustness evaluation of autonomous vehicles. The paper rightly positions this as a core challenge in handling long-tail events, which are rare yet consequential in real-world deployment. \n\nThis paper proposes a novel multi-stage framework: it first employs a VLM-based selector to identify a potentially colliding target vehicle, then uses a trajectory generator to sequentially produce a collision trajectory followed by an evasion trajectory, and finally leverages an existing video generation model—conditioned on the generated trajectories—to synthesize realistic driving videos.",
"weaknesses": "The paper primarily evaluates two components: (1) the accuracy of the VLM-based adversarial vehicle selector, and (2) the performance of the synthesized data when used to test the UniAD planner, reporting metrics such as Collision Rate (CR), Near-Collision Rate (NC), and Time-to-Collision (TTC). \nHowever, it does not address a crucial practical question: can the generated safety-critical scenarios be effectively used for training autonomous driving models? Demonstrating utility in evaluation is valuable, but the potential of the synthetic data to improve model robustness or safety during training—a primary goal of data synthesis—remains unexplored and should be discussed.\n\nFurthermore, the authors directly adopt the pre-trained UniM-LVG video generator without fine-tuning. Given that UniM-LVG’s training data likely underrepresents safety-critical events (e.g., collisions or near-misses), its capacity to generate high-quality, physically plausible safety-critical videos is uncertain. The paper lacks empirical validation—such as visual fidelity checks, physics-based consistency analysis, or user studies—to confirm that the generated scenes are realistic and meaningful in these extreme cases.",
"questions": "1. Please supplement the experiments with discussion or validation demonstrating that UniM-LVG can generate high-quality data for the safety-critical scenarios described in this paper.\n\n2. Collisions represent only one type of safety-critical scenario. The paper should also discuss the cost and methodology required to extend the proposed approach to other types of safety-critical situations.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T12:02:32",
"modification_date": "2025-11-12T12:55:11",
"review_url": "https://openreview.net/forum?id=tAM9SGoEmD¬eId=7ZA3EhpV60",
"license": "CC BY 4.0"
},
{
"id": "Ohw6a5gFM9",
"forum": "tAM9SGoEmD",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12421/Reviewer_c85w",
"reviewer_name": "Reviewer_c85w",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper addresses multi-view, safety-critical video generation in the driving domain. It introduces SafeMVDrive, a framework that integrates a safety-critical trajectory simulator with a multi-view video generator and leverages visual context via a fine-tuned vision–language model (VLM) to select safety-critical vehicles. Experiments demonstrate that SafeMVDrive generates high-quality multi-view driving videos and induces 30% more collisions than the original NuScenes data, highlighting its effectiveness for safety-critical scenario synthesis.",
"strengths": "1. The paper is clearly written and presents rich, detailed content.\n2. The ability to generate safety-critical driving scenarios addresses an important real-world need, as prior approaches have struggled to produce such rare but crucial events.\n3. The paper provides extensive and clear visualizations, which effectively illustrate the high-quality simulation performance of the proposed method.",
"weaknesses": "1. The paper demonstrates notable practical and engineering value; however, its technical novelty is somewhat limited. The approach primarily integrates existing components—multi-view conditional generators, trajectory proposal modules, and VLM-based selection.\n2. The academic value is difficult to assess, as it lacks comparisons with related methods (though such methods may themselves be scarce) and \"safety\" is inherently a broad concept. Overall, its engineering value outweighs its academic value.\n3. The term safety-critical is rather broad, and the paper lacks a precise definition. Moreover, the current work addresses only safety issues arising from other vehicles’ trajectories, whereas real-world safety concerns extend beyond this scope. It is recommended that the paper provide a more concrete and detailed formulation of the problem.\n4. How well does the VLM-based selection generalization, and what happens if the selection fails? Are the final safety-critical scenarios manually adjusted in such cases? My understanding is that the method primarily leverages the VLM to facilitate the generation of more safety-critical simulation scenarios—is this correct?",
"questions": "1. Does training a policy model on generated safety-critical scenarios enhance its robustness to dangerous driving situations?\n2. Since interactive cases in NuScenes are relatively rare, how is the ground truth for safety-critical vehicles selected when using the VLM?\n3. Metrics relying on a single planner tend to be susceptible to fluctuations. It is therefore recommended to evaluate a broader range of planning methods to verify the consistency of results.\n4. What are the practical potential application scenarios of this pipeline? Is it intended to serve as a corner case evaluation benchmarks or to train more robust planners?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-20T20:22:43",
"modification_date": "2025-11-12T12:55:11",
"review_url": "https://openreview.net/forum?id=tAM9SGoEmD¬eId=Ohw6a5gFM9",
"license": "CC BY 4.0"
}
] | |
TxJfywuHSA | https://openreview.net/forum?id=TxJfywuHSA | Fourier Features Let Agents Learn High Precision Policies with Imitation Learning | 4 | 4 | [
2,
4,
6
] | [
4,
4,
4
] | 3 | [
"imitation learning",
"robotics",
"point clouds",
"point maps"
] | Various 3D modalities have been proposed for high-precision imitation learning tasks to compensate for the short-comings of RGB-only policies.
Modalities that explicitly represent positions in Cartesian space have an inherent advantage over purely image-based ones, since they allow policies to reason about geometry.
Point clouds are a common way to represent geometric information, and have several benefits such as permutation invariance and flexible observation size.
Despite their effectiveness, a number of hybrid 2D/3D architectures have been proposed in the literature, indicating that this performance can often be task-dependent.
We hypothesize that this may be due to the spectral bias of neural networks towards learning low frequency functions, which especially affects models conditioned on slow-moving Cartesian features.
Building on prior work that uses a parametric projection from Cartesian space into high-dimensional Fourier space to overcome the innate low-pass filtering characteristic of neural networks, we apply Fourier features to several representative point cloud encoder architectures.
We validate this approach on challenging manipulation tasks from the RoboCasa and ManiSkill3 benchmarks, and find that adding Fourier feature projections provides benefits across diverse encoder architectures and tasks, with meaningful improvements seen in the vast majority of tasks.
We show that Fourier features are a general-purpose tool for point cloud-based imitation learning, which consistently improves performance by enabling policies to leverage geometric details more effectively than models conditioned on Cartesian features. | Fourier feature projections improve all 3D modalities for diffusion imitation learning of high-precision tasks, but are especially beneficial for point cloud policies. | applications to robotics, autonomy, planning | https://openreview.net/pdf?id=TxJfywuHSA | 2025-09-19T02:38:14 | 3 | [
{
"id": "dOcckRymBp",
"forum": "TxJfywuHSA",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13670/Reviewer_9Bk2",
"reviewer_name": "Reviewer_9Bk2",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "This paper investigates how Fourier features help mitigate spectral bias. The method is simple — augmenting point cloud encoders with Fourier feature embeddings to enable the network to capture high-frequency geometric cues. Experiments on several tasks from RoboCasa and ManiSkill3 demonstrate consistent improvements on both DP3 and PointPatch.",
"strengths": "1. The idea is straightforward and reasonable at a high level. The experiments conducted in simulation convincingly demonstrate its effectiveness across two 3D baselines (PointPatch, DP3).\n\n2. Implementation details (such as training details) are sufficient and shows good reproducibility.",
"weaknesses": "1. Lack progressive ablation studies. The work employs VariableJitter augmentation to stabilize training with Fourier features. However, this setup lacks fair and step-by-step ablation studies. There is a critical hypothesis that VariableJitter itself may contribute to the observed improvements, or it fits Fourier features. To properly isolate the effects, the paper should include step-by-step comparisons across the following variants: baseline, baseline+aug, fourier, fourier + aug.\n\n2. Lack direct evidence of mitigation spectral bias. The improvements should be supported by direct visualization or spectral analysis between with/wo fourier.\n\n3. Absence of real-world experiments. Real world 3D point clouds contain more noise (e.g. point sparsity, unstable depth sensors, noise or occlusion artifacts), which could lead to unstable learning of policy or sensitivity to spurious geometry. Besides, simulation engines has coarse contact resolution compared with real-world ones. Experiments on real hardware (even under noisy or partially occluded conditions) should make the results much more convincing. \n\n4. Limited contribution scope. The second contribution appears to be a standard validation of first contribution rather than an independent contribution. \n\n5. Limited scope to 3D policy. The motivation regarding spectral bias should also apply to 2D inputs. The authors' justification about RGB is sensitive to viewpoint or lighting is not convincing, especially the experiments presented in the paper are not directly related to these factors. Demonstrating method effectiveness in 2D would broden the research scope.",
"questions": "See the weakness.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-10T01:12:05",
"modification_date": "2025-11-12T13:10:35",
"review_url": "https://openreview.net/forum?id=TxJfywuHSA¬eId=dOcckRymBp",
"license": "CC BY 4.0"
},
{
"id": "1QmOlIjbQK",
"forum": "TxJfywuHSA",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13670/Reviewer_TuVu",
"reviewer_name": "Reviewer_TuVu",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 4,
"summary": "This paper proposes encoding 3D point-cloud positions with Fourier features to help an imitation-learning policy focus on geometric details. The authors conduct experiments on RoboCasa and ManiSkill3 to demonstrate effectiveness. However, the paper still lacks real-world experiments and clear, significant contributions.",
"strengths": "* This paper highlights a trick that many robotics papers overlook.\n* The paper is well presented and easy to read.",
"weaknesses": "* The novelty is limited. While applying Fourier features is a reasonable addition to the imitation-learning network, this is an incremental contribution, and its impact is difficult to validate without large-scale real-world experiments.\n* No real-world experiments. The authors evaluate only on RoboCasa and ManiSkill3, which are known for simplified physics and susceptibility to overfitting. Without convincing real-world results, it is hard to accept this as a substantial contribution to the robotics community.\n* The comparison in Fig. 5 is not fair. The proposed policy is compared qualitatively to a baseline, but the two scenarios differ, making it difficult to conclude that the proposed method attends better to details.\n* The evaluation appears too noisy to support strong conclusions. For example, in RoboCasa (Fig. 4), PP+FF outperforms DP3+FF, while in ManiSkill3 the opposite holds. This suggests the current benchmarking is not informative enough to determine whether the trick consistently helps.",
"questions": "* Why was EDM chosen as the action-conditioned diffusion framework instead of the more commonly used DDPM? Are there specific considerations driving this choice?\n* According to iCT ([https://arxiv.org/abs/2310.14189](https://arxiv.org/abs/2310.14189)), Fourier features are sensitive to hyperparameters. How were the hyperparameters selected here? The current choices appear somewhat arbitrary.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T03:23:27",
"modification_date": "2025-11-12T13:10:35",
"review_url": "https://openreview.net/forum?id=TxJfywuHSA¬eId=1QmOlIjbQK",
"license": "CC BY 4.0"
},
{
"id": "TJcaUNBTM7",
"forum": "TxJfywuHSA",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13670/Reviewer_ApuY",
"reviewer_name": "Reviewer_ApuY",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper studies spectral bias in point-cloud–conditioned imitation learning (IL) policies and proposes a simple, architecture-agnostic fix: apply a NeRF-style Fourier feature mapping to Cartesian 3D inputs before point-cloud encoding. The authors instantiate this on two representative encoders—PointPatch and DP3—and evaluate on RoboCasa (16 high-precision kitchen tasks, 50 demos each) and ManiSkill3 (4 tabletop tasks, 500 demos each). They report consistent gains from the Fourier mapping: e.g., RoboCasa mean success improves from 21.0%→40.0% (PointPatch) and 19.1%→26.3% (DP3), with notable per-task jumps like CloseDrawer 33.3%→70.0%. On ManiSkill3, average success rises from 50.8%→57.5% (PointPatch) and 58.8%→64.2% (DP3). The approach is intentionally minimal (fixed log-spaced bands; variable-magnitude jitter augmentation) and claims broad applicability to 3D-based IL.",
"strengths": "1.Clear, simple idea with broad compatibility. The paper targets a real pain point—networks’ low-pass bias on slowly varying XYZ—and plugs in a standard Fourier mapping that can sit in front of most point-cloud tokenizers, not just a bespoke architecture.\n\n2.Solid experimental coverage. Two encoders (local patch tokens vs. global DP3 token) and two popular benchmarks (RoboCasa, ManiSkill3) under a multi-task IL setup; consistent benefits across most tasks and encoders, with visual qualitative evidence.\n\n3.High-precision tasks emphasized. The study focuses on tasks where small geometric distinctions matter (insertions, buttons, levers), which is where spectral bias plausibly bites most; the per-task tables quantify where improvements are largest.",
"weaknesses": "1.Limited novelty. The core technique (Fourier features / positional encodings) is well-established; the main contribution is a systematic application and study in point-cloud IL.\n\n2.Real-robot validation absent. Claims emphasize high-precision manipulation, but results are purely in simulation (RoboCasa/ManiSkill3). The paper lacks real-world experiments. I would like to see solid real-world experiments to support your claim.",
"questions": "1.Coordinate bounding & periodicity. You note the mapping is periodic and requires points to lie in [−λmax/2, λmax/2]. How are coordinates normalized/cropped in multi-view reconstruction, and what happens to points outside bounds during exploration or camera drift?\n\n2.Sensitivity to frequency design. How sensitive are gains to L, λmin, λmax? Could learned Gaussian RFF or learned sinusoidal frequencies outperform fixed log-spaced bands here? Please include a small sweep or a learned-RFF variant. \n\n3.Why DP3 sometimes drops. On ManiSkill3 PullCube, DP3 + FF underperforms vanilla DP3 (91.7→80.0). What failure mode explains this, and can frequency ranges be task-adapted to mitigate regressions? \n\n4.How does the method handle depth noise, extrinsic/intrinsic miscalibration, or partial occlusion? I would like to see solid real-world experiments to support your claim.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T15:04:35",
"modification_date": "2025-11-12T13:10:36",
"review_url": "https://openreview.net/forum?id=TxJfywuHSA¬eId=TJcaUNBTM7",
"license": "CC BY 4.0"
}
] |
S8bmkHXqgT | https://openreview.net/forum?id=S8bmkHXqgT | Interpretable Preference Elicitation: Aligning User Intent with Controllable Long-tailed Learning | 2.666667 | 3 | [
4,
0,
4
] | [
3,
2,
4
] | 3 | [
"Long-tail learning"
] | Long-tailed recognition remains a significant challenge, where models often struggle with tail class performance and adaptability to diverse user preferences. While recent controllable paradigms leveraging hypernetworks allow numerical specification of head-tail trade-offs, defining these multi-dimensional preference vectors can be unintuitive for users. This paper introduces a novel framework that bridges this gap by enabling users to articulate their preferences through natural language. We propose a two-stage approach: first, optimal numerical preference vectors are identified for canonical distribution scenarios, and a rich corpus of corresponding textual descriptions is generated. Subsequently, a lightweight neural network learns to map sentence embeddings of these textual descriptions to the underlying 3D preference vectors controlling the expert ensemble. Our method significantly enhances the usability and interpretability of controllable long-tailed learning systems without compromising, and even slightly improving, their performance on benchmark datasets. This work facilitates more accessible and practical adaptation of long-tailed models to specific real-world requirements. | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=S8bmkHXqgT | 2025-09-20T17:56:10 | 3 | [
{
"id": "UfgO5MyTMG",
"forum": "S8bmkHXqgT",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24934/Reviewer_DK75",
"reviewer_name": "Reviewer_DK75",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 4,
"summary": "This paper explores how to improve the usability of controllable hypernetworks by allowing users to directly specify the desired trade-offs or preferences through natural language. The proposed framework, Interpretable Preference Elicitation (IPE), follows a two-step training process: it first identifies canonical distributional scenarios with their corresponding optimal numerical preference vectors, and then associates these vectors with high-level textual descriptions. The paper demonstrates that this approach enhances usability and interpretability without degrading performance—and in some cases, even achieves slight improvements.",
"strengths": "1. **Relevant direction.** Improving the usability of advanced methods such as preference-controlled hypernetworks is an important research direction toward democratizing machine learning. \n2. **Clarity and simplicity.** The paper is clearly written, and the proposed approach is simple yet effective, substantially improving user-friendliness and interpretability. \n3. **Structured methodology.** Presents a well-designed three-step process for learning a mapping between text representations and preference vectors.",
"weaknesses": "1. **Limited novelty.** The primary novelty lies in the training procedure used to map sentence embeddings to preference vectors. While clearly presented, the three-step process mainly involves dataset construction and mean-squared-error regression, without introducing new algorithmic components or tackling novel challenges. \n2. If trained from scratch, this method requires training 3 different models disjointly (3-stage training).\n3. **Potential collapse.** The current procedure does not explicitly prevent the learned mappings from collapsing nor truly aligns the texts with the desired characteristics, just maps (see Question 2).",
"questions": "1. In Line 187, it is mentioned that a direct end-to-end training would involve an intractable joint optimization problem. Why is this the case? Would it not be feasible to train the hypernetwork using sentence representations (or their projections) as preference vectors, while enforcing alignment between the generated output and desired characteristics? \n2. Step 1.4 of Section 3.4 selects, for each scenario, the set of preference vectors that yield the best performance. \n 1. Given this heuristic, a single vector could belong to the optimal set of multiple scenarios. Have you verified this empirically? Figure 2 shows considerable overlap between some clusters (e.g., light orange and purple, blue and green). \n 2. How do you ensure that this procedure yields disjoint and well-separated clusters? \n 3. How do you guarantee that the mapping does not collapse, i.e., that all scenarios do not end up sharing the same set of vectors? \n3. Line 358 states that the semantic mapping is “well structured.” However, the corresponding figure alone is insufficient evidence for this claim and should be supported by quantitative analysis. \n 1. You could compute intra-cluster and inter-cluster distances. For each distribution type, compute the average distance between preference vectors corresponding to texts of the same type, and compare it to the average distance to vectors of different types. \n 2. Another possible analysis is to test linear separability by labeling data by distribution type and performing KNN classification with k-fold cross-validation. \n4. What are the training details for the hypernetwork? Was a pretrained hypernetwork used, or was it trained from scratch? If pretrained, what dataset and procedure were followed? \n5. Line 343 mentions training with a KL divergence loss, whereas Equation (9) specifies an MSE loss. Which one is actually used? \n\n**Minor comments that do not affect rating**\n1. 
In Figure 3b, between which two distributions is the KL divergence computed? \n2. In Figure 2, what do the connecting lines between the left and right sides represent? \n3. Table 1 and Figure 3a appear to display the same information. What is the rationale for including both? Additionally, Line 354 should reference Figure 3a as well as Table 1. \n4. Line 469 refers to Figure 3b but appears to mean only a portion of it—please clarify the intended reference.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T05:20:01",
"modification_date": "2025-11-12T18:27:31",
"review_url": "https://openreview.net/forum?id=S8bmkHXqgT¬eId=UfgO5MyTMG",
"license": "CC BY 4.0"
},
{
"id": "4xIreKtJzH",
"forum": "S8bmkHXqgT",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24934/Reviewer_fGM5",
"reviewer_name": "Reviewer_fGM5",
"rating": 0,
"confidence": 2,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "The paper develops a method to tailor neural networks to prediction tasks with long-tailed class distributions. The main novel contribution is a method to describe class trade-offs in natural language.",
"strengths": "The task of specifying a tradeoff between common and tail classes in natural language is interesting. I'm not familiar with the related work, but I'm not sure if prior work has tried to do this.",
"weaknesses": "1. Severe clarity issues:\n- From the outset of the paper's abstract and introduction, it's unclear what problem they are studying and why. \n- The methodology section is very hard to read - the authors do not clarify which part is novel and which part isn't, and there's a lot of notation introduced that ends up being confusing rather than clarifying.\n- Minor: The related work section is all over the place. Inexplicably, it cites the GPT-3 paper, which I'm not sure of the relevance to this paper.\n2. I do not understand the authors' choice of evaluation tasks. Supposedly, the novel contribution is the ability to specify desired handling of common classes vs. long-tail classes in natural language. But there are no metrics or evals for this capability. Instead, the authors just compare on standard metrics and datasets for long-tail prediction.\n3. There are no examples of the natural language specifications in the paper, making it very hard to understand what the method is actually trying to achieve.\n4. Related to the clarity concern in (1), the paper is not self-contained and I had to look at three other prior papers to even understand what problem this paper is studying.\n\nOverall, because this paper is difficult-to-read, poorly motivated, and does not justify its evals, it should be a clear reject.",
"questions": "1. In Table 3 and 4 you report SOTA against all prior long tail methods. What does this have to do with the paper's original motivation on describing preferences over head-tail tradeoffs in natural language?\n2. What are example behaviors that your method allows (by specifying in natural language) that prior methods do not allow?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T12:51:05",
"modification_date": "2025-11-12T18:27:31",
"review_url": "https://openreview.net/forum?id=S8bmkHXqgT¬eId=4xIreKtJzH",
"license": "CC BY 4.0"
},
{
"id": "zVrTYW2QqX",
"forum": "S8bmkHXqgT",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24934/Reviewer_dRpV",
"reviewer_name": "Reviewer_dRpV",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes a novel framework - Interpretable Preference Elicitation which allows Mixture-of-Experts model predictions to be steered using natural language descriptions. The inference pipeline works using a two-staged approach - the natural language text is converted into vectors. The vectors serves as the routing or gating vectors for the MoE models. During training time, the authors conduct an extensive grid search over the gating vectors along the dimensions of steer-ability. Then they use an LLM to generate synthetic data which could be mapped to the vectors. The synthetic data is used to train a Sentence Transformer with a shallow MLP to map the descriptions into the vectors.\n\nThe authors apply this method to examine if the descriptions could encourage the model to improve the accuracy for long-tail classes. Through empirical studies, they show that this method shows better performance than several baselines",
"strengths": "1. The motivation behind this paper is clear. The presentation with the figures makes the overall design easy to follow intuitively.\n2. The empirical results show that this framework has better performance in comparison to several baselines for the vision benchmarks",
"weaknesses": "1. The offline grid search seems to be a computationally intensive process and might not scale with more number of experts, dimensions and modes of steerability.\n2. The central contribution of this paper is that the framework allows steerability through natural language. However, it is not clear whether this claim is proven without any user studies. In fact it is not clear how the natural language text is generated for the experiments to steer the model predictions. It is uncertain if this work could generalize to unforeseen texts.",
"questions": "1. Why is a 3 dimensional preference vector chosen?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T03:38:00",
"modification_date": "2025-11-12T18:27:31",
"review_url": "https://openreview.net/forum?id=S8bmkHXqgT¬eId=zVrTYW2QqX",
"license": "CC BY 4.0"
}
] | |
s9cqFuiD2v | https://openreview.net/forum?id=s9cqFuiD2v | GraDA: Gradient-Guided Knowledge Distillation for Domain Adaptation | 4.5 | 3.75 | [
6,
6,
4,
2
] | [
3,
3,
4,
5
] | 4 | [
"Unsupervised learning",
"Semi-supervised learning",
"Domain adaptation",
"Knowledge distillation",
"Graph learning"
] | In this paper, we explore $\textbf{how to enhance student network performance in knowledge distillation (KD) for domain adaptation (DA)}$. We identify two key factors impacting student performance under domain shift: $\textbf{(1) the capability of the teacher network}$ and $\textbf{(2) the effectiveness of the knowledge distillation strategy}$.
For the first factor, we integrate a Vision Transformer (ViT) as the feature extractor and our proposed Category-level Aggregation (CA) module as the classifier to construct the ViT+CA teacher network. This architecture leverages ViT's ability to capture detailed representations of individual images. Additionally, the CA module employs the message-passing mechanism of a graph convolutional network to promote intra-class relations and mitigate domain shift by grouping samples with similar class information.
For the second factor, we leverage pseudo labels generated by the ViT+CA teacher to guide the gradient updates of the student network's parameters, aligning the student's behavior with that of the teacher. To optimize for efficient inference and reduced computational cost, we use a convolutional neural network (CNN) for feature extraction and a multilayer perceptron (MLP) as the classifier to build the CNN+MLP student network. Extensive experiments on various DA datasets demonstrate that our method significantly surpasses current state-of-the-art approaches. Our code will be available soon. | transfer learning, meta learning, and lifelong learning | https://openreview.net/pdf?id=s9cqFuiD2v | 2025-09-17T14:35:43 | 4 | [
{
"id": "pJnj2Up3oW",
"forum": "s9cqFuiD2v",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8559/Reviewer_GLAa",
"reviewer_name": "Reviewer_GLAa",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes GRADA, a gradient-guided knowledge distillation framework designed to improve student network performance under domain adaptation settings. The authors identify two major factors that limit student effectiveness during domain shift: the strength of the teacher model and the quality of the distillation process. To address these, GRADA employs a ViT-based teacher network enhanced with a Category-level Aggregation module, which strengthens intra-class relationships and reduces domain discrepancies. Knowledge is then transferred to a lighter CNN+MLP student model using pseudo labels that guide parameter updates through gradient alignment. Experiments across multiple domain adaptation benchmarks show that GRADA consistently outperforms state-of-the-art methods, while maintaining efficient inference and reduced computational cost.",
"strengths": "The novelty of GRADA lies in its gradient-guided knowledge distillation framework that explicitly aligns student model updates with a strong ViT-based teacher enhanced by a Category-level Aggregation module. This combination uniquely strengthens intra-class relationships and reduces domain shift, enabling efficient and accurate domain adaptation with lower computational cost.\n\nThe paper demonstrates:\n\n•\tInnovative distillation approach that effectively aligns teacher and student gradients for improved domain adaptation.\n\n•\tStrong empirical performance, outperforming state-of-the-art methods across multiple benchmarks.\n\n•\tEfficiency-focused design, achieving high accuracy while reducing computational cost and inference time.",
"weaknesses": "Some areas that could be further investigated:\n\n•\tIncreased model complexity due to the ViT-based teacher and Category-level Aggregation module.\n\n•\tLimited generalization evidence beyond the evaluated domain adaptation benchmarks.\n\n•\tClarity is impacted due to terminology being used before it is introduced. Please check the paper carefully to ensure readability. E.g. Fig 1 caption, UDA, and in Fig 1, Ours-S, Ours-T. While UDA is in the Table 6, it is common practice to write acronyms in full in the body of the paper the first time they are used. Additionally Ours-S, Ours-T is replaced by other terminology later in the paper.",
"questions": "Can you discuss the increased model complexity due to the ViT-based teacher and Category-level Aggregation module?\n\nCan you discuss how GRADA could be evaluated on real world datasets as opposed to benchmark datasets? Do you envisage any challenges in its use in the real world?\n\nIn the related works, you highlight the shortcomings of the current approaches. Can you describe how GRADA addresses the second shortcoming around knowledge distillation?\n\nPersonally I struggle with claims such as “Notably, the success is fully explainable …”. How do you know that it is “fully” explainable? It is more accurate to simply claim “explained by thorough qualitative analyses.” Please discuss.\n\nLabels of Fig 3(a) and Fig 3(b) in Fig 3 do not align with their use in the text.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T05:01:23",
"modification_date": "2025-11-12T12:06:54",
"review_url": "https://openreview.net/forum?id=s9cqFuiD2v¬eId=pJnj2Up3oW",
"license": "CC BY 4.0"
},
{
"id": "UBL81HYOGt",
"forum": "s9cqFuiD2v",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8559/Reviewer_wKQr",
"reviewer_name": "Reviewer_wKQr",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper introduces GraDA (Gradient-Guided Knowledge Distillation for Domain Adaptation), a method to improve unsupervised domain adaptation (UDA) by distilling knowledge from a powerful ViT-based teacher network to a lightweight CNN-based student network. The teacher combines a Vision Transformer (ViT) for feature extraction with a novel Category-level Aggregation (CA) module, inspired by graph convolutional networks, to enhance intra-class relations and align features across domains using pseudo labels. The student, consisting of a CNN feature extractor and MLP classifier, is trained via gradient guidance from the teacher's pseudo labels, allowing flexible learning without strict imitation.",
"strengths": "Innovative Architecture Integration: Effectively leverages ViT's global representation strengths for training while deploying a compact CNN for inference, addressing real-world deployment challenges on resource-constrained devices.\n\nCategory-level Aggregation (CA) Module: The GCN-inspired module promotes intra-class consistency and class-aware cross-domain alignment, potentially reducing domain shift more robustly than standard MLP classifiers.\n\nGradient-Guided KD Strategy: Unlike traditional logit- or feature-based distillation, this method uses pseudo labels to guide gradients across all student parameters, bridging cross-architecture gaps (ViT to CNN) and allowing the student autonomy in learning, inspired by educational principles.",
"weaknesses": "ependency on Pseudo Labels: The method heavily relies on teacher-generated pseudo labels for both self-enhancement and student guidance, which could propagate errors if initial labels are inaccurate or if the confidence threshold (τ) is poorly tuned, especially in severe domain shifts.\n\nComputational Overhead: While the student is efficient, the ViT+CA teacher requires substantial resources during training, limiting scalability for very large datasets or low-resource environments.",
"questions": "How sensitive is the performance to the pseudo-label confidence threshold τ? Could you provide ablation results on varying τ values across different datasets?\n\nIn the self-enhanced learning step, how does the method mitigate the impact of noisy pseudo labels on the combined dataset D_cb, especially early in training when the teacher is less reliable?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T21:13:10",
"modification_date": "2025-11-12T12:06:55",
"review_url": "https://openreview.net/forum?id=s9cqFuiD2v¬eId=UBL81HYOGt",
"license": "CC BY 4.0"
},
{
"id": "ep6oBoFa2U",
"forum": "s9cqFuiD2v",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8559/Reviewer_Aed7",
"reviewer_name": "Reviewer_Aed7",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes a new framework to improve student network performance in knowledge distillation (KD) for domain adaptation (DA). The authors identify two major factors affecting KD effectiveness under domain shift: the teacher network’s capability and the distillation strategy. To address the first factor, they design a ViT+CA teacher model, which combines a ViT for rich feature extraction with a Category-level Aggregation module that uses graph-based message passing to enhance intra-class relations and reduce domain discrepancies. For the second factor, they employ pseudo labels generated by the teacher to guide the CNN+MLP student model, aligning its learning behaviour with the teacher while maintaining computational efficiency. Experiments across multiple domain adaptation benchmarks show that the proposed method achieves superior performance compared to state-of-the-art approaches.",
"strengths": "This paper is well-recognized and easy to follow.\n\nThe core idea of enhancing the teacher module and improving knowledge distillation is intuitive and reasonable.\n\nExperiments show consistent improvement of the proposed method applied to existing methods.",
"weaknesses": "Why not directly use a pre-trained model with fine-tuning strategies to obtain a stronger teacher model? If this approach was intentionally avoided, please clarify the advantages of your proposed teacher model compared to pre-trained alternatives. It would strengthen the paper to include an experimental comparison with a pre-trained-based domain adaptation (DA) baseline.\n\nThe proposed class-level aggregation techniques resemble prototype-based methods, which have been extensively explored in domain adaptation. Although these techniques are introduced here in the context of knowledge distillation-based DA, please clarify the key differences and novel aspects compared to existing prototype-based approaches.\n\nSince class-level aggregation is used to enhance intra-class relationships within unlabeled target data and generate pseudo-labels, its effectiveness likely depends on the training batch size. Have you conducted any experiments analyzing the impact of batch size on performance? Similarly, as the confidence threshold controls pseudo-label quality, please provide a hyperparameter sensitivity analysis for this threshold.\n\nThe method constructs a cross-domain knowledge graph to align unlabeled target samples with labeled source samples through class-aware feature alignment, where pseudo-labels and source ground-truth labels share identical categories. This design allows the teacher network to capture structural representations and reduce inter-domain discrepancies. Is there any theoretical justification or analysis supporting the effectiveness of this alignment strategy?",
"questions": "Please refer to the weakness section.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T17:02:38",
"modification_date": "2025-11-12T12:06:55",
"review_url": "https://openreview.net/forum?id=s9cqFuiD2v¬eId=ep6oBoFa2U",
"license": "CC BY 4.0"
},
{
"id": "YeJIVRenPL",
"forum": "s9cqFuiD2v",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8559/Reviewer_68Lo",
"reviewer_name": "Reviewer_68Lo",
"rating": 2,
"confidence": 5,
"soundness": 2,
"contribution": 1,
"presentation": 3,
"summary": "The paper investigates how to improve knowledge distillation in a domain adaptation scenario, examining the impact of the teacher network’s architecture and classification-head design on student performance under domain shift. To this end, the authors propose replacing the standard MLP head in the teacher with a Category-level Aggregation (CA) module (inspired by GCNs) to better capture relational information among samples and classes, then distill from a ViT+CA teacher to a CNN+MLP student. Extensive experiments show gains under domain shift.",
"strengths": "1. The paper is well-written and structured, making the methods and results easy to follow.\n2. The experimental evaluation is comprehensive, covering multiple baselines, ablations, and domain-shift scenarios.",
"weaknesses": "1. The authors claim that \"We identify two key factors impacting student performance under domain shift: (1) the capability of the teacher network and (2) the effectiveness of the knowledge distillation strategy.\" However, the statements do indeed seem trivial and already well-established in the literature, and as such, they offer very little in terms of novelty or framing a clear research question. The idea that a teacher model’s capacity (stronger network, more parameters/training) influences knowledge distillation is well-known. Similarly, the “effectiveness of the KD strategy” is also a standard concern [1].\n2. The authors argue that the standard multilayer perceptron (MLP) classification head “may have limited generalization due to its inability to capture relational information among neighboring samples”, and therefore propose a Category-level Aggregation (CA) module inspired by graph convolutional networks (GCNs). However, this motivation is not sufficiently justified in the context of the proposed research question. Specifically, it is not clearly demonstrated why an MLP head would fail in the particular setting of teacher-student knowledge distillation, domain adaptation, and architecture mismatch (e.g., ViT teacher → CNN student). If the goal is to study knowledge distillation under domain shift, one could reasonably adopt an MLP head and still examine the core research question (teacher capability vs KD strategy). In other words, even with an MLP head, one could fairly compare methods and isolate the proposed components.\n3. 
While replacing an MLP head with a Category-level Aggregation (CA) module inspired by GCNs is interesting, the novelty appears limited when viewed in the context of related work [2] [3].\n\n\n[1] A survey on knowledge distillation: Recent advancements\n\n[2] 2019 - CVPR - GCAN: Graph Convolutional Adversarial Network for Unsupervised Domain Adaptation\n\n[3] 2020 - ECCV - Learning to Combine: Knowledge Aggregation for Multi-Source Domain Adaptation",
"questions": "See Weakness.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T10:44:57",
"modification_date": "2025-11-12T12:06:56",
"review_url": "https://openreview.net/forum?id=s9cqFuiD2v¬eId=YeJIVRenPL",
"license": "CC BY 4.0"
}
] | |
tDdeW2puHW | https://openreview.net/forum?id=tDdeW2puHW | From Real to Synthetic: A Fine-grained Dataset and High-fidelity Biomechanical Model for Animal Behavior Understanding | 4 | 4 | [
8,
2,
6,
2,
2
] | [
4,
4,
5,
4,
3
] | 5 | [
"Animal dataset",
"Biomechanical model",
"Synthetic data generation",
"Behavioral uncertainty quantification",
"Video understanding"
] | Rat behavior research contributes to the exploration of human disease mechanisms. However, existing datasets are scarce and cover limited behavior types, hindering the analysis and modeling of complex behavior patterns. We constructed ActionRat, a new multi-view rat behavior dataset that, for the first time, captures diverse actions during free exploration and brain-computer interface (BCI) control. It combines real and synthetic sequences with fine-grained keypoint annotations and atomic action sequences, supporting broader behavior analysis tasks. To efficiently generate synthetic data for dataset expansion, we developed OpenRatEngine, a high-fidelity 3D virtual biomechanical model. This model integrates anatomical priors from computed tomography (CT) scans, kinematic constraints, and lifelike appearance, reducing the domain gap between synthetic and real data. Equipped with pose control, OpenRatEngine generates synthetic sequences with accurate 3D keypoint annotations. We evaluated behavioral uncertainty quantification and animal pose estimation tasks on the ActionRat dataset, and demonstrated the outstanding synthetic data generation capability and realism of OpenRatEngine. Extensive experiments across deep learning models confirmed the effectiveness and value of both real and synthetic data. | A Fine-grained Benchmark Dataset and High-fidelity Biomechanical Model for Animal Behavior Understanding | datasets and benchmarks | https://openreview.net/pdf?id=tDdeW2puHW | 2025-09-17T10:28:39 | 5 | [
{
"id": "FHoGNf4wbM",
"forum": "tDdeW2puHW",
"review_number": 5,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8245/Reviewer_gBuZ",
"reviewer_name": "Reviewer_gBuZ",
"rating": 8,
"confidence": 4,
"soundness": 3,
"contribution": 4,
"presentation": 3,
"summary": "The paper introduces the OpenRatEngine model and the Action rat dataset. The OpenRatEngine model is a biomechanical model of a rat with bone lengths and positions estimated from CT scans of 5 rats. The ActionRat dataset is a mix of a 3 camera recording of real rat movements with various neural stimulation to trigger diverse actions as well as a synthetic dataset generated from the OpenRatEngine model. The pose in the real recording is annotated with a DeepLabCut model training on 1500 real images.\n\nThe authors benchmark various algorithms for predicting the pose temporal trajectory and compare the OpenRatEngine model to DeepLabCut for 3D pose estimation. Finally they compare the kinematics of the synthetic data to the real data by testing how temporal prediction models generalize across the two datasets.",
"strengths": "Both the ActionRat dataset and the OpenRatEngine model are novel contribution to the biomechanical modeling of rat behavior. The ActionRat dataset has a diverse set of rat behaviors in an open field, and complements Rat7M, the only other 3D rat dataset currently available. The OpenRatEngine models 60 joints on the rat body, which is indeed more than the 38 actuators modeled by Aldorondo et al, 2024. \n\nThe benchmarks of the temporal models will be really useful for the development of future similar models on the ActionRat dataset. With these benchmarks I think researchers may really develop more models to predict animal motion, which is exciting.",
"weaknesses": "The 3D pose estimation evaluation felt quite weak to me, both in terms of methodology description and results. As I understand, the authors compare a DeepLabCut estimator to the OpenRatEngine for estimating the 3D pose of the animal. \nThe test data is not properly specified. How many ground truth annotations for evaluation are there? Figure 2 says that there are 6558 annotated frames, which presumably is 1500 real images for DeepLabCut (as detailed in section 3.3) and 5058 frames fit by the OpenRatEngine from contours. If they do not overlap, what is the evaluation done on?\nBesides this, the evaluation results seem to show that the OpenRatEngine is really quite comparable to DeepLabCut, whereas the qualitative comparison in Figure 3 really shows how much more detailed OpenRatEngine is. It's unclear whether the poor quantitative performance of OpenRatEngine is due to poor fitting of OpenRatEngine model to the rat contours, due to some quirk of the evaluation data, or something else. There really should be more details on the model fitting to data and on the evaluation data.\n\nOn the dataset itself, it's unclear how much automatically annotated data it actually contains. Out 609K frames, there are only 6558 annotated frames. Are the remaining 602K frames annotated automatically (perhaps with DeepLabCut) so that they can be useful for temporal prediction? \n\nCompared to Rat7M, this dataset also does have much more occlusions due to having fewer cameras. This should be noted in the limitations perhaps.\n\nSome small typos:\nLine 104 - rodennt should be rodent\nLine 182 - Should be Lobato-Rios et al 2022 simply, no Victor",
"questions": "See questions in Weaknesses\n\n- Why not calibrate all 3 cameras and use them all for triangulation? The 3D tracking would improve quite a bit. \n- Why are the two ears not modeled in the OpenRatEngine model?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-04T01:19:34",
"modification_date": "2025-11-12T12:03:06",
"review_url": "https://openreview.net/forum?id=tDdeW2puHW¬eId=FHoGNf4wbM",
"license": "CC BY 4.0"
},
{
"id": "ydhxEBzYr4",
"forum": "tDdeW2puHW",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8245/Reviewer_Wxag",
"reviewer_name": "Reviewer_Wxag",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper claims two main contributions:\n\nActionRat Dataset: A multi-camera (three-camera) recording dataset of rat behavior with detailed annotations, including 2D keypoints with and without brain stimulation, and a subset of segmented clips labeled by action category. The authors report that the distribution of action categories differs from freely exploring conditions.\n\nOpenRatEngine: A biomechanical rat model derived from CT scans, used to simulate action sequences. The virtual rat is manually registered to selected keyframes from real videos, and Blender interpolation is used to generate continuous motion time series.",
"strengths": "The inclusion of brain stimulation perturbations adds an interesting causal-intervention dimension that could be valuable for neuroscience and behavior modeling research.\n\nThe dataset includes rich annotations of animal behavior categories, which may support downstream supervised or semi-supervised learning studies.\n\nThe CT-based rigging and virtual modeling pipeline are described transparently and could serve as a reference for other labs interested in synthetic animal data.",
"weaknesses": "The paper does not convincingly argue the need for this dataset from a machine learning perspective. The dataset is relatively small and of lower video quality compared to existing open datasets (for instance, Rat7M Dunn et al.). To strengthen the machine learning contribution, the authors should provide evidence that the dataset exposes failure modes or limitations of current methods, or it enables learning under novel condition. \n\nThe OpenRatEngine rigging and interpolation rely on existing software (manual alignment and Blender interpolation). The authors did not demonstrate applications or downstream tasks that showcase the usefulness or superiority of the OpenRatEngine. For example, evaluating how simulated sequences improve behavior classification, pose estimation for large data.",
"questions": "1. The introduction mentions stimulation-specific behaviors (e.g., spasms and other unique responses). Why aren't these represented as new action categories in the dataset?\n\n2. Is the OpenRatEngine manually registered to the keyframes, or does it involve any machine learning methods?\n\n3. In Table 2, the best model achieves comparable performance on the ActionRat dataset relative to other datasets. To better support the claimed contribution, could the authors compare model performance separately for freely moving vs. stimulation conditions, and show whether including the BMI data improves learned priors for behavior prediction?\n\n4. In Table 3, the 3D pose estimation using DeepLabCut appears to rely on binocular cameras, with reprojection to a monocular view, while the synthetic data are generated using all three cameras (Eq. 5). Is this a fair comparison, given the different numbers of input views? Please also report variance, and the mean and variance across keypoints to make the improvements clearer.\n\n5. For Figure 3, could the authors provide quantitative evaluation metrics beyond qualitative examples? For instance, applying both reconstruction methods to unseen data and comparing with real animal trajectories.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T13:01:10",
"modification_date": "2025-11-12T12:03:07",
"review_url": "https://openreview.net/forum?id=tDdeW2puHW¬eId=ydhxEBzYr4",
"license": "CC BY 4.0"
},
{
"id": "kmjgAWsIRM",
"forum": "tDdeW2puHW",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8245/Reviewer_JWJZ",
"reviewer_name": "Reviewer_JWJZ",
"rating": 6,
"confidence": 5,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper present a multi-view rat behavior dataset named ActionRat, which captures diverse actions during free-exploration and brain-computer interface control. Despite real-captured data, ActionRat also contains expanded synthetic data. To generate synthetic data, the authors further developed OpenRatEngine, which is a 3d virtual biomechanical model with lifelike appearance. OpenRatEngine could generate accurate 3d keypoint annotations.",
"strengths": "1.\tThis paper is clearly written and well motivated. Action recognition and motion capture are important for understanding the behavior of rodents (e.g. rats). This paper presents a new dataset featuring diverse behaviors and high quality annotations. \n\n2.\tPreviously, recordings about the abnormal behaviors of rats are rarely seen. I believe this dataset may play an important role for the community to understand the abnormal behaviors of rats.",
"weaknesses": "0. The main weakness is lack of technical contributions. The technologies used in this paper have been well explored in the past. \n\n1.\tAs recording abnormal behaviors is one of the feature of the video dataset, I would be better to show some real-captured video cases of such video footage in supp video. Existing video only shows the openratengine virtual renderings. \n\n2.\tThe appearance of virtual rat seems limited. Only a white rat appearance was employed to generate the dataset, raising some issues about generation to other kinds of rats.",
"questions": "I do not see critical flaws of the paper. The paper is self-contained with limited technical contributions. However, the problem itself and the data provided are interesting. Collecting such data requires heavy efforts, I do believe technical tricks are not the only criteria for publication. This is why I give a relatively positive rating. \n\nSome minor questions about techniques: \n\n3.\tAt L. 298, how was the weights iteratively refined? Automatically or manually? \n\n4.\tHow was the contours discrepancy metric defined? Were multi-view contours enough to control the detailed motion of rat? \n\n5. L. 300, “coherent” -> “coherence”.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T19:45:54",
"modification_date": "2025-11-12T12:03:08",
"review_url": "https://openreview.net/forum?id=tDdeW2puHW¬eId=kmjgAWsIRM",
"license": "CC BY 4.0"
},
{
"id": "uZLMB42ntD",
"forum": "tDdeW2puHW",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8245/Reviewer_ZMKX",
"reviewer_name": "Reviewer_ZMKX",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 1,
"presentation": 2,
"summary": "The paper presents ActionRat, a multi-view video dataset of rat behavior, and OpenRatEngine, a synthetic rat animation and rendering framework designed to generate realistic pose sequences. The dataset contains ~609K annotated frames across seven behavioral categories (including some BCI-evoked actions), while OpenRatEngine reconstructs 3D rat meshes from CT scans, applies inverse kinematics (IK) control, and uses a contour-based optimization for pose alignment. The authors evaluate synthetic–real correspondence using keypoint reconstruction and motion metrics, and report a small domain gap between real and synthetic data.",
"strengths": "- The pipeline for CT-derived modeling, IK-based control, and rendering is well executed and clearly described.\n- Combining real multi-view recordings with synthetic renderings in a unified dataset is a useful step toward obtaining better correspondence between simulation and behavior\n- The dataset includes stimulation-induced actions, which could open opportunities for modeling causal intervention or neural-behavioral decoding",
"weaknesses": "- The ActionRat dataset (609K frames) is significantly smaller than existing benchmarks such as Rat7M or PAIR-R24 and does not introduce new behavioral contexts, species, or task diversity. The listed seven behaviors are standard (e.g. rearing, grooming, walking), and diversity is asserted but not quantified.\n- The only nominal addition, BCI-evoked behaviors, is underdeveloped, as no downstream applications (e.g. stimulation decoding, closed-loop control) are demonstrated.\n- OpenRatEngine combines standard components: CT-derived skeletons, mesh rigging, inverse kinematics, and contour-based fitting. Similar pipelines exist (e.g. Rat7M synthetic, Animal3D, RatSim), and the paper does not demonstrate a quantitative or methodological advance over them.\n- The evaluations reproduce existing pose-estimation benchmarks (MPJPE/MPJVE) rather than defining new tasks that exploit the unique BCI metadata or synthetic flexibility. Without a concrete downstream problem, the practical value of ActionRat remains unclear.\n- The authors claim that the synthetic-to-real domain gap is small, yet no systematic tests support this. Results are reported on the same rats and camera setups used for training. It remains unclear whether models trained with synthetic data generalize to unseen rats, new recording sessions, or unseen viewpoints.\n- Diversity is neither quantitatively defined nor contextualized against other datasets. All subjects are male, of one strain, and recorded in a fixed apparatus. Hence, diversity appears limited to modest variation in BCI conditions.",
"questions": "1. How is behavioral diversity measured? Please provide a comparison to Rat7M or other datasets\n2. What practical tasks can leverage the BCI metadata? Could this dataset enable learning of stimulation-to-behavior mappings or causal behavior prediction? An illustrative example would clarify its relevance\n3. Are models evaluated per rat or across subjects? A leave-one-subject-out test would show whether the dataset generalizes beyond individual-specific idiosyncrasies\n4. How does stimulation parameters map to specific behavioral classes or motion trajectories?\n5. Beyond pose estimation, what tasks can exploit both real and synthetic modalities?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T13:43:21",
"modification_date": "2025-11-12T12:03:08",
"review_url": "https://openreview.net/forum?id=tDdeW2puHW¬eId=uZLMB42ntD",
"license": "CC BY 4.0"
},
{
"id": "woISoqSc9K",
"forum": "tDdeW2puHW",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8245/Reviewer_pF94",
"reviewer_name": "Reviewer_pF94",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 2,
"summary": "The authors present ActionRat, an open source dataset comprised of 3D keypoints and action segmentation labels for rat behavior during free exploration and brain stimulation. They also present OpenRatEngine, a biomechanical rat model that is capable of producing realistic synthetic rat behavior data including 3D keypoint trajectories, meshes, and 2D projections onto static camera views. The authors benchmark both the ActionRat dataset and the OpenRatEngine trajectories with several experiments.",
"strengths": "The ActionRat dataset is a valuable asset for the computer vision and behavioral quantification communities. Including a range of atypical and pathological behaviors is essential for capturing a wider range of behaviors that is crucial for training more robust and generalizable models.\n\nThe OpenRatEngine produces highly realistic looking behaviors, and serves as a template for creating similar simulators for other species. The authors have done a good job creating a model that is properly biophysically grounded and visually similar to experimental data, an impressive feat in and of itself.",
"weaknesses": "The main weakness of this paper are the experiments. It's not clear to me how these properly highlight the benefits of the ActionRat dataset or the OpenRatEngine.\n\nFirst off, the \"Behavioral Uncertainty Quantification\" task is never explicitly defined - what is this, and what is it supposed to be testing? - I'm also confused as to why additional datasets are included here, this just feels like a comparison of the different baseline models and doesn't at all focus on ActionRat; their inclusion distracts from the main point of the paper.\n- The text says all models take keypoint sequences from the ActionRat V1 dataset as input - where did these keypoints come from? They aren't mentioned previously. Are these from DLC? If so, was DLC trained with real data, synthetic data, or both?\n- How am I supposed to interpret Table 2? Is the point that the numbers across datasets are similar? I'm not sure that tells me much about about the quality of the ActionRat dataset.\n- L358: \"action diversity predicted using the ActionRat dataset is higher than that of Animal Kingdom\" - Animal Kingdom contains data from a wide range of species, these numbers will not be directly comparable. In my opinion the more interesting question, that actually speaks to the strengths of this dataset, is \"how does action diversity compare when a model is trained on spontaneous vs spontaneous+stimulated behavior?\" This at least can help with the argument that freely moving behavior on its own is not sufficient.\n- L359: \"CCVAE and MCENET...confirming their strong capacity to model motion uncertainty\" - the experiments should focus on demonstrating the strengths of the ActionRat dataset, not comparing model architectures.\n\nFor 3D pose estimation, I am also unclear exactly what the experimental setup is. \n- DeepLabCut models seem straightforward: 2D pose estimators are trained (on all views together? different network per view?) 
using human annotations, triangulation is run on the stereo views, and then reprojected to the camera 1 image plane (why not compute MPJPE in 3D space?). \n- For OpenRatEngine, how are the synthetic data created? This relates to an earlier question I had. If there are ground-truth annotations for frame t in video v, are other ground truth annotations before and after time t used for the data generation process, and time t represents an interpolated time point? I think I'm missing something important here.\n- L407: \"OpenRatEngine achieves lower errors on most keypoints\" - seems like the ratio is closer to 50/50?\n- L412: \"Results validate...the advantages of OpenRatEngine-generated synthetic data in improving accuracy and robustness\" I see the value of this dataset differently - I think it provides a lot of synthetic data to train pose estimation models that will themselves then be more robust. An experiment that test this would be the following: imagine you have 500 human labeled frames. You train a DLC model, then evaluate it on held-out data (importantly, I think the *animals* themselves should be held out to properly address generalizability and robustness, i.e. train a model on R1-R4 and test on R5 and R6). Then train another DLC model using the same 500 (or whatever) frames from before, plus another 1k or 2k synthetic labels from OpenRatEngine. THe performance on the held-out data should be much better in this case, indicating your synthetic data has been useful for training a better pose estimation model.\n- again, one of the strengths of your dataset is the brain stimulation that results in a more diverse range of poses. You can dig into this more deeply by training on human annotations during non-stimulated periods, then testing on human annotations during both stimulated and non-stimulated periods. If you look at performance split by period I bet it will be much worse during the stimulated period where there are more novel poses. 
Then you can train a model on human annotations from both periods, and test on both periods (maybe controlling for the number of training frames), and ideally see reduced errors in the stimulated period, indicating more robustness. Then you can repeat this type of experiment using both real and synthetic data. I think there are lots of permutations here, each making their own subtle point.\n\nLack of clarity in some of the writing\n- L34: \"tracking the positions of gait\" doesn't make sense, gait is the tracking of limbs over time\n- L139: SLEAP is mentioned in the middle of a list of datasets but is not itself a dataset\n- the final ActionRat dataset has 8679 segments - does each segment contain just a single behavior? if not, is every frame in the segment separately labeled?",
"questions": "L75 - should the Meijer reference actually be Bolanos et al 2021? at the very least, the Bolanos reference should be included here\n\nThe authors state that BCI interventions \"enhance behavioral diversity and achieves comprehensive coverage of motion patterns\", but this is never quantitatively verified. A simple way to do this would be to take 3D poses during non-stimulated periods and compute PCA on the poses (after doing egocentric alignment to remove uninteresting factors of variation). Then repeat with stimulated plus non-stimulated periods (perhaps taking an equal number of frames from each category, or considering other forms of controls). Plotting variance explained versus number of PCs should show much higher dimensionality for the full dataset. Of course there are other ways to do this, this is just a simple suggestion.\n\ntypo L104: rodennt -> rodent\n\nThe authors suggest that fine-grained variations like sniffing and micro-movements are captured in their synthetic data, but I fail to see how this is possible given the interpolation between sparse keyframe methodology. Am I missing something here?\n\nRelated: it is not clear to me exactly what the pipeline for generating a behavioral sequence looks like, and a brief description of this, at the beginning of section 3.2, would help. From what I understand\n1. a (random?) set of sparse keyframes are generated. how are they generated? are these taken from the labeled data? if so, how are 12 labeled keypoints translated to the 60 synthetic keypoints? if they are not taken from the labeled data, how are they constrained to be plausible poses? how \"sparse\" are they in time? if these are\n2. Interpolation is applied to the 3D keypoints. What kind of interpolation? Are there instances where smooth interpolation would actually lead to implausible poses? Does smooth interpolation mean the synthetic dataset has no abrupt movements?\n3. 
A mesh is created on each frame using the interpolated 3D poses\n4. The mesh is projected into each 2D view\n5. Fur is rendered(?) in each 2D view\n6. Other visual features are added like noise and lighting\n\nL182: Victor et al. should be Lobato-Rios et al\n\nSection 4.3 is more along the lines of the kind of evaluation I was expecting. It might make sense to put this experiment first, demonstrating the consistency between real and synthetic data. Then the following experiments can move beyond that and show how a large amount of realistic synthetic data can lead to improved behavioral models.\n\nTable 4: Is it possible the consistency between real->real and syn->real is less about the data and more about the model architecture saturating performance (or something else)? What is a control experiment that could rule out this option?\n\nTable 5: I'm not sure how to interpret this table/analysis. I thought at first these metrics are being computed between real and synthetic trajectories (one frame at a time) but the caption says \"adjacent frames\" and the text says \"temporal consistency\". So are these values computed between times t and t+1? for which traces? It would be helpful to clarify the relationship between real predictions, synthetic predictions, and time here.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T21:58:11",
"modification_date": "2025-11-12T12:03:09",
"review_url": "https://openreview.net/forum?id=tDdeW2puHW&noteId=woISoqSc9K",
"license": "CC BY 4.0"
}
] |
isBH8kP5AX | https://openreview.net/forum?id=isBH8kP5AX | BMAttn: Block-Aligned Mixed-Precision Attention Quantization for LLM Inference | 3.5 | 3.25 | [
2,
2,
6,
4
] | [
3,
3,
4,
3
] | 4 | [
"LLM",
"Quantization",
"Pruning"
] | The proliferation of Large Language Models (LLMs) with extended context windows is severely hampered by the quadratic complexity of the self-attention mechanism. Existing acceleration methods, such as sparse attention and quantization, often employ uniform compression strategies that are misaligned with the non-uniform distribution of information importance within attention maps. This leads to a suboptimal trade-off between computational efficiency and model accuracy. To address this, we introduce Block-based Mixed-precision Attention (BMAttn), a novel framework that enables fine-grained, importance-aware precision while maintaining a hardware-friendly structure. BMAttn partitions each attention head into high-precision, low-precision, and sparse regions. To ensure computational regularity, these regions are block-aligned. To adapt to varying input lengths, their boundaries are dynamically adjusted using a lightweight affine windowing mechanism. We further propose a saliency-weighted calibration method and a layer-adaptive regularizer to automatically determine the optimal parameters, achieving a superior accuracy-efficiency balance. BMAttn achieves a speedup of up to 3.3× without any accuracy degradation, and a 5× speedup with only a 1\% accuracy loss. | other topics in machine learning (i.e., none of the above) | https://openreview.net/pdf?id=isBH8kP5AX | 2025-09-04T14:59:06 | 4 | [
{
"id": "l0bkVLeGKD",
"forum": "isBH8kP5AX",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission1941/Reviewer_9Diq",
"reviewer_name": "Reviewer_9Diq",
"rating": 2,
"confidence": 3,
"soundness": 1,
"contribution": 3,
"presentation": 2,
"summary": "The paper introduces BMAttn (Block-Aligned Mixed-Precision Attention), a framework that partitions each attention head into three regions—high-precision (8-bit), low-precision (4-bit), and sparse (0-bit)—based on distance from the query token. The method claims to maintain “hardware-friendly” block alignment compatible with FlashAttention kernels, while dynamically adjusting precision boundaries via affine functions of sequence length. Calibration uses saliency-weighted metrics (RDW/IPW) and layer-adaptive retention schedules to optimize compression. Empirical results on Qwen2.5-7B, LLaMA-3.1-8B, and GLM-4-9B report ≈3× speedups with “lossless efficiency.”",
"strengths": "1. The three-zone decomposition aligns with attention’s distance heterogeneity and head specialization, while block alignment preserves kernel regularity (Figure 1d, p.4, shows the staircase pattern with B=16). The combination of mixed bitwidths (INT8 for HP, INT4 for LP) and structured sparsity is cleanly specified.\n\n2. Across three backbones and four benchmarks, BMAttn matches or nearly matches full‑precision accuracy, demonstrating how, if provided with real speedup, BMAttn could be a viable choice for real deployment scenarios where a high degree of accuracy is needed.\n\n3. The authors report one‑time calibration cost and outline an O(1) per‑head overhead at inference reinforcing deployability.",
"weaknesses": "1. The paper claims FlashAttention compatibility and “no masking” via direct index computation (Appendix B), but lacks kernel pseudocode, memory layout diagrams, or profiling that would substantiate the claim that warp divergence and gather/scatter are avoided. This is especially important given mixed precision per tile and three zones per head. More concrete details would help reproducibility and clarify whether custom CUDA kernels were required.\n\n2. Experiments compare to FlashAttention‑2 and SageAttention, but omit dynamic‑sparsity baselines (e.g., Sparge) which the related work positions as complementary. Even if orthogonal, end‑to‑end tokens/s and latency comparisons against a strong sparse‑attention baseline would better establish BMAttn’s Pareto position.\n\n3. The text states “Q, K, and P are quantized per block; V per channel” (Sec. 5.1). Presumably P ≡ W (post‑softmax attention weights), but notation is inconsistent with earlier sections. Also, scales/zero‑points and clipping for INT4 are not specified; please report these details for better reproducibility.\n\n4. Sec. 5.1 cites a “device featuring 1 Tbps memory bandwidth, 83 TFLOPs (FP16), 660.6 TOPS (INT8), 1321.2 TOPS (INT4),” but doesn’t specify the actual GPU/ASIC model or whether results are simulated TOPS vs. measured wall‑clock.\n\nAs highlighted in Points 3 & 4, this paper has a consistent issue with descriptions not being precise. I would highly encourage the authors to practice using specific language rather than making broad claims. For example, \"retention regularizer\" is not a common naming convention for \"thresholding hyperparameter\". The overall presentation of this paper is weak, even though the accuracy results signal a potentially promising idea.",
"questions": "How do the authors compute speedup?\n\nCan the authors explain what they mean by “regular compute pattern compatible with GPU kernels such as FlashAttention”? It doesn’t seem that having different regions of datatype precision would be performant, particularly because the attention computation of Q*K^T is an activation, and storing mixed precision activations on-chip is unlikely to yield performance gains, and almost surely not when we are in smaller context lengths when self-attention is memory bound.\n\nCan you add wall‑clock tokens/s and latency on a named GPU (e.g., A100/H100) across 4k–128k, and profile HBM traffic vs. SageAttention‑8b/‑4b?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T12:12:01",
"modification_date": "2025-11-12T10:52:43",
"review_url": "https://openreview.net/forum?id=isBH8kP5AX&noteId=l0bkVLeGKD",
"license": "CC BY 4.0"
},
{
"id": "1AneOLPn3b",
"forum": "isBH8kP5AX",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission1941/Reviewer_K21C",
"reviewer_name": "Reviewer_K21C",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The BMAttn: Block-Aligned Mixed-Precision Attention paper proposes a smart and efficient way to make large language models run faster without losing accuracy. It divides the attention mechanism into small “blocks” and assigns different precision levels to each block depending on how important they are, instead of using one fixed precision for all. This design works well with GPU hardware and maintains high speed and stability. The paper also introduces methods to automatically adjust how much information to keep in each layer and to calibrate attention using a saliency-based weighting approach. Experiments show that this method makes inference up to 3.3× faster while keeping model accuracy almost unchanged. Overall, it’s a practical, well-designed approach to improving the efficiency of large language models for real-world deployment.",
"strengths": "This paper proposes a well-motivated, hardware-aware design that bridges algorithmic adaptivity and system-level efficiency. The introduction of block-aligned mixed precision, coupled with the affine window mechanism, enables fine-grained control of attention precision without compromising GPU regularity — a major advance over uniform quantization and sparsity methods. The saliency-weighted calibration and layer-adaptive retention regularizer add strong theoretical justification and practical effectiveness",
"weaknesses": "While BMAttn combines block-sparse computation with mixed-precision quantization and adaptive zone allocation, the conceptual novelty is limited. The method largely integrates well-known components—sparsity pruning, distance-based masking, block-aligned computation, and quantized attention.\n\nNo comparison against the most optimized recent methods from groups like MIT Han Lab (SpargeAttention, Minference) or NVIDIA’s Flash-Decoding kernels",
"questions": "1.The calibration algorithm (Appendix A) is described textually but could benefit from a process diagram or pseudocode summary in the main body.\nRecommendation: Adding a flowchart or visual timeline of calibration steps (attention map → saliency weighting → constraint optimization. This would help readers grasp implementation details faster.\n\n2.The paper reports speedup in terms of FLOP/TOPS efficiency. Can the authors share wall-clock latency improvements (ms/token) under real inference conditions, possibly for long-context chat benchmarks?\n\n3. Technique beats SageAttention2, cool. But can you try it with the latest sparse-attention kernels from Han et al. (Song Han’s group) on identical hardware?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T11:53:02",
"modification_date": "2025-11-12T10:52:43",
"review_url": "https://openreview.net/forum?id=isBH8kP5AX&noteId=1AneOLPn3b",
"license": "CC BY 4.0"
},
{
"id": "klgZ1amL6z",
"forum": "isBH8kP5AX",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission1941/Reviewer_hcmZ",
"reviewer_name": "Reviewer_hcmZ",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper proposes BMAttn, a block-aligned mixed-precision attention framework that adaptively assigns precision levels across the attention map to balance accuracy and efficiency for large language model (LLM) inference.\n\nBMAttn divides each attention head into high-precision, low-precision, and sparse zones, determined by affine distance-based thresholds that scale with sequence length. This ensures both fine-grained adaptivity and hardware regularity, making it compatible with optimized kernels such as FlashAttention.",
"strengths": "1. Extensive experiments: Evaluated across three modern LLM families and multiple long-context benchmarks.\n2. Significant real-world relevance: Integrates cleanly with FlashAttention kernels and quantization toolchains, making it deployment-ready.\n\n3. Excellent ablation coverage: Demonstrates both the necessity and synergy of SWM and LRR components.",
"weaknesses": "1. No detailed hardware profiling: While claimed to be “FlashAttention-compatible,” kernel-level runtime traces or memory bandwidth breakdowns would strengthen hardware efficiency claims.\n2. Limited conceptual novelty: The core idea can be interpreted as an integration of pruning and quantization within a structured attention layout. While the implementation (block alignment and affine scaling) is clever and effective, it primarily extends known paradigms rather than introducing a fundamentally new mechanism or phenomenon.",
"questions": "1. How stable are the affine parameters across datasets or prompts? Can a single calibration generalize to unseen domains?\n\n2. How sensitive is the performance to the hyperparameters mentioned in the paper?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T05:02:37",
"modification_date": "2025-11-12T10:52:43",
"review_url": "https://openreview.net/forum?id=isBH8kP5AX&noteId=klgZ1amL6z",
"license": "CC BY 4.0"
},
{
"id": "Gb54upzESk",
"forum": "isBH8kP5AX",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission1941/Reviewer_QYg9",
"reviewer_name": "Reviewer_QYg9",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "- The paper proposes BMAttn, a block-aligned mixed-precision attention method for faster LLM inference\n- BMAttn is similar to window attention, but instead of having a hard truncation, it varies precision with distance\n - High-precision (HP): short-range, salient dependencies\n - Low-precision (LP): mid/long-range dependencies\n - Sparse/pruned: negligible connections, pruned entirely\n- Block alignment is a key engineering trick to make it compatible with efficient kernel implementation\n- Empirical results across Qwen2.5-7B, Llama3-8B, and GLM4-9B, comparing it with FlashAttention-2, SageAttention, and SageAttention2\n - authors claim negligible loss compared to FlashAttention-2 (which is an exact attention mechanism, unlike the other methods discussed)\n - BMAttn reports a 3.1×–3.3× speedup compared to FlashAttention-2",
"strengths": "- good motivation, combining algorithmic insights as well as awareness of implementation limitations, making it viable to write a high-performance kernel\n- conceptually simple and intuitive approach to combine mixed precision, block alignment, and adaptive windowing into one coherent framework that fits naturally into existing attention kernels.\n- figures are excellent to understand the key idea of the algorithm\n- extremely important problem given the total cost of attention in LLMs, particularly for long context.",
"weaknesses": "- While being conceptually simple and intuitive is a strength, it lacks major novelty.\n- Outdated and unclear baselines: FlashAttention-2 is now an older baseline, and several newer kernels such as FlashAttention-3, Flash-Decoding, Lean Attention, and PagedAttention (vLLM) deliver significantly faster exact attention, especially for the decode phase on modern GPUs. The paper doesn’t include or discuss these.\n- No kernel-level measurements: Even though the work emphasizes GPU efficiency, it doesn’t show kernel-level profiling or hardware utilization. There’s no data on Tensor Core occupancy, memory bandwidth, or latency per kernel, so the benefits of block alignment are mostly theoretical.\n- Narrow evaluation scope: All results are from offline accuracy benchmarks (WikiText, MMLU, LongBench, RULER). The claim of negligible accuracy impact sounds optimistic, and attention approximations can be much more sensitive in practice or on more specialized datasets. The authors should consider additional benchmarks and discussions to differentiate the nuanced impact of attention approximation (maybe PingPong? but there may be better ones).",
"questions": "I don't have any further questions but would like to hear the authors thoughts on the weaknesses I have pointed out.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T08:54:07",
"modification_date": "2025-11-12T10:52:44",
"review_url": "https://openreview.net/forum?id=isBH8kP5AX&noteId=Gb54upzESk",
"license": "CC BY 4.0"
}
] | |
ZYVhh51UlM | https://openreview.net/forum?id=ZYVhh51UlM | Perturbation Guided Drug Molecule Design via Latent Rectified Flow | 2 | 4 | [
2,
2,
2
] | [
4,
3,
5
] | 3 | [
"Multi-modal generation",
"Perturbation biology",
"Molecular generation"
] | Phenotypic drug discovery generates rich multi-modal biological data, yet translating complex cellular responses into molecular design remains a computational bottleneck. Existing generative methods operate on single modalities (transcriptomic or morphological alone) and condition on post-treatment measurements without leveraging paired control-treatment dynamics. We present **Pert2Mol**, the first framework for multi-modal phenotype-to-structure generation that integrates transcriptomic and morphological features from paired control-treatment experiments. Pert2Mol employs separate ResNet and cross-attention encoders for microscopy images and gene expression profiles, with bidirectional cross-attention between control and treatment states to capture perturbation dynamics rather than simple differential measurements. These multi-modal embeddings condition a rectified flow transformer that learns velocity fields along straight-line trajectories from noise to molecular structures, enabling deterministic generation with superior efficiency over diffusion models. We introduce Student-Teacher Self-Representation (SERE) learning where an exponential moving average teacher supervises student representations across network depths, stabilizing training in high-dimensional multi-modal spaces. Unlike previous approaches that require preprocessed differential expression vectors, Pert2Mol learns perturbation effects directly from raw paired experimental data. Experiments on large-scale datasets demonstrate the first successful multi-modal framework for phenotype-driven molecular generation. | applications to physical sciences (physics, chemistry, biology, etc.) | https://openreview.net/pdf?id=ZYVhh51UlM | 2025-09-20T01:44:14 | 3 | [
{
"id": "jY3sZYpELK",
"forum": "ZYVhh51UlM",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20223/Reviewer_EBao",
"reviewer_name": "Reviewer_EBao",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 1,
"presentation": 2,
"summary": "The paper proposes a conditional generative model for molecules, specifically designed for applications in phenotypic drug discovery. The task's complexity arises from the multimodal conditioning signal, which includes microscopy images and gene expression data from both pre- and post-treatment states. Given that generative modeling of molecules is inherently challenging, the authors perform this task in the continuous latent space of an autoencoder. They adopt flow matching as the generative modeling paradigm. The use of a transformer-based approximate vector field facilitates the incorporation of conditioning information via two mechanisms: cross-attention and adaptive normalization. To ensure stable training, the authors employ a self-supervised loss between transformer layers, applied alongside the primary flow matching objective. The framework's effectiveness is demonstrated on several datasets. Beyond generation, the authors also leverage the learned data representations to perform drug repurposing and retrieval tasks.",
"strengths": "(As my expertise lies in machine learning rather than the application domain, my evaluation will focus on the methodological aspects of the work.)\n\n- This work presents an elegant integration of standard state-of-the-art (SOTA) methods to address the target task.\n- The framework is described in great detail (with the exception of the aspects noted in the Weaknesses and Questions sections), providing sufficient material to support the re-implementation of the method.\n- The proposed \"Student-Teacher Self-representation (SERE)\" is a potentially interesting contribution for stabilizing training. However, as noted below, this component requires significantly more justification and analysis to validate its effectiveness.\n- A further strength is the use of the learned condition representations for downstream tasks, such as drug repurposing and retrieval, as demonstrated in the \"Experiments\" section.",
"weaknesses": "- The \"Methods\" section is largely dedicated to a detailed listing of neural network architectures, implementation choices, and hyperparameters. While this detail is valuable for reproducibility, it is difficult to evaluate this descriptive catalogue as a primary scientific contribution.\n- The paper's contribution could be substantially strengthened by including more rigorous empirical analysis. For example:\n - A comprehensive ablation study on key hyperparameters.\n - An analysis of the contribution of individual model components (e.g., evaluating performance using only cross-attention for conditioning versus the full model).\n- The SERE method, noted as a potential strength, is a significant weakness in its current form. It suffers from a superficial description, a lack of rigorous analysis, and is not supported by an ablation study. (This is detailed further in the \"Questions\" section).\n- Overall, the paper provides excessive implementation detail on standard components while remaining vague on its more novel aspects (such as SERE) and the precise mechanisms of its core components (such as the molecular representation).",
"questions": "1. Molecular Representation: Could the authors describe this component in more detail? The text seems to present conflicting or incomplete information. The first reference points to \"RoBERTa\" which suggests a masked modeling objective. However, the second reference describes a contrastive learning method. The word \"contrastive\" appears in Figure 2 but is absent from the main text (outside of the \"Related Work\" section). Furthermore, the paper states, \"Tokenization uses learned molecular motifs,\" but provides no details on how these motifs are learned. Please clarify the exact architecture and training objective of the molecular encoder.\n2. Student-Teacher Self-representation (SERE): Is this method a novel contribution of this paper? No references are provided in its description. Additionally, the description is contradictory. The text first implies that the student and teacher are different layers within the same transformer vector field. However, it later mentions \"higher-\" and \"lower-noise layers,\" which suggests a comparison across different denoising timesteps. Please clarify: (a) if SERE is novel, and (b) the precise mechanism of the student-teacher relationship (i.e., which layers or timesteps are being compared).\n3. Figure 2: the diagram shows the \"SMILES Encoder\" as an input to the ReT. However, this connection and its purpose are not described in the text. How is the output of the SMILES Encoder integrated into the model during this stage?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T23:15:04",
"modification_date": "2025-11-12T15:48:39",
"review_url": "https://openreview.net/forum?id=ZYVhh51UlM&noteId=jY3sZYpELK",
"license": "CC BY 4.0"
},
{
"id": "ImcetJ5I7n",
"forum": "ZYVhh51UlM",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20223/Reviewer_FH2E",
"reviewer_name": "Reviewer_FH2E",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper focuses on integrating transcriptomic and morphological data for modeling control–treatment perturbation dynamics in drug design. The work applies a rectified flow framework to this multi-modal setting, combining an image encoder and an RNA encoder to learn joint representations from microscopy images and transcriptomic profiles. The motivation appears to be improving generative modeling performance by leveraging complementary information across modalities.\n\nWhile the overall direction of multi-modal integration is interesting and potentially useful for biological discovery, the paper lacks sufficient novelty. \nThe approach essentially extends an existing rectified flow framework to a multi-modal context without introducing substantial methodological innovation. \nThe contribution is therefore more of an application than a conceptual advancement.\n\nIn terms of evaluation, the experiments are quite limited. The authors compare their method only with a generic diffusion model, without specifying the exact implementation or baseline details. \nThis makes it difficult to assess the validity or significance of the reported improvements. \nMoreover, the experimental section would benefit from comparisons against other established multi-modal generative models or perturbation prediction methods. \nAblation results suggest some value in the multi-modal setup, but they are not convincing enough to demonstrate a clear advantage over existing techniques.",
"strengths": "multi-modal algorithms are an interesting approach",
"weaknesses": "not novel, only applying rectified flow in multi-modal settings.\n\nlimited evaluation, only compared with a diffusion model, not specifying which one.",
"questions": "could you compare with other multi-modal approaches?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T23:02:13",
"modification_date": "2025-11-12T15:48:39",
"review_url": "https://openreview.net/forum?id=ZYVhh51UlM&noteId=ImcetJ5I7n",
"license": "CC BY 4.0"
},
{
"id": "7HDBnWUGzW",
"forum": "ZYVhh51UlM",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20223/Reviewer_syMa",
"reviewer_name": "Reviewer_syMa",
"rating": 2,
"confidence": 5,
"soundness": 2,
"contribution": 1,
"presentation": 3,
"summary": "In this work, the authors study the problem of generating chemical structures that achieve a target biological effect, conditioned on paired data from control and treatment samples. The model, Pert2Mol, integrates both transcriptomic and imaging data, which are first encoded separately and then concatenated to produce the final conditioning vector. The generative model is based on rectified flow transformers. The authors introduce a student-teacher self-representation scheme to improve training stability and sampling. The model is evaluated on a multi-modal dataset of chemically perturbed cell populations, and the generated molecules are assessed using metrics for chemical validity, drug-likeness, and target similarity.",
"strengths": "- The paper proposes to integrate transcriptomes and imaging data for de-novo molecule design, offering a potentially more comprehensive phenotype-to-structure mapping.\n- The paper is well-written and easy to understand.",
"weaknesses": "- The authors claim that \"no existing method tackles the task of perturbation-guided drug molecule design\". This is inaccurate. This is a very active field and many methods have been proposed to solve the problem of molecule generation conditioned on desired gene expression effects [1-5].\n- Given the above claim, the authors only compare against a single diffusion baseline. A direct comparison and benchmarking against the models listed is necessary to properly evaluate Pert2Mol's claimed advantages.\n\n[1] https://www.nature.com/articles/s41467-019-13807-w \n[2] https://pubs.acs.org/doi/10.1021/acs.jcim.2c01301 \n[3] https://academic.oup.com/bib/article/25/6/bbae525/7845937 \n[4] https://academic.oup.com/bioinformatics/article/40/5/btae189/7649318 \n[5] https://www.nature.com/articles/s41587-021-00946-z",
"questions": "- How was sampling for the molecules presented in Figure 3 done?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T08:21:40",
"modification_date": "2025-11-12T15:48:40",
"review_url": "https://openreview.net/forum?id=ZYVhh51UlM&noteId=7HDBnWUGzW",
"license": "CC BY 4.0"
}
] | |
owpU8gxnkM | https://openreview.net/forum?id=owpU8gxnkM | ENCOURAGING CRITICAL THINKING FOR MULTIAGENT DEBATE | 3.5 | 3.5 | [
4,
2,
2,
6
] | [
3,
4,
3,
4
] | 4 | [
"Debate",
"Critical Thinking",
"Self-reflection"
] | Large language models (LLMs) have demonstrated remarkable performance across a wide range of tasks in recent years. While prior work has explored leveraging LLMs to generate synthetic data for self-improvement, repeated iterations often suffer from diminishing returns due to the reliance on homogeneous reasoning patterns and limited exploration of alternative perspectives. In this paper, we introduce a novel framework that enriches the reasoning process by encouraging critical thinking among multiple agents. Rather than deploying an ensemble of models with identical prompts, we propose a strategy generator that produces customized instructions tailored to each individual LLM. Acting as a critical thinking agent, the generator is iteratively fine-tuned using carefully selected strategies that are both diverse and effective. This approach fosters specialization within each model while promoting diversity across reasoning paths, enabling the system to maintain varied solution trajectories and achieve sustained performance gains through iterative refinement. We demonstrate the effectiveness of our method across a variety of agentic frameworks and complex reasoning tasks. | We propose a framework with optimizable strategies to guide LLM solvers in solving different questions. | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=owpU8gxnkM | 2025-09-05T02:34:10 | 4 | [
{
"id": "Ql0MRofdjN",
"forum": "owpU8gxnkM",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2172/Reviewer_hHRL",
"reviewer_name": "Reviewer_hHRL",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "LLMs can be made into \"agents\" to solve a problem by adding a \"Strategy\" to the input prompt along with the problem, and then iteratively refining what this strategy is based on how well the LLM solves the problem. However, this needs a way to score answers, which may or may or may not exist.\n\nThis paper proposes a method to do so by first instantiating several different such strategies, finding the resulting answers, and using agreements and diversity between these to refine the strategies.",
"strengths": "Compares agains a comprehensive set of baselines.",
"weaknesses": "Main paper does not contain enough specifics of the method. It is also unclear how it differs from one of the references. (see questions below for both these points)",
"questions": "It is unclear what the strategy generator is. Is it an open-weights LLM (if so which one)? It is also unclear what precisely is meant by a \"strategy\", and how a set of these are generated from a question. A simple example in the main text of the paper would have helped a great deal clarifying this.\n\nWhat precisely is the difference between the method in this paper and the one in (Subramaniam et al 2025)? Also, it seems this method has not been compared against.\n\nIn Table 1, some methods involve no fine-tuning / training of any sort, and others (like CMAD) do. So in some sense some of these are not fair comparisons. At the very least, training-free and fine-tuned approaches should be demarcated as such.\n\nMinor typo: A_i on line 159\n\nHow are strategies mapped to vectors (which are needed for the diverse sampling in line 209)?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-08T05:47:00",
"modification_date": "2025-11-12T10:54:51",
"review_url": "https://openreview.net/forum?id=owpU8gxnkM¬eId=Ql0MRofdjN",
"license": "CC BY 4.0"
},
{
"id": "DL1e8rNyxA",
"forum": "owpU8gxnkM",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2172/Reviewer_uewh",
"reviewer_name": "Reviewer_uewh",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "The paper addresses the homogeneous reasoning patterns of complex reasoning in LLMs and proposes Critical Thinking with Multi-Agent Debate (CMAD).\n\nCMAD uses a strategy generator that produces reasoning strategies for multiple LLM agents. After multi-round debates, a feedback loop balances correctness and diversity to select high-quality strategies for fine-tuning.\n\nExperiments show the framework is model-agnostic and outperforms baselines on reasoning benchmarks.",
"strengths": "- The paper has a good motivation to enable LLMs to generate diverse reasoning strategies instead of relying on fixed prompts (such as CoT, PoT, Step-back).\n\n- The proposed method is simple yet effective, selecting high-quality strategies with both correctness and diversity metrics and using these data to fine-tune a strategy generator.",
"weaknesses": "- Inconsistent Reporting of Results\n\n(1) Line 355-356 says \"The average improvement over the second-best method ranges from 1.2% to 9.8%.\" However, Table 1 shows that the performance gaps between CMAD and DMAD (the second-best method) are all less than 5%.\n\n(2) Comparing Table 1 and Table 4, the reported results for baselines are identical, but CMAD’s results differ. What is the difference in evaluation settings between these two tables?\n\n- Missing Important Experimental Details\n\n(1) The paper does not explicitly specify which models were fine-tuned to produce the reported results. While Line 742–743 implies Qwen2.5-7B, Line 700–701 mentions full-model fine-tuning for Qwen1.5-7B and LLaMA-8B. \n\n(2) The paper does not provide the prompt used for the strategy generator or examples of its training data.\n\nThe above two concerns make the results less convincing.",
"questions": "- What if we directly use the initial answers with different strategies (instead of going through the full debate process) to construct training data?\n\n- The description of Figure 2 refers to “refine the pre-training data”—should this instead be “fine-tuning data”?\n\n- The description of Table 3 does not align with its contents. Is the “DMAD” listed in Table 3 a typo that should be “CMAD”?\n\n- Reference mistake: \n\nLine 742: Table C should be Table 4; \n\nLine 355: Table 4.1 should be Table 1; \n\nLine 375, the baseline is incorrectly cited as published in 2015; the correct publication year is 2025.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-08T01:38:51",
"modification_date": "2025-11-12T10:54:52",
"review_url": "https://openreview.net/forum?id=owpU8gxnkM¬eId=DL1e8rNyxA",
"license": "CC BY 4.0"
},
{
"id": "8PXGDA3Ydh",
"forum": "owpU8gxnkM",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2172/Reviewer_efZd",
"reviewer_name": "Reviewer_efZd",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper proposes Critical Thinking with Multi-Agent Debate (CMAD), a framework for improving the reasoning capabilities of LLMs by training a strategy generator to generate diverse, undefined reasoning strategies. The framework iteratively fine-tunes the generator using feedback on correctness (based on majority voting) and diversity (based on a similarity metric), aiming to balance exploration and exploitation. Empirical results on MATH, GSM8K, and GPQA show consistent improvements across several LLMs compared to baselines such as DMAD and CoT.",
"strengths": "1. The idea of using a trainable strategy generator to produce undefined reasoning paths is creative and differentiates CMAD from prior multi-agent debate like DMAD.\n2. The paper evaluates across multiple benchmarks and models (GPT-4o-mini, LLaMA-3, Qwen2.5, Nova Micro), and compare with various baseline methods.\n3. The introduction convincingly argues the need to move beyond homogeneous reasoning and fixed strategies.",
"weaknesses": "1. The process of solution sharing and summarization risks contaminating the agents’ independent reasoning based on their given strategies. If each agent accesses others’ intermediate solutions, the resulting fine-tuning data may lose diversity and no longer reflect distinct strategies. The authors should clarify how they prevent such convergence or bias.\n2. The paper focuses on the Multi-Agent Debate (MAD) setting, but it does not explain why this setting is necessary over simpler mechanisms such as majority voting or ensemble averaging. Clarifying this design choice, especially how debate interaction benefits strategy generation beyond aggregation, would strengthen the motivation.\n3. Figure 1 is visually cluttered; the text overlaps and the color scheme makes it difficult to interpret. \n4. The related work section omits prior studies exploring similar concepts of using a trainable model to guide another model [1]\n\nReferences:\n[1] Li, Zekun, et al. \"Guiding large language models via directional stimulus prompting.\" Advances in Neural Information Processing Systems 36 (2023): 62630-62656.",
"questions": "1. The paper does not specify the underlying model for the strategy generator and solution agents.\n2. How do you make sure that each strategy actually contribute to the the final solution, given that each agent can not only see its given strategy but also other's solution.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-04T16:42:21",
"modification_date": "2025-11-12T10:54:52",
"review_url": "https://openreview.net/forum?id=owpU8gxnkM¬eId=8PXGDA3Ydh",
"license": "CC BY 4.0"
},
{
"id": "OlBxwnfNfN",
"forum": "owpU8gxnkM",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2172/Reviewer_82VM",
"reviewer_name": "Reviewer_82VM",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper shows how to improve LLM performance using multiagent debate extended with several new contributions: diversification of the reasoning paths taken by different agents, critical thinking. The diverse paths are obtained by a strategy generator that generates M strategies used to prompt M agents.",
"strengths": "The approach is interesting and the experimental results seem compelling.",
"weaknesses": "One overall weakness for this line of work, not just restricted to this particular contribution, is that it is not clear why this approaches lead to better performance. I do not count this remark against this paper as I think that these experimental results are important. \n\nThe diversity metric seems to be a key element of the approach. It would be interesting to see some alternative measures of diversity and how they impact the performance of the algorithm.\n\nMinor comments:\n\nLine 153, where it says “answer, denoted as y_{1,i}, where the”, I believe it should say y_{i,1}\n\nWhere it says “table 4.1” it should say “table 1”",
"questions": "The similarity threshold \\tau to evaluate the diversity of the proposed strategies could be context dependent. In some cases, a large diversity might be needed, while in other cases it might be difficult to propose very different strategies. How do you chose this parameter and have you observed context dependent differences? The results from figure 5 are a first step in this direction. \n\nI am not convinced that “Critical Thinking” is what the approach is doing. Could you denote in algorithm 1, what part is the one responsible of critical thinking?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T01:30:23",
"modification_date": "2025-11-12T10:54:53",
"review_url": "https://openreview.net/forum?id=owpU8gxnkM¬eId=OlBxwnfNfN",
"license": "CC BY 4.0"
}
] |
lBBtmSu5Q2 | https://openreview.net/forum?id=lBBtmSu5Q2 | On Fine-Grained I/O Complexity of Attention Backward Passes | 5 | 3.25 | [
6,
8,
4,
2
] | [
3,
3,
2,
5
] | 4 | [
"Attention",
"I/O Complexity",
"Backward Passes."
] | Large Language Models (LLMs) have demonstrated remarkable capabilities in processing long-context information. However, the quadratic complexity of attention computation with respect to sequence length poses significant computational challenges, and I/O aware algorithms have been proposed. This paper presents a comprehensive analysis of the I/O complexity for attention mechanisms, focusing on backward passes by categorizing them into small and large cache scenarios. Using the red-blue pebble game framework, we establish tight bounds on I/O complexity across all cache sizes. We confirm that the de facto standard I/O aware algorithm FlashAttention is optimal for both forward and backward passes for the large cache size scenario. For small cache sizes, we provide an algorithm that improves over existing methods and achieves tight bounds. Additionally, we extend our analysis to sparse attention, a mainstream speeding-up approach, deriving fine-grained lower bounds for both forward and backward passes and both small and large caches. Our findings complete the theoretical foundation for I/O complexity in attention mechanisms, offering insights for designing efficient algorithms of LLM training and inference. | optimization | https://openreview.net/pdf?id=lBBtmSu5Q2 | 2025-09-19T13:01:55 | 4 | [
{
"id": "XeVdhPAtSV",
"forum": "lBBtmSu5Q2",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15988/Reviewer_gi5z",
"reviewer_name": "Reviewer_gi5z",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 1,
"presentation": 3,
"summary": "The original FlashAttention paper provides upper I/O complexity bounds for the backward pass of the exact attention computation, but does not provide lower bounds. This raises the question: what is the optimal I/O complexity of the attention backward pass? This paper provides a lower bound as a function of cache size. Interestingly, they show that the lower bound changes at a crossover point where the cache size if $o(d^2)$.",
"strengths": "- The authors show that there is room at small cache sizes, to potentially provide a speedup over FlashAttention by reducing I/O complexity.\n- The paper is pretty easy to follow and does quite a good job situating itself with respect to prior work.",
"weaknesses": "- The authors do not provide an implementation of their algorithm, and so they cannot demonstrate that it actually provides a speedup over FlashAttention. The claim that the “algorithm designed for small cache sizes would become relevant and useful”, is speculative. In my view, this is the most significant limitation of this work.\n- The result is only applicable for very small cache sizes, and does not apply to modern GPUs typically used for training (A100s, H100s, B200s).\n- This paper (like prior work before it) assume a two-level memory hierarchy. This may limit the applicability of the results, especially since newer chips include more complex memory hierarchies including",
"questions": "- Does Algorithm 6 increase the FLOPs required — even if only by a constant factor?\n- Can the authors provide an implementation of their algorithm and demonstrate that it can provide a speed up on GPUs with small cache sizes?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-04T14:42:59",
"modification_date": "2025-11-12T13:43:15",
"review_url": "https://openreview.net/forum?id=lBBtmSu5Q2¬eId=XeVdhPAtSV",
"license": "CC BY 4.0"
},
{
"id": "nosDARxwy7",
"forum": "lBBtmSu5Q2",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15988/Reviewer_qFFE",
"reviewer_name": "Reviewer_qFFE",
"rating": 8,
"confidence": 3,
"soundness": 4,
"contribution": 4,
"presentation": 3,
"summary": "The paper analyzes the I/O (cache ↔ memory) complexity of the backward pass of exact softmax attention under standard GEMM, using the red–blue pebble framework. It proves *matching upper and lower bounds across all cache sizes*, with a phase transition at ($M = \\Theta(d^2)$) ($M$ is the cache size and $d$ is attention head dimension). \n\nIn the large-cache regime ($M=\\Omega(d^2)$), the bounds match FlashAttention’s behavior and establish optimality; in the small-cache regime ($M=o(d^2)$), the paper gives a strictly better algorithm (and matching lower bound) than FlashAttention. It also gives lower bounds for sparse attention, recovering the dense case as a special case.",
"strengths": "### Originality\n\n* Provides the first matching upper and lower bounds for the backward pass of exact attention for all cache sizes with a clean phase transition at ($M=\\Theta(d^2)$) (Theorem 1.1). \n* Extends to sparse attention with lower bounds that recover the dense case as a special instance. \n\n### Quality\n\n* Uses the red–blue pebble framework rigorously and states Theorem 1.1 with an explicit formula covering both regimes. \n* Gives matching bounds in each regime: large-cache upper (Thm 4.1) and lower (Thm 4.2), small-cache upper via Algorithm 6 (Thm 4.3) and lower (Thm 4.4). \n\n### Clarity.\n\n* Figure 1 clearly contrasts the paper’s tight bound (red) with FlashAttention’s upper bound (blue dashed) and marks the cross-point ($M=\\Theta(d^2)$). \n* Theorems in §4 are presented as informal versions which helped readabillity. \n\n### Significance\n\n* In the large-cache regime, results match FlashAttention and establish optimality; in the small-cache regime, Algorithm 6 is provably better than FlashAttention.",
"weaknesses": "1. **Positioning vs prior work could be tighter.** The paper clearly cites Dao et al. (FlashAttention) and Saha & Ye for forward-pass tightness; it mentions Addanki et al. (streaming/approximate attention) in related work, but a compact comparison table clarifying different problem settings (exact vs approximate, streaming vs two-level memory) would help readers situate novelty. \n\n2. **Practical relevance narrative.** The paper *does* discuss when small-cache arises (e.g., per-SM caches on older GPUs) and even gives A100 vs GTX1060 examples; expanding this with a short table of device-level (M) estimates and typical head sizes (d) would strengthen the “why it matters” section.",
"questions": "1. **Scope vs Addanki et al. (2023).** Please add a small table clarifying the differences (objective: exact vs approximate; model: two-level I/O vs streaming; bounds reported) and why your results are not directly comparable numerically. \n\n2. **Multi-head attention.** Your bounds are given per head; what changes (if any) under (H) heads computed in parallel. Does tiling across heads alter the asymptotics or only the constants?\n\n3. **Device checklist.** Consider adding a table (SM/L1 size, datatype, typical ($d$)) for a few GPUs/edge devices to show where ($M \\lessgtr d^2$) actually falls.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T06:38:27",
"modification_date": "2025-11-12T13:43:16",
"review_url": "https://openreview.net/forum?id=lBBtmSu5Q2¬eId=nosDARxwy7",
"license": "CC BY 4.0"
},
{
"id": "LQ7hZbsseI",
"forum": "lBBtmSu5Q2",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15988/Reviewer_3EBW",
"reviewer_name": "Reviewer_3EBW",
"rating": 4,
"confidence": 2,
"soundness": 4,
"contribution": 2,
"presentation": 3,
"summary": "The paper extends the analysis of I/O complexity of exact attention appearing in [Dao, 2022] and [Saha & Ye, 2024], specifically providing tight bounds on the I/O complexity of the attention backwards pass using the red-blue pebble game framework. The results suggest that the popular FlashAttention algorithm is optimal in both forwards and backwards modes in the large cache regime (most practically relevant), while providing an improved algorithm in the small cache regime. The authors also extend the analysis to the sparse attention regime.",
"strengths": "- The paper's derivations seem to be solid and rigorous, to the best of my understanding.\n- The paper extends the results appearing in the previous work, thus completing the I/O complexity analysis for both forwards and backwards passes, small and large cache regimes, as well as dense and sparse attention.\n- The paper is well-written and easy to follow.",
"weaknesses": "Overall, the paper seems to be a direct extension of [Saha & Ye, 2024], adding tight bounds for the I/O complexity of attention backwards pass. However, the results seem to directly mirror the prior work; the authors utilise the same framework, and provide similar asymptotic bounds and conclusions. Due to this, my impression is that the work, although mathematically solid, seems to be incremental. The small-cache algorithm, as well as theoretical derivations seem to follow directly from [Saha & Ye, 2024], and from the practical perspective do not offer a significant contribution (as noted in the paper, the large- cache regime is more practically relevant, and FlashAttention is proven to be optimal). Due to this, my impression is that the scope of the paper is not quite sufficient for publication in ICLR.",
"questions": "- Could the authors clarify how their small-cache algorithm differs/complements the similar proposition from [Saha & Ye, 2024]?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T19:19:52",
"modification_date": "2025-11-12T13:43:16",
"review_url": "https://openreview.net/forum?id=lBBtmSu5Q2¬eId=LQ7hZbsseI",
"license": "CC BY 4.0"
},
{
"id": "lm2R5wKOeo",
"forum": "lBBtmSu5Q2",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15988/Reviewer_KW4D",
"reviewer_name": "Reviewer_KW4D",
"rating": 2,
"confidence": 5,
"soundness": 4,
"contribution": 1,
"presentation": 3,
"summary": "The authors consider the I/O complexity of Attention gradient computation. In hardware, data is typically arranged hierarchically, with data stored in an unbounded memory, and computation occurring in a bounded cache. To compute, data is moved into the cache, computation occurs, and the result is saved in memory. Since data movement is typically more expensive that computation, I/O complexity measures only the data movements. The goal of I/O complexity is to design algorithms minimizing I/Os. Given the prevalence of attention and the success of the FlashAttention algorithm, it is a practically important question to understand whether the training process can be optimized w.r.t. I/O complexity.\n\nThe authors give I/O optimal bounds for the computation of attention gradient when restricted to algorithms using standard matrix multiplication. The authors also consider sparse attention, and give lower bounds for algorithms using standard matrix multiplication in this setting. \n\nWhile the statement of the main result is interesting, the techniques are identical to prior work, and in fact the main result can be obtained immediately from the lower bound for the forward pass. Furthermore, the lower bound on sparse attention is not well substantiated without a matching upper bound (or at least some improvement over the naive algorithm). Thus I recommend reject.",
"strengths": "The authors study a practically interesting problem, and give tight results. \n\nThey initiate the study of sparse I/O attention.",
"weaknesses": "The main result (lower bound for attention gradient computation) is essentially immediate from prior work. In particular, a previous paper proves that any algorithm that computes the attention matrix already requires the FlashAttention lower bound. Since attention gradient computation involves a n x d and d x n matrix product, this immediately implies the desired lower bound. Similarly, the new upper bound for gradient computation in the small cache setting is a consequence of the equivalence with matrix multiplication (the easy direction - using matrix multiplication we can compute attention gradients).\n\nThe sparse attention lower bound is not well motivated if there is no matching upper bound, or at least some improvement on the trivial algorithm. Even if this is hard to prove, there should be some discussion towards what the obstacles are.",
"questions": "What are the main obstacles towards designing I/O efficient algorithms for sparse attention?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T13:45:33",
"modification_date": "2025-11-12T13:43:17",
"review_url": "https://openreview.net/forum?id=lBBtmSu5Q2¬eId=lm2R5wKOeo",
"license": "CC BY 4.0"
}
] | |
UAZCKdd4R7 | https://openreview.net/forum?id=UAZCKdd4R7 | Koopman-Assisted Trajectory Synthesis: A Data Augmentation Framework for Offline Imitation Learning | 6.5 | 3.25 | [
4,
6,
8,
8
] | [
3,
4,
3,
3
] | 4 | [
"Offline Imitation Learning; Offline Reinforcement Learning; Data Augmentation"
] | Data augmentation plays a pivotal role in offline imitation learning (IL) by alleviating covariate shift, yet existing methods remain constrained. Single-step techniques frequently violate underlying system dynamics, whereas trajectory-level approaches are plagued by compounding errors or scalability limitations. Even recent Koopman-based methods typically function at the single-step level, encountering computational bottlenecks due to action-equivariance requirements and vulnerability to approximation errors. To overcome these challenges, we introduce Koopman-Assisted Trajectory Synthesis (KATS), a novel framework for generating complete, multi-step trajectories. By operating at the trajectory level, KATS effectively mitigates compounding errors. It leverages a state-equivariant assumption to ensure computational efficiency and scalability, while incorporating a refined generator matrix to bolster robustness against Koopman approximation errors. This approach enables a more direct and efficacious mechanism for distribution matching in offline IL. Extensive experiments demonstrate that KATS substantially enhances policy performance and achieves state-of-the-art (SOTA) results, especially in demanding scenarios with narrow expert data distributions. | reinforcement learning | https://openreview.net/pdf?id=UAZCKdd4R7 | 2025-09-19T23:31:52 | 4 | [
{
"id": "xe11B2ewvK",
"forum": "UAZCKdd4R7",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19398/Reviewer_cuKL",
"reviewer_name": "Reviewer_cuKL",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper proposes a method based on Koopman Theory for generating trajectories from offline data.",
"strengths": "The approach appears reasonably sound.",
"weaknesses": "1. (I have not read KFC) The authors claim that this work differs from KFC, as KFC only generates single-step data, while KATS generates trajectories. However, judging from Equations 7, 8, 9, and 10, KATS still appears to generate states.\n\n2. Although the experimental results presented by the authors show that KATS performs well, the experiments seem insufficient. For example, there is no ablation study.\n\n3. The baselines compared in Table 1 and Table 2 are inconsistent:\n(1) Table 1 compares KATS+BC and KFC+BC, while Table 2 compares KATS+BC and KFC+CQL. Since the base algorithms of KATS+BC and KFC+CQL are different, the comparison lacks fairness.\n(2) In Table 1, the data augmentation methods compared are SRA, MOLI, and KFC+BC, while in Table 2, the compared methods are TELS, DOGE, POR, and KFC+CQL.",
"questions": "1. What are the differences and connections between Equations 7–8 and 9–10? In implementation, are they used together or only 9–10?\n\n2. Why does Table 1 compare KATS+BC with KFC+BC, while Table 2 compares KATS+BC with KFC+CQL?\n\n3. Why are the data augmentation methods in Table 1 compared with SRA, MOLI, and KFC+BC, while in Table 2, they are compared with TELS, DOGE, POR, and KFC+CQL?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T12:34:24",
"modification_date": "2025-11-12T15:08:37",
"review_url": "https://openreview.net/forum?id=UAZCKdd4R7¬eId=xe11B2ewvK",
"license": "CC BY 4.0"
},
{
"id": "YbYtKKkWmC",
"forum": "UAZCKdd4R7",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19398/Reviewer_f7Wk",
"reviewer_name": "Reviewer_f7Wk",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 4,
"presentation": 3,
"summary": "The paper tackles covariate shift in offline imitation learning, where agents are limited to fixed datasets with potentially low diversity. Prior Koopman-based methods operate at the single-step level, causing dynamic inconsistency and high cost. Koopman-Assisted Trajectory Synthesis (KATS) is introduced as a trajectory-level data augmentation method that generates novel yet dynamically consistent expert-like trajectories in a learned state-equivariant linear Koopman latent space. In addition, KATS is adaptive in that it prioritizes data synthesis where the model is uncertain. Theoretical results guarantee that symmetries commuting with the learned Koopman operator yield trajectories consistent with expert policy dynamics. In practice, KATS augments data and then applies simple BC, outperforming more complex offline IL and RL baselines. KATS serves as a plug-and-play augmentation module that enhances existing algorithms through high-fidelity, behaviorally consistent data generation.",
"strengths": "* Th. 2 is very elegant and clear. Slight suggestion: preamble the section with the gist/a teaser of what the theorem will show.\n* Sec. 5.2: ingenious yet simple.\n* KATS demonstrates that augmenting data and applying behavioral cloning can be a more effective and reliable strategy for imitation, even in data-scarce regimes where traditional apprenticeship learning methods, despite allowing interaction, tend to be fragile and overcomplicated.",
"weaknesses": "* Th. 3 should be followed by an \"in-words\" interpretation and description of its consequences, along with a hyperlinked reference to the “Implication of the Bound” section presented in the Appendix.\n* KATS is introduced from the machinery of KFC, but the link is somewhat lost until it comes back in 5.1 line 287, where it is clear.\n* The way the authors present sigma in Fig 2 (“Sigma 0.2, Sigma 0.3” etc.) or as “a symmetry basis” suggests a structured family of symmetry operators or scalars, but in the actual text and appendix, they never describe how that basis or scaling is obtained. The figure’s depiction of multiple sigma’s or scaled sigma’s is not grounded in the described theory or implementation. The authors must explicitly indicate how they operationalize the learned sigma network to obtain their basis.\n* The authors write: \"This dramatic leap provides strong validation for our core contribution: the action-independent formulation.\". It is definitely noteworthy and interesting to observe that the action-independent closed-loop formulation combined with an IDM can yield such results. Stating that the generated trajectories are \"by construction, more behaviorally consistent than those from action-conditioned models like KFC\" might be a bit of a stretch however, but an experimental design could be devised to showcase that further.\n* In the appendix L795-796, the authors write \"This process effectively densifies the training data, filling the gaps in the state-space coverage that a sparse dataset would otherwise leave open.\". That is “potentially” what KATS enables, but the coverage is not proven or showcased as such.\n* The paper could use an additional round of polish to remove the inconsistencies in notations.",
"questions": "* Can the authors make it clear what they mean by scalability/scalable? From the phrasing in the paper, it seems that the scalability claimed by the authors is on the complexity of the task (dimension, degrees of freedom). However, since this is an approach for the low-data regime (limited offline dataset), it might be on the how the developed data augmentations impact performance, etc.\n* Koopman theory imposes a strong inductive bias, going against the bitter lesson. Would the authors defend that modeling the temporal relationship between one latent and its successor simply with a linear operator is enough to model system that are more complex than the ones tackled empirically in the paper? In other words, would adding depth to the encoder always be enough to go in a sufficiently “higher-dimensional space” where the dynamics can reasonably be assumed linear?\n* It is unclear in the text why KFC working at the single-step level is costly in compute (L193). Is it clearer costly because the action-equivariant assumption in KFC requires one linear operator per action dimension (Eq. 2), making it costly to scale with action dimensionality?\n* Section 5.1: modeling the closed-loop system dynamics is a design choice rather than an innovation, is it not? Do the other claim it is an innovation because it is unusual?\n* Is there a particular reason why the authors write \"find symmetry basis\" at line 2 of the algorithm, and not \"learn the symmetries\" (\"sigma model training\" only appears in the appendix, L812-813)? 
By basis or symmetries, do the author mean that the set verifies the properties of basis, then they then use to craft other symmetries as linear combinations of the elementary symmetries of the basis?\n* What would be informative, for the data augmentation methods, is to get the final size of the dataset used to train the policy with BC (compared to the initial size), along with statistics that could give an idea as to how it expands the initial one in terms of diversity.\n* Could the authors include a few words about the baselines, to get the gist of their approach, which would put KATS in perspective; e.g., how does TELS' data augmentation approach differ from KATS?\n* The authors write in the appendix that the \"[policy] training alternates between original and augmented data with equal weighting\". What is the only sampling strategy that the authors tried? Could the authors give the respective dataset sizes for the reader to be able to gauge how likely to overfit the policy is?\n* Have the authors experimented with learning the IDM from the mapping of the expert states into the learned latent space, i.e. from the latents z instead of the states s?\n* What does the weight distribution (in the sigma loss) look like? In other words, how far from uniform is the \"adaptive\" scheme the authors designed?\n* Providing an ablation study comparing KATS with and without any symmetry training and usage would be insightful.\n\nStyle, typos, suggestions:\n* It would be useful to add, in the algorithm, links to the equation according to which the various networks are optimized.\n* [minor] It might be good to mention that that Koopman machinery is used in latent space earlier in the introduction than at the very end. 
The paper might also benefit from putting KATS in the context of model learning in latent spaces, which the typical RL literature reader might be more familiar with.\n* [minor] L123-124: the comma after \"shift\" should be removed.\n* [minor] L155-156: z_t and z_{t+1} correspond to a transition (s_t, s_{t+1}) or to a pair of states but not a pair of transitions.\n* [minor] L187-188: why use \"aug\" when figure 1 (a) uses primes to designate the augmentations?\n* [minor] The last sentence of Def. 2 should be emphasized.\n* [minor] L263: properly format the emdash, or use a colon.\n* [minor] End of page 9: \"Limitaitions\" -> Limitations\n* [minor] L810-811: \"Synthesisi\" (extra \"i\"), \"(KATS)\" (missing space prefix).\n* [minor] L289-290: \"Any symmetry transformation that commutes with K is therefore guaranteed to produce trajectories that adhere to this policy.\" I find this to not be clear.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T04:56:44",
"modification_date": "2025-11-12T15:08:37",
"review_url": "https://openreview.net/forum?id=UAZCKdd4R7&noteId=YbYtKKkWmC",
"license": "CC BY 4.0"
},
{
"id": "boqIfUbY2q",
"forum": "UAZCKdd4R7",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19398/Reviewer_AWKq",
"reviewer_name": "Reviewer_AWKq",
"rating": 8,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This paper proposes a theoretically principled method to generate augmented, synthetic expert demonstrations for offline imitation learning. With said augmented data, one can perform standard behavioral cloning on the union of the given expert dataset and the augmented dataset. The paper grounds its technique for augmenting trajectories in Koopman theory, done in the latent space of a learned autoencoder over the expert data. In particular, they note that if the learned state latents satisfy certain linear properties (e.g. the transition function under the expert data is linear -- said linear transformation is the Koopman operator), then compounding errors are bounded, leading to useful trajectory generation.\n\nExperiments on MuJoCo tasks, both in IL and RL, validate the stated hypotheses, showing strong results compared to prior offline IL baselines that either rely on suboptimal offline data (e.g. MILO) or employ data augmentation (e.g. KFC+BC). Furthermore, they also try their method in the offline RL setting, comparing to standard baselines and recent Koopman-focused baselines such as KFC++ and showing strong performance.",
"strengths": "I really like this area of work, as Koopman theory is well grounded and has been used for quite a while in model-based dynamical system control (although I am not super familiar with the literature). There are many advantages in learning a latent space under which the transition dynamics are linear, which include not even having to do RL directly and employing more stable control theory-focused algorithms.\n\nThe theory seems fine to me, and seems to borrow a lot from the KFC paper (Weissenbacher et al. 2022), leading me to believe that it is sound. The experimental results are also strong, showing strong performance improvement even over other Koopman-based RL and IL algorithms.",
"weaknesses": "There are instances where the paper could be written a bit better (e.g. put your citations in parentheses!). I think there are also potential weaknesses to the method empirically, which include (and correct me if I'm wrong) the following:\n\n- In the RL setting, if the reward function is a function of both state and action, then Q learning may be biased. I think that the reward is only a function of the state in MuJoCo and DMC control domains, which means the method is fine there, but in cases where it is not, then I figure due to said bias, learning is difficult even when the latent system is well-learned.\n\n- Generally, these methods seem to work in small-scale tasks such as OpenAI Gym locomotion, while not having been tested on larger-scale domains such as DMC from pixels or larger control tasks. A potentially great use for this method could be to learn such a linear latent system on real robotic datasets, making learning controllers much faster.\n\nThese are not \"make or break\" weaknesses, more so that this seems not to have been tested. In general, for the focus of IL, there are fewer problems, as for instance removing action conditioning is fine as the Markovian expert policy is embedded into the latent encoding, which is enough for data augmentation.",
"questions": "No fundamental questions from me, but I am curious to know if any large-scale experiments were done for this paper, including with either high-dimensional states or with image observations.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T11:20:55",
"modification_date": "2025-11-12T15:08:38",
"review_url": "https://openreview.net/forum?id=UAZCKdd4R7&noteId=boqIfUbY2q",
"license": "CC BY 4.0"
},
{
"id": "1dJHFHBVuO",
"forum": "UAZCKdd4R7",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19398/Reviewer_qXWS",
"reviewer_name": "Reviewer_qXWS",
"rating": 8,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "In this paper, a data augmentation framework, namely KATS that leverages Koopman theory to address the critical distribution shift problem in offline imitation learning is introduced. The presented framework can synthesize trajectories for the training data augmentation while avoiding the compounding errors of recursive rollouts and ensuring computational efficiency and scalability. Experimental results show the effectiveness of the proposed approach.",
"strengths": "1. The paper is well-motivated and aims to address an important issue in the literature, which generates high-quality, dynamically consistent trajectory-level data while avoiding the compounding errors and ensuring computational efficiency. \n2. Experimental results along with theoretical guarantees demonstrate the advantages of KATS, which yields substantial improvements in policy performance on some tasks.",
"weaknesses": "1. The presented Koopman-Assisted Trajectory Synthesis (KATS) framework is based on the assumption that the symmetries of a closed-loop dynamical system, driven by a fixed expert policy, are directly reflected as commutation properties of its associated Koopman operator. While the assumption may hold in some cases, it is unclear whether such an argument can be satisfied in a general sense. Can the proposed framework be applied in any type of environment, or is its application limited to some special domains? \n2. In the literature, many works have been proposed for the trajectory-level data augmentation, more recent baselines, especially for diffusion-based approaches, can be added and discussed for the comparison. \n3. In the experiments, the most recent baseline is a rejected paper (TELS) while other baselines were mainly presented two or three years ago. Considering the rapid development of the related research area, it is necessary to adopt more recent baselines to verify the effectiveness of the proposed method.",
"questions": "Please refer to the weakness points.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T15:54:30",
"modification_date": "2025-11-12T15:08:38",
"review_url": "https://openreview.net/forum?id=UAZCKdd4R7&noteId=1dJHFHBVuO",
"license": "CC BY 4.0"
}
] | |
PVooP3d7cI | https://openreview.net/forum?id=PVooP3d7cI | The Price of a Second Thought: On the Evaluation of Reasoning Efficiency in Large Language Models | 3.5 | 3.5 | [
6,
4,
2,
2
] | [
3,
4,
4,
3
] | 4 | [
"Reasoning Efficiency",
"Test-time Scaling",
"Large Language Models",
"Chain-of-Thought"
] | Recent thinking models trained with reinforcement learning and backwardchecking CoT often suffer from overthinking: they produce excessively long outputs even on simple problems, wasting computation. Existing evaluations, based on token efficiency, give an incomplete view as they neglect problem difficulty and intermediate computation costs. We formalize reasoning efficiency as a relative measure between thinking and instruct models, treating instruct models as the minimal-effort baseline. A systematic study across four thinking models and multiple benchmarks reveals two consistent patterns: (i) instruct models achieve higher efficiency overall, and (ii) problem difficulty affects efficiency, with thinking models wasting computation on easy problems but providing value on harder ones. Building on this insight, we propose COTHINK, a simple two-stage pipeline: an instruct model drafts a brief outline, and a thinking model expands it. On GSM8K, MATH500, and AIME24, COTHINK cuts token usage by 21.1% while keeping accuracy on four thinking models, and remains competitive with strong efficiency baselines. | We formalize reasoning efficiency to evaluate thinking models, discover potential scaling laws showing systematic overthinking on simple problems, and propose CoThink to adaptively scale computation with problem complexity. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=PVooP3d7cI | 2025-09-18T19:32:11 | 4 | [
{
"id": "dQoy9XTgh6",
"forum": "PVooP3d7cI",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11280/Reviewer_fcAM",
"reviewer_name": "Reviewer_fcAM",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The authors propose, as a fairer way to assess reasoning efficiency in RMs, a baseline-normalized version of $\\tau(M,D)$ (efficiency given dataset D and model M), defined as $\\frac{\\tau(M_R,D)}{\\tau(M_I,D)}$ (where $M_R$ and $M_I$ are reasoning and instruct models from the same family).\n\nTo improve efficiency they propose a two-step process where an IM first proposes an outline, and then the RM expands the outline of the CoT into a full one. It reduces the average token budget by 21.1% and improves accuracy by 1.66% over GSM8K, MATH500, and AIME24.",
"strengths": "CoThink is simple and delivers performance gains for some models on some tasks. On average, models do gain in performance across some tasks. This suggests that depending on the decision making process, it may be worth trying this approach for some applications.",
"weaknesses": "The improvement in performance isn’t that clear-cut. When considered alongside the relative simplicity (which may also read as limited novelty) that is either a pro or a con. I am personally inclined to forgive simple methods more for inconsistent performance gains.\n\nWould be nice to see some comparison to or discussion of methods that force early stopping such as [1,2]\n\nLimited motivation of design choices such as prompts. No discussion of how well the models instruction follow/conform to the outlines. Would be nice to see such analysis (see questions)\n\n[1] Fu Y, et al. “Efficiently Scaling LLM Reasoning with Certaindex” NeurIPS 2025 (https://openreview.net/forum?id=nn51ewu5k2), arXiv:2412.20993\n\n[2] Pu, X, et al. “ThoughtTerminator: Benchmarking, Calibrating, and Mitigating Overthinking in Reasoning Models” COLM 2025 (https://openreview.net/forum?id=oHR862dpMC) arXiv: 2504.13367",
"questions": "Do you have any ablations on prompt? How do you know you picked the right prompts to elicit the desired summarization behavior?\n\nDo you have any analysis of how well models follow the instructions to use the outline provided? Could that explain inconsistent performance? Are some outlines \"better\" for some tasks than others?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-13T08:03:17",
"modification_date": "2025-11-13T08:03:17",
"review_url": "https://openreview.net/forum?id=PVooP3d7cI&noteId=dQoy9XTgh6",
"license": "CC BY 4.0"
},
{
"id": "wvEubxZNF3",
"forum": "PVooP3d7cI",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11280/Reviewer_2ZwJ",
"reviewer_name": "Reviewer_2ZwJ",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper studies the reasoning efficiency of “thinking” large language models versus standard instruct models. It introduces a Relative Reasoning Efficiency metric that normalizes compute cost by an instruct baseline, revealing that reasoning models often overthink simple problems but add value for harder ones. Building on this finding, the authors propose a two-stage framework, COTHINK, where the instruct model first generates a short outline and the reasoning model then expands and verifies it. Experiments on three math reasoning benchmarks (GSM8K, MATH500, AIME24) show that COTHINK reduces token usage by about 21% on average with slightly improved accuracy. The paper also explores causes of overthinking and proposes a scaling interpretation for reasoning efficiency.",
"strengths": "1. The paper introduces an interesting metric that enables consistent comparison across models and tasks. This quantitative perspective fills a gap in evaluating reasoning models beyond simple accuracy or token count.\n2. The proposed COTHINK framework is simple and practical. It requires no difficulty prediction, is easy to reproduce, and achieves meaningful compute savings without sacrificing performance.\n3. The paper is clearly written and well-organized. Motivation, method, and analysis are coherently connected, and experimental settings are described with sufficient clarity.",
"weaknesses": "1. While the current experiments focus exclusively on mathematical reasoning, extending the evaluation to at least one non-math reasoning domain (e.g., code generation on HumanEval or MBPP, or knowledge reasoning on GPQA-Diamond or the non-math subset of MMLU-Pro) would strengthen the paper’s generality and demonstrate the broader applicability of the proposed framework.\n2. The robustness of the two-stage structure could be explored further. It would be helpful if the authors could include at least one ablation that examines (a) the effect of perturbed outlines (such as synonym rewrites, step reordering, small insertions/deletions, or minor errors), and (b) a reversed-order setup (reasoning draft → instruct refinement). These analyses would clarify how sensitive the method is to outline quality and the ordering of stages.\n3. Given that Table 3 suggests varying accuracy and efficiency patterns across benchmarks, adding a difficulty-level analysis could make the findings more informative. A brief stratified view would quantify the intuition that COTHINK tends to offer greater benefits on harder problems and help identify the most suitable use cases for this approach.\n4. Because the proposed strategy-completion paradigm is largely motivated by empirical observations and case studies, including a variant comparison, for example, letting the instruct model produce a complete answer while the reasoning model performs critique and revision (Critique-and-Revise), would further reinforce the empirical validity and credibility of the design.",
"questions": "See weakness.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T23:43:15",
"modification_date": "2025-11-12T12:40:30",
"review_url": "https://openreview.net/forum?id=PVooP3d7cI&noteId=wvEubxZNF3",
"license": "CC BY 4.0"
},
{
"id": "PVLY6ciy1x",
"forum": "PVooP3d7cI",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11280/Reviewer_cX2J",
"reviewer_name": "Reviewer_cX2J",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper focuses on reasoning efficiency in thinking models, providing a clean definition of thinking efficiency, analyzing thinking and non-thinking models, and proposing a two-stage prompting method to enhance efficiency.",
"strengths": "1. A clean definition and formalization of relative efficiency.\n2. CoThink works without architectural changes, requiring just two-stage prompt engineering; simple and effective.",
"weaknesses": "1. This topic is already widely and deeply studied, and this paper does not provide new insights or surprising results.\n\n2. The mechanistic explanations, including RL-induced verbosity and backward CoT patterns, are speculative, without rigorous evidence.\n\n3. Lines 192-194 claim RL reduces \"per-step information density\" but provide no direct evidence.\n\n4. The authors try to establish a scaling law, which is a good intention, but how are the parameters fit? The scaling parameters in the figure appear to be drawn simply for reference, which is not convincing.",
"questions": "A lot of models now have thinking/non-thinking mode switching; in the future, would we still need this two-stage prompting? Is it really necessary?\n\nFor other questions, see the Weaknesses section.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T14:16:57",
"modification_date": "2025-11-12T12:40:31",
"review_url": "https://openreview.net/forum?id=PVooP3d7cI&noteId=PVLY6ciy1x",
"license": "CC BY 4.0"
},
{
"id": "lWf9AtgPU0",
"forum": "PVooP3d7cI",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11280/Reviewer_R1kH",
"reviewer_name": "Reviewer_R1kH",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper studies efficient reasoning in large language models (LLMs). The authors point out that reasoning LLMs trained with RL and backward-checking CoT exhibit strong long-form reasoning ability but also tend to suffer from overthinking. Existing efficiency optimization approaches mainly focus on token-level efficiency metrics, which ignore problem difficulty and intermediate reasoning cost, thus failing to distinguish between overthinking and underthinking. \n\nTo address this, the authors define a relative reasoning efficiency metric by comparing the token efficiency of a reasoning model against that of an instruct model. They identify two key observations: (i) instruct models are overall more token-efficient; and (ii) reasoning models show advantages mainly on hard problems.\n\nBased on these observations, the authors propose a two-stage pipeline called COTHINK: an instruct model first generates an outline, and a reasoning model then expands it. Experiments on GSM8K, MATH500, and AIME24 show that COTHINK improves accuracy and reduces computation budget, demonstrating its effectiveness.",
"strengths": "1.\tThe paper is overall well motivated. Through visual analyses such as Figure 1 and Figure 2, the paper illustrates the overthinking phenomenon and its strong correlation with task difficulty, providing a solid foundation for the proposed efficiency metric.\n2.\tThe proposed method is reasonable. It proposes a two-stage pipeline, COTHINK, which uses an instruct model to draft a brief outline, and a thinking model to expand it.\n3.\tExperimental results demonstrate the effectiveness of the proposed method.",
"weaknesses": "1. The findings that motivate the paper are similar to those in existing studies. The two main observations in Section 2.1 (that instruct models are more efficient and reasoning models mainly help on hard problems) have already been reported in multiple prior works (e.g., AutoThink [1], Chen et al. [2], Sui et al. [3], Wang et al. [4]). These studies also show similar reasoning efficiency distributions across problem difficulty and input length.\n2. The novelty is somewhat limited. Similar pipeline-based approaches already exist in the same direction (e.g., LM-guided CoT [5]) that also combine small and large models for outline generation and reasoning expansion. The contribution of COTHINK is therefore incremental. Besides, the generated outline can also contain errors, which may lead to potential cascading errors.\n3. The differences between COTHINK and prior methods such as Sketch-of-Thought [6], FlashThink [7] are not clearly articulated. These works are mentioned in the related work section but without explicit comparison or further discussion to show advantages.\n\n\n**REFERENCES**\n\n[1] Tu S, Lin J, Zhang Q, et al. Learning When to Think: Shaping Adaptive Reasoning in R1-Style Models via Multi-Stage RL[J]. arXiv preprint arXiv:2505.10832, 2025. NeurIPS 2025\n\n[2] Chen X, Xu J, Liang T, et al. Do not think that much for 2+ 3=? on the overthinking of o1-like llms[J]. arXiv preprint arXiv:2412.21187, 2024.\n\n[3] Sui Y, Chuang Y N, Wang G, et al. Stop overthinking: A survey on efficient reasoning for large language models[J]. arXiv preprint arXiv:2503.16419, 2025. TMLR 2025\n\n\n[4] Wang Y, Liu Q, Xu J, et al. Thoughts are all over the place: On the underthinking of o1-like llms[J]. arXiv preprint arXiv:2501.18585, 2025.\n\n[5] Lee J, Yang F, Tran T, et al. Can small language models help large language models reason better?: LM-guided chain-of-thought[J]. arXiv preprint arXiv:2404.03414, 2024. COLING 2024\n\n[6] Aytes S A, Baek J, Hwang S J. 
Sketch-of-thought: Efficient llm reasoning with adaptive cognitive-inspired sketching[J]. arXiv preprint arXiv:2503.05179, 2025. \n\n[7] Jiang G, Quan G, Ding Z, et al. Flashthink: An early exit method for efficient reasoning[J]. arXiv preprint arXiv:2505.13949, 2025.",
"questions": "Please see strengths and weaknesses.\n\nBesides, it would be helpful to include more discussion or ablation studies against existing training-free reasoning-efficiency approaches to highlight the unique contribution of COTHINK. \n\nIn addition, it would also be beneficial to clarify how COTHINK differs conceptually and technically from concurrent two-stage reasoning frameworks, such as work in related directions: Thought Manipulation and Scot.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T19:01:06",
"modification_date": "2025-11-12T12:40:31",
"review_url": "https://openreview.net/forum?id=PVooP3d7cI¬eId=lWf9AtgPU0",
"license": "CC BY 4.0"
}
] |
mbu8EEnp3a | https://openreview.net/forum?id=mbu8EEnp3a | Do LLMs Signal When They’re Right? Evidence from Neuron Agreement | 4.5 | 4 | [
6,
6,
4,
2
] | [
4,
4,
4,
4
] | 4 | [
"Neuron-Agreement Decoding (NAD); Neuron activation patterns; Unsupervised answer selection; Chain-of-thought ensembling; Token efficiency"
] | Large language models (LLMs) commonly boost reasoning via sample-evaluate-ensemble decoders (e.g., majority voting), achieving label free gains without ground truth. However, prevailing strategies score candidates using only external outputs such as token probabilities, entropies, or self evaluations, and these signals can be poorly calibrated after post training. We instead analyze internal behavior based on neuron activations and uncover three findings: (1) external signals are low dimensional projections of richer internal dynamics; (2) correct responses activate substantially fewer unique neurons than incorrect ones throughout generation; and (3) activations from correct responses exhibit stronger cross sample agreement, whereas incorrect ones diverge. Motivated by these observations, we propose Neuron Agreement Decoding (NAD), an unsupervised best of N method that selects candidates using activation sparsity and cross sample neuron agreement, operating solely on internal signals and without requiring comparable textual outputs. NAD enables early correctness prediction within the first 32 generated tokens and supports aggressive early stopping. Across math and science benchmarks with verifiable answers, NAD matches majority voting; on open ended coding benchmarks where majority voting is inapplicable, NAD consistently outperforms Avg@64. By pruning unpromising trajectories early, NAD reduces token usage by 99\% with minimal loss in generation quality, showing that internal signals provide reliable, scalable, and efficient guidance for label free ensemble decoding. | interpretability and explainable AI | https://openreview.net/pdf?id=mbu8EEnp3a | 2025-09-20T18:18:27 | 4 | [
{
"id": "q2QF9xWDWT",
"forum": "mbu8EEnp3a",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission25050/Reviewer_gDoM",
"reviewer_name": "Reviewer_gDoM",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper studies internal activation patterns in LLMs and reports two empirical regularities: (i) correct generations activate fewer unique neurons than incorrect ones; and (ii) correct generations show higher cross-sample neuron-set agreement. Building on these observations, the authors introduce Neuron Agreement Decoding (NAD).",
"strengths": "1. Novel internal-signal criterion: Selecting candidates via Jaccard agreement of activated-neuron sets rather than output-space agreement is intellectually novel\n\n2. Computational savings: Early pruning at 32 tokens yields two orders of magnitude fewer tokens with modest accuracy impact\n\n3. Method simplicity: NAD relies on inexpensive set operations over FFN activations; the MinAct variant is parameter-light, aiding adoption.",
"weaknesses": "1. External validity to large/closed models: All results are on small/medium, open models. It is unclear whether the “fewer-neurons-when-correct” regularity and NAD’s gains hold for frontier models (70B–>100B)\n\n2. Definition and sensitivity of “activated neuron set”: The operational definition depends on thresholds/top-k within layers and across chunks. Although ablations exist, a more systematic sensitivity analysis (varying k, chunk size B, layer subsets, and gating functions) would strengthen your claims. \n\n3. Sampling hyperparameters: Results are reported for T=0.6, top-p=0.9; robustness to temperature/top-p and to different N would help verify the effectiveness.",
"questions": "1. How does NAD’s advantage change with larger models and larger N?\n2. Could you provide a more solid theoretical analysis for your arguments?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T15:13:11",
"modification_date": "2025-11-12T18:28:21",
"review_url": "https://openreview.net/forum?id=mbu8EEnp3a&noteId=q2QF9xWDWT",
"license": "CC BY 4.0"
},
{
"id": "1zn4ZXUZN7",
"forum": "mbu8EEnp3a",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission25050/Reviewer_jKnt",
"reviewer_name": "Reviewer_jKnt",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "- **Core idea:** Use internal neuron activations (not output logits/entropies) to score and select reasoning traces. Two signals: (i) activation sparsity (correct traces activate fewer unique neurons) and (ii) cross-sample neuron agreement (correct traces share more similar activation sets). \n- **Method:** Neuron Agreement Decoding (NAD): build a Jaccard-similarity matrix over activated-neuron sets across sampled traces and select via kNN/medoid/DBSCAN; a MinAct variant selects the fewest-unique-neuron trajectory. Early stopping uses the same signals after the first 32 tokens (B=32) to prune low-quality paths. \n- **Findings:** On AIME24/25 & GPQA, NAD matches majority vote while enabling aggressive early stop; on code (HumanEval, MBPP, LiveCodeBench), where voting is hard, NAD beats Avg@64. Reported token reductions up to ~99% with small accuracy loss.",
"strengths": "- **Insightful internal analysis:** shows entropy/self-certainty are low-dimensional projections of richer activation dynamics; correct traces are sparser and more aligned across samples. \n- **Simple selection rules:** kNN/medoid/DBSCAN over Jaccard of activated-neuron sets; unsupervised and label-free at test time. \n- **Early-stop lever:** practical chunked early-stop at 32 tokens with large token savings in parallel sampling.",
"weaknesses": "- **Positioning vs token-confidence baselines:** Conceptually close to self-consistency / DeepConf (token-level confidence/entropy) but at the neuron level; however, there is no apples-to-apples comparison against DeepConf under the same sampling regime (accuracy + compute). \n- **“Early correctness within 32 tokens” needs clarification:** Paper sets early stop at B=32 and infers quality from internal signals—not ground-truth correctness mid-generation. Clarify how “NAD enables early correctness prediction within the first 32 generated tokens” is quantified and whether OOD checks were made to avoid overfitting to seen patterns. \n- **Scope & generality:** Signals are shown strongly on AIME-style math; for open-ended tasks (code), MinAct can underperform, and neuron-agreement advantages shrink—casting doubt on broad generality (e.g., free-form scientific discovery). \n- **Cost reporting is incomplete:** Paper emphasizes token savings, but wall-clock, activation extraction overhead, pairwise Jaccard construction, and memory/storage (noted in Limitations) aren’t benchmarked vs strong external baselines. \n- **Baselines:** Mainly Avg@64 and Cons@64; missing self-evaluate before ensemble and confidence-based (e.g., DeepConf) under matched budgets.",
"questions": "1. **Meaning of “early correctness”:** When you say “enables early correctness prediction within the first 32 tokens”, do you mean ranking traces by internal signals at 32 tokens and later verifying with ground truth, or a calibrated correctness probability? How is this measured, and did you test OOD prompts to check robustness? \n2. **DeepConf comparison:** Please provide matched-budget comparisons to DeepConf (token-level entropy pruning): final accuracy, token count, wall-clock, and memory. This will isolate the incremental value of neuron-level signals over token-level confidence. \n3. **Generalisation beyond math:** Your Figure 6 suggests weaker or minimal gains (even reversals) for code. Can you evaluate on open-form science benchmarks to test whether neuron sparsity/agreement remains predictive when answers are not short-form/numeric? \n4. **Computation & storage:** Please report the per-token/trace overhead of computing activation sets, building the n×n Jaccard matrix, and memory footprint (with/without bitset compression), compared to token-confidence baselines. \n5. **Ablations:** How sensitive are results to the activation thresholding (top-k per token), chunk size B=32, and the choice among kNN/medoid/DBSCAN? Could later chunks introduce noise (as hinted by Figure 8), and how does this vary by task?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T06:13:14",
"modification_date": "2025-11-12T18:28:21",
"review_url": "https://openreview.net/forum?id=mbu8EEnp3a&noteId=1zn4ZXUZN7",
"license": "CC BY 4.0"
},
{
"id": "D99DklozeF",
"forum": "mbu8EEnp3a",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission25050/Reviewer_nzaY",
"reviewer_name": "Reviewer_nzaY",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The paper investigates whether an LLM’s neuron activations can be used to determine if its generated response is correct. The authors contrast this with prevailing methods that rely on \"external\" signals like token probabilities, output entropy, or model self-evaluations, which the paper argues can be poorly calibrated.\n\nThe paper shows that these external signals are effectively low-dimensional projections of richer, high-dimensional internal dynamics. The authors' analysis uncovers two key findings. Sparsity: Correct responses activate \"substantially fewer unique neurons\" than incorrect responses during generation. Agreement: The activation patterns from correct responses exhibit \"stronger cross-sample agreement,\" while incorrect responses tend to diverge.\n\nMotivated by these observations, the paper proposes a novel unsupervised method called Neuron Agreement Decoding (NAD). NAD selects the best response from a batch of $N$ samples by identifying the candidate with the highest activation agreement with its peers or, alternatively, the one with the fewest activated neurons (activation sparsity).",
"strengths": "The paper's primary strength is its novel investigation that successfully links an LLM's internal neuron activation patterns to the external correctness of its reasoning. Specifically, looking at the number of activated neurons and how they overlap between different inputs can provide a signal for whether the answer is correct. This is pretty cool and I haven’t seen such an exploration before.\n\nOne strength of NAD is its ability to operate without requiring comparable textual outputs, unlike majority voting. This makes it applicable to open-ended tasks like code generation, where majority voting is often inapplicable. This is an important direction for research these days.\n\nNAD matches or outperforms the performance of majority voting on math and science benchmarks and open-ended coding benchmarks. This is pretty strong evidence that NAD can work well, and without using too much inference time (or even reducing compared to Majority voting over many samples).",
"weaknesses": "* One area I am quite skeptical about is whether this method works when the base model has relatively high or relatively lower accuracy on the task in the first place. The experiments right now show that NAD works in the “middle ground” regime, with about 50-70% accuracy. However, for high accuracy (>90%) then NAD seems to degrade performance. Similarly if the original performance is low, I can imagine that the neuron activations could be more “random” so that NAD method doesn’t work. \n\n* Also the value of Avg@64 on these tasks is surprisingly high (since you are averaging over 64 outputs), which means the model is inherently very confident on these tasks. It could very well be the case that NAD only improves performance if Avg@64 is similar to Pass@1 or something. Basically the model is not very creative and only tries the same types of solutions. \n\n* There are no baselines in this paper. The paper only compares its own variants and the base models. I generally am skeptical about a paper without any other methods in the experiments. I understand that NAD is kind of a unique method, but there are a lot of test-time methods for improving reasoning these days. For example, TTRL (https://arxiv.org/abs/2504.16084) or the authors already discuss DeepConf (https://arxiv.org/abs/2508.15260). I am less interested in the 3 different clustering methods (which are basically an ablation for NAD).",
"questions": "* For the evaluations, although the datasets are varied, the performance is somewhat clustered. What happens when the base model has high accuracy (e.g., GSM8k)? Or low accuracy, on some harder benchmarks (e.g., Humanities Last Exam or some of the newer benchmarks with search like SealQA https://arxiv.org/abs/2506.01062). Do we see any benefit or does it also degrade? \n\n* What happens if you perform the analyses with different numbers of samples? The analysis right now is very focused on 64. However, it is not clear if this is kind of a local maximum for NAD performance or whether 32 and 128 also exhibit good performance.\n\n* Similarly, what if Avg@64 is low because the model can output a lot of wrong answers if you keep sampling. This seems like the dominant regime if we are looking forward to AGI and harder tasks. Can you say something about if NAD will work then?\n\n* How does NAD compare to other method’s performance on the same models/benchmarks? The paper cites a few majority voting variants, or compare against TTRL or DeepConf or any of the methods the paper mentions about “external” rewards. These are still very valid approaches for the task at hand.\n\n* This is more minor, but there is a lot of work on decoding methods for improving model outputs. I would say it is worth citing these, and perhaps comparing against them. For example, Factuality Decoding methods also use internal signals (internal layers) to improve the output, e.g., DoLA (https://arxiv.org/abs/2309.03883) and SLED (https://arxiv.org/abs/2411.02433). I would take a look at TTRL (https://arxiv.org/abs/2504.16084) and the forward citations as well. I think currently there are only **4 papers cited in the related work** section, which is quite limited.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T01:36:42",
"modification_date": "2025-11-12T18:28:21",
"review_url": "https://openreview.net/forum?id=mbu8EEnp3a¬eId=D99DklozeF",
"license": "CC BY 4.0"
},
{
"id": "kDwXNnnqSH",
"forum": "mbu8EEnp3a",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission25050/Reviewer_NEts",
"reviewer_name": "Reviewer_NEts",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes Neuron Agreement Decoding (NAD), an unsupervised best-of-N method that selects candidates using activation sparsity and cross sample neuron agreement. NAD is motivated by three observations:\n\n- External signals are low dimensional projections of richer internal dynamics;\n- Correct responses activate substantially fewer unique neurons than incorrect ones throughout generation; and\n- Activations from correct responses exhibit stronger cross sample agreement, whereas\nincorrect ones diverge\n\nThe authors claim that:\n\n- NAD enables early correctness prediction within the first 32 generated tokens and supports aggressive early stopping\n- NAD matches the accuracy of majority voting in math and science benchmarks and outperforms Average@64 in open-ended coding benchmarks.\n- NAD reduces token usage by 99% with minimal loss in generation quality",
"strengths": "- The paper proposes a promising method which uses mechanistic interpretability for selecting best reasoning traces in Best-of-N sampling\n- The early stopping analysis may be of interest to the efficient inference community.",
"weaknesses": "## Major Weaknesses\n\n- Preliminary claims are poorly justified\n - Section 3.2: The authors claim that neuron activation patterns capture structure beyond what entropy represents, citing that samples within clusters have varying entropy values. However, this conclusion is poorly justified.\n - First, they have shown that the *number* of activated neurons correlates with entropy, suggesting the clustering is partially driven by a scalar feature that entropy already captures.\n - Second, any high-dimensional representation will trivially contain structure that a single scalar cannot fully represent, which is not evidence of meaningful structure. The variation in entropy within clusters could simply reflect noise, measurement artifacts, or the fact that t-SNE on Jaccard distances emphasizes pattern overlap rather than distributional properties. At the moment, the more likely conclusion from Figure 3 is that one metric does not perfectly predict another.\n - The preliminary experiments are done with only one model (Qwen3-4B) which may not be generalizable.\n- Lack of motivation on the experimental setup\n - It is unclear why the models are selected (Qwen3-4B-thinking-0527, Qwen3-4B-Instruct-0527 and DeepSeek-R1-0528-Qwen3-8B). Is it because of the different reasoning training regime? 
or are there specific reasons?\n - Lack of baselines.\n - This is very critical especially because the authors claim that the method captures structure beyond what the “external behaviors” can, thus it is natural to expect that NAD would outperform prior works which are based on these external behaviors:\n - Majority-based selection: Universal Self Consistency [1]\n - Confidence-based selection: Self-Certainty[2], DeepConf [3], PiCSAR [4]\n - Length-based selection: short-1@k [5]\n- Lack of statistical rigor\n - As the paper is dealing with sampling, the authors should try to run the experiments with multiple random seeds to account for stochasticity.\n\n## Additional Suggestions\n\n- L56: Please cite the GPT-4 reports\n- Figure 3: Update the colorbar label (”Average Entropy”)\n- Section 3.2: I believe the Jaccard index is calculated pairwise among all responses across all questions. Please add that explanation in the paragraph\n- Figure 6, 7, and 8 are ordered awkwardly. Figures 7 and 8 are mentioned in the text earlier than Figure 6.\n- The model name Qwen3-4B-thinking-0527 is perhaps a typo? it should have been Qwen3-4B-thinking-**2507**\n\n## References\n\n- [1] Universal Self-Consistency for Large Language Model Generation\n- [2] Scalable Best-of-N Selection for Large Language Models via Self-Certainty\n- [3] Deep think with confidence\n- [4] PiCSAR: Probabilistic Confidence Selection And Ranking for Reasoning Chains\n- [5] Don't Overthink it. Preferring Shorter Thinking Chains for Improved LLM Reasoning",
"questions": "- Figure 2:\n - Have you tried separating the correct vs incorrect instances in Figure 2? The trend may differ between the two categories.\n - Have you tried plotting Figure 2 in log-log scale? I suspect that there is a power-law relation there, which may be interesting.\n- Why are the AIME24 and AIME25 reported as one task?\n- Why is GPQA under Math Reasoning? Which subset of GPQA did you use?\n- Table 2: It is rather awkward to report the total token consumption. Any particular reason why you choose to report that? I believe we are more interested in the average number of tokens saved per question (with confidence interval).\n- In Section 5.3 analysis of the top-k method, what is the metric used to decide the separation? You should consider using statistical test to quantify it.\n- Figure 8:\n - Is this averaged across questions? If yes, please provide the confidence interval bars.\n - Am I understanding it incorrectly? Because the B=16k seems to achieve the highest accuracy, which contradicts the conclusion mentioned in the text.\n - What should I interpret from the token consumption line in the plot?\n- Is there a way to automatically decide the early stopping position? If not, it seems like a difficult hyperparameter to tune.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T06:42:20",
"modification_date": "2025-11-12T18:28:21",
"review_url": "https://openreview.net/forum?id=mbu8EEnp3a¬eId=kDwXNnnqSH",
"license": "CC BY 4.0"
}
] | |
M7eWB695jp | https://openreview.net/forum?id=M7eWB695jp | Purifying Generative LLMs from Backdoors without Prior Knowledge or Clean Reference | 4.5 | 4 | [
2,
2,
6,
8
] | [
4,
3,
5,
4
] | 4 | [
"LLM; Backdoor attack; Backdoor Elimination."
] | Backdoor attacks pose severe security threats to large language models (LLMs), where a model behaves normally under benign inputs but produces malicious outputs when a hidden trigger appears. Existing backdoor removal methods typically assume prior knowledge of triggers, access to a clean reference model, or rely on aggressive finetuning configurations, and are often limited to classification tasks. However, such assumptions fall apart in real-world generative LLM settings. In this work, we propose a new framework for purifying **generative LLM** without any prior trigger knowledge or clean references. Through systematic sanity checks, we find that backdoor associations are redundantly encoded across MLP layers, while attention modules primarily amplify trigger signals without establishing the behavior. Leveraging this insight, we shift the focus from isolating specific backdoor triggers to cutting off the trigger–behavior associations, and design an immunization-inspired elimination approach: by constructing multiple synthetic backdoored variants of the given suspicious model, each trained with different malicious trigger–behavior pairs, and contrasting them with their clean counterparts. The recurring modifications across variants reveal a shared **"backdoor signature"**—analogous to antigens in a virus. Guided by this signature, we neutralize highly suspicious components in LLM and apply lightweight finetuning to restore its fluency, producing purified models that withstand diverse backdoor attacks and threat models while preserving generative capability. | generative models | https://openreview.net/pdf?id=M7eWB695jp | 2025-09-01T23:24:25 | 4 | [
{
"id": "vAWWZyAmIC",
"forum": "M7eWB695jp",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission451/Reviewer_rkEg",
"reviewer_name": "Reviewer_rkEg",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 1,
"presentation": 3,
"summary": "The submission proposes an “immunization-inspired” purification method to remove backdoors from LLMs without knowing the true trigger or having a clean reference model by creating multiple synthetic poisoned and clean fine-tuned variants of the same base model, each with different key–behavior pairs.\n\nThe authors compute parameter update differences between (LoRa and SFT) poisoned and clean variants and identifies shared, consistently aligned channels as the backdoor signature in their experiments and introduce a two-stage purification pipeline: Suppress/reinitialize high-scoring channels in MLPs or LoRA adapters and lightly fine-tune on clean data to recover fluency.\n\nThe authors claim to provide novel insights on MLPs encoding backdoor association while attention modules are not the key driver of the mechanism, that the activation is distributed across the model and different parts of the model can learn the backdoor, even when shuffled. They demonstrate the effectiveness if their method across multiple LLMs (LLaMA-2, Mistral) and attack types (e.g., BadNets, CTBA, Sleeper), outperforming their chosen baselines like pruning and fine-pruning and report results using attack success rate (ASR) and general benchmark utility. They also observes that backdoor activation is redundant and order-invariant across many MLP layers.",
"strengths": "The submission \n* proposes a novel, reference-free purification framework that extracts a shared backdoor signature across synthetic poisoned variants via magnitude + alignment scoring.\n* introduces a two-stage purification pipeline (channel suppression + light clean fine-tuning) effective for both full-model and LoRA-only access.\n* demonstrates good empirical results across multiple large models (LLaMA-2, Mistral) and diverse attack types, outperforming established baselines such as pruning and fine-pruning.\n* provides clear experimental methodology and presentation, including ablation studies on model components and purification stages.",
"weaknesses": "## Weakness 1 [Significance/Originality] \n\nThe paper's related work omits several critical contributions that shape today’s LLM backdoor landscape, including attacks via instruction tuning, attacks in other training steps, attacks in PEFT/LoRA settings, and, crucially, recent mechanistic analyses of how and where backdoors are encoded that come to similar findings as this submission, raising significant concerns regarding novelty and quality of the contributions made to the field. \n\nFor example, the authors did not cite influential papers in the field of backdoored LMs like\n\nUniversal Jailbreak Backdoors from Poisoned Human Feedback. Rando et al.\nTrojaning Plugins of Large Language Models. Dong et al.\nAttention-Enhancing Backdoor Attacks Against BERT-based Models. Lyu et al.\nPoisoning Language Models During Instruction Tuning. Wan et al.\nPPT: Backdoor Attacks on Pre-trained Models via Poisoned Prompt Tuning. Du et al.\nAnti-Backdoor Learning (ABL): Training Clean Models on Poisoned Data. Li et al.\nBlind Backdoors in Deep Learning Models. Bagdasaryan et al.\nSpinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures. Bagdasaryan et al.\n\nlimiting the rigor and completeness of the paper's threat model and contextual grounding. \n\nFurther, the paper \n\nAnalyzing And Editing Inner Mechanisms Of Backdoored Language Models. Lamparth et al. (Arxiv 2023, published 2024)\n\n(which is also not cited) makes several key contributions that significantly overlap with the claimed novel insights made by this submission. 
In particular, \nit\n* identifies that early-layer MLPs and embedding projections encode backdoor behavior; attention modules are not triggers.\n* introduces a method to localize, remove, or reinsert backdoor mechanisms (in clean and backdoored LLMs).\n* shows backdoor activation distributed across early layers and scalable by parameters edits.\n* studies the effect of keeping MLPs and attention modules fixed during fine-tuning to reduce backdoor eprformance without harming utility.\nin a trigger-agnostic way (manipulation of backdoors without needing to know the trigger, only a large dataset containing it). Meaning that both show MLPs encode the malicious association while attention mainly amplifies or maintains coherence, confirm activation is distributed across layers (strongest in early MLPs), attacks are trigger-agnostic in approach (although with different methods; dataset activations vs synthetic variant contrasts), and enable backdoor removal without external clean reference models.\n\nBesides using newer models and attack methods that since came out compared to the old paper, this seemingly reduces the novel contributions of the submission to their method to collect the backdoor signature extraction and for the purification pipeline, studying PEFT settings, and applications like Coding. \n\nA more rigorous literature review and better positioning of the submitted paper could strengthen it terms of significance/originality.\n\n## Weakness 2 [Quality]\n\nThe submission sutdies attack success rate and a general utility score, but omits the metric of accidental trigger rate (ATR), which can lead to underestimating false positives or collateral damage from purification not captured in the utility score. 
It is also unclear how potential over-purification could be a problem, asfull-channel reinitialization may remove benign semantics or have other unmeasured side effects.\n\nAdditional experiments and clarifications could strengthen the paper in terms of quality.\n\n## Weakness 3 [Quality]\n\nIt is unclear how real attacker backdoors may behave differently to the author self-generated poisoned variants. Also, there seems to be a dependence on the behavior knowledge of the backdoor, as “trigger-agnostic” still assumes ability to define synthetic behaviors and fine-tune models.\n\nAdditional experiments or clarifications could strengthen the paper in terms of quality.",
"questions": "* Is there a reason ATR was not studied and can over-purification be a problem?\n\n* How realistic are the generated attacks compared to real-world attacks?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T08:33:04",
"modification_date": "2025-11-12T10:45:22",
"review_url": "https://openreview.net/forum?id=M7eWB695jp¬eId=vAWWZyAmIC",
"license": "CC BY 4.0"
},
{
"id": "5i4ZTJL6wo",
"forum": "M7eWB695jp",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission451/Reviewer_Khsx",
"reviewer_name": "Reviewer_Khsx",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 1,
"presentation": 2,
"summary": "The authors propose a method to remove backdoors from LLMs in settings where the triggers aren't known and a clean reference model is not available by identifying trigger–behavior associations in MLP layers. It introduces an immunization-inspired approach that extracts shared backdoor signatures across poisoned variants and suppresses them to neutralize backdoors.",
"strengths": "- The paper clearly motivates the problem of backdoor removal in LLMs without access to trigger information or a clean reference model, which is a realistic and practically relevant scenario.\n- The proposed immunization-inspired signature extraction framework is conceptually clear and intuitive \n- The method’s ability to operate effectively under both full-model access and adapter-only settings increases practical relevance, because many deployed LLMs expose only adapter-level modification capabilities",
"weaknesses": "- Multiple claims throughout the paper (e.g., backdoors are “easy to inject” and “extremely difficult to detect”, Sec. 1) are not sufficiently supported by citations or empirical justification, and would benefit from references.\n- Some terminology remains underspecified, particularly the contrast implied by the term “generative LLM” (Sec. 1): it is unclear what the authors consider a “non-generative LLM” in this context. Additionally, the phrase “safe conditions” (Sec. 1, line 50) lacks a precise definition or operational criteria.\n- The comparison to prior work on backdoor localization is incomplete. For example, the paper identifies MLP layers as the central locus of trigger–behavior associations, but does not discuss how this finding relates to prior mechanistic localization analyses (e.g., https://arxiv.org/abs/2302.12461 and others). It's unclear to me how their contributions are novel compared to prior literature. I'm willing to update my score once this gets clarified.\n- The procedure for constructing poisoned vs. clean variants used in immunization-style signature extraction is not described in enough detail to reproduce: the paper does not specify sampling strategy for D_clean, how triggers and behaviors are selected or diversified, or whether dataset overlap across variants influences extracted signatures.\n- The experimental evaluation omits coding-specific utility measurements in the code injection setting: while Code-LLaMA models are included, the paper does not report any post-purification coding performance metrics.\n- The reported reductions in ASR are not consistently below 5% as claimed in the text, particularly for targeted refusal attacks (per Table 2).\n- The structure of the paper is confusing (e.g., why are key findings listed as part of the methodology section?).",
"questions": "- How would one suspect a model is backdoored in the first place under your assumed setting?\n- How are poisoned and clean variant datasets constructed, and how is variant diversity ensured across trigger and behavior choices?\n- Do backdoor signatures transfer across models or architectures, or must extraction be repeated per model?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T06:36:45",
"modification_date": "2025-11-12T10:45:23",
"review_url": "https://openreview.net/forum?id=M7eWB695jp¬eId=5i4ZTJL6wo",
"license": "CC BY 4.0"
},
{
"id": "TeEQbrYcyH",
"forum": "M7eWB695jp",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission451/Reviewer_1Jtm",
"reviewer_name": "Reviewer_1Jtm",
"rating": 6,
"confidence": 5,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper investigates the problem of removing backdoors from generative large language models (LLMs) without relying on prior trigger knowledge or clean reference models. The authors conduct a detailed analysis revealing that backdoor associations are redundantly encoded in MLP layers, while attention modules primarily amplify trigger signals. Based on these insights, they propose an immunization-inspired framework that extracts backdoor signatures, followed by targeted neuron suppression and lightweight fine-tuning. The proposed method aims to eliminate diverse backdoor behaviors while preserving generative utility across different models, tasks, and attack types.",
"strengths": "- The paper is well-written and easy to follow.\n- The topic of backdoor defense for generative LLMs is both important and timely, given the growing deployment of large models in safety-critical applications.\n- The authors conduct comprehensive experiments across multiple attacks and defense settings, including the BackdoorLLM benchmark, which provides strong empirical evidence for the method’s effectiveness.",
"weaknesses": "1.\tClarification on “without clean reference model” claim:\nAlthough the paper claims to remove backdoors without clean reference models, Section 3.3 shows that the computation of the differential delta (Δ) between backdoored and clean parameters is used to derive the backdoor signature. This implicitly relies on clean references, contradicting the stated assumption. Please clarify this inconsistency or reformulate the claim.\n2.\tReliability of conclusions in Table 1:\nThe observation that backdoors mainly reside in MLP layers may not be fully reliable. Different LoRA fine-tuning configurations can alter where triggers are embedded. For instance, backdoors can also be injected effectively by fine-tuning only attention layers. It would be more convincing if the authors fixed the fine-tuned layers and then re-examined the trigger localization patterns.\n3.\tLayer-wise backdoor analysis granularity:\nThe current analysis of backdoor behavior lacks fine-grained evaluation. The authors are encouraged to conduct layer-wise pruning to observe trigger activation rates. This would yield stronger interpretability and empirical insights.\n4.\tDirect mitigation from localization:\nIf backdoor behaviors can indeed be precisely localized, could pruning or fine-tuning those specific layers directly mitigate the attack? This connection should be discussed, as it might offer a simpler and complementary defense approach.\n5.\tGeneralization of the backdoor signature:\nThe proposed backdoor signature is derived from a set of pre-trained backdoored models. How well does this generalize to unseen attacks or datasets?",
"questions": "Overall, this paper presents an interesting and valuable contribution to understanding and mitigating backdoors in generative LLMs. The empirical findings regarding layer-wise backdoor distributions are insightful, and the immunization-inspired framework is novel. However, the paper would benefit from more rigorous layer-wise empirical studies, clarified claims regarding reference-free assumptions to solidify its conclusions.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-24T17:51:37",
"modification_date": "2025-11-12T10:45:23",
"review_url": "https://openreview.net/forum?id=M7eWB695jp¬eId=TeEQbrYcyH",
"license": "CC BY 4.0"
},
{
"id": "uX1ApZjVem",
"forum": "M7eWB695jp",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission451/Reviewer_dxCf",
"reviewer_name": "Reviewer_dxCf",
"rating": 8,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "In this work, the authors propose a method for eliminating backdoors in large language models. The idea is to conduct multiple backdoor attacks on the same model, and identify those MLP parameters that are often updated as targets for finetuning and backdoor mitigation. The proposed approach has been evaluated using 3 tasks, 5 attacking methods, and compared with a number of baselines.",
"strengths": "First, the empirical study discussed in Section 3.2 is fairly interesting, although some of the results are known through studies on model editing, still it is great to see that they are confirmed in the backdoor attacking as well (a special form of finetuning I suppose).\n\nSecond, the proposed method for identifying guilty parameters is a reasonable one. Although one can imagine certain adaptive attacks which avoid using commonly attacked parameters, it is good to see such an approach for five different kinds of backdoor attacks. \n\nLastly, the paper is fairly well-written, i.e., easy to follow, with well-designed evaluation session and discussion on the experimental results.",
"weaknesses": "On the other hand, the draft can be perhaps improved from the following aspects.\n\nFirst, the method can be further improved through counter-factual analysis, that is, you can improve the magnitude-and-consistency score by filtering those that are not causally related to the backdoor (e.g., if disabling the update on some parameters does not disable the backdoor, those parameters are deemed not causally related). \n\nSecond, the experimental evaluation can be improved by considering adaptive attacks (which, for instance, aim to update different parameters, e.g., by LoRA finetuning focusing on different parameters or layers each time).\n\nThe following are a list of detailed comments.\n\nAblation study on using different attacking methods should be done to show the robustness of the backdoor signature.\n\nPage 1: “... which can be deliberately obfuscated by adaptive attackers during injection.”\n\nComment: Can you provide some references to support your claim?\n\nPage 2: “... MLPs encode the malicious association: removing poisoned MLP updates reliably eliminates backdoor behavior, suggesting that trigger–response associations are established in MLP layers.”\n\nComment: Isn’t this what was found by those works on model editing, such as ROME through causal tracing?\n\nPage 2: “Intuitively, if very different trigger-behavior pairs all induce consistent parameter\nshifts, these shared neurons or channels must encode the abstract association machinery rather than any specific trigger.” \n\nComment: This may not be true if a different backdoor attack method is adopted. It would be helpful to comment on that here.\n\nPage 5: “We then define a define a magnitude-and-consistency score, sj , for each channel as …”\n\nComment: Typo. \n\nPage 6: “we intervene on the neurons in the gate_proj and up_proj matrices,\ntogether with the input channels in down_proj.”\n\nComment: What are gate_proj and up_proj and down_proj?",
"questions": "How do you defend against an adaptive backdoor attack that randomly chooses some layers or parameter for backdoor injection?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-16T20:38:33",
"modification_date": "2025-11-12T10:45:23",
"review_url": "https://openreview.net/forum?id=M7eWB695jp¬eId=uX1ApZjVem",
"license": "CC BY 4.0"
}
] | |
B4mu5A3wVN | https://openreview.net/forum?id=B4mu5A3wVN | e-HC: Adaptive Sequential Higher Criticism Test for Sparse Mixtures | 4 | 3 | [
8,
2,
2,
4
] | [
2,
3,
4,
3
] | 4 | [
"higher criticism",
"sequential test",
"supermartingale",
"sparse mixture",
"Ville's inequality"
] | We propose e-HC, an adaptive sequential test for detecting sparse and weak signals in a stream of p-values. Unlike existing approaches that rely on asymptotic approximations or require knowledge of alternative parameters, e-HC constructs exact test-martingales using moment-generating function compensators, ensuring anytime-valid Type I error control through Ville's inequality. The method adapts to unknown sparsity and signal strength by maintaining exponential weights across multiple detection thresholds, effectively learning the optimal threshold online. We establish non-asymptotic power guarantees for sparse Gaussian mixtures alternative and derive the expected stopping time scaling for weak signal regimes. The same martingale machinery naturally yields anytime-valid confidence sequences for the proportion of significant p-values. Simulations demonstrate that e-HC maintains robust performance under model misspecification, substantially outperforming sequential likelihood ratio tests when the true alternative differs from assumptions. | other topics in machine learning (i.e., none of the above) | https://openreview.net/pdf?id=B4mu5A3wVN | 2025-09-18T23:11:05 | 4 | [
{
"id": "sFHLMYTsxS",
"forum": "B4mu5A3wVN",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12503/Reviewer_PYho",
"reviewer_name": "Reviewer_PYho",
"rating": 8,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes an adaptive sequential testing framework for detecting sparse and weak signals from a stream of p-values. It builds on the classical Higher Criticism test but ensures anytime-valid inference by constructing exact test martingales using moment generating function compensators to control Type I error via Ville’s inequality. \nMore specifically, the setting is as follows: a stream of independent p-values $p_1,p_2,\\dots $ arrives over time, and the goal is to decide whether all of them come from the null distribution $\\text{Uniform}(0,1)$ or if a small, unknown fraction comes from an alternative distribution with more mass near zero (indicating a weak signal. The task is to detect the presence of such sparse, weak signals as quickly as possible, while maintaining Type I error control at any time.\n\n\nThe proposed method, e-HC, constructs an adaptive sequential test by combining ideas from higher criticism, online learning, and martingale-based inference. For a grid of thresholds $u_1, \\dots, u_m$, it tracks the cumulative proportion of p-values below each threshold, forming standardized statistics similar to Higher Criticism. For each threshold, it builds an exact test martingale by compensating for randomness with its moment-generating function under the null, ensuring that the expected growth of the process is 1 when no signal is present. These per-threshold martingales are then aggregated using exponential weights (via the Hedge algorithm), allowing the method to adapt online to the most informative threshold without knowing the sparsity or strength of the signal in advance. The resulting “wealth process” $M_t$ increases multiplicatively over time; when it exceeds $1/\\alpha$, the null hypothesis is rejected. 
This guarantees anytime-valid Type I control, while the adaptive weighting provides signal detection across different regimes of sparsity and signal strength.\n\nThe experiments show that e-HC maintains exact Type I error control and achieves strong detection power even for weak or misspecified signals. Its martingale process grows rapidly under the alternative but stays stable under the null, confirming theoretical guarantees.",
"strengths": "The paper introduces a novel method that unifies higher criticism, martingale-based inference, and online learning into a single adaptive framework, achieving exact anytime-valid error control. Conceptually, the approach is elegant and applicable when both signal strength and sparsity are unknown, while also having rigorous theoretical guarantees.",
"weaknesses": "On the negative side, the results depend on strong assumptions, such as independence of p-values and correctly specified null distributions, which may not hold in practical applications.\nAlso, the analysis and experiments focus mainly on sparse Gaussian mixtures, so its behavior in other models is unclear.",
"questions": "-Can the method and theoretical guarantees extend to other models beyond Gaussian mixtures?\n-Can you comment on how crucial the independence assumption is on the results? Can the algorithm tolerate some limited dependence?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-10T16:53:58",
"modification_date": "2025-11-12T12:56:13",
"review_url": "https://openreview.net/forum?id=B4mu5A3wVN&noteId=sFHLMYTsxS",
"license": "CC BY 4.0"
},
{
"id": "D6dmNnTWe2",
"forum": "B4mu5A3wVN",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12503/Reviewer_8Qf1",
"reviewer_name": "Reviewer_8Qf1",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "This paper introduces e-HC, an adaptive, sequential test designed to detect sparse and weak signals within a continuous stream of p-values. The test aims to distinguish the global null hypothesis $H_0$ (where all p-values are uniformly distributed, $p_t \\sim \\text{Uniform}(0, 1)$) from a sparse mixture alternative $H_1$ (where a small, unknown fraction of p-values $\\epsilon_t$ are drawn from a signal distribution $F_1$ that has more mass near zero).\n\nThe HC method that this paper uses is tailored for the \"rare-and-weak\" signal regime. The core contribution is the construction of an exact, non-asymptotic test-martingale, $M_t$, by merging the adaptive thresholding concept of Higher Criticism with modern e-value-family martingale inference.",
"strengths": "- It introduces the idea of HC into the modern, rigorous framework of test-martingales and e-processes. The use of exact moment-generating function (MGF) compensators to build an exact (non-asymptotic) sequential test seems to be a novel technical contribution.\n\n- A critical point in HC is the aggregation method over the per-threshold statistics representing the sparsity of signals. Instead of the max statistic, the authors propose to combine the multiple-threshold statistics using the hedge algorithm. It makes the aggregation data-adaptive.\n\n- Theoretically, the authors provide an exact martingale property under the null (Theorem 1), which is a much stronger guarantee than typical asymptotic results. This is followed by a unified, non-asymptotic power bound under the alternative (Theorem 2) and a formal analysis of the stopping time in the target weak-signal regime (Theorem 3). The appendix details the proofs, showcasing a high level of technical contributions.",
"weaknesses": "- The organization of the methodology is poor, which makes it hard to follow while reading. \n 1. **Lack of Clear Motivation for the Core Martingale Construction.** A significant weakness in the paper's clarity lies in the core technical derivation in Section 4.1. The paper introduces the standardized statistic $Z_t(u_j)$. However, it then immediately reformulates this statistic's increment, $\Delta Z_t(u_j)$, into a sum of a \"predictable part\" $A_{t-1}(u_j)$ and a \"stochastic part\" $B_t(u_j)$. The final test martingale (the \"wealth process\") is then built using only the $B_t(u_j)$ term. If $Z_t(u_j)$ is not the object of interest, why not introduce $B$ directly? Why is its increment the necessary starting point, and why is the $A_{t-1}(u_j)$ component subsequently \"deleted\" from the construction?\nThe paper would be substantially clearer if it added some sentences to Section 4.1 to motivate this decomposition. It should explicitly state why this step is necessary. Explaining that $A_{t-1}$ is a predictable drift that must be removed to satisfy the martingale condition would improve the paper's accessibility.\n 2. **The regret $R_T$ is undefined.** The hedge algorithm that determines the weights of the HC combination appears to be a critical component of the methodology. However, it is only briefly mentioned at the end of Section 4. In Theorem 2 of Section 5, readers cannot even find the definition of the regret $R_T$, which plays a critical role in the lower bound.\n 3. The definition of the stopping time $\tau$ appears at the very end of Section 5. The methodology part only introduces the construction of the martingale, which leaves the methodology incomplete. It also makes the motivation for the construction of the martingale sequences really unclear.\n\n- **The numerical study is questionable**. Only the SLRT method is compared, and the results seem to be located in Table 1 only. 
However, Table 1 only reports the performance of the e-HC method, so why do the red-colored numbers represent the SLRT method? The information in Table 1 is totally misleading. The authors should also describe why the proposed e-HC method is superior in this numerical setting.",
"questions": "Besides the questions in the weakness part. There are several questions I raise upon the reading of the paper.\n\n- **The $\\lambda$ Parameter**: The core MGF compensator depends on a parameter $\\lambda_t$, which is defined as \"predictable\". However, the theoretical analysis (Theorem 2) and all experiments appear to use a fixed, constant $\\lambda$. This is a significant missed opportunity. The framework allows for a data-driven, adaptive $\\lambda_t$, but the paper provides no guidance on how to choose it, nor does it explore the performance gains of an optimized $\\lambda_t$ versus the fixed $\\lambda=0.2$ used in the experiments. The test's power is likely very sensitive to this choice, and a \"bad\" $\\lambda$ could cripple performance.\n\n- **Grid Size and Regret ($m$)**: The Hedge algorithm's regret, $R_T$, scales with $\\sqrt{\\log m \\log T}$. This is the \"price\" of adapting over $m$ thresholds. The paper does not discuss this trade-off. What is the practical effect of choosing a coarse grid ($m=20$) versus a very fine grid ($m=2000$)? A fine grid is more likely to contain an \"optimal\" threshold but will pay a higher regret cost, potentially slowing detection. The paper's choice of $m=200$ is arbitrary, and a sensitivity analysis is needed.\n\n- **Hedge Learning Rate ($\\gamma$)**: Similarly, the weight rate $\\gamma$ (the Hedge algorithm's learning rate) is set to 0.05 without justification. This parameter's tuning is critical to the algorithm's ability to \"catch up\" to the best threshold, and the paper should provide either a theoretical or empirical basis for its selection.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T10:42:55",
"modification_date": "2025-11-12T12:56:14",
"review_url": "https://openreview.net/forum?id=B4mu5A3wVN&noteId=D6dmNnTWe2",
"license": "CC BY 4.0"
},
{
"id": "Aw9dRuRGcm",
"forum": "B4mu5A3wVN",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12503/Reviewer_fVDD",
"reviewer_name": "Reviewer_fVDD",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 1,
"presentation": 1,
"summary": "This paper proposes an adaptive sequential test, e-HC, for detecting sparse and weak signals in a stream of independent p-values. The authors construct exact test-martingales using moment-generating function compensators, ensuring anytime-valid Type I error control through Ville's inequality. Additionally, the e-HC adapts to unknown sparsity and signal strength, maintaining robust performance even under model misspecification.",
"strengths": "The e-HC algorithm proposed in this paper constructs adaptive martingales based on independent p-value sequences, achieving anytime-valid Type-I error control with theoretical guarantees, even when the signal strength and sparsity are unknown. The authors further analyze its statistical power under the alternative hypothesis modeled by a Gaussian mixture. Empirical results demonstrate that the proposed e-HC method exhibits robustness under model misspecification.",
"weaknesses": "1. This paper lacks insight and has an outdated motivation. The paper’s core idea—replacing asymptotic Higher Criticism by an exact martingale version—is mostly an algebraic adaptation, not a new principle. The problem of sparse-signal detection via HC is a classic statistical problem from early 2000s (Donoho & Jin, 2004). Recasting it in an “online sequential” setup does not by itself constitute a compelling motivation in 2025, especially for ICLR. \n2. The main construction (test martingale via exact MGF compensator + exponential weights) follows similarly from known results in the e-process literature (Ramdas et al., 2021; Waudby-Smith & Ramdas, 2024), and the authors didn't mention this or refer to related works. The “adaptive threshold aggregation” is a straightforward application of Hedge, and the resulting theorems (nonasymptotic Type I control, unified lower bound) read more like a re-derivation of standard facts than a new conceptual advance. \n3. There is no attempt to connect the method to practical applications. All experiments are synthetic Gaussian mixtures with simulated p-values.",
"questions": "Please refer to the 'Weaknesses' section. Additionally: \n1. I have some reservations regarding the title and the name of e-HC. The derivation of the term e-HC isn't fully explained in the main text. Given the occasional references to 'e-values' (Line 100) and 'e-processes' (Line 440), might this terminology be analogous to other 'e-' prefixed methods such as 'e-BH'? Some clarification would be helpful. \n2. The implementation of e-HC needs the partition $\\{u_j\\}_{j=1}^m$(as well as the number $m$), the predictable rule for $\\lambda_t$, and the weight rate $\\gamma$. Could the authors clarify: (i) What principles should guide the selection of these parameters? (ii) How might these parameter choices influence the method's statistical power? \n3. I would appreciate some additional clarification regarding the Remarks on Line 312 to better understand their significance. \n4. A more thorough introduction to the SLRT method, particularly in the experimental section, would help better contextualize the comparative results. \n5. There seems to be a disconnect between Figure 2 (which lacks SLRT results) and the analysis with SLRT mentioned in Line 353. Clarifying this apparent discrepancy would be helpful for readers. \n6. The results in Table 1 suggest that e-HC may incur longer delays compared to SLRT. Could the authors provide some discussions or insights into this phenomenon?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T21:34:12",
"modification_date": "2025-11-12T12:56:14",
"review_url": "https://openreview.net/forum?id=B4mu5A3wVN&noteId=Aw9dRuRGcm",
"license": "CC BY 4.0"
},
{
"id": "47ZDlNjcCc",
"forum": "B4mu5A3wVN",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12503/Reviewer_jDDm",
"reviewer_name": "Reviewer_jDDm",
"rating": 4,
"confidence": 3,
"soundness": 4,
"contribution": 2,
"presentation": 4,
"summary": "The paper studies a sequential version of the well-studied sparse mixture detection problem where the signals are rare and weak. In order to adapt to the unknown sparsity level and unknown signal strength, a sequential Higher Criticism-type statistic is developed. At the core of their proposal is a martingale construction, and the authors show anytime-valid Type I error control through Ville's inequality. Additionally, their framework yields the straightforward construction of confidence sequences for various quantities of interest.",
"strengths": "The paper is clearly written and the authors do a great job providing good intuition. The problem they study is obviously fundamental, and the sequential aspect is a fresh twist on a canonical topic. Methodologically, the martingale construction and the proposal of using exponential weights to adapt to the thresholds is new to the sparse mixture detection literature.",
"weaknesses": "The primary focus of much of the sparse mixture detection literature (in the classical, non-sequential setting) is establishing sharp information-theoretic detection boundaries. In the classical literature, the Neyman-Pearson lemma asserts that the likelihood ratio test is optimal and much work goes into finding the sharp detection boundaries (including sharp constants) delineating the regions in which the null and alternative hypotheses separate or merge asymptotically. Donoho and Jin's proposal of Higher Criticism (which they attribute to Tukey) is notable not only because it adapts to the unknown sparsity and signal level, but also because it provably achieves the sharp detection boundary. This optimality guarantee is a very strong reason to advocate for its use. \n\nThe current paper does not derive a detection boundary nor offer any optimality guarantees of any kind for the Higher Criticism-type procedure they propose. Of course, the sequential setting is quite different and thus likely requires careful thinking in formulating an appropriate notion of a detection boundary and optimality. Offering a coherent formulation would itself constitute a contribution in my view, yet it is absent from the current paper. Due to this, I get the feeling that the \"Higher Criticism\" aspect of the procedure is not actually that important for the major thrusts of the paper. The anytime-validity results seem to be the main point, and it appears only the martingale aspect is needed for these.",
"questions": "__(1)__ In the usual, non-sequential setting, the fact that Higher Criticism adapts to the sparsity and signal levels to achieve the optimal detection boundary in the Gaussian sparse mixture detection problem is a very compelling reason to use it. Is there a natural formulation of a detection boundary/optimality in the sequential setting, and can the authors show (or at least discuss) the optimality of e-HC? Even a focused discussion on just the Gaussian setting would greatly improve the paper.\n\n__(2)__ In the paper, the thresholds $0 < u_1 < … < u_m < 1$ are just taken as given and the authors do not discuss at all how to select $u_1,…,u_m$ or even how to select $m$. Can the authors provide some guidance? Clearly choices made here will have important consequences for the test’s power. In the extreme setting $m = 1$ seems a bad choice, so it appears there is much room to do some optimization here. Is there some principle to which the statistician should adhere?\n\n__(3)__ To follow up on the previous question, in the classical definition of Higher Criticism in the non-sequential setting, one takes the supremum over all possible thresholds - the statistician does not need to select a grid a priori. In fact, this is very important for Higher Criticism to adapt to the sparsity/signal and achieve optimality. What are the difficulties of incorporating this strategy into the author’s proposal? Can the authors comment on how much they believe they lose by specifying a grid in advance?\n\n__(4)__ In the classical sparse mixture detection literature, the standard parametrization for the sparsity is $\varepsilon = t^{-\beta}$ (in the non-sequential setting) for $\beta \in (0, 1)$. The dense case is $\beta \in (0, 1/2)$, in which case the usual $\chi^2$-statistic is optimal. The interesting regime is $\beta \in (1/2, 1)$, and it is here where Donoho and Jin propose Higher Criticism. 
Can the authors comment on whether a similar demarcation between dense/sparse regimes can be made (perhaps with some other parametrization)? It seems roughly that “weak” corresponds to “dense” and “strong” corresponds to “sparse”.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-20T06:13:04",
"modification_date": "2025-11-12T12:56:15",
"review_url": "https://openreview.net/forum?id=B4mu5A3wVN&noteId=47ZDlNjcCc",
"license": "CC BY 4.0"
}
] | |
fQIE4NJOVm | https://openreview.net/forum?id=fQIE4NJOVm | Tight Bounds and Achievable Upper Bounds of Minimal Dimensions for Embedding-based Retrieval | 5.2 | 4 | [
6,
8,
6,
2,
4
] | [
4,
4,
4,
4,
4
] | 5 | [
"representation learning",
"embedding-based retrieval"
] | This paper studies the minimal dimension required to embed subset memberships into vector spaces.
The lower and upper bounds are derived theoretically and supported empirically for various notions of "distances" or "similarities", including $\ell_2$ metric, inner product, and cosine similarity.
Our results suggest no fundamental differences between those metrics in terms of Minimal Embeddable Dimension (MED).
In addition, we conduct experiments in the achievable setting, where we find that we can easily realize the logarithmic dependency between the MED and the number of objects to embed.
Our results also align well with existing practices in large language models, vector databases, and other related fields. | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=fQIE4NJOVm | 2025-09-12T16:03:15 | 5 | [
{
"id": "CBy4xBy2FE",
"forum": "fQIE4NJOVm",
"review_number": 6,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4333/Reviewer_KFwt",
"reviewer_name": "Reviewer_KFwt",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 4,
"presentation": 2,
"summary": "This submission looks at the problem of determining the minimum dimension needed to encode an arbitrary set $X$ of $m = |X|$ objects into $\\mathbb{R}^n$ such that a retrieval query on $X$ with $k$ answers in the set is perfectly retrievable. This is done by way of a combinatorial argument:\n1. $k$-shattering is used to define the minimum embeddable dimension (MED)\n2. A simple VC-dimension bound falls out of the definitions, parametrized by $m$ and $k$, and works for an arbitrary scoring function \"family\" $\\mathcal{F}$\n3. For specific scoring functions (dot products, $\\ell_2$, cosine sim.), one can get tight bounds on the MED of $\\Theta(k)$\n4. A minimum achievable dimension (MAED) is designed to model practical scenarios and provides an upper bound on MED\nThere is also an upshot given by empirical results, which suggest that how we construct the embeddings (or generate them with a neural net) matters much more than the available dimensions.",
"strengths": "1. The bounds on MED are quite surprising and make for a great result.\n2. The contrasting optimism for low-dimensional dense retrieval to the prior work of Weller, et al. will make for interesting and important discussion on the limits of vector search in the AI landscape.\n3. Careful effort is made to reconcile empirical results with the theoretical results in an intuitive manner. There is also a clean comparison with the prior work, which makes it easier to reconcile the position of this work with existing results.",
"weaknesses": "1. Typos (e.g. lines 49, 81, 283): some are quite substantial, definitely get these fixed\n2. Considering that this submission aims to contradict earlier work, some more discussion about the earlier work, what is lacking in it, and motivation to pursue this approach in place of the prior work should appear earlier on in the manuscript.\n3. The MAED discussion is perhaps oversimplifying the practical scenarios that it tries to represent. While it does model the in-distribution setting of vector search, it fails (at the admission of the authors) to capture the nuance that comes with embeddings generated with a neural network. This leads to an empirical section that is, I feel, lacking. To complement the existing results, there should be experiments based on real data with neural network based embeddings that support the paper's results and an effort to quantify the issues that come with such a setting.",
"questions": "1. It's common in practice to retrieve a larger number (than $k$) of candidates, then rerank them down using a stronger similarity function into $k$ final results. Is there a way to model that setting with this framework? If we were to naively apply the theoretical results to this method, it's possible we would see poor results (as we rely on $k << m$), but this seems to work remarkably well in practice.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T15:32:13",
"modification_date": "2025-11-12T11:15:03",
"review_url": "https://openreview.net/forum?id=fQIE4NJOVm&noteId=CBy4xBy2FE",
"license": "CC BY 4.0"
},
{
"id": "f8whvVSApw",
"forum": "fQIE4NJOVm",
"review_number": 5,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4333/Reviewer_JMqo",
"reviewer_name": "Reviewer_JMqo",
"rating": 8,
"confidence": 4,
"soundness": 4,
"contribution": 4,
"presentation": 3,
"summary": "The paper studies the problem of finding the appropriate dimensionality to embed data in vector spaces. In contrast with recently published work, the formal findings in this paper show an encouraging picture for embedding based retrieval. First, for common similarity measures, the minimal number of dimensions does not depend on the cardinality of the set to embed. Second the minimal dimensionality is a low degree polynomial of the number k of retrieved vectors: between k and 2k in the general case and quadratic in the \"achievable\" setting. Considering that the number of retrieved vectors is a small number in practice, these theoretical bounds paint a positive picture for vector search.",
"strengths": "- The topic is of great relevance in practice, even though the results are of a very theoretical nature.\n- The dimensionality bounds are novel and they bring a much needed formal understanding to the important area of retrieving unstructured data.\n- These small bounds also highlight that more work is needed in the embedding models and that such practical work is not a lost cause.\n- To the best of my knowledge, the proofs are correct and the level of rigor exhibited is appropriate for ICLR.",
"weaknesses": "- The discussion of the achievable setting in the introduction (line 70) feels a bit lacking, and its description in the contributions (lines 81 to 84) is too vague. The authors should position the achievable setting more clearly on a \"hardness of embeddability\" scale.\n- How tight are the MAED bounds? Although the authors state that this bound may not be tight, I would have appreciated a more detailed discussion of what this bound means in practice.\n- How important is the fact that random vectors are used to get the MAED bound? What would happen in a different setting? Would the bound get worse? It is important to clearly state whether the authors are covering a best or worst case scenario here (or neither and it is just a particular one). This should be more clearly stated in the introduction and abstract, because those sections seem to implicitly indicate the MAED bound is general.",
"questions": "- In the abstract, it is unclear what the authors mean by achievable. It would be useful to have a succinct definition.\n- In the third paragraph of the introduction, too little context is given when talking about the work by Weller et al. It is not clear what the query-relevance matrix and rank_{+/-} mean. Adding a couple of sentences might help. Alternatively, the authors should up-level that discussion to a more intuitive explanation.\n- There is an undefined symbol at the end of the third paragraph (line 49).\n- In the contributions, there is an unfinished sentence in the item about the \"achievable setting\" (line 81).\n- After Definition 2.11, \"that if MAED\" -> \"that MAED\" (line 181).\n- After Proposition 2.12, \"MEAD\" -> \"MAED\" (lines 186 and 187)\n- Please add a horizontal space between Figures 1 and 2 as the captions are hard to read.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T05:47:47",
"modification_date": "2025-11-12T11:15:03",
"review_url": "https://openreview.net/forum?id=fQIE4NJOVm&noteId=f8whvVSApw",
"license": "CC BY 4.0"
},
{
"id": "oob2LjPwwd",
"forum": "fQIE4NJOVm",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4333/Reviewer_txVp",
"reviewer_name": "Reviewer_txVp",
"rating": 6,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 4,
"summary": "This paper investigates the minimum dimension of the vector space required for embedded retrieval systems, aiming to challenge the view on the \"vector bottleneck\" in the field. The authors introduce two core settings to analyze this problem:\n\n1. Standard Setting. Under this theoretically idealized setting, the paper proves that the Minimum Embedding Dimension (MED)—required to perfectly retrieve all queries with no more than k answers—has a linear relationship only with $k$ (i.e., $\Theta(k)$) and is independent of the total number of objects $m$ in the corpus.\n2. Achievable Setting. Under this more practically relevant setting, query vectors are constrained to be the centroid of the answer set vectors. The paper theoretically derives and experimentally verifies that the Minimum Achievable Embedding Dimension (MAED) required for this constructive method has a logarithmic relationship with the total number of objects $m$ (i.e., $d = O(k^2 \log m)$).",
"strengths": "The theoretical proof on the $\\Theta(k)$ bound for the Minimum Embedding Dimension provides an entirely new, more optimistic perspective for understanding the theoretical limits of embedded retrieval. On top of that, the paper successfully reframes the \"vector bottleneck\" problem, shifting it from a seemingly immutable hardware constraint (space dimension) to an optimizable software issue (embedding construction method).",
"weaknesses": "1. The experimental validation for the \"achievable\" $O(\\log m)$ bound is not truly achievable, as its training method requires checking all $\\binom{m}{k}$ combinations, which is computationally unfeasible for large $m$.\n\n2. The experimental comparison to prior work [Weller et al., 2025a] is misleading because it compares results from two different paradigms (MAED vs. MED). A fair comparison would require the authors to re-run experiments under the same (e.g., MED) setting to prove their optimization method is truly better.\n\n3. The centroid query method proposed in the paper cannot handle complex compositional queries, which limits the applicability of the achievable method. It is important to more honestly define the scope of applicability of their method.\n\n4. There are several minor but noticeable typographical and notational issues in the manuscript.",
"questions": "1. Most importantly, to truly support the \"achievable\" claim, you should demonstrate that this $O(\log m)$ bound can also be reached using a scalable, practical training algorithm (e.g., one based on negative sampling or contrastive loss) rather than the full-combination check.\n\n2. To make a more robust claim, you should run your optimization method under the same MED (Standard Setting) as Weller et al. This would provide a true \"apples-to-apples\" comparison and prove that the difference is due to your superior optimization, not the change in settings. Alternatively, provide a clear theoretical proof or new experimental evidence within your paper demonstrating that the MAED (centroid query) is indeed a harder problem than MED (free query).\n\n3. You should more clearly define the limited scope of the MAED model. It would be valuable to discuss the gap between this \"centroid query\" model and a more realistic \"independent query\" model, and what new challenges the latter might introduce.\n\n4. There are several minor but noticeable typographical and notational issues in the manuscript. It is recommended that the authors carefully proofread the paper to improve readability and consistency. In particular:\n - There are some cross-reference errors, including \"Section ??\" and \"on the \\space(a space)\" in Section 1.\n - Some misspellings are present, such as \"MEAD\" in Section 2.3 and \"k-shuttering\" in Section 3.2.\n - Equation 10 in A.2 should be $\langle v_1, \sum_{u\in S} u\rangle - \langle v_2, \sum_{u\in S} u\rangle$.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T13:27:59",
"modification_date": "2025-11-12T11:15:03",
"review_url": "https://openreview.net/forum?id=fQIE4NJOVm&noteId=oob2LjPwwd",
"license": "CC BY 4.0"
},
{
"id": "ALetFM29a2",
"forum": "fQIE4NJOVm",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4333/Reviewer_KFqx",
"reviewer_name": "Reviewer_KFqx",
"rating": 2,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 1,
"summary": "This work studies the *minimal embeddable dimension* (MED) problem, where given a set of $m$ objects\nand a pairwise scoring function $f$, we want to know the minimum embedding dimension $n$ such that\nwe can perfectly recover the top-k (object, query) results according to $f$. The authors consider\nthe following score functions: Euclidean distance, cosine similarity, and a related inner product.\nThey also study a so-called \"achievable\" setting (MAED) where query vectors are of the form\n$1/|S| \\sum_{i \\in S} \\mathbf{x}_i$ for some set $S \\subseteq X$ of size $k$. They give a clean\nproof of the lower and upper bounds for MED (tight up to a factor of 2). They also give\n$O(k^2 \\log m)$ upper bounds for MAED and some synthetic experiments to demonstrate this result.",
"strengths": "- The authors introduce the study of the achievable setting.\n- The MED and MAED problem statements and analysis are built on a clean definition of *$k$-shattering*.\n- The cyclic polytope example in Section 3.1 is instructive and succinct.",
"weaknesses": "- Manuscript is quite unpolished.\n- Experiments (Section 4.2) are very interesting but incomplete. It would be\n good in a future version of this paper to strengthen these results (e.g.,\n revisiting the cyclic polytope as a warm up).",
"questions": "**Questions**\n\n- What are the lower bounds for MAED (Table 1)?\n\n**Misc**\n\n- [049] Typo: \"can be found in Section ??\"\n- [081] Typo: two different capitalization styles for list items, i.e., \"Standard setting\" and \"achievable setting\"\n- [081] Typo: \"simulation results on the .\"\n- [095] Nit: How do we handle having the same vector for two different items given the notation $\{\mathbf{x}_{i}\}_{i=1}^m$?\n- [102] Nit: Inconsistent use of normal and boldface letters for scalars and vectors.\n- [246] Typo: \"top-k\" --> \"top-$k$\"\n- [281] Suggestion: Add some horizontal space between the captions of Figure 1 and Figure 2 so it's more clear they're separate.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T22:20:18",
"modification_date": "2025-11-12T11:15:04",
"review_url": "https://openreview.net/forum?id=fQIE4NJOVm&noteId=ALetFM29a2",
"license": "CC BY 4.0"
},
{
"id": "lnrdGkSSEk",
"forum": "fQIE4NJOVm",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4333/Reviewer_Dok3",
"reviewer_name": "Reviewer_Dok3",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 1,
"summary": "The paper provides theoretical bounds for the so-called minimal embeddable dimension, which is the smallest dimension for which some configuration of m points with a given functional family can be k-shattered. It shows that both lower and upper bounds are independent of the number of points, and only depend on k, in the special case where the functional family is given by the 3 standard scoring functions: inner product, cosine, and L2 distance. \n\nThe paper also defines so-called minimal achievable embeddable dimension, where k-shattering is replaced by k achievable-shattering, defined by evaluating a scoring function evaluated at the centroid of k nearest points. Here nearest just means highest scores. Then the paper uses a union bound to show that the O(k^2 log m) dimension is sufficient to find a configuration of m points with the k achievable-shattering property. Finally the paper runs a simulation to verify that the true relation between achievable configuration and dimension is indeed logarithmic as opposed to cubic in a referenced paper.",
"strengths": "The context of the problem being addressed is of significant interest in the ML community. For instance, in K nearest neighbor retrieval, we need to find the right kind of dimension to ensure most if not all query embeddings can find its nearest k item embeddings simply using dot product, L2 distance, or cosine distance (dot product with item embeddings L2-normalized).\n\nThe construction using the moment curve is interesting mathematically. \nThe proof of the achievable upper bound is also standard and reasonable. The simulation result also supports its general order of magnitude.",
"weaknesses": "The exposition of the paper is quite cryptic sometimes. I will list some examples\nl070-075: looks like 3 things are being compared here: MED, MAED, and real life practical situation. The first sentence says MAED is weaker than real life, but the second sentence then concludes that MAED upper bounds MED. The logic simply doesn’t follow.\nIt would be helpful to tabulate the results for the 3 kinds of scoring functions to make their relationship more transparent. \n\nThere are many typos in the paper, including\nl293: “We use optimize m embeddings randomly initialized”\nl028-029: \"retrieving the top-k answers of top-k largest scores\" should be rephrased as \"retrieving the answers with the k largest scores\".\n\nl019 (abstract): \"Our results also align well with existing practices in large language models, vector databases, and other related fields.\" seem irrelevant.\n\nl097: C_k definition doesn't need min\n\nThe paper can also make the result statements as well as the proofs more accessible to general ML practitioners not familiar with proof heavy literature, involving VC dimensions. \n\nThe upper bound result of the MED is pretty contrived. It doesn’t really show that the dimension n doesn’t depend on the number of points, but only that for any m, one can find a configuration of m points in R^n with the property that the k nearest points of any point can be separated by one of the scoring functions. This makes the result of little practical value. \n\nEven in the MAED case, the so-called achievable query, where it’s required to be the centroids of its k nearest neighbors, seems rather special. In addition, it’s again not saying the dimension upper bound works for all configurations, but rather one can find some configuration with the achievability constraint under the upper bound, even though it should work for almost all cases. 
I think it’s very much worth highlighting the limitations of these results, and try to make some attempt connecting the results to practical cases, such as in a unsupervised KNN learning task.\n\nThe setup for the simulation requires more detailed explanation as well as motivation. The use of gradient descent to search for an achievable k-shattering configuration does not guarantee optimality, but is sufficient for the upper bound. Some references would be useful to compare against other such simulation work for such a theoretical result.",
"questions": "Overall I think the paper has some interesting probabilistic results. But it needs to explain the results in a more accessible manner. There should be plenty of space left to add more details in the main text.\nI would like to see the paper much better polished in terms of writing style and motivation. Focus on the main claims, namely the 2 upper bounds and 1 lower bound, as well as the special treatment for each scoring function, and making sure the setup, definitions, and proof strategy are completely transparent. Leave some of the propositions to the appendix.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T13:47:39",
"modification_date": "2025-11-12T11:15:04",
"review_url": "https://openreview.net/forum?id=fQIE4NJOVm¬eId=lnrdGkSSEk",
"license": "CC BY 4.0"
}
] | |
0KFQ4F9YEH | https://openreview.net/forum?id=0KFQ4F9YEH | LoC-Decomp: LLM Autoformalization via Logical Concept Decomposition and Iterative Feedback Correction | 4 | 4 | [
2,
4,
6
] | [
4,
3,
5
] | 3 | [
"Autoformalization",
"Automated theorem proving",
"Large language model"
] | Automated formalization—the process of converting natural language mathematical statements into machine-verifiable formal code—plays a critical role in ensuring the reliability of mathematical reasoning generated by large language models (LLMs). Recent studies show that LLMs exhibit strong potential in automating this process, producing formal code for systems such as Lean4, Coq, and Isabelle. Despite prominent advances, existing LLM-based autoformalization methods remain limited: they lack the ability to provide reliable semantic consistency checks to ensure that the formal code accurately preserves the meaning of the original statement. Furthermore, such methods are unable to support iterative improvement through corrective feedback. To address these limitations, we propose Loc-Decomp, a novel framework that integrates an automatic semantic consistency checker and the Lean4 compiler to iteratively refine LLM-generated formalizations, ensuring both semantic consistency and syntactic correctness. Our approach introduces three key innovations: (1) A structured formalization template that decomposes complex formalization tasks into modular, foundational components, and systematically assembles them—like building blocks—into a complete formal expression. (2) A semantic self-checking mechanism based on a divide-conquer-merge strategy to detect subtle inconsistencies between the formalization and the original statement. (3) An iterative feedback-driven refinement loop that leverages both semantic and syntactic error signals to guide the LLM in progressively improving the formal output. By integrating these innovations, Loc-Decomp significantly enhances the accuracy of LLM-driven formalization, reduces reliance on human intervention, and moves closer to truly reliable automated reasoning. Extensive experiments on the MATH and miniF2F datasets demonstrate that our approach achieves a significantly higher formalization success rate compared to baseline methods and previous state-of-the-art (SOTA) approaches. On the miniF2F dataset, for instance, our method attains a success rate of 91.16%, substantially outperforming the previous SOTA result of 46.70%. | Loc-Decomp is a novel framework that enhances LLM-based autoformalization by integrating semantic consistency checks and iterative refinement, achieving a 91.16% success rate on the miniF2F dataset. | neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.) | https://openreview.net/pdf?id=0KFQ4F9YEH | 2025-09-19T18:44:48 | 3 | [
{
"id": "hCHKc73ab9",
"forum": "0KFQ4F9YEH",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17639/Reviewer_8Wwq",
"reviewer_name": "Reviewer_8Wwq",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 1,
"presentation": 1,
"summary": "This paper proposes a pipeline, Loc-Decomp, for autoformalization of math problems. It consists of four components. FormalTrans introduces the use of a structured formalization template, to decompose the statement into parts such as functions, types, constraints, and the problem statement. BackTrans translates the formal statement back into natural language. ASCC-R and CpC-R use a divide-and-conquer “LLM-as-a-judge” approach to verify semantic alignment and syntactic correctness, respectively, and leverage error feedback to iteratively improve accuracy. The method seems to achieve strong results on subsets of MATH-500 and miniF2F, namely MATH-Level5-50 and MATH-ASCC-Eval-150.",
"strengths": "1. The proposed FormalTrans converts autoformalization into a structured template. Using free variables as answer placeholders (instead of the more conventional “sorry” placeholders) is a novel idea.\n\n2. Figure 5 shows that ASCC’s divide-and-conquer plus majority-vote strategy for semantic alignment checking is effective and produces judgments that align better with human evaluation.\n\n3. The paper’s figures present the pipeline clearly.",
"weaknesses": "1. Lack of novelty. Much of the paper repeats prior work and reads more like an engineering integration than a substantive methodological advance; the novelty is therefore marginal. In particular, the idea of semantic consistency checking by decomposing the formalization code has been previously explored [1], and iterative feedback-based methods for semantic and syntactic correction have been investigated in prior work [2] and [3]. Using back-translation plus an “LLM-as-a-judge” for semantic checks is also a standard practice in this area [4]. The authors should provide explicit comparisons to demonstrate what is new in this paper.\n\n \n\n2. Questionable evaluation methodology. The two baselines in Table 1 (SymEQ and Lean-workbook) use different datasets and evaluation criteria; reporting their absolute scores side-by-side with the proposed method is misleading and does not support a fair comparison of method superiority. Besides, miniF2F is primarily used as a benchmark for automated theorem proving rather than autoformalization. Using miniF2F for formalization evaluation is not a common practice and should be justified.\n\n\n\n3. Limited and potentially biased MATH-ASCC-Eval-150. The MATH-ASCC-Eval-150 split is very small, relies heavily on manual annotation, and depends on a single formalization model (DeepSeek V3). As a result, the evaluation of semantic-alignment judgment ability is limited: it is unclear whether the reported ability would generalize to formalizations produced by other models. The authors should provide variants of MATH-ASCC-Eval-150 derived by different formalization models or present an analysis of generalization across models; otherwise the practical value of this dataset is questionable.\n\n \n\n4. Writing quality and presentation issues. 
Overall the manuscript would benefit from careful proofreading and stricter editing for consistent terminology and formatting.\n\n - The manuscript uses many nonstandard abbreviations and inconsistent spellings, which make it difficult to follow and reduce readability.\n\n - The term “autoformalization” appears with multiple spellings (“automated formalization” on Line 013, “auto formalization” on Line 108, “auto-formalization” on Line 117, “autoformalization” on Line 051). These inconsistencies are embarrassing and should be standardized.\n\n - The paper mostly uses “Lean4,” but “Lean 4” appears on Line 219; the usage should be consistent.\n\n - Typographical/spacing issues: missing space after commas on Line 062; missing space after periods on Line 110 and Line 351.\n\n[1] Zhang, J., Zhong, C., Xu, H., Li, Q., & Zhou, Y. (2025). *KELPS: A Framework for Verified Multi-Language Autoformalization via Semantic-Syntactic Alignment* (No. arXiv:2507.08665). arXiv. https://doi.org/10.48550/arXiv.2507.08665\n\n[2] Wang, H., Unsal, M., Lin, X., Baksys, M., Liu, J., Santos, M. D., Sung, F., Vinyes, M., Ying, Z., Zhu, Z., Lu, J., Saxcé, H. de, Bailey, B., Song, C., Xiao, C., Zhang, D., Zhang, E., Pu, F., Zhu, H., … Li, J. (2025). *Kimina-Prover Preview: Towards Large Formal Reasoning Models with Reinforcement Learning* (No. arXiv:2504.11354). arXiv. https://doi.org/10.48550/arXiv.2504.11354\n\n[3] Liu, C., Shen, J., Xin, H., Liu, Z., Yuan, Y., Wang, H., Ju, W., Zheng, C., Yin, Y., Li, L., Zhang, M., & Liu, Q. (2023). *FIMO: A Challenge Formal Dataset for Automated Theorem Proving* (No. arXiv:2309.04295). arXiv. https://doi.org/10.48550/arXiv.2309.04295\n\n[4] Ying, H., Wu, Z., Geng, Y., Wang, J., Lin, D., & Chen, K. (2024). *Lean Workbook: A large-scale Lean problem set formalized from natural language math problems* (No. arXiv:2406.03847). arXiv. https://doi.org/10.48550/arXiv.2406.03847",
"questions": "1. For miniF2F there is ground-truth formalization available. Given a dataset with ground-truth formal statements, BEq can evaluate formal-statement equivalence and reduce the need for manual annotation. Please explain the rationale for preferring LLM-as-a-judge over BEq in this setting.\n\n2. In the example in Figure 2, the auxiliary functions and constraints introduced appear to duplicate existing definitions in the standard math library (mathlib)—for instance, continuity is already defined in mathlib. Could this duplication create unnecessary obstacles for downstream automated theorem proving? Is this duplication an intentional design choice or a potential problem? How does LoC_Decomp avoid or mitigate such issues in practical formalization workflows?\n\n3. Line 203 states that instead of using the traditional automated theorem proving placeholder “sorry” as a solution placeholder, this work uses an input 's', a free variable representing the solution. This is not a standard practice and might harm the usefulness of the resulting autoformalization dataset for automated theorem proving. Can the authors provide further justification for avoiding “sorry” as the placeholder?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T12:18:46",
"modification_date": "2025-11-12T14:04:56",
"review_url": "https://openreview.net/forum?id=0KFQ4F9YEH¬eId=hCHKc73ab9",
"license": "CC BY 4.0"
},
{
"id": "GHzN41Jw6S",
"forum": "0KFQ4F9YEH",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17639/Reviewer_DZ3h",
"reviewer_name": "Reviewer_DZ3h",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces LoC-DeCOMP, a framework for automated formalization of mathematical statements using off-the-shelf LLMs. The approach integrates a structured decomposition template for Lean4 code generation, a semantic self-checking mechanism (ASCC) leveraging divide-conquer-merge back-translation, and an iterative feedback-driven refinement loop employing both semantic and syntactic (compiler) error signals. Experimental evaluation on the MATH-500 and miniF2F datasets demonstrates substantial improvements over baseline and prior state-of-the-art methods. Furthermore, experiments on the human-verified MATH-Level5-50 subset show a notable 30 percentage point improvement over the baseline.",
"strengths": "1.\tClarity and Presentation: The paper is exceptionally well-written and easy to follow. The proposed method, LoC-DeCOMP, and its evaluation are described with great clarity, which significantly aids in understanding the contribution.\n2.\tAccessibility and Ease of Use: A key strength of this work is that it proposes a training-free workflow. By designing a system that effectively orchestrates off-the-shelf LLMs, the framework is highly accessible and can be readily implemented without the need for costly model fine-tuning.\n3.\tDemonstrated Effectiveness: The method shows significant empirical success. The 30 percentage point improvement over the baseline on the human-verified MATH-Level5-50 dataset is impressive and underscores the practical utility and effectiveness of the proposed workflow.",
"weaknesses": "1.\tPotential Loss of Expressive Capabilities: The framework's reliance on a predefined template for formalization, as mentioned in Appendix A.4, is a central concern. This template-based approach may inherently restrict the expressive power of the Lean4 language. The paper currently lacks a thorough analysis of this trade-off. Specifically, there is no justification or evidence to suggest that the template is sufficient for formalizing mathematics beyond the scope of high-school-level problems. If the template's expressiveness is severely limited, it could significantly impact the generalizability and overall contribution of the proposed method.\n2.\tNarrow Scope of Evaluation: The empirical evaluation is confined to the MATH and miniF2F datasets, both of which primarily consist of high-school-level mathematics. The paper would be much stronger with experiments on more advanced or diverse mathematical domains (e.g., undergraduate-level abstract algebra or analysis). This weakness is particularly concerning when considered alongside the potential loss of expressiveness (Weakness 1). Without empirical evidence of its applicability to more complex mathematics, it is difficult to assess the method's general utility.\n3.\tMissing Comparison with Specialized Systems: The paper compares its framework against general-purpose LLMs but does not include comparisons with models or systems specifically designed or fine-tuned for formalization tasks. While the primary contribution is the workflow itself, which is well-demonstrated by comparing it against base LLMs, including a baseline from a specialized formalizer would provide a more complete picture of its performance.",
"questions": "1.\tCould you please elaborate on the extent to which the proposed template may limit the expressive capability of Lean4? A theoretical discussion on this trade-off, perhaps analyzing constructs that are difficult or impossible to represent, would greatly strengthen the paper.\n2.\tIf a theoretical analysis (Question 1) is difficult, could you provide some empirical evidence demonstrating the framework's applicability to mathematical domains beyond high school-level problems? \n3.\tWould it be possible to include an experimental comparison with a system specifically designed or fine-tuned for formal theorem proving? This would help benchmark the performance of your training-free workflow against alternative approaches.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T18:23:12",
"modification_date": "2025-11-12T14:04:56",
"review_url": "https://openreview.net/forum?id=0KFQ4F9YEH¬eId=GHzN41Jw6S",
"license": "CC BY 4.0"
},
{
"id": "SgXD3tSfYT",
"forum": "0KFQ4F9YEH",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17639/Reviewer_2crV",
"reviewer_name": "Reviewer_2crV",
"rating": 6,
"confidence": 5,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces **Loc-Decomp**, a framework that improves automated formalization—the process of converting natural language math statements into formal, machine-verifiable code. It combines a **structured Lean4 template**, a **semantic consistency checker**, and an **iterative feedback loop** that jointly refine both meaning and syntax. Unlike previous methods, Loc-Decomp detects subtle semantic mismatches and uses compiler errors to iteratively correct them. Experiments on MATH and miniF2F datasets show major improvements, achieving up to **90% formalization accuracy**, far surpassing previous state-of-the-art methods.",
"strengths": "- A **divide–conquer–merge–based semantic self-checking mechanism** is proposed to detect subtle inconsistencies between the formalization and the original statement, which aligns well with the motivation behind **Retrieval-Augmented Generation**. \n- For the first time, it combines semantic inconsistency feedback with compiler error information to perform iterative refinement. \n- The method is **simple and easy to understand**.",
"weaknesses": "- The experiments are too limited; please provide **comparative experiments on the ProofNet and Putnam datasets**. \n- Please provide a **detailed ablation study** of **Logical Concept Decomposition** and **Iterative Feedback Correction**, explicitly isolating and analyzing their individual effects.",
"questions": "Please Refer to Weaknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T17:16:11",
"modification_date": "2025-11-12T14:04:57",
"review_url": "https://openreview.net/forum?id=0KFQ4F9YEH¬eId=SgXD3tSfYT",
"license": "CC BY 4.0"
}
] |
MlQ0goJG9U | https://openreview.net/forum?id=MlQ0goJG9U | DiRA: Nuclear Norm Dynamic Rank Adaptation for Large Language Models | 3.5 | 3.75 | [
4,
4,
4,
2
] | [
4,
4,
3,
4
] | 4 | [
"LLM",
"Fine-Tuning",
"Nuclear-Norm"
] | Parameter-Efficient Fine-Tuning (PEFT) methods, particularly Low-Rank Adaptation (LoRA), have become a standard paradigm for adapting Large Language Models (LLMs) to specific tasks. However, standard LoRA implementations use a fixed, uniform adaptation rank across all layers, a static allocation that fails to capture the varying contributions of different layers. In this work, we introduce DiRA, which learns layer-adaptive ranks by penalizing the nuclear norm of the weight update matrix $\Delta W$ for each layer. While extensive experiments show that DiRA matches or surpasses fixed-rank LoRA baselines across tasks, its primary contribution is methodological and scientific. Using DiRA as a probe, we uncover a mechanism of catastrophic forgetting in continual learning: forgetting is frequently accompanied by pronounced changes in the rank landscape. Building on this insight, we propose a new strategy that treats the previously learned rank landscape as a prior and, with only a small amount of data, regularizes current updates to retain newly acquired knowledge while recovering old-task memory, thereby mitigating forgetting. Taken together, these results position DiRA both as an efficient PEFT method and as a principled approach for understanding—and mitigating—forgetting in LLMs. | We introduce a new PEFT method DiRA, which not only improves model performance but also reveals changes in the rank landscape associated with catastrophic forgetting. | other topics in machine learning (i.e., none of the above) | https://openreview.net/pdf?id=MlQ0goJG9U | 2025-09-16T21:27:36 | 5 | [
{
"id": "p6EeGeXG7I",
"forum": "MlQ0goJG9U",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7607/Reviewer_tpMu",
"reviewer_name": "Reviewer_tpMu",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper introduces DiRA, a novel PEFT method that enables dynamic, layer-wise rank allocation for LoRA by penalizing the nuclear norm of the weight update matrix. Beyond its performance as an efficient PEFT method, DiRA is utilized as a scientific probe to uncover a mechanism of catastrophic forgetting in continual learning.",
"strengths": "- The nuclear-norm-style factor penalty is well motivated and avoids full SVDs.\n- DiRA performs as well or better than LoRA over multiple datasets (commonsense + ConvAI2).\n- Using DiRA to study catastrophic forgetting and rank dynamics is potentially impactful.",
"weaknesses": "- Comparisons to recent rank-adaptive LoRA variants (e.g., AdaLoRA, RankAdaptor, ARD-LoRA) would be needed.\n- Rank adapts per layer, but overhead is not clearly quantified vs. LoRA or SVD-based methods. \n- The CL part is more exploratory than conclusive; missing LoRA variants for continual learning baselines (e.g., LoRI).\n- The choice/sensitivity of the regularization strength is not well discussed.",
"questions": "- What is the actual fine-tuning cost relative to standard LoRA? Does the per-layer dynamic structure add parameter overhead?\n- For CL, is performance robust across different sequences of tasks or domains?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T16:50:21",
"modification_date": "2025-11-12T11:54:50",
"review_url": "https://openreview.net/forum?id=MlQ0goJG9U¬eId=p6EeGeXG7I",
"license": "CC BY 4.0"
},
{
"id": "WE6ypQFPAk",
"forum": "MlQ0goJG9U",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7607/Reviewer_tVTm",
"reviewer_name": "Reviewer_tVTm",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper presents DiRA (Dynamic Rank Adaptation), a PEFT method that addresses the limitation of LoRA, which fixes the same rank across all layers. DiRA formulates rank allocation as an optimization problem by introducing a nuclear norm regularization term in the loss. To remain efficient, it penalizes a tractable upper bound of the nuclear norm, enabling the model to learn layer-specific effective ranks by shrinking unimportant rank-1 components during training.\nExperiments on commonsense reasoning and dialogue tasks show that DiRA matches or slightly outperforms LoRA and dynamic variants such as AdaLoRA. Using DiRA as a probe for continual learning, the study finds that catastrophic forgetting corresponds to shifts in the model’s rank landscape, and introduces RLGR, a recovery method that leverages prior rank landscapes to improve knowledge retention.",
"strengths": "The use of nuclear norm regularization to induce a dynamic, layer-wise rank is well-motivated, offering a soft alternative to the hard pruning or SVD-based importance scoring employed in methods such as AdaLoRA.",
"weaknesses": "1. The main limitation of DiRA lies in its modest empirical improvements. Although presented as a superior PEFT method, its performance gains over AdaLoRA, a strong dynamic-rank baseline, are minimal and inconsistent across tasks.\n\n2. The proposed RLGR strategy depends on access to “a small subset of data from previous tasks” (eight examples), effectively functioning as a data replay approach. While framed as a novel insight derived from the rank landscape analysis, this reliance on replay makes its originality and practical benefit difficult to evaluate relative to established replay- or regularization-based CL methods. Moreover, RLGR is applied post hoc—to recover performance on a forgotten task—rather than preventively mitigating forgetting during subsequent training, which limits its generality.\n\n3. The continual learning study appears tacked on and insufficiently validated. RLGR is tested on only one task pair (Common170k → GSM8K) and compared solely against Naive and RandLGR, omitting stronger CL baselines such as EWC, LwF, or other replay methods. As a result, the CL component feels preliminary and distracts from the paper’s core contribution—an already marginally superior PEFT method.",
"questions": "Refer to Weaknesses for related questions.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T18:53:07",
"modification_date": "2025-11-12T11:54:51",
"review_url": "https://openreview.net/forum?id=MlQ0goJG9U¬eId=WE6ypQFPAk",
"license": "CC BY 4.0"
},
{
"id": "boIHeBeJoG",
"forum": "MlQ0goJG9U",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7607/Reviewer_3DPA",
"reviewer_name": "Reviewer_3DPA",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The authors present **Dynamic Rank Adaptation (DiRA)**, a method for continual learning. They propose learning layer-adaptive ranks by penalizing the nuclear norm of the weight update matrix. Specifically, they decompose the LoRA update as a sum of rank-1 components and penalize the Frobenius norm of each per-component product. This drives entire rank-1 components to zero, allowing the effective rank of each layer to adapt organically. The authors further suggest that forgetting is connected to large changes in the model’s rank landscape and propose mitigating this via a rank-landscape prior using data from previous tasks.\n\nDiRA is evaluated on commonsense reasoning tasks and dialogue generation across LLaMA-2-7B and LLaMA-3-8B. They also probe forgetting in commonsense reasoning and math fine-tuning.",
"strengths": "The paper introduces a novel and well-motivated approach to adaptive rank modulation.\n\n---\n\n- The text is clear, and the method is introduced in a straightforward and understandable manner (first part).\n- The choice of the threshold is well grounded and appropriately justified.",
"weaknesses": "### **1. Scope and Problem Framing**\n\n- The paper appears to address two different settings: learning capability and continual learning. Presenting both in the same work is somewhat confusing.\n- The recovery stage in the continual learning setup is not well motivated.\n- The experiments in the continual learning setup involve effectively one task, which does not qualify as continual learning.\n- The work appears to be the beginning of an interesting direction, but not fully developed or polished. Overall, the submission feels unfinished.\n\n---\n\n### **2. Related Work Coverage**\n\n- The related work section reads more like a high-level introduction; several mentioned methods are not directly relevant, and connections to this work are not clearly articulated.\n- Some important related methods (e.g., MiLoRA [1] and PiSSA [2]) appear to be missing.\n\n---\n\n### **3. Methodological Clarity**\n\n- RLGR is never introduced before being used. Later, the text does not clearly state what is being done algorithmically. The description “zero the corresponding LoRA B adapters in Task B (Interference) before fine-tuning on a small subset of Task A data” is not sufficiently detailed to understand.\n\n---\n\n### **4. Presentation and Organization**\n\n- Portions of the main text are overly descriptive (e.g., experimental setup) and could be moved to the appendix.\n- Figures 1 and 2 are insufficiently analyzed; the implications of what they show are unclear.\n- Figure 3 appears pixelated, and text quality drops noticeably.\n- Captions are incomplete and do not describe what the reader should observe.\n- Some figures lack y-axis labels, and axis labels are generally too small to read comfortably.\n- The text discusses performance fluctuations, but the plots only show rank fluctuations.\n\n---\n\n### **5. 
Experimental Evaluation**\n\n- Computational cost analysis is missing, especially given the SVD operations involved.\n\n\n[1] MiLoRA: Harnessing Minor Singular Components for Parameter-Efficient LLM Finetuning. (2025) Hanqing Wang and Yixia Li and Shuo Wang and Guanhua Chen and Yun Chen\n\n[2] PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models. (2025) Fanxu Meng and Zhaohui Wang and Muhan Zhang",
"questions": "1. Are all experiments conducted with only one seed?\n2. The proposed three-stage procedure is not standard. The “recovery” stage is typically not available in realistic continual learning scenarios. How practical is this component?\n3. For Figures 1 and 2, which model are you reporting results on?\n4. Why is the nuclear norm higher for lower layers?\n5. Could the authors showcase connections to [1]?\n\n[1] LoRA vs Full Fine-tuning: An Illusion of Equivalence. (2025) Reece Shuttleworth and Jacob Andreas and Antonio Torralba and Pratyusha Sharma",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T21:55:04",
"modification_date": "2025-11-12T11:54:51",
"review_url": "https://openreview.net/forum?id=MlQ0goJG9U¬eId=boIHeBeJoG",
"license": "CC BY 4.0"
},
{
"id": "rNLH0W0nkU",
"forum": "MlQ0goJG9U",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7607/Reviewer_UeAR",
"reviewer_name": "Reviewer_UeAR",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 1,
"presentation": 1,
"summary": "DiRA is a novel Parameter-Efficient Fine-Tuning (PEFT) framework that addresses the fixed-rank limitation of Low-Rank Adaptation (LoRA) by introducing nuclear norm-based regularization. This allows dynamic, layer-wise rank adjustment. Experimental results show that DiRA outperforms existing methods such as LoRA and AdaLoRA in commonsense reasoning and dialogue generation tasks on LLaMA-based models. Furthermore, the authors demonstrate that catastrophic forgetting in continual learning is closely linked to shifts in the rank landscape and propose RLGR as a mitigation strategy.",
"strengths": "- Demonstrated effectiveness on LLaMA-based models.\n\n- Easy to read and follow.\n\n- Convincingly highlights the inefficiency of fixed-rank LoRA through theoretical and empirical arguments.\n\n- Provides an interesting analysis of the structural link between rank variation and catastrophic forgetting in continual learning.",
"weaknesses": "- The writing lacks clarity. Without sufficient background on continual learning (CL) and without adhering to conventional experimental setups, it is difficult to assess the actual usefulness of the proposed method. The Preliminaries section should include background on CL, and the method should be compared against LoRA-based CL approaches or relevant CL baselines. For instance, replaying small amounts of previous task data is a well-established strategy called 'rehearsal' known to mitigate catastrophic forgetting. It remains unclear whether the improvements stem from rank landscape-aware mechanisms or simply from the rehearsal effect.\n\n- Although the paper proposes a dynamic rank allocation method, it lacks to compare against latest existing approaches such as SaLoRA [1] and DyLoRA [2]. Including these as baselines is important to fairly evaluate the contribution and effectiveness of the proposed method.\n\n- According to [3], standard weight decay in LoRA already plays a role similar to nuclear norm regularization. This paper adopts the HiRA framework and sets weight decay to zero, which may lead to substantial differences. This raises the concern that standard LoRA may already achieve a similar effect to DiRA, and clearer justification and discussion are needed.\n\n- While the use of nuclear norm can be interpreted from a rank allocation perspective, the method itself is relatively simple, and there is a lack of empirical analysis showing how LoRA adapts its rank during training. A deeper quantitative analysis of the adapter's rank distribution or its variation over time would strengthen the validity of the approach. 
Furthermore, since the adapter's rank is still capped at a fixed maximum \\(r\\), the method does not enable higher-rank modeling or improved computational efficiency, which limits its contribution.\n\n- The paper does not include results on widely used NLU tasks, and comparisons with more recent LoRA variants are lacking.\n\n- There is no analytical or empirical analysis of time/space complexity.\n\n> [1] Li, Mingjie, et al. \"SaLoRA: Safety-Alignment Preserved Low-Rank Adaptation.\" The Thirteenth International Conference on Learning Representations.\n>\n> [2] Valipour, Mojtaba, et al. \"DyLoRA: Parameter-Efficient Tuning of Pre-trained Models using Dynamic Search-Free Low-Rank Adaptation.\" Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics. 2023.\n>\n> [3] Jang, Uijeong, Jason D. Lee, and Ernest K. Ryu. \"LoRA training in the NTK regime has no spurious local minima.\" Proceedings of the 41st International Conference on Machine Learning. 2024.",
"questions": "- Please refer to the Weaknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T13:06:20",
"modification_date": "2025-11-12T11:54:52",
"review_url": "https://openreview.net/forum?id=MlQ0goJG9U¬eId=rNLH0W0nkU",
"license": "CC BY 4.0"
}
] |
B9iMn59jFE | https://openreview.net/forum?id=B9iMn59jFE | OmniEval: A Benchmark for Evaluating Omni-modal Models with Visual, Auditory, and Textual Inputs | 4 | 4.25 | [
6,
6,
2,
2
] | [
4,
4,
4,
5
] | 4 | [
"Omni models",
"Benchmark",
"Multimodality"
] | In this paper, we introduce OmniEval, a benchmark for evaluating multimodal Chinese and English video understanding, which encompasses visual, auditory, and textual inputs. Compared with existing benchmarks, our OmniEval has several distinctive features: (i) Full-modal collaboration: We design evaluation tasks that highlight the strong coupling between audio and video, requiring models to effectively leverage the collaborative perception of all modalities; (ii) Diversity of videos and tasks: OmniEval includes 1,000 audio-visual synchronized videos, with 307 Chinese videos and 558 English videos, systematically categorized into four major domains. (iii) Diversity and granularity of tasks: OmniEval contains 2783 question-answer pairs, comprising 1412 open-ended questions and 1371 multiple-choice questions. These questions are divided into four major task types and 12 subtask types to achieve comprehensive evaluation. Among them, we have introduced a more granular video localization task, which named as Grounding. Based on our OmniEval, we have extensively evaluated a variety of state-of-the-art models. The experimental results indicate that existing models face significant challenges in understanding the real world, with the best accuracy rate being only 10%. We hope that our OmniEval can provide a platform for evaluating the ability to construct and understand coherence from the context of all modalities. | datasets and benchmarks | https://openreview.net/pdf?id=B9iMn59jFE | 2025-09-19T15:16:38 | 5 | [
{
"id": "1I41nebHsn",
"forum": "B9iMn59jFE",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16567/Reviewer_3gEp",
"reviewer_name": "Reviewer_3gEp",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces OmniEval, a new benchmark designed to evaluate the ability of omni-modal models to simultaneously understand visual, auditory, and textual inputs. The main contributions are threefold:\n1. Addresses an Evaluation Gap: OmniEval focuses on \"full-modal collaboration,\" with tasks meticulously designed to require a strong coupling of information from multiple modalities (video + audio + text) to answer correctly. This addresses the limitation of existing benchmarks that often evaluate modalities in isolation.\n2. High-Quality Data Curation: OmniEval consists of 2,617 Q&A pairs (in both Chinese and English). Its curation pipeline (Section 3.3) combines \"automated filtering\" (removing overly simple samples) with critical \"manual judgment\" (ensuring multi-modal dependency), ensuring the benchmark's difficulty and validity.\n3. Reveals Deep Model Flaws: Using this benchmark, the paper's modal ablation study (Section 4.3) discovers that current SOTA omni-modal models struggle significantly with fusing raw audio-visual signals. They exhibit an over-reliance on \"text\" (subtitles/captions) , and their performance can even degrade significantly when forced to process raw video frames or audio.",
"strengths": "1. This paper accurately identifies the \"blind spot\" in current omni-modal evaluation—namely, the lack of assessment for synergistic understanding. The paper's core, original concept is its \"full-modal collaboration\" evaluation philosophy.\n2. The inclusion of bilingual (CN/EN) support and the fine-grained \"Grounding\" (temporal localization) task effectively fills gaps left by existing benchmarks.\n3. The hybrid pipeline described in Section 3.3 is excellent. The \"Judgment\" step (to ensure multi-modal dependency) and the \"Distribution\" correction step (to fix LLM-generation biases) are particularly crucial for ensuring the benchmark's quality and validity, setting it far apart from simple \"collect-and-label\" efforts.",
"weaknesses": "1. The benchmark contains a total of 2,617 Q&A pairs derived from 810 videos. When these are divided among 3 major categories, 12 sub-task types, and 2 languages, the number of samples for each fine-grained task (e.g., \"Grounding\" in Chinese) may be very small. This raises concerns about the statistical significance of the evaluation results. For example, the \"Grounding\" task has only 342 pairs in total; further subdivision by language and format (OE/MC) may result in an insufficient sample size for robust conclusions.\n2. The paper's core claim of \"full-modal collaboration\" relies on a manual \"Judgment\" step to determine how many modalities are required to answer a question. This is an inherently subjective task. The paper lacks a detailed protocol for this step. For instance, what was the Inter-Annotator Agreement (IAA)? How were disagreements (e.g., whether audio was truly necessary) resolved? Without IAA data, the reliability of this core claim is questionable.",
"questions": "1. As mentioned in Weakness #1, the 2,617 Q&A pairs might be spread too thin across 12 sub-tasks and 2 languages Could the authors please provide a detailed table showing the Q&A distribution across (12 sub-tasks $\\times$ 2 languages $\\times$ 2 formats)? For categories with a small sample size (e.g., \"Grounding\" 28), how do the authors ensure the statistical robustness of the evaluation results?\n2. As mentioned in Weakness #2, the manual \"Judgment\" step is central to the paper's methodology. Could the authors please provide details on the annotation guidelines for this task? Was Inter-Annotator Agreement (IAA) calculated for this step? If so, what was the score (e.g., Cohen's Kappa or Krippendorff's Alpha), and how were disagreements among annotators resolved?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T21:39:21",
"modification_date": "2025-11-12T13:50:45",
"review_url": "https://openreview.net/forum?id=B9iMn59jFE¬eId=1I41nebHsn",
"license": "CC BY 4.0"
},
{
"id": "xigupzc1Cg",
"forum": "B9iMn59jFE",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16567/Reviewer_siC2",
"reviewer_name": "Reviewer_siC2",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "The paper proposes a bilingual (EN/CN) benchmark: OmniEval, for omni-modal models that process video + audio + text jointly. The suite contains 780 synchronized videos and 2,411 QA items spanning open-ended (1,278) and multiple-choice (1,133) formats. These are categorized into 3 major task families and 12 sub-tasks, with a special temporal grounding task (moment and time-span) intended to probe fine-grained spatiotemporal understanding. The construction pipeline aggregates videos (YouTube/Youku; partly via FineVideo and Youku-mplug), obtains captions/subtitles and ASR transcripts, filters out low-speech videos, then generates and manually curates QAs. The evaluation reports baselines for Qwen2.5-Omni, Baichuan-Omni, MiniCPM-O, VITA-1.5, and Gemini 2.5 Pro, including language-wise performance and ablations isolating the contribution of captions, audio, and raw video; captions typically help the most, while adding raw video sometimes degrades scores under the current pipelines. Grounding OE grading uses time-tolerance for moments and IoU≥0.5 for spans. Overall, Gemini 2.5 Pro tops the table; Qwen2.5-Omni-7B is the strongest open model.",
"strengths": "1) Unlike vision-only or audio-text setups, OmniEval evaluates joint A+V+T reasoning, with both English and Chinese coverage, which is still underexplored.\n\n2) he mix of OE (1,278) and MC (1,133), distributed across 12 sub-tasks, supports both generative analysis and standardized accuracy comparisons; the Grounding category (moment/time-span) is a good addition.\n\n3) The adaptive timestamp tolerance and IoU≥0.5 criteria for OE grounding are explicit and easy to re-implement, which helps reproducibility of temporal evaluation.\n\n4) Tables 6–7 show that captions/subtitles consistently lift performance, whereas adding raw video (or audio) can be mixed or negative. This seems to provide a useful empirical signal for the community about current model bottlenecks\n\n5) The paper has good coverage of experiments. They evaluate multiple open source and proprietary models.",
"weaknesses": "1) The pipeline excludes low-speech videos (ASR subdensity < 0.5), which systematically under-samples silent, music-dominant, or non-verbal soundscapes. This may bias the benchmark towards text-anchored items and may under-stress purely audio-visual fusion. A short analysis of discarded vs kept videos (content type, duration, genre) would help to clarify the bias.\n\n2) OE scoring uses an LLM-as-judge/extractor but the paper provides no agreement statistics (e.g., κ with CIs) or dual-judge disagreement audit.\n\n3) The time tolerance τ scales with FPS or (duration / max frames). While practical, this makes correctness thresholds dataset- and setting-dependent; a brief sensitivity table (vary τ, IoU) would calibrate how robust rankings are to scoring hyperparameters.\n\n4) The ablations indicate captions lift scores far more reliably than adding frames; in several models, adding video reduces accuracy. This suggests tasks often remain text-solvable, weakening claims of deep A+V synergy. Perhaps, including a \"video-only” diagnostic track (and per-subtask breakdown) would help to separate textual vs visual competence.",
"questions": "1) Could you report κ with 95% CIs for OE scoring, stratified by Perception/Understanding/Reasoning and by language (EN/CN)? A small stratified audit would bring more confidence.\n\n2) Would you consider reportingASR WER (or a proxy) for EN and CN, and show sensitivity of results to ASR errors on speech-heavy tasks?\n\n3) Could you please add per-source (YouTube/Youku/FineVideo/Youku-mplug) and per-genre accuracy tables to reveal any inheritance effects?\n\n4) Could you include a short analysis varying τ (moment) and IoU threshold (span) to show ranking stability?\n\n5) Since OmniEval includes tasks with differing temporal scales (moment-level vs. span-level grounding), can the authors quantify temporal granularity sensitivity, e.g., whether models perform uniformly across events lasting <3s versus >15s?\n\n6) Given that the bilingual corpus sources (YouTube/Youku) may differ culturally, do accuracy disparities correlate with genre or region?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T07:54:54",
"modification_date": "2025-11-12T13:50:46",
"review_url": "https://openreview.net/forum?id=B9iMn59jFE¬eId=xigupzc1Cg",
"license": "CC BY 4.0"
},
{
"id": "p19cfIfMv2",
"forum": "B9iMn59jFE",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16567/Reviewer_hcPT",
"reviewer_name": "Reviewer_hcPT",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces OmniEval, a comprehensive benchmark designed to evaluate omni-modal large language models (MLLMs), especially those capable of processing text, images, and audio. The benchmark consists of diverse tasks covering perception, understanding, and reasoning across modalities.",
"strengths": "1. The proposed benchmark spans text, video, and audio, making it reflective of real-world multimodal task needs.\n\n2. The proposed benchmark includes both English and Chinese videos, posing additional challenges for omni-foundation models.",
"weaknesses": "1. A detailed comparison of different MLLMs on 12 tasks is missing. \n\n2. Qualitative comparison of different MLLMs on the proposed benchmark is missing.\n\n3. The authors highlighted temporal grounding as a key feature of this benchmark, but I could not find how MLLMs perform on this task in the experiments.",
"questions": "1. L214-215, what does it mean by generating the captions using appropriate methods? what appropriate methods?\n\n2. What is the LLM used for evaluating open-ended QAs?\n\n3. Table 7 shows that audio is useful in current MLLMs. Can you give some QA examples where audio information is important in answering the question, especially in the Grounding task.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T18:18:18",
"modification_date": "2025-11-12T13:50:46",
"review_url": "https://openreview.net/forum?id=B9iMn59jFE¬eId=p19cfIfMv2",
"license": "CC BY 4.0"
},
{
"id": "hoXrbk1t98",
"forum": "B9iMn59jFE",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16567/Reviewer_sdL2",
"reviewer_name": "Reviewer_sdL2",
"rating": 2,
"confidence": 5,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces the OmniEval benchmark, designed for evaluating a model's understanding of synchronized audio-visual content. The videos are sourced from existing datasets (Finevideo and Youku-mplug). The authors' pipeline involves first using automated tools to generate visual captions and ASR transcripts for the videos. These textual annotations are then used as input for LLMs to generate question-answer pairs, which are subsequently refined by human annotators. The paper also introduces a \"Grounding\" task to test precise spatio-temporal understanding.",
"strengths": "- Bilingual Benchmark: OmniEval is a bilingual video understanding benchmark that includes both English and Chinese videos and questions, which is valuable for evaluating multilingual models.\n- Audio-Visual Grounding Task: OmniEval introduces the \"Grounding\" task, which is an important capability for audio-visual understanding.",
"weaknesses": "- Missing Comparison with Existing Benchmarks: For the evaluation of audio-visual video understanding, there are already established benchmarks (e.g., AVUT, DailyOmni), which are not discussed or compared in the paper.\n- Limitations of the Data Generation Methodology: The method of using an LLM to generate questions based on video captions and audio subtitles, while cost-effective, has several critical limitations:\n - Based on empirical evidence, Q&A pairs generated by LLMs tend to be of limited difficulty.\n - Since the visual and audio information are provided as decoupled text streams (captions and subtitles), the LLM is likely to generate questions that are superficial, touching only on the surface-level content of each modality independently. This method fails to produce questions that probe a deeper, more challenging understanding derived from the interplay between audio and video.\n - Furthermore, the synchronicity between the visual and auditory events is not considered during question generation, which undermines the benchmark's core purpose of evaluating omni-modal inputs. This oversight could even lead to questions with errors or ambiguous answers.\n- Narrow Focus on ASR for Audio: Regarding the audio modality, the benchmark appears to focus almost exclusively on ASR (speech content), neglecting the role of general audio events (e.g., environmental sounds, music, sound effects), which are crucial for comprehensive scene understanding.\n- Lack of Detail on Grounding Annotation: The annotation process for the Grounding task is not described. The paper provides few details on how these temporal annotations were created or how their accuracy and consistency were ensured.\n- Incomplete Experiments: The experimental section is insufficient. The paper only compares several omni-LLMs but lacks a crucial baseline of strong vision-only LLMs with subtitles as an additional text input. 
Moreover, the paper fails to compare against other powerful audio-visual LLMs such as Video-LLaMA 2 and Video-SALMONN 2.",
"questions": "- What is the performance of strong vision-only LLMs (provided with subtitles as an additional input) on the OmniEval benchmark?\n- Which LLM is used for annotation in Section 3?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T18:55:07",
"modification_date": "2025-11-12T13:50:46",
"review_url": "https://openreview.net/forum?id=B9iMn59jFE¬eId=hoXrbk1t98",
"license": "CC BY 4.0"
}
] | |
NQsdnYkCar | https://openreview.net/forum?id=NQsdnYkCar | Arbitrary-Order Block SignSGD for Memory-Efficient LLM Fine-Tuning | 6 | 3.75 | [
4,
8,
6,
6
] | [
3,
4,
4,
4
] | 4 | [
"Block-Coordinate Optimization",
"SignSGD",
"Large Language Models (LLMs)",
"Memory-Efficient Fine-Tuning"
] | We propose \textbf{ABSignSGD}, a block‑coordinate variant of sign-based descent with flexible block selection that enables memory‑ and runtime‑efficient full‑parameter fine‑tuning of large language models. We present a unified convergence analysis under mild conditions, covering both the base method and a \textit{majority‑vote} extension for distributed training. The latter improves communication efficiency by aggregating only gradient signs rather than averaging full gradients. Experiments on Qwen3‑8B and Llama3-8B, spanning mathematical reasoning and general instruction‑following tasks, show that ABSignSGD converges faster per iteration and delivers superior downstream performance while reducing both runtime and memory usage compared to existing methods. Ablation studies further indicate that the memoryless sign-based update naturally complements block‑wise updates, explaining the method’s strong empirical performance. | optimization | https://openreview.net/pdf?id=NQsdnYkCar | 2025-09-18T09:36:34 | 4 | [
{
"id": "iB5Ox799f4",
"forum": "NQsdnYkCar",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10064/Reviewer_LQY1",
"reviewer_name": "Reviewer_LQY1",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The paper combines several elements (SignSGD, block-coordinate updates, majority vote) for improving the efficiency of fine tuning LLMs along various dimensions. I suspect the paper is mainly of interest to academic researchers studying optimizer design, compression, or distributed training efficiency. I suspect that few LLM training groups are likely to use sign-based methods in production due to the likely need for substantially more hyperparameter tuning even in cases where they can achieve comparable results to FP-based methods, undoing any efficiency gains.\n\n- The paper consider the fine-tuning step of LLMs.\n- Proposes ABSignSGD, combining blockwise updates, 1-bit sign gradients, and arbitrary (depth-biased) scheduling.\n- Extends to ABSignSGD-MV for distributed training via 1-bit majority-vote aggregation.\n- Provides standard convergence guarantees under sign-agreement assumptions.\n- Shows empirical gains in VRAM, runtime, and fine-tuning accuracy vs. LoRA, GaLore, Apollo, and BAdam.\n- Lacks validation at larger scales where the claimed advantages would truly matter.",
"strengths": "- Clear exposition and empirical reproducibility: Algorithms, tables, and ablations are well presented.\n- Consistent, modest memory/runtime improvements relative to known baselines at 8B scale.\n- Simple implementation concept—no optimizer states, potentially useful for educational or constrained hardware studies.",
"weaknesses": "- Limited practical relevance: SignSGD and its derivatives are rarely, if ever, used in large-scale LLM tuning, and this is an incremental improvement over SignSGD, making widespread adoption unlikely.\n- Limited scope: this paper only addresses fine tuning, a step that takes up only a small part of the compute budget of an LLM.\n- Scale mismatch: Experiments are confined to 8B models, but memory and communication constraints dominate only at tens or hundreds of billions of parameters. Small differences on 8B model performance may make the difference between a competitive and uncompetitive large model.\n- Questionable net compute advantage: These kinds of methods introduce their own set of hyperparameters, which also require tuning. The paper does not show that the advantages remain when that tuning is taken into account. That is, I may need to conduct quite a few runs of the method in the paper in order to be fairly confident that I have a network that performs as well as straightforward tuning with one of the other methods would have yielded.\n- In practice, this would likely be used as just another fine tuning candidate, and people would use its output only if it happens to yield better accuracy; in effect, a method intended for generating for improving compute performance would end up being just another ad hoc fine tuning variant.",
"questions": "- Can you respond about the concerns about hyperparameter tuning and the overall process efficiency?\n- Why did you not try to fine-tune larger LLMs? It seems to me this paper would be much stronger if you could directly demonstrate that the method allows some fine tuning of large models on limited hardware that are impossible to fine tune using any of the other methods.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T21:57:39",
"modification_date": "2025-11-12T12:24:17",
"review_url": "https://openreview.net/forum?id=NQsdnYkCar¬eId=iB5Ox799f4",
"license": "CC BY 4.0"
},
{
"id": "NspLk0fIcC",
"forum": "NQsdnYkCar",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10064/Reviewer_qPmE",
"reviewer_name": "Reviewer_qPmE",
"rating": 8,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The authors propose an algorithm for memory-efficient finetuning of LLMs by combining SignSGD and block-coordinate descent. It has good empirical performance compared to many state-of-art finetuning methods, and the authors also provide convergence proofs for their algorithm.",
"strengths": "- The algorithm is simple and effective. It combines SignSGD with block-coordinate descent to produce a memory efficient algorithm for finetuning LLMs. It is about 10% more memory efficient than competing methods and also converges faster. It also achieves good generalization performance on many finetuning benchmarks. \n\n- In addition to the base algorithm, the authors also suggest a distributed version that employs majority voting to reduce the communication cost. \n\n- The authors provide convergence proofs of their algorithms, based on previous works from Safaryan & Richtarik 2021.",
"weaknesses": "- Although the authors propose the communication efficient version of their algorithm using majority voting, I don't see any empirical evaluations on that in the paper.",
"questions": "- What is the rank of LoRA used in the experiments? How are they selected? \n\n- What are the block size used in the experiments? Are they as small as individual Q,K,V weight matrices or groups of them? \n\n- Where are the results for the block-update rule ablation study at the end of Section 4? I just see the discussion but no tables or figures for that.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T23:38:34",
"modification_date": "2025-11-12T12:24:17",
"review_url": "https://openreview.net/forum?id=NQsdnYkCar¬eId=NspLk0fIcC",
"license": "CC BY 4.0"
},
{
"id": "NaXXDwyicQ",
"forum": "NQsdnYkCar",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10064/Reviewer_UBss",
"reviewer_name": "Reviewer_UBss",
"rating": 6,
"confidence": 4,
"soundness": 4,
"contribution": 3,
"presentation": 2,
"summary": "This paper studies the memory-efficient optimization methods for LLM training. Related works have adopted block-coordinate descent to reduce the memory cost of gradients and optimizer states of the popular optimizers. This paper proposes an algorithm ABSignSGD which introduces SignSGD into the block-coordinate descent to further reduce the memory cost since the signal consumes only 1 bit for one coordinate. \n\nABSignSGD also adopts a flexible rule of block selection: each block is selected in at most B iterations. The theoretical analysis shows that if the Success Probability Bound is satisfied, ABSignSGD achieves the $O(1/\\sqrt{K})$ convergence rate in the form of alignment norm. The experiments of fine-tuning LLMs show that ABSignSGD consumes the least memory and runtime among the baselines. It also converges faster and performs better in downstream tasks than other memory-efficient algorithms. The ablation studies indicate the reason why SignSGD is better than SGD and Adam when coupled with BCD.\n\nThe main contribution of this paper is that it goes a step further based on BCD and BADAM by adopting SignSGD as the optimizer in the Block-Coordinate update. The experiment results verify that this is a significant solution for memory-efficient training.",
"strengths": "SignSGD has been mainly studied in distributed learning or federated learning to reduce the communication cost. On the other hand, recent studies about using Block-Coordinate gradient to reduce the memory cost mainly adopt SGD or Adam as the optimizer. Thus, the originality of this work is good. The authors provide the theoretical analysis for the proposed algorithm and conduct detailed ablation studies to explain why SignSGD performs well in BCD. I think this work is of high quality and significance. Finally, most content of this work is clear and easy to follow.",
"weaknesses": "This work proposes an extra extension of ABSignSGD, i.e. its distributed version. However, there is no further analysis, including both convergence analysis and experiments for it in the following contents (only a robustness analysis in the supplementary material). The meaning of proposing such an algorithm is not clear.",
"questions": "This work proposes a “depth-biased” update rule in the experiments, but this part is somewhat confusing. Specifically, ABSignSGD selects the block with the minimal timestamp to update. However, it also says that “This event-driven rule mimics asynchronous execution, where each block becomes eligible for update once its gradient is available”. This seems somewhat contradictory since the former statement indicates that ABSignSGD updates the blocks in a serial way, why it mimics asynchronous execution. In addition, how is the gradient-computation latency obtained?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T19:33:28",
"modification_date": "2025-11-12T12:24:18",
"review_url": "https://openreview.net/forum?id=NQsdnYkCar¬eId=NaXXDwyicQ",
"license": "CC BY 4.0"
},
{
"id": "COwqprLdM2",
"forum": "NQsdnYkCar",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10064/Reviewer_WMLw",
"reviewer_name": "Reviewer_WMLw",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This paper introduces ABSignSGD, which bridges sign-based optimization and block-coordinate training through an arbitrary-order block update rule and a distributed 1-bit majority-vote aggregation scheme.\nThe method achieves strong empirical and theoretical performance, offering substantial memory, runtime, and communication efficiency while maintaining competitive accuracy.\nComprehensive ablations and analyses confirm its practical effectiveness for resource-constrained LLM fine-tuning.",
"strengths": "1. The paper bridges sign-based optimization with block-coordinate methods, introducing an arbitrary-order update rule and 1-bit majority-vote communication that collectively advance memory- and communication-efficient training.\n\n2. Its technical quality is supported by a theoretical convergence analysis, extensive empirical validation on LLMs, and comprehensive ablation and memory analyses that substantiate the design choices.\n\n3. The work offers a simple yet effective solution for large-model fine-tuning under tight hardware constraints.",
"weaknesses": "1. The method’s reliance on *sign-only updates* discards gradient magnitude information, which may limit precision and hinder convergence in magnitude-sensitive or pretraining scenarios.\n2. The approach is primarily evaluated in fine-tuning settings on 8B-scale models, leaving its scalability and effectiveness for full pretraining or larger architectures less explored.\n3. The convergence analysis assumes a fixed sign-agreement probability and bounded block-update interval, conditions that may not strictly hold in high-noise or asynchronous environments.\n4. While the arbitrary-order rule is conceptually appealing, its implementation details (e.g., latency estimation, update scheduling) could benefit from deeper empirical or theoretical justification.",
"questions": "Q1. In ABSignSGD, each iteration updates only one active block using the sign of its gradient. Could you clarify whether this *sign-only* update may lead to misalignment between active and inactive blocks across layers during training? How do you ensure stability when different blocks are updated at different frequencies?\n\nQ2. How is a *block* defined in your implementation? Does one block correspond exactly to a Transformer layer, or can it be a smaller unit (e.g., Q, K, V projection matrices) or a larger group of layers? How did you determine the optimal block size (N) in practice? Within a selected block/layer, are all submodules (attention projections, output projection, and feed-forward networks) updated uniformly with the same sign-based rule, or do you apply differentiated treatment among them?\n\n\nQ3. The paper introduces an “arbitrary-order” block selection strategy with a bounded update interval assumption. Could you elaborate on how the *event-driven* depth-biased rule is implemented in practice? Specifically:\n * How are the latency parameters ( \\tau_i ) estimated for different layers?\n * What determines the update readiness timestamp (T_i)?\n * How does this scheduling compare empirically to random or cyclic updates in terms of convergence stability?\n\nQ4. This paper mention that deeper layers are updated more frequently. Could you provide more detail on the precise mathematical rule or heuristic that governs this frequency difference? For example, is it proportional to the inverse of the estimated backward latency or some adaptive function? Since deeper layers are updated more often than shallow ones, does this induce any gradient misalignment or optimization imbalance between early and late layers? Have you observed any degradation in generalization or representation consistency due to this update asymmetry?\n\nQ5. ABSignSGD is designed as a *stateless* and block-switching optimizer. 
In contrast, widely used adaptive optimizers such as Adam and AdamW maintain two exponential moving averages (the first and second moments), which require continuous accumulation of gradient history for each parameter. Given that block switching interrupts this continuity, is ABSignSGD fundamentally limited to *SGD-like* (stateless) optimizers?\nHave you explored—or do you foresee the feasibility of—extending your approach to *stateful* variants that preserve per-parameter moment information while still supporting arbitrary-order block updates?\n\n\nQ6: The current paper investigates ABSignSGD primarily in the fine-tuning setting, where model parameters start from pretrained weights and the optimization trajectory remains relatively close to the original model. In contrast, pretraining begins from random initialization and requires learning the full parameter distribution from scratch.\nGiven that ABSignSGD discards gradient magnitude information—updating parameters solely based on gradient signs (±1 per coordinate)—do you expect this sign-only update rule to maintain sufficient precision for large-scale pretraining?\nIn particular, could the absence of gradient magnitude information hinder convergence when the optimization landscape requires large, magnitude-sensitive adjustments early in training? Have you considered or tested any hybrid variants (e.g., partial sign scaling or adaptive step-size modulation) to mitigate this potential limitation?\n\nQ7: The paper establishes an (O(1/\\sqrt{K})) convergence rate under mild assumptions using the alignment norm. 
Could you clarify how sensitive this convergence behavior is to the *sign-agreement probability* (ρ > 0.5) in large-scale, high-noise training regimes?\nIn particular, do you observe any empirical breakdown when ρ approaches 0.5 (e.g., with small batch sizes or unstable gradients), and how does ABSignSGD behave in comparison to standard SignSGD or BAdam under such conditions?\n\nQ8: For ABSignSGD-MV, you mention a 960× reduction in communication cost compared to standard DDP by transmitting only gradient signs.\nCould you elaborate on:\n\n* Whether the MV aggregation is performed per-block or across the full model?\n* How sensitive convergence is to the number of participating agents?\n* Whether partial synchronization (e.g., delayed or stale majority votes) degrades convergence stability in multi-node environments?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-22T03:18:52",
"modification_date": "2025-11-12T12:24:18",
"review_url": "https://openreview.net/forum?id=NQsdnYkCar¬eId=COwqprLdM2",
"license": "CC BY 4.0"
}
] | |
7y9IKjl8dt | https://openreview.net/forum?id=7y9IKjl8dt | SPARK: Stepwise Process-Aware Rewards for Reference-Free Reinforcement Learning | 4 | 3.666667 | [
2,
4,
6
] | [
4,
3,
4
] | 3 | [
"Process Reward Models",
"Inference-time Scaling",
"Reference-free Reinforcement Learning",
"Mathematical Reasoning",
"Synthetic Verification"
] | Process reward models (PRMs) that provide dense, step-level feedback have shown promise for reinforcement learning, yet their adoption remains limited by the need for expensive step-level annotations or ground truth references. We propose SPARK--a three-stage framework where in the first stage a generator model produces diverse solutions and a verifier model evaluates them using parallel scaling (self-consistency) and sequential scaling (meta-critique). In the second stage, we use these verification outputs as synthetic training data to fine-tune generative process reward models, which subsequently serve as reward signals during training. We show that aggregating multiple independent verifications at the step level produces training data for process reward models that surpass ground-truth outcome supervision—achieving 67.5 F1 on ProcessBench (a benchmark for identifying erroneous steps in mathematical reasoning) compared to 66.4 for reference-guided training and 61.9 for GPT-4o. In the final stage, we apply our generative PRM with chain-of-thought verification (PRM-CoT) as the reward model in RL experiments on mathematical reasoning, and introduce format constraints to prevent reward hacking. Using Qwen2.5-Math-7B, we achieve 47.4\% average accuracy across six mathematical reasoning benchmarks, outperforming ground-truth-based RLVR (43.9\%). Our work enables reference-free RL training that exceeds ground-truth methods, opening new possibilities for domains lacking verifiable answers or accessible ground truth. | We train process reward models without ground truth by aggregating multiple verification attempts through inference-time scaling, achieving better performance than ground-truth-based approaches. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=7y9IKjl8dt | 2025-09-19T07:33:11 | 3 | [
{
"id": "87ws3DfBoV",
"forum": "7y9IKjl8dt",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14538/Reviewer_YhFA",
"reviewer_name": "Reviewer_YhFA",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 2,
"presentation": 3,
"summary": "This paper introduces a new method for generating a verification dataset for training PRMs based on scaling a step-wise verification process parallely and sequentially. This synthetic dataset, which comprises step-level judgment with rationales, is used to train these PRMs that later on serves as reward models for RL training. The work presents evaluation on two setups: first, on ProcessBench, claiming that the proposed method for PRM training surpasses GPT-4o and reference-guided verification; and second, on RL training, claiming a performance that matches or exceeds ground truth performance.",
"strengths": "- The method is conceptually simple and the paper is easy to follow.\n\n- The comparison against GPT-4o and Reference-Guided verification in ProcessBench suggests that the employed methodology is promising, since it does not rely on ground-truth nor on a frontier model.",
"weaknesses": "- The main concern is the lack of evidence to assess statistical significance in the results. The paper does not mention how many experimental seeds were used (I assume it is a single one), and no results in the paper brings confidence intervals. A well known fact supported by prior literature is that RL training is extremely stochastic [1, 2], which is also observed in math reasoning benchmarking [3], so it is unclear whether the reported takeaways are meaningful or just observation noise. This is particularly necessary to the RL training results (Figs 4-7 at least) but also necessary for the PRM training, as different seeds may lead to different verification performances.\n\n- As the paper states in Section 4.4, one of the limitations from the developed PRMs is that they are still vulnerable to reward hacking, which is one of the main reasons why RLVR is still the standard choice over PRMs. The reward hacking issue is very much present in the “Step-Augmented Process Rewards”, which is the method that strongly relied on step-level rewards.\n\n- The evaluation methodology in Figure 5 is flawed. The Figure highlights a 7.6% difference between PRM-CoT and the other methods, but ignores the fact that prior checkpoints present considerably better performance for ORM. Besides the issue of reporting a single seed, the work also does not provide a methodology on checkpoint selection, and the evaluation on t = 300 is arbitrary. \n\n- The paper does not bring a comparison of the computational cost involved in (1) generating the training dataset; and, more importantly, (2) the cost involved in performing inference in the trained PRMs. From the paper, it is unclear if the proposed PRMs use variable test-time computation, and it would be extremely important to ensure fairness in computation during inference.\n\n- There are other methods to train RL without verifiable rewards, e.g., [4]. 
It would be nice to compare against them.\n\nOverall, while I see the method as a simple yet interesting direction for generating PRM training datasets, the experimental methodology of the paper is currently flawed, which makes the provided evidence weak/questionable. I also believe the main claim of “enabling RL to scale beyond verifiable domains” is somewhat too strong and not supported, especially given that PRMs in general (including the ones in this work) are generally vulnerable to reward hacking, and the proposed method does not address this problem.",
"questions": "- During verifier inference, are the inference scaling methods also used? Or are they used solely during dataset generation?\n\n- The paper mentions that the datasets contain 63k examples after filtering. Which filtering is that?\n\n\nReferences\n\n\n[1[ Henderson et. al. Deep Reinforcement Learning that Matters. AAAI, 2018.\n\n[2] Agarwal et. al. Deep Reinforcement Learning at the Edge of the Statistical Precipice. NeurIPS, 2021.\n\n[3] Hochlehnert et. al. A Sober Look at Progress in Language Model Reasoning: Pitfalls and Paths to Reproducibility. COLM, 2025. \n\n[4] Zhao et. al. Learning to Reason without External Rewards, 2025.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-04T05:43:06",
"modification_date": "2025-11-12T13:22:08",
"review_url": "https://openreview.net/forum?id=7y9IKjl8dt¬eId=87ws3DfBoV",
"license": "CC BY 4.0"
},
{
"id": "UGIiqJOc2C",
"forum": "7y9IKjl8dt",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14538/Reviewer_9QxJ",
"reviewer_name": "Reviewer_9QxJ",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper introduces SPARK, a framework to train process reward models (PRMs) for reinforcement learning (RL) entirely without ground-truth references. The method uses a \"generator-verifier\" system with inference-time scaling (like self-consistency and meta-critique) to create high-quality synthetic step-level verification data. This data is then used to train a generative PRM (PRM-CoT), which subsequently provides reward signals for RL training. SPARK enables reference-free training that outperforms ground-truth methods. On the ProcessBench evaluation, the SPARK-trained PRM achieved 67.5 F1, surpassing the reference-guided (ground-truth) model's 66.4 F1. In RL experiments, SPARK's PRM-CoT led to 47.4% average accuracy, exceeding the ground-truth-based RLVR's 43.9%.",
"strengths": "1. The design of SPARK is intuitive to me.\n\n1. Instead of relying on a static, expensive ground-truth dataset, SPARK uses a dynamic generator-verifier framework. It leverages inference-time scaling techniques (like self-consistency and meta-critique) to aggregate multiple verification attempts, effectively bootstrapping a high-quality, step-level training dataset from the model's own reasoning capabilities.\n\n1. When used in RL training, SPARK's generative PRM enables the policy model to achieve 47.4% average accuracy on math benchmarks. This result exceeds the performance of the ground-truth-based method, RLVR, which achieved 43.9%.\n\n1. This paper also systematically analyzes reward hacking patterns in process reward-based RL.",
"weaknesses": "1. The method is motivated by the need to apply RL to subjective domains without ground truth (e.g., creative writing, ethical reasoning). However, all experiments are conducted exclusively in mathematical reasoning, a domain where objective ground truth does exist. This creates a mismatch between the problem the method claims to solve and the domain in which it is actually validated.\n\n1. The paper provides a systematic analysis of reward hacking patterns. However, the identified patterns (e.g., solution appending, step inflation, and step reduction) and their solutions are specific to the highly structured format of mathematical problem-solving, which also deviates from the motivation of applying RL to subjective domains without ground truth. The experimental design does not demonstrate whether these findings or solutions are transferable to the unstructured, open-ended tasks that are the method's ultimate target.\n\n1. The contribution of this paper is marginal. To my understanding, SPARK just combines self-consistency and meta-critique for auto annotation. Besides, there is no quality evaluation to show how accurate the SPARK-generated annotations are.",
"questions": "1. As mentioned in Weaknesses, how can SPARK be reliably generalized to the very domains the authors use for motivation (like creative writing or ethical reasoning), where an objective verifier doesn't exist and the verifier's critique is just as subjective and unverifiable as the generator's output?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T12:13:53",
"modification_date": "2025-11-12T13:22:08",
"review_url": "https://openreview.net/forum?id=7y9IKjl8dt¬eId=UGIiqJOc2C",
"license": "CC BY 4.0"
},
{
"id": "E3BNCYA1t7",
"forum": "7y9IKjl8dt",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14538/Reviewer_QmP1",
"reviewer_name": "Reviewer_QmP1",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces SPARK, a three-stage framework to train generative process reward models (PRMs) for reinforcement learning without requiring any ground-truth references or human annotations. In Stage I, it generates a synthetic dataset by using a generator model to create solutions and a more powerful verifier model to evaluate them using inference-time scaling techniques like self-consistency and meta-critique. In Stage II, this synthetic data is used to fine-tune generative PRMs. In Stage III, this reference-free PRM-CoT is used as the reward signal to train a policy model via RL, achieving state-of-the-art results on math reasoning benchmarks, even outperforming baselines trained with ground-truth outcome verification.",
"strengths": "The primary strength of this work is its novel and effective framework for training process reward models (PRMs) without access to ground-truth references. The reliance on expensive, step-level human annotations or gold solutions is a major bottleneck for scaling process-based feedback, and this paper offers a viable, reference-free alternative. The core idea of using inference-time scaling methods (like self-consistency and meta-critique) to generate high-quality synthetic verification data is a solid contribution.\n\nThe empirical results for the PRM itself are strong. The paper shows that a PRM trained on this synthetically generated data (specifically using step-level consistency) achieves a 67.5 F1 on ProcessBench. This result is impressive not only because it outperforms strong LLM critics like GPT-4o, but also because it surpasses a baseline PRM trained with access to ground-truth outcomes, which suggests that aggregating multiple noisy, reference-free process verifications has the potential to provide a richer training signal than verifying against a single ground-truth final answer.\n\nThe paper also demonstrates the downstream utility of this reference-free PRM in a practical RL setting and also provides a valuable analysis of reward hacking patterns, and introduces format constraints to mitigate them.",
"weaknesses": "There is a mismatch between the paper's core motivation and its experimental validation. The method is motivated as a solution for domains where ground truth is \"unavailable,\" \"subjective,\" or \"lacks clear verification criteria,\" such as creative writing or complex planning. However, all experiments are conducted exclusively on mathematical reasoning, a domain defined by objective, verifiable ground truth. \n\nThe computational cost of the Stage 1 synthetic data generation pipeline appears to be enormous and is not analyzed. To generate the \"Step Consistency\" dataset, the framework must run $N=16$ independent verifications for each of the $M=8$ solutions generated for each problem. This implies over 100 verifier passes per problem just to create a single training data point. This massive offline inference cost is not compared against the cost of alternative methods.",
"questions": "The paper opts for a generative PRM (Gen-PRM) over a discriminative one (Disc-PRM). However, the paper dedicate substantial analysis to reward hacking issues (e.g., solution appending, step inflation) that are unique to this generative approach . Given that Gen-PRMs introduce these complex new failure modes, what is their fundamental advantage over simpler discriminative PRMs that justifies this trade-off?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T01:02:45",
"modification_date": "2025-11-12T13:22:09",
"review_url": "https://openreview.net/forum?id=7y9IKjl8dt¬eId=E3BNCYA1t7",
"license": "CC BY 4.0"
}
] |
fdp7klHmnn | https://openreview.net/forum?id=fdp7klHmnn | Robust Learning of Diffusion Models with Extremely Noisy Conditions | 4 | 3.25 | [
4,
4,
2,
6
] | [
3,
3,
4,
3
] | 4 | [
"diffusion models",
"noisy conditions",
"generation controllability"
] | Conditional diffusion models gain generative controllability by incorporating external conditions. However, their performance significantly degrades with noisy conditions, such as corrupted labels in image generation or unreliable observations or states in control policy generation. This paper introduces a robust learning framework to address extremely noisy conditions in conditional diffusion models. We empirically demonstrate that existing noise-robust methods fail when the noise level is high.
To overcome this, we propose learning pseudo conditions as surrogates for clean conditions and refining pseudo ones progressively via the technique of temporal ensembling. Additionally, we develop a Reverse-time Diffusion Condition (RDC) technique, which diffuses pseudo conditions to reinforce the \textit{memorization effect} and further facilitate the refinement of the pseudo conditions.
Experimentally, our approach achieves state-of-the-art performance across a range of noise levels on both class-conditional image generation and visuomotor policy generation tasks. | generative models | https://openreview.net/pdf?id=fdp7klHmnn | 2025-09-06T12:39:37 | 4 | [
{
"id": "M0Cw5LZNwX",
"forum": "fdp7klHmnn",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2525/Reviewer_hdc3",
"reviewer_name": "Reviewer_hdc3",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This paper proposes a robust learning framework that helps conditional diffusion models train more effectively and efficiently under extremely noisy conditions. Building upon the theory of the *memorization effect* [1], the paper introduces *pseudo-conditions*, which gradually replace the original noisy conditions with clean conditions generated by the model itself through a *temporal ensembling* module. It further proposes *Reverse-time Diffusion Conditioning (RDC)*, which applies a reverse-time diffusion process to the pseudo-conditions as a form of conditional augmentation, enhancing the memorization effect and stabilizing training. Extensive experiments on CIFAR-10/100 and Push-T datasets demonstrate that the proposed method outperforms existing baselines (e.g., TDSM) under high noise levels.\n\n[1] S. Liu, et al. Early-learning regularization prevents memorization of noisy labels. NeurIPS 2020",
"strengths": "- The paper introduces a novel robust learning approach that optimizes the conditions during training, treating noisy conditions as denoising targets for data augmentation. The method achieves significant performance gains, especially in faster convergence, showing clear benefits during the early stages of training.\n- The proposed method is lightweight and easy to integrate into existing diffusion models, making it valuable for training on large-scale, low-quality datasets.\n- The paper is well-structured, with clear organization, well-designed experiments, and detailed implementation information.",
"weaknesses": "* The frequent use of early stopping reduces the generalizability of the approach, suggesting that the theoretical foundation is still incomplete. The authors are encouraged to analyze the causes of overfitting and provide theoretical explanations and improvement strategies.\n* The comparison baselines are relatively limited; including more baselines would make the results more convincing.\n* The theoretical support for RDC is insufficient. Although the ablation study shows that RDC is critical to performance, further theoretical analysis and more detailed ablations are needed to clarify its contribution.",
"questions": "- Can this method be extended to high-dimensional conditions, such as high-resolution images? In such cases, is it easy to recover pseudo-conditions from the diffusion model?\n- Could you provide more textual description of Figure 2 to help readers better understand the connection between the data and your argument? Is the mention of \"Figure 2(b)\" in line 229 of the text a typo?\n- Could you add a textual description of temporal ensembling? Although it originates from [1], as one of the new modules in the paper, and considering its early introduction, a brief explanation of its principles would help readers with limited background better understand your work and intentions.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T17:42:52",
"modification_date": "2025-11-12T10:57:46",
"review_url": "https://openreview.net/forum?id=fdp7klHmnn¬eId=M0Cw5LZNwX",
"license": "CC BY 4.0"
},
{
"id": "4XYS7AM9Lw",
"forum": "fdp7klHmnn",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2525/Reviewer_pT6p",
"reviewer_name": "Reviewer_pT6p",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces a training framework for conditional diffusion models designed to handle highly noisy conditioning inputs. The method combines two key components: Pseudo Condition and Reverse-time Diffusion Condition. The Pseudo Condition is produced by a lightweight prediction head that processes the UNet encoder’s output, serving as a surrogate for the clean condition and mitigating the impact of noise in the original input. Meanwhile, the Reverse-time Diffusion Condition employs a reverse SDE to generate a denoised estimate of the input condition. By integrating PC and RDC, the proposed approach effectively approximates clean conditioning signals, leading to improved performance across tasks such as label-conditioned image generation and image-conditioned visuomotor policy learning.",
"strengths": "The paper is clearly written and well-structured, making it easy to follow. The overall presentation is coherent, and Algorithm 1 effectively clarifies the proposed method. Most of the experiments generally support the authors’ claims, with the 2D toy experiments providing particularly intuitive and convincing evidence.",
"weaknesses": "The main concern with this paper lies in the experimental design and the limited exploration of key design choices.\n\nIn the image-conditioned visuomotor policy generation experiments, the “noisy” images are generated using only two specific distortion types. While these distortions can be regarded as forms of noise, they do not capture more general or practically relevant noise sources—such as typical camera noise or Gaussian noise. Incorporating experiments with these more realistic noise types would strengthen the method’s robustness and practical deployability in real-world scenarios.\n\nMoreover, the experiments are confined to the UNet architecture, leaving the generalization of the proposed approach to other diffusion model backbones—particularly Transformer-based architectures—unclear. The experiments in Section 4.2 are also limited to class-label–conditioned image generation with demonstrations. Extending the evaluation to other conditioning modalities, such as line sketches (as referenced in [1]), would further demonstrate the versatility of the proposed method.\n\n[1] Zhang, L., Rao, A., & Agrawala, M. (2023). Adding conditional control to text-to-image diffusion models. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 3836-3847).\n\nMinor:\n\nThe caption of Table 2 does not appear to correspond correctly to the content of the table.",
"questions": "N/A",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T16:53:10",
"modification_date": "2025-11-12T10:57:46",
"review_url": "https://openreview.net/forum?id=fdp7klHmnn¬eId=4XYS7AM9Lw",
"license": "CC BY 4.0"
},
{
"id": "uwaHCN69VZ",
"forum": "fdp7klHmnn",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2525/Reviewer_oTa3",
"reviewer_name": "Reviewer_oTa3",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "## Review of “ROBUST LEARNING OF DIFFUSION MODELS WITH EXTREMELY NOISY CONDITIONS”\n\n### Summary\nThe paper studies conditional diffusion models when the conditioning signal (labels for class-conditional image generation, visual observations for visuomotor policy generation) is extremely noisy. The authors argue existing robustness methods fail at high noise rates and propose two ideas:\n1. **Pseudo condition (PC):** learn a per-sample latent “pseudo condition” \\(\\hat y\\) using a lightweight prediction head and temporal ensembling, then gradually replace the noisy condition \\(\\tilde y\\) with \\(\\hat y\\).\n2. **Reverse-time Diffusion Condition (RDC):** inject noise into \\(\\hat y\\) via a reverse-time SDE so that the model must learn to generate conditioned on progressively denoised pseudo conditions.\n\nThey claim state-of-the-art performance on CIFAR-10/100 with up to 80% label noise and on a visuomotor policy learning benchmark with distorted camera inputs.",
"strengths": "The paper targets a setting that is undeniably important: conditional diffusion models trained with highly corrupted conditioning signals, including 60–80% incorrect class labels on CIFAR-style datasets and visual inputs with heavy distortion in visuomotor tasks. Even if the proposed solutions need more clarity, the problem itself is real and underexplored at the level of “extremely noisy conditions,” rather than just mild label noise. The paper does a good job motivating why this matters for reliability and safety.",
"weaknesses": "### Major Concerns\n\n#### 1. Conceptual clarity / internal consistency\nThe core story of the paper is: (i) noisy conditions \\(\\tilde y\\) entangle clusters, (ii) we “repair” them by learning pseudo conditions \\(\\hat y\\), (iii) we further stabilize training using RDC. But the paper never really gives a coherent probabilistic view of what \\(\\hat y\\) is supposed to represent.\n\n- This is just classic self-training / bootstrapping on noisy labels with EMA. There is no guarantee provided about to where the \\(\\hat y\\) converges rather than to whichever early bias the model latches onto. \n\n- The authors appeal to “memorization effect” and “early stopping” but give no principled stopping rule, no theoretical identifiability, and no robustness analysis under heavy class imbalance or systematic (non-symmetric) corruption. This is hand-wavy.\n\n- Later, \\(\\hat y\\) becomes part of an SDE and is diffused in reverse time (Sec. 3.1.2). But now \\(\\hat y_t\\) is treated almost like a continuous signal you can inject noise into, including for discrete labels. The paper never reconciles these two roles: is \\(\\hat y\\) a discrete class label, a continuous embedding, or a generic latent code? The text casually mixes these cases, e.g. in label-conditioning vs. visuomotor conditioning, and the math assumes a continuous vector with gradients \\(\\nabla_{\\hat y} \\log p_t(\\hat y)\\). This looks mathematically incompatible with the categorical-label case that motivates most of the “extremely noisy label” narrative.\n\n- The RDC SDE in Eq. (8) is introduced with forward / reverse SDEs copied from diffusion literature, but the derivation is not convincing. 
The paper defines boundary conditions \\( \\hat y_0 \\sim \\mathcal{N}(\\mu,\\sigma) \\), \\( \\hat y_T = \\hat y \\), then writes down dynamics that allegedly realize a “reverse-time diffusion condition.” However:\n - There is no demonstration that these dynamics actually produce a tractable marginal \\(p_t(\\hat y)\\) consistent with those boundary conditions.\n - There is no training algorithm that simulates these SDEs in practice in a numerically well-defined way beyond saying “we estimate \\(\\hat y_\\phi\\) using numerical methods.” This is extremely vague.\n\nIn short: the paper sells RDC as a principled diffusion-in-condition-space, but the derivation looks more like informal noise injection / augmentation of the conditioning vector. The gap between the math and the actual implementation is huge and currently not bridged.\n\n#### 2. Method description is incomplete / borderline non-reproducible\nSeveral critical training details are missing or contradictory:\n\n- **How exactly is \\(\\hat y\\) updated?** \n The algorithmic description for pseudo condition learning (Eq. (6)–(7), Algorithm 1) is ambiguous:\n - Eq. (6) optimizes \\(\\| \\hat y_\\phi - \\tilde y \\|^2\\). But if \\(\\tilde y\\) is extremely noisy, regressing toward it just propagates noise. Why should this denoise anything, instead of *fitting the noise faster*? The only answer given is “memorization effect” / “temporal ensembling” / “early stopping,” but the paper does not specify (a) the EMA momentum \\(\\alpha\\), (b) when to stop, or (c) how sensitive results are to those choices. \n - Algorithm 1 line 7 says `yt ← (1−λ)yt + λ ŷϕ`. This quietly introduces a new \\(\\lambda\\) and mutates \\(y_t\\) in-place, but \\(\\lambda\\) is never defined, nor is it connected to \\(\\alpha\\) in Eq. (7). 
This raises reproducibility and even correctness concerns.\n\n- **When is the condition encoder trained and frozen?** \n For image-based conditions (visuomotor setting), the paper says: initialize \\(\\hat y\\) with encoder output \\(\\tilde y = e_\\gamma(\\text{image})\\); run early stopping to refine \\(\\hat y\\); later continue training the encoder to match \\(\\hat y\\) (Sec. 3.2). \n This implies a bilevel schedule (first fix \\(e_\\gamma\\), tune \\(\\hat y\\); then fix \\(\\hat y\\), tune \\(e_\\gamma\\)). But:\n - There is no precise schedule (epochs? steps? loss plateaus? validation metric?).\n - There is no ablation isolating whether this two-phase procedure, *not* RDC, is what actually gives the reported visuomotor gains.\n\n- **Sampling / guidance** \n The method claims to “switch” classifier-free guidance from \\(\\tilde y\\) to \\(\\hat y\\) in Eq. (5). But at sampling time you don’t have ground-truth clean labels or clean observations — you either have the noisy condition \\(\\tilde y\\) (test-time corruption) or nothing. The paper doesn’t say how \\(\\hat y\\) is obtained at inference. Do we run the prediction head \\(q_\\phi\\) online to predict a denoised condition at test time? Do we rely on a refined encoder \\(e_\\gamma\\)? This is crucial for deployment, especially in the robotics/control setting, and it’s missing.\n\nOverall, too many essential knobs are “empirically determined,” “estimated numerically,” or “updated via early stopping,” without concrete, reproducible definitions. For ICLR-level work, that’s not acceptable.\n\n#### 3. Weak/unclear baselines and metrics\nThe empirical claims are not convincingly supported.\n\n- **Label-noise baselines.** \n The paper mostly compares against EDM (Karras et al., 2022) and TDSM (Na et al., 2024). But robustness to label noise has an extensive literature: MentorNet, Co-teaching, DivideMix, early-learning regularization, transition-matrix estimation, etc. 
Many of those works explicitly address extreme label noise and could be adapted to conditional generative models or to the conditional encoder. The paper cites some of them in Related Work but does not actually implement competitive versions (e.g. using DivideMix-style clean/noisy split on the condition head, or robust loss on the condition head rather than plain \\(\\ell_2\\)). So “SOTA” is overstated.\n\n- **Metric choice for controllability.** \n In Figure 1(b), controllability is measured as top-1 accuracy of a pretrained CIFAR-10 classifier on generated samples. \n This is fragile: if the classifier itself struggles under heavy corruption or distribution shift, the “controllability” score will be noisy. No calibration is provided. Also, top-1 accuracy on generated samples says nothing about sample diversity or mode-collapse. You *could* cheat controllability by collapsing to a single prototypical “dog” image per class. The paper does include FID/IS/Density/Coverage in Table 1, but does not analyze mode collapse explicitly in the high-noise regime (60–80%), which is exactly where they claim superiority.\n\n- **CIFAR-100 @ 60–80% noise.** \n The paper shows very large claimed gains when noise is extremely high and TDSM “collapses,” and then takes that as evidence of novelty. But TDSM is originally designed for label noise via transition matrices and might simply be mis-implemented for 100-way classification under 80% corruption, which is an extremely adversarial setting. There is no sanity check like: what is the oracle upper bound if you just train on clean labels for a small clean subset? What if you partially relabel with a small trusted clean set? 
Without that, “SOTA” here mostly means “our baseline impl of TDSM crashes so we win.”\n\n- **Visuomotor / robotics experiments.** \n The robotics evaluation is (i) one environment (Push-T), (ii) one type of corruption (camera distortion with fixed probability), and (iii) one metric (IoU/TAC).\n This is extremely narrow. There’s no evaluation of closed-loop robustness under *unseen* distortions, lighting changes, occlusions, or partial sensor dropout — which is exactly the kind of real-world brittleness the introduction uses to motivate safety (“autonomous driving,” “surgery,” etc.). The leap from “slightly distorted tabletop pushing in simulation” to “hazardous failures in autonomous driving and surgery” is not justified.\n\nIn short: the experiments are tailored to showcase the authors’ method, but they do not seriously test robustness, and they omit strong alternative baselines.\n\n#### 4. Mathematical rigor of RDC\nRDC is pitched as a key novelty, but right now it reads like ad hoc noise augmentation with diffusion-flavored notation:\n\n- Eq. (8) writes forward and reverse SDEs for \\(\\hat y\\), but there is no derivation that these correspond to an ELBO-style bound, or that optimizing Eq. (10) is consistent with score matching in \\(\\hat y\\)-space. The connection to Kingma & Gao (2023) is waved at but not actually developed.\n\n- Eq. (9) defines \\(\\hat y_\\phi\\) as an integral of the learned score function over \\(t\\). But then the paper immediately says “we directly optimize \\(\\|\\hat y_\\phi - \\tilde y\\|^2\\)” instead of training the score network in \\(\\hat y\\)-space in a principled way. This looks self-contradictory: either RDC gives you a principled score-matching view, or you’re just using it as data augmentation for a regression head.\n\n- The ablation in Table 3 is used to argue RDC “fixes” the degradation caused by naïve pseudo conditions. 
But that table is only on CIFAR-10 40% noise, and it is extremely underspecified: hyperparameters, stopping rules, and sampling details are not given. So it’s impossible to tell whether RDC itself is doing anything fundamental, or whether the improvement is due to other training heuristics that were added alongside it (e.g., different EMA schedule or guidance scaling).\n\nGiven how much of the claimed novelty rests on RDC, this level of vagueness is a serious issue.",
"questions": "Do we really need the REVERSE-TIME DIFFUSION CONDITION? is that possible to define a manual schedule to anneal the noise level of the \\hat y. I think here the author lacks enough discussion about the intuition/motivation of the introducing of such a techinique.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T22:36:40",
"modification_date": "2025-11-12T10:57:47",
"review_url": "https://openreview.net/forum?id=fdp7klHmnn¬eId=uwaHCN69VZ",
"license": "CC BY 4.0"
},
{
"id": "872lKJyjSu",
"forum": "fdp7klHmnn",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission2525/Reviewer_FovT",
"reviewer_name": "Reviewer_FovT",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces a robust learning framework for conditional diffusion models under extremely noisy conditions, such as corrupted labels or unreliable observations. The authors identify that existing noise-robust methods fail when noise levels are high, and propose two key innovations:\n\nPseudo Conditions (PC): A surrogate for clean conditions, initialized from noisy ones and progressively refined using temporal ensembling and early stopping to avoid overfitting.\n\nReverse-time Diffusion Condition (RDC): A technique that applies the diffusion process to pseudo conditions in reverse, enhancing memorization and stabilizing training under noise.\n\nThe method is evaluated on class-conditional image generation (CIFAR-10/100) and visuomotor policy generation (Push-T), achieving state-of-the-art performance across various noise levels. Ablation studies confirm the effectiveness of both PC and RDC components.",
"strengths": "The paper introduces a genuinely new role for diffusion dynamics: instead of diffusing only the image, it diffuses the condition itself in reverse time (RDC). This is not a minor tweak; it reframes the noisy-label problem as a joint denoising task in both pixel and label space and gives the diffusion model a self-contained way to “hallucinate then refine” its own conditioning signal.\n\nThe combination of (i) learning a pseudo-condition, (ii) updating it via temporal ensembling, and (iii) injecting it back through an RDC augmentation has not appeared before in the diffusion literature. Even individually, these ideas are creatively re-purposed from semi-supervised learning, mean-teacher methods, and score-based generative models.\n\nThe work removes a key limitation of prior label-noise diffusion papers (TDSM, RCGAN, etc.)—the need to estimate an explicit noise-transition matrix—and still works when 80 % of the labels are adversarially corrupted.\n\nThe method is model-agnostic: any conditional diffusion backbone (U-Net, DiT, etc.) can slot in the lightweight prediction head and RDC loss. This makes the barrier to adoption low for practitioners who already have diffusion pipelines.\n\n It opens a new research direction—self-correcting conditional diffusion—that could extend to text-to-image (noisy captions), reinforcement learning (noisy rewards), or audio (transcript errors).",
"weaknesses": "The paper motivates RDC by analogy to “data augmentation with Gaussian noise” and by an intuitive SDE reversal, but it never proves that the reverse-time diffusion of the condition actually improves the denoising error or the posterior p(clean-label | noisy-label).\n\n\nTraining freezes the pseudo-label at 25 k (CIFAR-10) or 30 k (CIFAR-100) iterations—numbers taken from a grid search on clean validation data. In real noisy-data scenarios we do not have clean validation labels.",
"questions": "Refer to Weaknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T15:07:25",
"modification_date": "2025-11-12T10:57:47",
"review_url": "https://openreview.net/forum?id=fdp7klHmnn¬eId=872lKJyjSu",
"license": "CC BY 4.0"
}
] | |
InOz43jIVI | https://openreview.net/forum?id=InOz43jIVI | CoSteer: Collaborative Decoding-Time Personalization via Local Delta Steering | 5 | 3.5 | [
4,
6,
6,
4
] | [
4,
3,
4,
3
] | 4 | [
"Decoding-time personalization",
"Collaborative text generation",
"Privacy-preserving"
Personalized text generation has become crucial for adapting language models to diverse and evolving users' personal context across cultural, temporal, and contextual dimensions. While existing methods often rely on centralized fine-tuning or static preference alignment, they struggle to achieve real-time adaptation under resource constraints inherent to personal devices. This limitation creates a dilemma: large cloud-based models lack access to localized user-specific information, while small on-device models cannot match the generation quality of their cloud counterparts. To address this dichotomy, we present **CoSteer**, a novel collaborative framework that enables decoding-time personalization through localized delta steering. Our key insight lies in leveraging the logits difference between personal context-aware and -agnostic outputs from local small models as steering signals for cloud-based LLMs. Specifically, we formulate token-level optimization as an online learning problem, where local delta vectors dynamically adjust the remote LLM's logits within the on-device environment. This approach preserves privacy by transmitting only the final steered tokens rather than raw data or intermediate vectors, while maintaining cloud-based LLMs' general capabilities without fine-tuning. Through comprehensive experiments on various personalized generation tasks, we demonstrate that CoSteer effectively assists LLMs in generating personalized content by leveraging locally stored user profiles and histories, ensuring privacy preservation through on-device data processing while maintaining acceptable computational overhead. Our anonymized code and data is available at https://anonymous.4open.science/r/Costeer-4977 | CoSteer: a framework for private LLM personalization. A local SLM uses on-device data to compute a delta signal that steers a cloud LLM, achieving high-quality personalized output without transmitting user data. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=InOz43jIVI | 2025-09-18T23:18:11 | 4 | [
{
"id": "atK2BFM0OS",
"forum": "InOz43jIVI",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12572/Reviewer_WxRf",
"reviewer_name": "Reviewer_WxRf",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "This work introduces CoSteer, a framework that enables real-time, privacy-preserving personalization of LLMs by collaborating with small local models. It steers LLM logits using locally computed delta signals from personal-context-aware small models, achieving strong personalized generation without fine-tuning or direct data leakage, validated across diverse tasks and model scales.",
"strengths": "+ The problem of enabling cloud LLM to be aware of local user data without direct data access is well-motivated. \n+ CoSteer introduces a unique collaborative decoding-time personalization framework that enables real-time adaptation using local delta steering without requiring fine-tuning or directly exposing sensitive user data.\n+ Extensive experiments across multiple datasets and model scales demonstrate that CoSteer improves personalized text generation while maintaining privacy and efficiency comparable to non-personalized cloud LLM.",
"weaknesses": "- The core idea builds on existing context steering methods (He et al., 2025), mainly extending them to cloud-edge collaboration, offering limited theoretical or algorithmic innovation.\n- Experimental results in Table 2 show only slight improvements over strong personalized baselines.\n- Most evaluated datasets are mobile-centric, while CoSteer’s target use case emphasizes cloud LLM serving with personalized requirement.\n- The paper lacks a formal theoretical assessment of whether sharing SLM logit differences could indirectly expose sensitive personal information to the cloud.",
"questions": "Please see Weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T23:42:06",
"modification_date": "2025-11-12T12:57:03",
"review_url": "https://openreview.net/forum?id=InOz43jIVI¬eId=atK2BFM0OS",
"license": "CC BY 4.0"
},
{
"id": "BzMKElqygZ",
"forum": "InOz43jIVI",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12572/Reviewer_Yob3",
"reviewer_name": "Reviewer_Yob3",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper develops CoSteer, a framework that allows for personalization at decoding-time by using the difference in logits between a personal and general-purpose small language model to steer cloud-based LLMs. They show through experiments that this approach outperforms the small general-purpose and personal language models and the large general-purpose LLM.",
"strengths": "- The approach seems like a simple way to leverage the strengths of both personalized SLMs and general LLMs.\n- The problem being solved is interesting and relevant for generating good-quality personalized outputs while keeping private information local.\n- The paper provides thorough experimental results in a variety of settings, including different SLM-LLM combinations and hyperparameter ablations.",
"weaknesses": "While the experimental section contains many experiments, the paper should further distinguish the approach from other methods.\n1. The paper cites Table 3 to explain why their approach is unique, but I am not sure why exactly the constraints from the table are required. In particular, the main reason why Linear Alignment/Context Steering differ from CoSteer is that these models are not weak-to-strong collaborative. However, LA/CS seem to have fairly comparable performance to CoSteer without weak-to-strong collaboration, so it is not clear why this is needed. \n2. The baselines are only compared to CoSteer for a single model pair (Qwen7B-1.5B). Section 4.5 would benefit from additional experiments for other pairs of models.\n3. The performance of CoSteer in comparison to the SLM and LLM (Table 1 and 6) seems more mixed for Llama 8B-1B, Qwen 8B-0.6B, and Qwen 8B-32B. Could the authors elaborate why this is the case? I think it would be useful to discuss this more in the main paper.",
"questions": "- The metrics are listed without standard deviations or error bars. Could the authors add these to the paper?\n- There are a variety of typos throughout the paper that should be fixed, particularly misspelled words and missing spaces. Also, Section 4.3 lists the last 2 Qwen models in opposite order from the other models. Is this intentional?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T08:52:50",
"modification_date": "2025-11-12T12:57:04",
"review_url": "https://openreview.net/forum?id=InOz43jIVI¬eId=BzMKElqygZ",
"license": "CC BY 4.0"
},
{
"id": "20tA8asuXE",
"forum": "InOz43jIVI",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12572/Reviewer_7ke3",
"reviewer_name": "Reviewer_7ke3",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper introduces CoSteer, a collaborative framework for personalized text generation that aims to protect user privacy. It addresses a specific and practical scenario where a powerful cloud-based LLM needs to be personalized but cannot be given direct access to a user's sensitive local data (e.g., profiles, interaction history). The core idea is to leverage a smaller, on-device SLM which can access this private context. This local SLM computes a \"delta steering\" signal, which is the logit difference between its context-aware and context-agnostic outputs. This privacy-safe delta signal is then used to guide the decoding process of the remote cloud LLM, aligning its generation with the user's personal context without that context ever leaving the device. The entire process is formulated as an online optimization problem solved at decoding time.",
"strengths": "- The paper identifies and tackles an interesting, intuitive, and increasingly relevant problem: how to balance the need for high-quality personalization from powerful cloud models with the critical and non-negotiable requirement of user privacy.\n\n- The proposed CoSteer framework is well-motivated and its core mechanism is straightforward to understand. The idea of using a local model to compute a \"delta\" to steer a remote model is an elegant solution to this problem.\n\n- The experimental evaluation is comprehensive, testing the framework's effectiveness across multiple personalized generation tasks and different model pairs.\n\n- I appreciate the detailed discussion section (Section 5), which proactively explores practical challenges like robustness to noisy context and collaboration between different model architectures.",
"weaknesses": "- My primary concern is the paper's limited technical novelty. The contribution is almost entirely in the problem setup and framework design.\n\n- The core optimization algorithm, which is central to the method's implementation, appears to be adopted directly from a previous work (Zhang et al., 2025b), specifically the use of FTRL for decoding-time alignment.\n\n- This lack of technical innovation places a very heavy burden on the novelty of the scenario itself. If this collaborative, delta-steering setup is not demonstrably practical or is perceived as niche, the overall contribution of the paper feels minor.",
"questions": "- My main question is about the practical grounding of this work. Could the authors comment on or provide any existing examples—whether from public applications or industrial settings they are aware of—that currently use this specific paradigm? That is, a setting where users possess sensitive local information they cannot transmit, and where logit-based signals are used as the medium for collaborative personalization? Evidence of real-world application would significantly strengthen the paper's motivation.\n\n- Thinking about the discussions on noise robustness (5.1) and cross-architecture collaboration (5.2), I wonder if this \"Delta Steering\" signal could be used for aggregation. For example, could you have multiple different local models compute their respective delta signals, which are then aggregated (e.g., through voting or averaging) to create a single, more robust steering vector that could be less susceptible to the noise or bias of any single local model?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-26T19:17:42",
"modification_date": "2025-11-12T12:57:04",
"review_url": "https://openreview.net/forum?id=InOz43jIVI¬eId=20tA8asuXE",
"license": "CC BY 4.0"
},
{
"id": "XElD2e21yf",
"forum": "InOz43jIVI",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12572/Reviewer_QsZr",
"reviewer_name": "Reviewer_QsZr",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces CoSteer, a collaborative framework for personalized text generation, operating within a cloud-edge paradigm. It aims to leverage the power of large language models (LLMs) in the cloud while preserving user privacy by keeping sensitive context (profiles, preferences, history) strictly on the local device. The core challenge is enabling personalization without transmitting private data to the cloud, while avoiding the quality limitations of using only a small local model (SLM).\n\nCoSteer proposes a decoding-time steering mechanism that is tuning-free. A local SLM computes the logit difference (delta) between its outputs generated with and without access to the private user context. This delta vector, representing the personalization direction, is calculated and applied locally to steer the logits received from the cloud LLM at each decoding step. The steering process uses an online learning formulation (FTRL) with an efficient closed-form update, ensuring privacy as only the final steered token is returned to the cloud.",
"strengths": "Tackles the critical challenge of balancing LLM capabilities, user privacy, and personalization on resource-constrained devices, which is crucial for real-world applications like personal assistants. While cloud-edge collaboration for personalization isn't entirely new, CoSteer introduces a specific, elegant mechanism using the SLM's logits delta combined with an online learning (FTRL) formulation for steering. This particular approach to extracting and applying the personalization signal locally is a key technical contribution.",
"weaknesses": "The core concept of cloud-edge collaboration or \"semi on-device\" processing to balance privacy, personalization, and compute power, is an active area of research. While the specific delta steering mechanism is novel, the overall collaborative architecture might be seen as an instantiation within a known paradigm rather than a completely groundbreaking framework. The per-token communication round trip (cloud-to-device for logits, device-to-cloud for token) remains a significant practical bottleneck, especially under high network latency. The paper acknowledges this and proposes AdaCoSteer, but a direct latency comparison with vanilla cloud LLM generation is missing. Requiring the local SLM to run twice per token (with/without context) plus the FTRL optimization step could impose a non-trivial computational and energy burden on the local device, potentially exceeding the cost of running the SLM just once.",
"questions": "Can you provide a more detailed breakdown of the wall-clock time per token? Specifically, measurements for: (a) Cloud LLM inference, (b) Cloud-to-local logits transmission, (c) Local SLM inference (x2), (d) Local FTRL optimization (using Eq 7), (e) Local-to-cloud token transmission? How does the total per-token latency compare quantitatively to vanilla LLM cloud inference under typical network conditions?\n\nCould you elaborate on the intuition behind why the iterative FTRL optimization (T=20) provides better results than the single-step LightCoSteer variant (T=1)? Does iteratively refining the policy within the generation of a single token allow for better integration of the SLM delta signal?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-26T00:37:47",
"modification_date": "2025-11-12T12:57:05",
"review_url": "https://openreview.net/forum?id=InOz43jIVI¬eId=XElD2e21yf",
"license": "CC BY 4.0"
}
] |
qzgro4i3sg | https://openreview.net/forum?id=qzgro4i3sg | Efficient numeracy in language models through single-token number embeddings | 4.5 | 3.5 | [
4,
4,
6,
4
] | [
3,
4,
3,
4
] | 4 | [
"language model",
"LLM",
"arithmetic",
"numeracy",
"benchmark",
"single-token number embedding",
"tokenization"
] | To drive progress in science and engineering, large language models (LLMs) must be able to process large amounts of numerical data and solve long calculations efficiently. This is currently only possible through the use of external tools or extensive reasoning chains, either limiting the numerical intuition of LLMs or limiting the length of problems they can solve. We show that frontier LLMs require excessive amounts of reasoning tokens to solve even basic calculations, which is exacerbated by their tokenization strategies that split single numbers into multiple tokens. This motivates the need for efficient and effective single-token number encodings. We introduce a set of desiderata for such encodings and show that existing approaches fail to fulfill them. To address these shortcomings, we propose BitTokens, a novel tokenization strategy that embeds any number into a single token using its IEEE 754 binary floating-point representation. Through extensive experiments we show that our BitTokens allow even small language models to learn algorithms that solve basic arithmetic operations nearly perfectly. This newly gained efficiency could expand the length and complexity of problems language models can solve. | We propose BitTokens, a novel tokenization strategy for LLMs that embeds numbers using their IEEE 754 binary floating-point representation, which allows for efficient numeracy in language models | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=qzgro4i3sg | 2025-09-15T19:43:56 | 4 | [
{
"id": "350r2qrCTi",
"forum": "qzgro4i3sg",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5809/Reviewer_bJvs",
"reviewer_name": "Reviewer_bJvs",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The manuscript addresses the problem of LLM performance on arithmetical operations over numerical expressions. It identifies shortcomings of current implementations supported by empirical evidence, critically surveys alternative solutions available in the state of the art, lists a series of desiderata, and proposes a novel alternative accompanied by experimental results. In my view, the manuscript exhibits clear strengths but also important weaknesses, suggesting significant room for improvement.",
"strengths": "1. The problem of arithmetical content processing in distributional models, and LLMs in particular, is an important and timely one\n2. The manuscript is well-written and well-structured\n3. It provides a reasonable account of the literature relevant to the solution proposed, even if some relevant work is missing (cf. weaknesses)\n4. Good and informative empirical evaluation on frontier LLMs\n5. Very interesting analysis of addition and multiplication over sinusoidal encoding of numerical expressions\n6. Empirical and formal results are sound as far as I can judge (disclosure: I'm not a mathematician by training, so, despite my best efforts, it is not impossible that I have overlooked some technical details)",
"weaknesses": "(in order of importance)\n1. The motivation for this work is ill-formed, and therefore, the solution proposed can look unjustified\n2. The experimental results obtained are relatively limited\n3. Desiderata look arbitrary\n4. Using prompting as a method is inadequate\n5. Relevant work is not considered\n6. Confusion between tokenization and embedding\n7. Lack of clarity in formal statements\n\nFurther details on weaknesses:\n1. The problem of the arithmetical performance of LLMs is presented in terms of \"numeracy\" of LLMs, defined as \"the ability to understand and work with numbers.\" As such, this problem is conceived as a cognitive task. But the problem of arithmetical operations in LLMs, no less than in an elementary pocket calculator, or any abstract or concrete computational model for that matter, is not a cognitive but a formal one. My calculator---or, say, the lambda calculus---doesn't have \"the ability to understand and work with numbers\", while one can arguably say that my neighbor does. Certainly, we have some good ideas of why my calculator performs arithmetical operations, while my neighbor's cognitive capabilities are more obscure. But the cognitive character of the latter doesn't come from that obscurity. Therefore, the fact that the operations of a computational model like an LLM are obscure to us is not a legitimate reason to assume that whatever numerical calculation they perform is to be addressed in cognitive terms as \"numeracy\". These remarks are not purely speculative. They point to the fact that attempting to provide an LLM with \"intrinsic numeracy skills\" is an ill-defined task at best (if not belonging to magic altogether). 
With respect to LLMs, the problem of numerical calculation is either a descriptive one (i.e., understanding the formal mechanisms explaining the possibilities and limitations of distributional models of computation to perform arithmetical calculations) or a normative one (i.e., what alternative mechanisms can we imagine for correct calculations). This manuscript provides some interesting insights on the former by formally analyzing alternative solutions (in particular, sinusoidal encoding, which, however, is not distributional). But if the problem is how to enhance distributional models with formal methods for correct calculation, it is not clear why not simply outsource calculations to an elementary calculator, which would be, without any doubt, more efficient and more effective. The manuscript claims that this \"prevents the model from building an intuition for numbers and the results of calculations, which is required to interpret and contextualize information from complex domains\", which, again, I take to be highly unscientific, since \"intuition\", \"interpretation\", and \"contextualization\" are not computational concepts. One could maybe claim that outsourcing numerical calculations affects the processing of non-numerical (eg. linguistic) expressions in LLMs, but this would require evidence that is neither present in this manuscript nor, apparently, intended to be provided. One could also claim that outsourcing calculations would interfere with the current tendency of constructing end-to-end models, but the proposed solution is no better in this sense, because it comes down to introducing symbolically engineered components that are foreign to the end-to-end distributional approach, yet less efficient and worse-performing than a simple numerical calculator. 
Another way to put it is: why should numerical calculations be trainable in an LLM, once it is acknowledged that, on one side, training methods are highly inefficient and ineffective, and on the other, we have simple, well-understood, very efficient, and 100% correct formal methods for numerical calculation available in case we are ready to enhance a trained model with something else? Without an answer to this question, there is a risk of not addressing the problem at hand (numerical calculation in LLMs) with the right tools, introducing spurious concerns, while neglecting important dimensions. I believe that, due to the questionable motivation, the manuscript suffers from both (see other points below).\n2. Even if one disregards the motivations, one could claim that the results of the proposed method are relatively modest, with significant improvements with respect to the leading baseline only for multiplication and division, while performing significantly worse on the computation of the mean, and slightly worse on 3 other of the 8 tasks. This wouldn't be a problem *per se*, if these results were mobilized for descriptive purposes, giving solid insights on the mechanisms responsible for the different behaviors, instead of evidence for a normative goal (i.e., making the models better). However, the manuscript only reports the performance of varying representation strategies (Appendix D.3) without analysis of the possible reasons behind the difference in performance. And for the interesting case of the mean, it attributes the advantage of other methods, without evidence, to \"the fact that multi-token methods generate answers over multiple forward passes, which effectively enables a form of “reasoning”\", once again hiding behind cognitive metaphors the lack of understanding of the corresponding formal mechanisms.\n3. 
The desiderata advanced in section 3, supposed to justify the proposed methods, are introduced without sufficient discussion and, therefore, appear as a more or less arbitrary list of properties matching the solution (and not the other way round) instead of a coherent system of independent conditions (as in an axiomatic system), especially if one can raise doubts about the overall motivation of the paper, as discussed in point 1 above. As an example, one could claim, unlike D1, that representing numbers with as many tokens as digits for some positional representation in some base (e.g., representing the number 123 with the 3 consecutive tokens \"1\" \"2\" \"3\") is the most efficient way of representing numbers, making algorithmic properties readily available to computation (hence the importance of positional numerical systems since the Babylonians). Or that making numbers independent of geometry, unlike D3, actually frees arithmetic from geometric limitations, as centuries of abstract algebra have shown. All this can be debated, and depends on the point of computing arithmetical properties in one way or another, but the point is that there is nothing obvious in the desiderata proposed, which would then require further justification.\n4. A rigorous approach to understanding how LLMs perform arithmetical computation should not accept prompting (e.g., \"you are an expert in numeracy... do not explain...\") as a valid method, as none of this has a rigorous computational/formal status. I know this is widespread practice in the field, but I don't think that's a legitimate argument. I also know that not all models are open source, and there's no other way to explore them than prompting. But that's not a reason to accept prompting as a scientific method, but a reason not to use those models for scientific purposes. I'm reviewing a scientific paper, not a commercial product. 
Can you be sure that a model didn't use a calculator tool just because you asked it to please not do it? If that can't be guaranteed formally, then any result coming from such an obscure procedure falls necessarily outside the domain of computer science and belongs to other areas of scientific knowledge, such as anthropology or psychology, or non-scientific, such as religion, or magic. Not having a clear motivation, as pointed out in 1., hides the problem with this methodology.\n5. Since a central aspect of this paper has to do with the representation of numerical expressions and their mathematical processing in the framework of transformer models, it is surprising not to see any discussion of the work done by Charton on this topic ([eg1](https://arxiv.org/pdf/2308.15594), [eg2](https://arxiv.org/pdf/2306.15400), [eg3](https://arxiv.org/pdf/2211.00170), [eg4](https://arxiv.org/pdf/2112.01898)). Discussing his views and adopting some of his solutions could be valuable for the work presented in this manuscript (Disclosure: I am neither Charton, nor any one of his co-authors).\n6. The manuscript presents the problem as a tokenization problem (\"We hypothesize that addressing this problem requires rethinking the way LLMs tokenize numbers.\"; and BitTokens is presented as \"a novel tokenization strategy\"), but the problem is clearly not one of tokenization, but of embedding. From a tokenization perspective, the solution proposed is trivial: every numerical expression is mapped to the same [NUM] token (which, incidentally, exposes the LLM to the risk of statistical inconsistency, cf. [Gastaldi et al., 2025](https://arxiv.org/abs/2407.11606)). It was not until page 3 or 4 that I understood that tokenization was not the issue. I suggest removing any substantial reference to tokenization and framing the paper in terms of encodings, vector representations, or embeddings.\n7. 
Proposition 4.3 is expressed in rather informal terms (\"numbers\", \"states\", \"outputs\", \"read\", etc.), in a way that makes it difficult to follow the correctness of the proof proposed. For instance, it's not clear where the contradiction lies (because nowhere was it explicitly claimed that the operator was injective), let alone the fact that a proof by contradiction might not be needed at all here, since an explicit bound is being found (e.g., a direct proof could claim that below that bound the operator is not injective, or something of the sort). The proof of the computational complexity is even less formal, so, while I think I understand the argument, I'm not sure I can follow the correctness of the proof. To avoid misunderstandings, I suggest either providing a more formal statement and proof, or presenting this more like an argument than like a formal result.",
"questions": "- Would you be ready to change the motivation of the paper to avoid appealing to obscure cognitive properties? If you were not allowed to appeal to cognitive metaphors to justify your work, how would you justify the use of your method over outsourcing numerical computation to a simple numerical algorithm? Could you provide formal or empirical evidence for whatever that justification would be?\n- Would you be ready to remove any claim about tokenization and frame this work exclusively in terms of encoding or embedding?\n- What is your justification for considering it a rigorous scientific method to politely ask LLMs to do something? What alternative methods can you imagine to make your work more rigorous?\n- What is the unified perspective that justifies the desiderata?\n- It is unclear to me what the number sampling is supposed to reflect. Is it actual distributions on real-life corpora? Cognitive numeracy? Formal principles of learnability? How is the \"increased difficulty\" that justifies oversampling operands with similar exponents judged? Difficulty is usually a function of the algorithmic implementation, which is largely unknown in the case of DNNs. It could also be that by learning easy cases, a model generalizes better, and this is being artificially prevented by the sampling?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T18:52:53",
"modification_date": "2025-11-12T11:31:13",
"review_url": "https://openreview.net/forum?id=qzgro4i3sg&noteId=350r2qrCTi",
"license": "CC BY 4.0"
},
{
"id": "l1rzD4wlAD",
"forum": "qzgro4i3sg",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5809/Reviewer_QH8n",
"reviewer_name": "Reviewer_QH8n",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes BitTokens, a single token number embedding based on IEEE-754 floating-point structure. Each number is encoded into a [NUM] token augmented with a 64-dimensional vector that corresponds to the sign bit, exponent bits, and significand bits. Decoding uses a small number head with a sigmoid, trained with bit-wise BCE. The authors first motivate the need for efficient numeracy by showing frontier LLMs still require very large reasoning traces for basic arithmetic. They then lay out nine desiderata for single token numeric encodings and argue that prior approaches such as xVal and FoNE violate key desiderata for arithmetic, especially multiplication in sinusoidal space. Experiments on small GPT-2 style models trained from scratch compare BitTokens to subword, single-digit, xVal, and FoNE across seven single-step tasks in a multi-task setting, plus three harder solo tasks. BitTokens achieve near-perfect accuracy on comparison and single-step arithmetic, with notably strong multiplication and division, and competitive language modeling perplexity on FineWeb. The paper also introduces a curriculum for numeric tasks and dynamic multi-task sampling.",
"strengths": "Recasting numeric representation as IEEE-754 bit planes inside a single token is a clean idea that aligns with hardware arithmetic and Boolean operations. The formal desiderata are a helpful framework for comparing encodings.\n\nThe paper diagnoses why sinusoidal number tokens struggle with multiplication, showing that any learned operator must effectively decode, convolve, propagate carries, then re-encode. This is a sound argument that matches the empirical results.\n\nOn multi-task training with text, BitTokens outperform FoNE and xVal on multiplication and division, and are competitive or better than digit and subword baselines on many tasks. With the proposed setup, BitTokens achieve the best FineWeb perplexity among the compared tokenizers in the multi-task configuration. \n\nThe design stays numerically stable and LayerNorm-friendly by scaling bits to unit RMS and using a simple number head, which is attractive for integration.",
"weaknesses": "1. **Training distribution fairness**. The curriculum introduces an **extra training set** that uniformly samples bit precision to balance difficulty for BitTokens, while evaluation retains decimal difficulty for all methods. This is a non-trivial distribution tweak that seems tailored to BitTokens and is not obviously mirrored for baselines. After investigating the provided code, I find this obvious in the configs: BitTokens have more config training sets compared to other baselines. The paper should either remove this asymmetry or construct equivalently fair curricula for each tokenizer.\n\n2. **Scope of multi-task justification**. The paper argues that exponentiation, mean, and standard deviation are hard and thus are removed from the multi-task mix and trained as solo tasks, which complicates comparisons and the claim that BitTokens deliver broadly better numeracy under realistic pretraining mixes. A stronger justification and matched compute budgets across tasks are needed.\n\n3. **Precision choice not explored.** The method hard-codes float64, yet many deployments run in float32, bfloat16, or even fp8. The advantages of 64-bit mantissa bits vs smaller formats are not quantified, and the desideratum about low-precision robustness is asserted rather than carefully tested across precisions.\n\n4. **Ablation coverage is narrow.** Ablations cover token combination strategies, base-10 vs base-2, reciprocal concatenation, and curriculum. Missing are precision width, number head variants, normalization treatments, noise robustness, and data mixing ratios. Also, these methods are trained with the Muon optimizer, and whether this gives BitTokens an unfair advantage is unknown. I would like to see standard AdamW results. \n\n5. **Generalization and downstreams.** Results are on small models trained from scratch. It is still unclear how BitTokens interact with large-scale pretraining and math word problems. 
The discussion acknowledges this, but it limits the strength of the contribution for ICLR.\n\n6. **Language modeling tradeoff depends on setup.** In multi-task training with text, BitTokens win FineWeb perplexity. In solo task training, FoNE has the best FineWeb perplexity and BitTokens are slightly worse, which weakens the claim that BitTokens are a drop-in replacement for BPE or FoNE in general LLM pretraining.",
"questions": "1. **FineWeb perplexity comparison.** In Table 4 BitTokens have the best perplexity among baselines in the multi-task setup. Can the authors clarify whether FoNE ever wins on perplexity under any controlled setting, for example when numeracy data is removed or curriculum is disabled, and with matched token budgets and seeds? A small ablation in Table 11 hints at shifts when curriculum is off. Please add a controlled study. Also, Table 8 shows that FoNE achieves the best perplexity on solo tasks trained on FinewebText; does that imply that the distribution needed for BitTokens to work is so different from natural number distributions?\n\n2. **Distributional fairness of curricula.** The bit-precision-balanced auxiliary training set seems specific to BitTokens. What is the effect size of this design choice on final arithmetic and perplexity metrics? Please either remove this asymmetry or provide equivalent difficulty balancing for decimal-centric tokenizers. A paired ablation would help. I think this is *very important* for scientific studies. \n\n3. **Why multi-task excludes multi-step tasks.** Excluding exponentiation and std from the shared mix weakens the general claim. Could you include them with a capped sampling ratio and report end-to-end training dynamics and final tradeoffs?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T04:37:45",
"modification_date": "2025-11-12T11:31:14",
"review_url": "https://openreview.net/forum?id=qzgro4i3sg&noteId=l1rzD4wlAD",
"license": "CC BY 4.0"
},
{
"id": "Z51uAhCMZi",
"forum": "qzgro4i3sg",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5809/Reviewer_M71x",
"reviewer_name": "Reviewer_M71x",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "This paper, Efficient Numeracy in Language Models through Single-Token Number Embeddings, investigates why even frontier LLMs struggle with basic arithmetic despite strong reasoning capabilities. The authors argue that the root cause lies in inefficient number tokenization—existing models split numbers into multiple tokens, forcing them to use long reasoning chains or external tools for simple computations.\nTo address this, the paper introduces BitTokens, a new single-token number encoding scheme based on the IEEE 754 binary floating-point representation. Each number is encoded as a 64-dimensional binary vector representing its sign, exponent, and significand, concatenated with its reciprocal to improve division learning. This design satisfies a set of nine desiderata for efficient, trainable, and numerically stable encodings.\nComprehensive experiments across nine numeracy tasks show that BitTokens enable even small GPT-style models to perform addition, multiplication, and division with near-perfect accuracy—surpassing prior single-token methods such as xVal (value-scaled embeddings) and FoNE (Fourier Number Embeddings).",
"strengths": "Originality:\n The paper introduces a novel conceptual and technical framework for enhancing numeracy in LLMs through single-token number embeddings, addressing a long-standing inefficiency in numerical reasoning. The proposed BitTokens represent a creative synthesis of ideas from numerical computing (IEEE 754 floating-point representation) and modern tokenization strategies for LLMs. The formalization of nine desiderata for single-token number encodings is conceptually fresh and provides a principled foundation for evaluating future approaches.\n\n\nTechnical quality:\n The work demonstrates strong theoretical and empirical rigor. The authors analyze prior methods (xVal and FoNE) through formal proofs—e.g., showing the additive homomorphism of sinusoidal encodings and their inability to support efficient multiplication—and motivate BitTokens as a solution grounded in established numerical theory. Experimental methodology is solid: the benchmark includes nine carefully controlled numeracy tasks, diverse number ranges, and rigorous evaluation metrics (e.g., log-sMAPE). Results are robust, replicable, and consistently show clear improvements across models and tasks.\n\n\nClarity and presentation:\n The paper is very clearly written and well structured. Each section flows logically—from motivation, desiderata, and theoretical analysis to implementation and results. Visualizations (e.g., Figures 1–4) effectively communicate both the inefficiency of reasoning-based numeracy and the improvements achieved by BitTokens. Mathematical formulations (e.g., Lemma 4.2, Proposition 4.3) are carefully explained and accessible to readers with standard ML background.\n\n\nSignificance and impact:\n The contribution is potentially highly significant for the development of numerically capable LLMs. 
By providing a deterministic, stable, and efficient encoding for numbers, BitTokens could reduce reasoning-token overhead and unlock more efficient arithmetic computation inside general-purpose LLMs. This directly impacts scientific and engineering applications of LLMs and contributes to the broader goal of building models with intrinsic numerical understanding rather than reliance on external tools.",
"weaknesses": "The method cannot generalize to unseen or longer-digit numbers across tasks, since each number is represented as a unique token rather than a compositional encoding of digits or bits.\n\n\nThe paper does not explain how the model reproduces exact numeric strings (e.g., 1.000, 000101) where format, not value, is important.\n\n\nThe comparative setup with prior work (FoNE, Neural Number Representations) lacks fairness and direct equivalence in optimization and data sampling settings.",
"questions": "Generalization to longer or unseen numbers:\n\n Since each number is represented by its own token, the model cannot share parameters across digits or magnitudes. This means if the training data contains only 3-digit numbers, any 6-digit number in an unseen task will have an untrained token embedding. In practice, you cannot train every number on every task. Could you evaluate how BitTokens handle such out-of-distribution magnitudes — for example, models trained on ≤3-digit numbers but tested on ≥6-digit ones for addition, multiplication, or exponentiation?\n\n---\n\nExact string prediction:\n\n How does your pipeline preserve numeric formatting when the target string must match exactly (e.g., 1.000, 000101)? Does the [NUM] decoding step reproduce such surface forms, or does it always canonicalize to a float value (e.g., 1 or 101)?\n\n---\n\nComparison to related methods:\n\n Please clarify how BitTokens differ conceptually and empirically from Improving LLM Numerical Reasoning with Neural Number Representations (arXiv:2405.17399), which also explores specialized numeric embeddings for arithmetic reasoning.\n\n---\n\nExperimental fairness:\n\n In your FoNE reproduction, you employ different optimization and data sampling strategies from the original paper. The FoNE work reports ~97% accuracy on 60-digit addition, suggesting that performance differences may arise from optimizer choice, curriculum design, or sampling distribution rather than representational limits. Could you clarify whether you controlled for these factors? Before large-scale training, it would be informative to compare all methods under identical, simple settings to isolate the effects of different training strategy and training data distribution.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-25T06:26:32",
"modification_date": "2025-11-12T11:31:14",
"review_url": "https://openreview.net/forum?id=qzgro4i3sg&noteId=Z51uAhCMZi",
"license": "CC BY 4.0"
},
{
"id": "X8RT4oGCZF",
"forum": "qzgro4i3sg",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5809/Reviewer_THn1",
"reviewer_name": "Reviewer_THn1",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The authors in this paper first propose a benchmark to test the numeracy of LLMs, and nine desiderata for the number tokenizer. They analyze two classic previous works, xVal and FoNE, and identify their drawbacks. To solve the problem, they propose their BitTokens, which satisfy all nine desiderata and can also represent a very large range of numbers. The experiments show that their tokenizer performs better than other tokenizers.",
"strengths": "1. **Meaningful improvement**: The paper provides a detailed analysis of previous methods like xVal and FoNE, and proposes meaningful improvements over them.\n2. Their experiments show that their method outperforms the previous methods on various datasets.\n3. The FineWeb dataset is a practical text dataset that can support its effectiveness on real-world tasks.",
"weaknesses": "1. **Fair comparison between related work**: the related work NumberCookbook contains four different representations of numbers, including integer, float, fraction, and scientific notation. The float and scientific notation cover a large range of numbers. Lines 140-147 make incorrect statements about the related works' contributions.\n2. **Contribution of the benchmark**: it is not clear what the benchmark adds to the existing related work. The relation between the results in section 2 and their BitTokens is also not clear.\n3. **The necessity of further justifying the nine desiderata**: although the nine desiderata seem to be reasonable, some of them are not well justified. \n 1. For example, why is \"a single token\" required (D1)? A long-standing issue regarding the one-token representation lies in its practical effectiveness. Despite the common emphasis on the computational overhead associated with single-digit tokenizers in certain scenarios, I have yet to see any evidence from efficiency comparisons in real-world tasks that sufficiently demonstrates the importance of this method (including in this article). Considering that, even in fields like physics and finance, the number of tokens for words, spaces, or other formatting elements (such as tables in markdown) will far exceed that of the numbers themselves, with shorter numbers still predominating, the actual benefits of this method remain questionable. \n 2. For D3: we need experimental evidence to show that structured representation leads to better performance, as indirect evidence is insufficient. I am aware that past work on interpretability in smaller models seems to suggest that the model tends to learn this structural form, but this may be strongly correlated with insufficient model size.\n 3. For D6: in most practical scenarios, both the token embedding and the activation value will not use low-precision representation, even if the model parameters have been heavily quantized. 
Therefore, it is not clear why low-precision representation is necessary.\n4. **Reasoning**: Some text-number reasoning tasks are required to further validate the effectiveness of the proposed method. For example, GSM8K, SVAMP, and other benchmarks that require numerical reasoning in addition to basic understanding. The FineWeb dataset can only validate the fitting ability. Whether the token is suitable for math reasoning is not clear.",
"questions": "1. What is the contribution of the benchmark?\n2. Why do you believe each of the nine desiderata is necessary? Is there any experimental evidence?\n3. Is there enough motivation to use a specially designed tokenizer to represent numbers, especially when the experiments show that the one-digit tokenizer also performs well on most of the tasks?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-23T11:32:14",
"modification_date": "2025-11-12T11:31:15",
"review_url": "https://openreview.net/forum?id=qzgro4i3sg&noteId=X8RT4oGCZF",
"license": "CC BY 4.0"
}
] |
PBIHh6ibal | https://openreview.net/forum?id=PBIHh6ibal | PRPO: Paragraph-level Policy Optimization for Vision-Language Deepfake Detection | 5.5 | 3 | [
6,
6,
4,
6
] | [
2,
4,
3,
3
] | 4 | [
"deepfake detection",
"vision language models",
"deepfake reasoning"
] | The rapid rise of synthetic media has made deepfake detection a critical challenge for online safety and trust. Progress remains constrained by the scarcity of large, high-quality datasets. Although multimodal large language models (LLMs) exhibit strong reasoning capabilities, their performance on deepfake detection is poor, often producing explanations that are misaligned with visual evidence or hallucinatory. To address this limitation, we introduce a reasoning-annotated dataset for deepfake detection and propose Paragraph-level Relative Policy Optimization (PRPO), a reinforcement learning algorithm that aligns LLM reasoning with image content at the paragraph level. Experiments show that PRPO improves detection accuracy by a wide margin and achieves the highest reasoning score of 4.55/5.0. Ablation studies further demonstrate that PRPO significantly outperforms GRPO under test-time conditions. These results underscore the importance of grounding multimodal reasoning in visual evidence to enable more reliable and interpretable deepfake detection. | reinforcement learning | https://openreview.net/pdf?id=PBIHh6ibal | 2025-09-17T17:33:46 | 4 | [
{
"id": "AP9sIuSZ2Z",
"forum": "PBIHh6ibal",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8895/Reviewer_24DX",
"reviewer_name": "Reviewer_24DX",
"rating": 6,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper proposes a framework to enhance both detection accuracy and interpretability of multimodal reasoning in deepfake image detection. The authors address two key gaps: the scarcity of large reasoning-annotated datasets and the misalignment between visual evidence and generated explanations by multimodal large language models (MLLMs). To tackle these, they first introduce DF-R5, a dataset of approximately 115 K images paired with high-quality reasoning annotations; second, they present DX-LLaVA, a vision-language architecture combining a CLIP ConvNeXT encoder with a language model to better capture fine-grained artifacts; and third, they develop Paragraph-level Relative Policy Optimization (PRPO), a test-time reinforcement-learning algorithm that rewards paragraph-level alignment of reasoning with visual features and internal consistency between reasoning and final decision",
"strengths": "* The dataset contribution (DF-R5) is significant: reasoning annotations for deepfake detection are rare, and the scale (~115k) is commendable.\n* The architectural component (DX-LLaVA) shows awareness of the limitation of generic vision encoders for subtle artifact detection; replacing CLIP ViT with ConvNeXT is a reasonable design choice.\n* The RL component (PRPO) is novel in that it treats paragraphs as units for reward, rather than tokens or single outputs, and aligns reasoning with visual features as well as final output consistency.",
"weaknesses": "One concern is that the proposed model has been fine-tuned specifically on deepfake data, while the compared baselines are general-purpose MLLMs such as Gemini-2.5 and GPT-4o. This raises a fairness issue: the reported performance gains might largely reflect domain-specific adaptation rather than the effectiveness of the proposed algorithm itself.",
"questions": "A key question concerns the generality of the proposed PRPO framework. While the method is shown effective for deepfake detection, it remains unclear whether PRPO can generalize to broader vision-language reasoning tasks, such as visual question answering or visual entailment. Given that the algorithm optimizes reasoning alignment at the paragraph level, one would expect it to be applicable beyond this specific domain. I encourage the authors to clarify this point or discuss potential extensions to more general visual reasoning settings.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T02:01:13",
"modification_date": "2025-11-12T12:10:49",
"review_url": "https://openreview.net/forum?id=PBIHh6ibal&noteId=AP9sIuSZ2Z",
"license": "CC BY 4.0"
},
{
"id": "MaTMT3R8hF",
"forum": "PBIHh6ibal",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8895/Reviewer_kPnV",
"reviewer_name": "Reviewer_kPnV",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper addresses the critical challenge of deepfake detection and forensic reasoning using Multimodal Large Language Models (MLLMs). The core contribution is the introduction of a new reasoning-annotated dataset, DF-R5 (115k image-reasoning pairs), specifically designed for deepfake detection. Furthermore, the authors propose an enhanced MLLM architecture, DX-LLaVA, fine-tuned on DF-R5, and a novel test-time optimization algorithm, Paragraph-level Policy Optimization (PRPO), to align the generated forensic explanations with visual evidence. Experimental results demonstrate that the combined approach significantly outperforms existing detection baselines and general state-of-the-art MLLMs in both classification accuracy and reasoning quality.",
"strengths": "1. The research tackles the complex issue of boundary ambiguity and unreliable reasoning in MLLM-based deepfake detection. The proposal of an MLLM fine-tuning method (DX-LLaVA) combined with a post-SFT optimization algorithm (PRPO) for forensic reasoning is a clear and novel contribution to the field.\n\n2. The introduced dataset, DF-R5, provides a substantial volume (115k pairs) of high-quality, fine-grained reasoning annotations crucial for training MLLMs to perform reliable deepfake forensics. This dataset significantly helps in bridging the data gap for explainable deepfake detection.\n\n3. The PRPO method demonstrates substantial and consistent performance improvements over both dedicated deepfake detection baselines (Table 4) and general SOTA MLLMs (Table 5, Table 10), validating the effectiveness of the proposed data and optimization strategy.",
"weaknesses": "Major concerns:\n\n1. Lack of clarity and details regarding the proposed DX-LLaVA architecture and its implementation. Section 3.2, which describes the method, abruptly introduces DX-LLaVA without a smooth transition from the preceding section, making the rationale behind its design unclear. Specifically, the paper needs to explicitly state which parameters of the base LLaVA model are fine-tuned (e.g., full LLM weights, LoRA, or just the projection layer). Similarly, the replacement of the vision encoder with ConvNeXT is introduced suddenly. Furthermore, the embedding of certain experimental results (Tables 2 and 3) directly within the methodology section hinders the reader's ability to first grasp the complete method overview before evaluating its performance.\n\n2. Inconsistency and ambiguity in the experimental setups, which complicates the assessment of performance gains. The paper needs to clearly articulate the data split. Specifically, are the experimental results shown in Table 3 based on an inter-domain or intra-domain split? This distinction is crucial for understanding and validating the reported performance improvements.\n\n3. The ambiguity of the relationship between DX-LLaVA and PRPO, and the lack of analysis of their individual contributions in the ablation study. While PRPO is understood to operate on the SFT-trained DX-LLaVA, the paper lacks a clear ablation study.",
"questions": "See Weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T16:41:48",
"modification_date": "2025-11-12T12:10:50",
"review_url": "https://openreview.net/forum?id=PBIHh6ibal&noteId=MaTMT3R8hF",
"license": "CC BY 4.0"
},
{
"id": "QdcojFgTpe",
"forum": "PBIHh6ibal",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8895/Reviewer_1dF7",
"reviewer_name": "Reviewer_1dF7",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper proposes a method to improve deepfake detection for multimodal models. To deal with limitations in existing multimodal deepfake detection, the authors introduce a new large-scale dataset comprising reasoning explanations automatically generated from Gemini 2.5. The authors further modify LLaVA by using a CLIP ConvNeXT encoder along with an auxiliary classification loss. The authors also introduce PRPO, which is a test-time reinforcement learning algorithm using paragraph-level rewards and a prediction consistency reward to optimize the alignment and consistency of the generated reasoning.",
"strengths": "- The paper is easy to read and well presented.\n- Using PRPO to improve deepfake detection is an interesting and novel idea in this context.\n- The paper addresses a key problem, and the approach toward explainability has significant value.\n- The authors also introduce a large-scale dataset in this context, which will be useful for future exploration in this area.",
"weaknesses": "- The proposed dataset is built on the distilled reasoning from Gemini 2.5. This can severely limit the significance of the conclusions in this paper.\n- VCR relies on semantic alignment, which could be insufficient for forensic tasks where visual consistency needs to be verified at the pixel level.\n- The architectural modification seems to be oversold and is not a significant modification. This could be strengthened by low-level analysis or some visualisations. The claims about architectural benefits are also not well justified.\n- It's not clear how much computational overhead the RL approach incurs at test time. The latency aspect is not well explained.\n- A significant drawback of the current method is overfitting to the reasoning style of Gemini 2.5, which could lead to fragile explanations.",
"questions": "I would request the authors to provide the inference-time overhead for PRPO optimization. Please also look at the above weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T08:33:08",
"modification_date": "2025-11-12T12:10:50",
"review_url": "https://openreview.net/forum?id=PBIHh6ibal&noteId=QdcojFgTpe",
"license": "CC BY 4.0"
},
{
"id": "oxu6RachFS",
"forum": "PBIHh6ibal",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8895/Reviewer_jDoc",
"reviewer_name": "Reviewer_jDoc",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "### Method\n- The authors propose a new training dataset, an enhanced model, and a new RL algorithm for deepfake detection with MLLMs.\n - DF-R5, a new training set, which is collected by using off-the-shelf MLLMs such as Gemini.\n\n - An enhanced model, DX-LLaVA. The authors replace CLIP ViT with CLIP ConvNeXT to get pixel-level embeddings, enabling a finer focus on local image regions.\n\n - A new RL algorithm, PRPO. PRPO does not require ground truth labels, but utilizes a Visual Consistency Reward and a Prediction Consistency Reward to improve model performance, and can be applied at test time.\n\n### Results\n- PRPO achieves the highest reasoning score of 4.55/5.0.\n- Ablation studies show the advantages of the proposed methods.",
"strengths": "- This study improves deepfake detection in several aspects, including data, model architecture, and algorithms.\n- The exploration and the proposed methods are intuitive and reasonable.\n- PRPO significantly surpasses prior methods and achieves the highest reasoning score.",
"weaknesses": "### Major\n- Quality of the dataset. \n - DF-R5 is built on off-the-shelf MLLMs such as Gemini. I'm not sure whether this pipeline can ensure stable and high-quality data annotations. Does Gemini generate hallucinations or wrong annotations?\n - Second, it seems that the upper bound of the data is limited by the MLLMs used. \n - Last, the evaluation, such as the explanation quality evaluation, also relies on MLLMs (GPT-4o here). I'm wondering if this leads to overestimated performance, because MLLMs generate the training data and MLLMs evaluate. It is possible that these models will prefer an answer \"similar\" to their own.\n\n- Generalization ability of DX-LLaVA.\n - As shown in Table 2, the performance for inter-domain is not satisfactory. I didn't see further discussion on this point, so I'm not sure about the generalization ability of DX-LLaVA across domains different from the training data.\n\n### Minor\n- How is the performance of the classifier trained on the CLIP ConvNeXT features? Is it better than DX-LLaVA?\n- The authors are recommended to put more examples in the main text or Appendix, to compare the quality of explanations generated by DX-LLaVA and other models. \n\n- About PRPO\n - How do the authors separate paragraphs from the DX-LLaVA answers during training? Is a fixed number of sentences treated as a paragraph, or are paragraphs separated based on meaning?\n - If we use ground truth labels to introduce accuracy rewards to PRPO (like GRPO), will the performance be improved?",
"questions": "Please see the Weaknesses section.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-25T11:51:44",
"modification_date": "2025-11-12T12:10:51",
"review_url": "https://openreview.net/forum?id=PBIHh6ibal&noteId=oxu6RachFS",
"license": "CC BY 4.0"
}
] | |
FOrVEtwixO | https://openreview.net/forum?id=FOrVEtwixO | LangMedSAM: Scalable Adaptation of Medical Segment Anything Model (MedSAM) for Language-Prompted Medical Image Segmentation | 2 | 4.5 | [
2,
2,
0,
4
] | [
4,
5,
5,
4
] | 4 | [
"Medical Image Computing",
"Image Segmentation",
"Foundational Model"
] | Image segmentation is a crucial component of medical imaging, facilitating precise analysis and diagnosis by identifying anomalies and structures across various imaging modalities. Recent advancements have led to the development of foundational medical image segmentation models such as MedSAM. Trained on a large corpus of medical images, MedSAM generates segmentation masks based on user prompts such as bounding boxes and points. For faster inference, LiteMedSAM, a lightweight variant of MedSAM, offers a computationally more practical solution, while maintaining comparable performance. However, manually providing bounding boxes for each 2D slice in volumetric imaging remains cumbersome and hinders the automatic processing of large datasets. To address this, we introduce LangMedSAM, a multi-modal text-based segmentation model that leverages natural language prompts for mask generation in radiological images. LangMedSAM is trained on 20 publicly available medical datasets and evaluated both on these datasets and on 4 additional external datasets to assess generalizability. Building on LiteMedSAM’s architecture, it supports segmentation via both text-based prompts and conventional inputs such as bounding boxes. Our results show that text-based prompts provide a scalable and effective solution for multi-modal and multi-region medical image segmentation, offering a practical alternative to conventional prompting methods in MedSAM—particularly for the automated processing of large collections of scans. | We propose LangMedSAM, a multi-modal segmentation model that uses natural language prompts to generate anatomical and pathological masks, reducing dependence on manual bounding boxes while maintaining strong CT and MR performance. | applications to physical sciences (physics, chemistry, biology, etc.) | https://openreview.net/pdf?id=FOrVEtwixO | 2025-09-20T05:16:56 | 4 | [
{
"id": "2RRtjc6g1M",
"forum": "FOrVEtwixO",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission21360/Reviewer_pMMq",
"reviewer_name": "Reviewer_pMMq",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "The paper proposes LangMedSAM, a language-adaptive medical image segmentation framework that integrates language prompts into the Segment Anything Model (SAM) paradigm. LangMedSAM introduces a lightweight language–vision alignment module that conditions the segmentation process on textual instructions. The model is trained on 20 medical datasets across multiple modalities and achieves strong generalization across unseen domains. The framework demonstrates that coupling language guidance with lightweight SAM adaptation enables scalable, prompt-based segmentation suitable for clinical deployment scenarios.",
"strengths": "1. The paper demonstrates solid engineering that scales SAM-style segmentation efficiently to the medical domain. Training across 20 datasets and achieving good generalization to 4 external benchmarks shows commendable robustness.\n\n2. LangMedSAM maintains competitive performance with only ~700 MB VRAM usage, highlighting its deployment feasibility in clinical or low-resource environments. Such lightweight adaptation is important for translating foundation models to real-world medical applications.\n\n3. The integration of text-based prompts, though conceptually simple, improves accessibility for non-expert users, representing a meaningful step toward more user-friendly medical AI systems.",
"weaknesses": "1. Limited methodological novelty: The paper mainly integrates existing techniques (connecting SAM with a lightweight text encoder) without introducing a fundamentally new learning paradigm or architectural mechanism. The overall framework resembles prior multimodal segmentation systems (e.g., BiomedParse [1], FLanS [2]), and the contribution lies mostly in engineering refinement rather than conceptual advancement.\n\n[1] Biomedparse: a biomedical foundation model for image parsing of everything everywhere all at once. \n[2] FLanS: A Foundation Model for Free-Form Language-based Segmentation in Medical Images\n\n2. Lack of in-depth analysis of language-driven improvements: The paper does not adequately explain why text prompts enhance segmentation performance. The paper evaluates only prompt length; there is no ablation on prompt semantics, robustness to ambiguous or noisy language inputs, or analysis of how linguistic cues influence feature alignment. As a result, the claimed benefit of language adaptivity remains unconvincing.\n\n3. Insufficient differentiation from prior work: Many of the design choices, such as CLIP-based text conditioning and SAM fine-tuning, have been widely explored in earlier works. Without clearer motivation or distinct methodological contributions, the novelty boundary between LangMedSAM and existing frameworks remains weak.",
"questions": "1. The paper mainly combines SAM with a text encoder — what is the key methodological innovation beyond this integration?\n2. How are text prompts generated or standardized during training and evaluation? Are they manually written or programmatically derived?\n3. How does the model handle ambiguous or conflicting textual inputs, such as “tumor near left lobe” vs. “mass close to center”?\n4. In this work, the authors mentioned \"try different text encoders (SAPBERT, PubMedBERT, BERT) and add projection MLPs to match the image side\", however, is the language encoder frozen or fine-tuned? If fine-tuned, what data or loss functions guide its alignment with visual features?\n5. How does LangMedSAM perform on open-ended or unseen prompts, beyond those directly corresponding to labeled anatomy classes?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T16:50:40",
"modification_date": "2025-11-12T18:02:00",
"review_url": "https://openreview.net/forum?id=FOrVEtwixO&noteId=2RRtjc6g1M",
"license": "CC BY 4.0"
},
{
"id": "lxhcCSODUg",
"forum": "FOrVEtwixO",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission21360/Reviewer_KAdJ",
"reviewer_name": "Reviewer_KAdJ",
"rating": 2,
"confidence": 5,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces LangMedSAM, a model that extends LiteMedSAM to accept natural language inputs for medical image segmentation. The model is trained on a large dataset comprising one million 2D MR or CT slices and is capable of segmentation based on either linguistic descriptions or bounding box prompts. Evaluations on both in-domain and out-of-domain data show promising results, with notable improvements in segmenting thin-walled structures.",
"strengths": "1. **Enhanced interactivity**. The introduction of natural language inputs to the LiteMedSAM framework is a valuable extension, offering a more flexible and potentially more user-friendly interface for clinical interaction compared to bounding-box-only prompts.\n2. **Large-scale evaluation.** The model is evaluated on a substantial dataset. The inclusion of box plots provides a clear visual representation of performance variance across different test cases.\n3. **Identified and addressed weakness.** The paper effectively identifies a key limitation of MedSAM and LiteMedSAM, i.e., their difficulty with thin-walled structures (lines 119-120), and provides empirical evidence (Appendix Figure 5) to show that LangMedSAM mitigates this issue.",
"weaknesses": "1. **Unclear practical value.** The paper motivates the use of natural language by citing the impracticality of bounding-box inputs (lines 058-059). However, this may come at the cost of one of MedSAM's core practical values. MedSAM was assessed in annotation assistance by segmenting *an unseen type* of tumor using bounding-box prompts. The text-image alignment is noted to be redundant in some cases (lines 457-461), which undermines the language model's generalizability to *unseen types*. The practical advantage of LangMedSAM over non-language models needs stronger justification, perhaps through a direct comparison.\n2. **An unsubstantiated claim.** The authors claim that algorithms like nnU-Net exhibit \"limited generalizability\" and perform \"suboptimally\" on out-of-domain data (lines 040-044), but provide no empirical evidence or citations to support this broad assertion. This claim must be justified to contextualize LangMedSAM's contribution properly.\n3. **Questionable out-of-domain generalization.** While the improvement on out-of-domain MR slices (due to better thin-walled myocardium segmentation) is sound, the overall out-of-domain performance is inconsistent. Specifically, LangMedSAM underperforms compared to MedSAM and LiteMedSAM on out-of-domain CT scans (Table 2). Furthermore, in terms of median DSC, Figure 3 shows that LangMedSAM is outperformed by its predecessors for both CT and MR, which raises doubts about its claimed generalizability.",
"questions": "1. **Evaluation of negative samples.** The training data excluded images with masks smaller than 100 pixels (lines 309-310). Additionally, the DSC computation in Appendix Eq. 12 does not take negative samples (no target tumor/organ in the slice) into account. Are these \"negative samples\" also excluded from the evaluation? If so, this could artificially inflate performance metrics and misrepresent the model's practical utility, as it may generate false positive segmentations in real-world scenarios where no target is present.\n2. **Training stopping criteria.** The stopping criterion is vaguely described as \"till convergence\" (line 317). Could the authors specify the exact metric used to determine convergence (e.g., loss plateau, DSC on a validation set) and the maximum number of training epochs/steps allowed? This is critical for reproducibility.\n3. **Baseline training settings**. The authors state that \"The model is trained on a single H200 GPU of 144 GB memory\" (line 317) and \"all models are assessed under identical conditions\" (line 329), but the training configurations for the baseline models are not detailed. Were these models trained from scratch on the same dataset and with the same hardware setup? Clarification is needed for a fair comparison.\n4. **Version of BiomedParse**. To ensure reproducibility, please specify which version of BiomedParse (v1 or v2) was used for generating the dataset.\n5. **Inconsistent bounding-box performance:** Appendix Table 9 shows that LangMedSAM using bounding-box prompts does not suffer the same performance degradation on MR slices as LiteMedSAM on thin-walled structures (Table 2). This is confusing, given the argument that bounding boxes struggle with such anatomy (lines 119-120, 437-439). Why is LangMedSAM's bounding-box performance an exception to this stated weakness?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T17:39:07",
"modification_date": "2025-11-12T18:02:00",
"review_url": "https://openreview.net/forum?id=FOrVEtwixO&noteId=lxhcCSODUg",
"license": "CC BY 4.0"
},
{
"id": "JhlKVh2JtN",
"forum": "FOrVEtwixO",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission21360/Reviewer_sN4k",
"reviewer_name": "Reviewer_sN4k",
"rating": 0,
"confidence": 5,
"soundness": 1,
"contribution": 1,
"presentation": 1,
"summary": "This paper introduces LangMedSAM, a multi-modal medical image segmentation model that leverages natural language prompts instead of manual bounding boxes. Built upon LiteMedSAM, it supports both text-based and conventional inputs. Trained on 20 public datasets and evaluated on 4 external ones, LangMedSAM demonstrates good generalization and automation potential, providing a scalable solution for large-scale medical image segmentation.\n\nHowever, the paper’s overall presentation is weak, and the reported results appear rather coarse. There is no statistical analysis of the training data, nor any detailed evaluation of performance across individual datasets. Furthermore, the authors fail to cite the key baseline LiteMedSAM when it is first mentioned, indicating a lack of attention to essential references and academic rigor. Overall, the paper gives the impression that the authors did not approach the work with sufficient seriousness or thoroughness.",
"strengths": "1. The paper builds a text-driven medical segment-anything model, which I believe is a promising direction for MedSAM.",
"weaknesses": "1. Poor presentation quality: The paper lacks clarity and organization, making it difficult to follow the methodology and results.\n2. Insufficient experimental analysis: No statistical analysis of training data or detailed performance breakdown for individual datasets is provided. \n3. No comparison with strong baselines: The study omits comparison with specialized and widely adopted models such as nnUNet, which limits the validity of the claimed performance.\n4. Coarse results reporting: Experimental results are presented superficially without in-depth discussion or comparison. Most results are presented as boxplots, and due to the limited performance differences it is hard to see any significant differences in the figures.\n5. Missing key citation: Even the baseline LiteMedSAM, which the work builds upon, is not properly cited when first introduced.\n6. Lack of academic rigor: The omissions and limited analyses suggest inadequate attention to research completeness and reproducibility.",
"questions": "To be honest, the paper exhibits too many weaknesses to be considered for acceptance at this stage. Too many critical questions remain unanswered:\n\n1. What is the detailed distribution of the training and evaluation data?\n2. How does the model perform compared to specialized models such as nnUNet?\n3. Can text prompting effectively enhance data integration or segmentation performance?\n...\n\nOverall, the paper is far from meeting the acceptance criteria. It lacks clarity, thorough evaluation, and sufficient experimental analysis. I do not see any questions or revisions that would significantly change my current decision.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T18:37:49",
"modification_date": "2025-11-12T18:02:01",
"review_url": "https://openreview.net/forum?id=FOrVEtwixO&noteId=JhlKVh2JtN",
"license": "CC BY 4.0"
},
{
"id": "AX7YZdis7H",
"forum": "FOrVEtwixO",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission21360/Reviewer_MuQb",
"reviewer_name": "Reviewer_MuQb",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper proposes LangMedSAM, a language-prompt segmentation model for medical imaging, aiming to address the issue that traditional SAM-like models rely on slice-by-slice bounding box prompts in medical scenarios, which leads to high annotation costs. While retaining the lightweight structure of MedSAM, this method introduces a language encoding module to realize the fusion of natural language and visual prompts, thereby enabling direct segmentation of target regions through text descriptions.",
"strengths": "- It supports hybrid text and visual prompts and is compatible with existing models in the SAM family.\n\n- It conducts thorough comparisons with baseline models, including MedSAM, LiteMedSAM, and BiomedParse.\n\n- It has low inference memory consumption, enabling easier deployment in clinical systems.",
"weaknesses": "- Although LangMedSAM structurally integrates text encoding and visual prompt modules, its overall framework still follows that of SAM and its derivatives. It is essentially an engineering optimization work, with insufficient innovation and no breakthroughs in theoretical methods or training paradigms.\n\n- In the general computer vision field, there are already multiple segmentation models supporting text prompts (e.g., Grounded-SAM, CLIPSeg). However, this paper does not compare LangMedSAM with these methods.\n\n- The labels, color schemes and explanatory texts of the charts in the paper are not clear enough, which may affect the readability of experimental results.\n\n- The paper mainly trains and tests the model on 2D slices, and does not verify the model’s spatial consistency and continuity in 3D volumetric data.",
"questions": "see weakness",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T15:27:02",
"modification_date": "2025-11-12T18:02:01",
"review_url": "https://openreview.net/forum?id=FOrVEtwixO&noteId=AX7YZdis7H",
"license": "CC BY 4.0"
}
] |
INwNHRWN2o | https://openreview.net/forum?id=INwNHRWN2o | Structural Error Patterns Matter: Towards More Structure-aware GNN Evaluation and Training | 2 | 4.25 | [
2,
2,
4,
0
] | [
4,
5,
3,
5
] | 4 | [
"GNN",
"Model Evaluation",
"Error Pattern"
] | Graph Neural Networks (GNNs) are a specialized family of neural networks designed to handle graph-structured data, enabling the modeling of complex relationships within graphs. Despite significant algorithmic improvements, the issue of performance evaluation for GNNs has largely been overlooked in the literature. A crucial but underexplored aspect of GNN evaluation is understanding how errors are distributed across the graph structure, which we refer to as the "structural error pattern." To the best of our knowledge, this paper is among the first to highlight the importance of paying attention to these error patterns, which are essential not only for model selection—especially in spatial applications where localized or clustered errors can signal critical issues—but also for providing algorithmic insights into the model’s performance. In this work, we introduce a novel mathematical framework that analyzes and differentiates evaluation metrics based on their sensitivity to structural error patterns. Through a thorough theoretical analysis, we identify the limitations of traditional metrics—such as accuracy and mean squared error—that fail to capture the complexity of these error distributions. To address these shortcomings, we propose a new evaluation metric explicitly designed to detect and quantify structural error patterns, offering deeper insights into GNN performance. Our extensive empirical experiments demonstrate that this metric enhances model selection and improves robustness. Furthermore, we show that it can be incorporated as a regularization method during training, leading to more reliable GNN predictions in real-world applications. | This paper study the structural error distribution of GNN | learning on graphs and other geometries & topologies | https://openreview.net/pdf?id=INwNHRWN2o | 2025-09-10T22:43:17 | 4 | [
{
"id": "KlNLFjea5P",
"forum": "INwNHRWN2o",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3761/Reviewer_ozky",
"reviewer_name": "Reviewer_ozky",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper argues that commonly used GNN evaluation metrics such as accuracy are exchangeable, and cannot distinguish different structural error patterns across the graph. Based on this limitation, the authors propose a new metric, SCS, and further introduce SCA regularization, which incorporates a squared form of SCS into the training loss to discourage spatially clustered errors.",
"strengths": "The paper identifies a relevant and underexplored issue in GNN evaluation: traditional metrics do not capture spatial or structural patterns in prediction errors.\n\nThe theoretical analysis clarifies why exchangeable metrics cannot differentiate different error distributions across the graph, which is conceptually insightful.\n\nThe idea that where errors occur matters, not just how many, is intuitive and aligns with practical concerns in spatial/graph applications.",
"weaknesses": "1. SCS may be confounded by graph structure rather than model behavior. For instance, in Figure 2 (planar power-law case), SCS primarily reflects the underlying graph topology (e.g., the presence of hubs) rather than revealing model-specific failure patterns. This challenges the core claim that SCS robustly detects meaningful structural clustering of errors.\n\n2. The role of SCA is unclear when error clustering is induced by graph structure. If the clustering arises because of the inherent graph structure rather than model shortcomings, then penalizing it with SCA may not be meaningful. The paper does not clarify when SCA is beneficial versus when it may suppress necessary model behavior.\n\n3. Experimental results do not convincingly demonstrate benefit.\n- In Table 1, adding SCA often reduces accuracy or increases MSE, suggesting the method may impair predictive quality.\n- It is unclear whether lower SCS is inherently desirable, especially when accompanied by lower predictive performance.\n- The experiments do not justify the trade-off or provide scenarios where the trade-off is beneficial.\n\n4. Missing comparisons to alternative structural metrics / baselines.\n To claim the necessity or superiority of SCS, comparisons to other graph-based or spatial autocorrelation measures (or even naive distance/clustering baselines) are needed.\n\n5. Experiment scope is limited.\n Only four datasets and five backbones are tested; no ablation study, no real-world scenario, and no evaluation where structural clustering actually matters for decision-making.",
"questions": "N/A",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T13:21:00",
"modification_date": "2025-11-12T11:09:31",
"review_url": "https://openreview.net/forum?id=INwNHRWN2o&noteId=KlNLFjea5P",
"license": "CC BY 4.0"
},
{
"id": "tP1xwR2cUY",
"forum": "INwNHRWN2o",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3761/Reviewer_YGt3",
"reviewer_name": "Reviewer_YGt3",
"rating": 2,
"confidence": 5,
"soundness": 1,
"contribution": 1,
"presentation": 2,
"summary": "The paper argues that standard GNN metrics (ACC/MSE) are exchangeable and therefore blind to the spatial distribution of errors (e.g. how errors are arranged on the graph). The authors thus propose a graph-aware diagnostic, called the Structural Cluster Statistic (SCS), which is essentially Moran's $I$ over residuals on the test subgraph, and a regularizer, SCA, given by a squared version of SCS added to the loss to discourage residuals from being clustered. Experiments show SCS reveals clustered errors and that SCA reduces SCS at a small cost in ACC/MSE.",
"strengths": "(1) Some motivation for structure-aware diagnostics; Fig. 1 illustrates a failure mode of exchangeable metrics.\n(2) The solution is a simple, computable ($O(|E|)$) statistic tied to a well-known spatial measure (Moran's $I$).",
"weaknesses": "Overall, while I do appreciate the importance of diagnostics that would reflect patterns of correlations between nodes in the graph, I do not necessarily understand the authors’ proposed solution. Here are my main reservations and questions.\n\n1) **Theory is not convincing as stated.** Theorem 3.1 claims that if two error distributions have the same expected sum $\\mathbb{E}[S(\\epsilon)]$, then the expectation of any exchangeable metric should agree under both. This seems a little odd. Consider a simple setting, with two errors $\\epsilon_1, \\epsilon_2$. Suppose that under distribution P, $\\epsilon_1, \\epsilon_2 \\sim$ Uniform(0,1), so that $\\mathbb{E}[\\epsilon_1 + \\epsilon_2] = 1$. Under distribution Q, $\\epsilon_1 \\sim \\text{Bernoulli}(0.5)$ and $\\epsilon_2 = \\epsilon_1$, so the expected sum is also 1. The expectations of the sums are thus equivalent. But the expectation of the maximum (which is exchangeable) is 2/3 in the first case, and 0.5 in the second. Wouldn't this contradict the theorem?\n\n2) **The motivation is too unclear.** While I agree that patterns of autocorrelation in residual structures could be better investigated in GNNs, the authors' argument for its necessity is not rigorous. They claim that \"Without structure-aware evaluation, practitioners lack insights into how prediction errors manifest across the graph, hindering their ability to diagnose, address, and prevent localized failures effectively,\" and that diagnosing errors is necessary to improve the fit. However, the problem is not well-formulated mathematically. From the authors' comments in the paper, it is unclear if the problem is to find local failure modes (e.g., sensor malfunctions), or to enable model selection. The problem could be formulated more mathematically. For instance, consider a regression task, with $f$ as a GNN function. The standard regression model assumes $f(X) = \\mathbb{E}[Y|X]$, so that $y = f(x) + \\epsilon$, where $\\epsilon$ denotes iid noise. Residual structure in the errors could indicate two things:\n 1) *Residual structure in $\\epsilon$ (model mis-specification)*: In this case, $\\epsilon$ is not iid and depends on $x$/the graph, likely necessitating a different architecture (e.g. more complex, to explain more of the signal).\n 2) *Autocorrelation in $\\epsilon$:* The structure is well-approximated, but $\\epsilon$ is not iid, yet independent of $X$. For example, in traffic prediction, heavy rainfall might correlate traffic deviations in a neighborhood. In this case, autocorrelation is not a problem for the fit, but it would be at inference time when estimating variance, etc. For instance, if the SCS metric shows correlation in the error, then it could be argued that randomly assigning nodes to train/validation/test splits will probably induce data leakage.\n\n3) **I find part 2 (SCA) confusing and potentially detrimental to the paper.** Why is regularizing towards a solution that \"looks good\" beneficial? The first part of the paper focuses on diagnostics, while the second seems to suggest training the model to perform well on the test set. However, regularizing the problem away doesn't address the underlying issues. If the goal is to diagnose a lack of fit in the GNN class, how does this regularization strategy, which would essentially mask it, contribute to that? An alternative could have been to use SCS for model selection: i.e. amongst models with different architectures, use SCS to choose which architecture provides the better fit.\n\n4) Moran's I is usually considered to be a test statistic --- not a metric --- and not particularly interpretable as such. It is usually standard practice to use it to test the null (e.g., no spatial clustering). How do the authors evaluate whether their SCS is within expected variation?\n\n5) **Lack of comparisons to other metrics**: finally, the authors could have argued that the suite of tools developed for spatial statistics could be pertinent --- or at least mentioned these techniques. Examples include local Moran's I, Geary's C, or measures of patchiness (e.g., CHAOS https://www.nature.com/articles/s41467-022-34879-1#Sec10).",
"questions": "1) Is the test set assumed to be a fully connected graph? How do the authors ensure that the statistic can be computed? (e.g. if the test set is 20\\% of the data, chosen randomly in the graph, wouldn't we expect the graph induced by the test nodes to be pretty sparse and disconnected)?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T10:14:00",
"modification_date": "2025-11-12T11:09:32",
"review_url": "https://openreview.net/forum?id=INwNHRWN2o&noteId=tP1xwR2cUY",
"license": "CC BY 4.0"
},
{
"id": "RYS6EWPU7j",
"forum": "INwNHRWN2o",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3761/Reviewer_3ELp",
"reviewer_name": "Reviewer_3ELp",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper studies the structural error patterns in GNN models, proposes a new evaluation metric, SCS, explicitly designed to detect and quantify structural error patterns, and shows that SCS can be adapted into a regularization framework during model training. The experimental results illustrate that considering such factors can lead to errors being distributed more uniformly.",
"strengths": "1. It is interesting to study the error patterns in GNN models.\n2. The proposed metric can reduce the chances of error clusters in the GNN network.\n3. Some applications might prefer a uniform distribution of the error loss.",
"weaknesses": "1. The motivation of this paper should be clearer.\n2. The proposed SCA regularization does not always lead to an improvement in overall performance.\n3. More competitors should be included in the experiments.",
"questions": "1.\tThe motivation of this paper should be clearer. Currently, the paper has two objectives: overall performance and uniform error distribution. When the two factors conflict, what is the desired outcome? Even if the overall performance is the same, is a uniform error distribution better than other cases? In Figure 1, is it meaningful to merely redistribute errors at the same performance? Supposing each node has the same weight, the overall performance is a more important metric than the error distribution.\n\n2.\tThe value of studying error distributions should be to improve the overall performance. However, as we can see from Figure 1, the overall performance is not improved, and is sometimes degraded, which damages the value of the proposed methods.\n\n3.\tApplying SCA regularization to achieve structural uniformity of errors might make it more challenging to improve the overall performance of GNNs, as it is hard to capture the error pattern. For example, the clustered error in Figure 1b is relatively easy to detect, which then provides chances to fix it.\n\n4.\tThe compared methods are not new; GCN and GAT are very early works. Some work on heterophily graphs might be related to your work and could be compared in the experimental study.\n\n5.\tThis paper applies SCA in GNN training. It would be better to discuss how such a regularization works on graphs with different distributions, such as power-law graphs.\n\n6.\tThe paper mainly studies the structural factors related to error patterns. How about other factors like the content/labels of neighbor nodes?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T20:26:12",
"modification_date": "2025-11-12T11:09:32",
"review_url": "https://openreview.net/forum?id=INwNHRWN2o&noteId=RYS6EWPU7j",
"license": "CC BY 4.0"
},
{
"id": "9j61A6u6ui",
"forum": "INwNHRWN2o",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3761/Reviewer_ycLD",
"reviewer_name": "Reviewer_ycLD",
"rating": 0,
"confidence": 5,
"soundness": 1,
"contribution": 1,
"presentation": 1,
"summary": "This paper suggests a new metric to compute errors in GNNs, called Structural Cluster Statistic (SCS).",
"strengths": "The idea of designing a metric that encapsulates how the mislabeling of a GNN is spread across the graph is interesting and can be useful.",
"weaknesses": "1. The paper is hand-wavy and full of undefined terms. For example, the term \"error pattern,\" which seems to be the main focus of this work, is not well-defined anywhere, neither in words nor mathematically. Figure 1 suggests that what the authors mean by \"error pattern\" is some continuous property over the graph topology of the structures that mislabeled nodes form. Nonetheless, even in Figure 1, it is not mathematically defined what \"random\", \"cluster\", or \"Dispersed Error\" means. These terms must be rigorously well-defined. In the abstract, this is referred to as the \"structural error pattern\".\n\n2. The proposed method is not well-motivated. Line 61 hand-waves that \"error patterns\" are important, but the authors did not demonstrate, through any example, mathematical formulation, or reference, the implication of not examining \"error patterns\". The fact that \"error pattern\" is not well defined, as explained in (1), only makes it harder to understand the motivation for the proposed approach.\n\n4. The evaluation section is weak for a paper whose main contribution is an evaluation metric. I expect to see an extensive evaluation across many diverse datasets and GNNs. The authors use 4 datasets, two of which are known to have many limitations, including mislabeled examples [1].\n\n5. It is not clear what contribution the proposed method provides. In the evaluation section, there is no demonstration of the insights that the new metric reveals. This seems natural to provide, especially as the authors claim in their abstract, \"A crucial but underexplored aspect of GNN evaluation is understanding how errors are distributed across the graph structure, which we refer to as the 'structural error pattern'\". If this is crucial, as the authors suggest, it would be beneficial to see at least one example of insights obtained from these patterns (assuming they are well-defined, which is not the case in this work).\n\n6. 
The paper is not sound and poorly written. The claims raised in the abstract and across the paper about the need for the proposed metric are not demonstrated or supported anywhere.\n\n7. It is not clear why the authors provided a grid graph as the example in Figure 1. This emphasises the lack of motivation for the suggested approach, as it was not even demonstrated on a real graph to show the need for the approach or what it can reveal about the graph.\n\n[1] Position: Graph learning will lose relevance due to poor benchmarks, Bechler-Speicher et al., ICML 2025.",
"questions": "1. What is an \"error pattern\"? Please define this mathematically, and show, mathematically, how your metric provides insights into discovering it.\n\n2. Please provide examples of real use-cases where the \"error pattern\" defined above holds crucial information, and show on real examples why existing metrics do not capture it.\n\n3. Please mathematically define the 3 types of patterns in Figure 1, in a way that is consistent with them being \"error patterns\" (after this term is also well-defined).",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-26T14:05:04",
"modification_date": "2025-11-12T11:09:33",
"review_url": "https://openreview.net/forum?id=INwNHRWN2o&noteId=9j61A6u6ui",
"license": "CC BY 4.0"
}
] |
LtTuAVkKoM | https://openreview.net/forum?id=LtTuAVkKoM | Focusing by Contrastive Attention: Enhancing VLMs' Visual Reasoning | 4 | 3.4 | [
4,
4,
4,
4,
4
] | [
3,
3,
4,
4,
3
] | 5 | [
"Vision-Language Models",
"Visual Reasoning",
"Large Language Model",
"LLM",
"VLM",
"Reasoning"
] | Vision-Language Models (VLMs) have demonstrated remarkable success across diverse visual tasks, yet their performance degrades in complex visual environments. While existing enhancement approaches require additional training, rely on external segmentation tools, or operate at coarse-grained levels, they overlook the innate ability within VLMs. To bridge this gap, we investigate VLMs' attention patterns and discover that: (1) visual complexity strongly correlates with attention entropy, negatively impacting reasoning performance; (2) attention progressively refines from global scanning in shallow layers to focused convergence in deeper layers, with convergence degree determined by visual complexity. (3) Theoretically, we prove that the contrast of attention maps between general queries and task-specific queries enables the decomposition of visual signal into semantic signals and visual noise components. Building on these insights, we propose Contrastive Attention Refinement for Visual Enhancement (CARVE), a training-free method that extracts task-relevant visual signals through attention contrasting at the pixel level. Extensive experiments demonstrate that CARVE consistently enhances performance, achieving up to 75% improvement on open-source models. Our work provides critical insights into the interplay between visual complexity and attention mechanisms, offering an efficient pathway for improving visual reasoning with contrasting attention. | We propose Contrastive Attention Refinement for Visual Enhancement (CARVE), a training-free method that extracts task-relevant visual signals through attention contrasting at the pixel level. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=LtTuAVkKoM | 2025-09-12T23:20:56 | 9 | [
{
"id": "fe0598m4vt",
"forum": "LtTuAVkKoM",
"review_number": 5,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4502/Reviewer_WNpL",
"reviewer_name": "Reviewer_WNpL",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "The paper introduces CARVE, a training-free method that contrasts attention from a general instruction with attention from the task question to create a pixel-level mask that removes visual noise and helps the VLM focus its reasoning on the core part of the image. It shows that visual complexity increases attention entropy, and higher entropy reduces accuracy. The authors conduct experiments showing consistent gains from their attention-entropy-based training-free method across models and datasets, with ablations on time steps, layers, and mask parameters.",
"strengths": "1. CARVE shows consistent improvements over the original models and external-tool baselines.\n2. The paper formalizes attention entropy clearly, shows complexity -> entropy and entropy -> accuracy trends, and provides a closed-form contrasted attention used for masking.\n3. Ablation results are thorough across time steps and layers, plus sensitivity to top-p and region count K, with practical guidance. They also compare with SAM/YOLO/CLIP/ViCrop and gradient/attention variants.",
"weaknesses": "Refer to Questions section.",
"questions": "1. For SAM/YOLO/CLIP/ViCrop, were thresholds, confidence cutoffs, and post-processing tuned to near-optimal for each dataset & model to ensure a fair comparison?\n2. The paper needs an error case analysis to clarify the limits of the method and guide usage. Does it still work for tasks with global-context or multi-object queries, or other VQA tasks involving more reasoning?\n3. Have you considered in-attention sharpening (no image masking) that directly sharpens the difference between question and general attention? For example: compute (A' <- (A^Q)/(A^G+\\lambda)) and use softmax(log A'/t) to renormalize or sharpen the cross-attention weights. It could work as a baseline very direct to the motivation.\n\nIf the authors answer my questions well and make these clarifications, I am happy to raise my review score.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-10T00:28:49",
"modification_date": "2025-11-12T11:16:44",
"review_url": "https://openreview.net/forum?id=LtTuAVkKoM&noteId=fe0598m4vt",
"license": "CC BY 4.0"
},
{
"id": "9r97pS7eKb",
"forum": "LtTuAVkKoM",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4502/Reviewer_fSTn",
"reviewer_name": "Reviewer_fSTn",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper investigates the degradation of VLMs performance in visually complex scenes by providing a quantitative analysis that establishes a clear link between visual complexity (defined by texture and color), increased attention entropy, and reduced model accuracy. Based on this finding, the paper proposes CARVE, a training-free method to improve VLM reasoning. The core idea is to contrast the attention maps generated from a task-specific question against those from a general, descriptive prompt. This contrast is designed to isolate task-relevant \"semantic signals\" from task-agnostic \"visual noise.\" Extensive experiments on several VLMs, demonstrate consistent performance improvements across multiple benchmarks.",
"strengths": "1. While the phenomenon that \"complex visual information impairs performance\" is intuitive, the authors' contribution lies in rigorously and quantitatively analyzing this relationship. By defining concrete metrics for visual complexity and linking them empirically to attention entropy and downstream task performance (Figs. 4 & 5), the paper provides a solid, data-driven foundation for its claims. This analysis transforms an intuitive observation into a measurable scientific insight, which is a significant strength.\n\n2. Based on this finding, This paper demonstrates clear effectiveness as a training-free enhancement. The ability to improve performance on a strong, recent model like Qwen2.5-VL is particularly impressive and suggests that even advanced VLMs suffer from attention dispersion and can benefit from this approach. The substantial performance gains observed on weaker models (e.g., up to 75% on LLaVA-1.5-13B) further underscore the utility of CARVE as a mechanism to compensate for inherent model limitations in focusing ability.",
"weaknesses": "1. A primary weakness of the proposed method is its sensitivity to hyperparameters, which is a significant concern for a training-free approach that aims for broad applicability. Figure 7 clearly shows that the performance of CARVE is highly dependent on the choice of the top-p threshold (p) and the maximum number of kept regions (K). For a method to be truly \"plug-and-play,\" it should be robust to such choices or provide a principled, automatic way to set them. The current need for manual tuning for optimal results somewhat undermines the convenience of its training-free nature.\n\n2. It is hoped that experiments on more models, and especially on complex visual scene benchmarks, will be conducted to further enhance the effectiveness and persuasiveness of the method proposed in this paper.\n\nIf the robustness of the method and the performance gains on more benchmarks can be further demonstrated, I am willing to increase the score.",
"questions": "See weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T21:12:38",
"modification_date": "2025-11-12T11:16:44",
"review_url": "https://openreview.net/forum?id=LtTuAVkKoM&noteId=9r97pS7eKb",
"license": "CC BY 4.0"
},
{
"id": "YMjxmj7iqA",
"forum": "LtTuAVkKoM",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4502/Reviewer_pgMe",
"reviewer_name": "Reviewer_pgMe",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper studies why VLMs fail to “focus” in complex scenes and proposes a training-free enhancement, CARVE (Contrastive Attention Refinement for Visual Enhancement). Empirically, the authors show that (i) visual complexity (texture/color) correlates with higher attention entropy and lower accuracy; (ii) attention sharpens from shallow to deep layers; and (iii) contrasting attention from a general instruction vs. the task question yields a pixel-level mask that suppresses visual noise, improving VQA-style benchmarks across LLaVA-1.5 and Qwen2.5-VL models.",
"strengths": "1. A simple inference-time refinement method for VQA reasoning tasks that improves accuracy without additional training or heavy compute.\n2. Plug-and-play method applicable to transformer-based VLMs.\n3. Clear explanation of the problem.",
"weaknesses": "1. Potential degradation on reasoning tasks beyond VQA, especially those requiring global context.\n2. Limited evaluation scope: only four datasets were used for evaluation. Given that no training is involved, this is relatively small for VLM evaluation and for demonstrating the generalizability of the results.\n3. Relatively small gains given ~3× inference passes. The paper mentions early termination to reduce two of the three passes but the total overhead remains underexplored.",
"questions": "1. Following the weaknesses above, I am a bit concerned about the generalizability of the proposed method, beyond VQA reasoning tasks that rely on local context. Could you please elaborate on this? I would be interested in seeing negative/neutral cases where CARVE hurts or offers no benefit (e.g., wrong region kept; too small K; OCR-like questions).\n2. Related to the above question, could you please elaborate on how the model would perform on more general reasoning tasks, such as image captioning? In those cases, I would guess that the general and the question attention masks become very similar, which could be a failure point for the proposed approach.\n3. How much additional memory and compute does the method require at inference compared to the raw model without CARVE? According to the paper, the number of passes is tripled (once for the general instruction, once for the question, and once for the question with the masked image).\n4. What does “accuracy” exactly mean in Table 3, and how is it computed for models like SAM and YOLO?\n5. In Figure 5-b, both performance and entropy are plotted on a single y-axis; it is unclear what the y-axis represents and what the exact performance values are for each point. Could you please clarify Figure 5-b in more detail?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T18:10:14",
"modification_date": "2025-11-12T11:16:45",
"review_url": "https://openreview.net/forum?id=LtTuAVkKoM&noteId=YMjxmj7iqA",
"license": "CC BY 4.0"
},
{
"id": "D2FX53HFbh",
"forum": "LtTuAVkKoM",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4502/Reviewer_whRJ",
"reviewer_name": "Reviewer_whRJ",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper defined a “visual complexity” concept including texture and color complexity, then explored how they affect Vision-Language Models' (VLMs) attention and discovered some phenomena. Building upon the findings, the authors proposed a training-free method called CARVE. Concretely, CARVE generated contrastive attention maps to mask visual noise, crop relevant regions, and magnify them for improved inference, thereby improving visual reasoning. Experimental results exhibited improvements across multiple VLMs and benchmarks, demonstrating the effectiveness of the proposed method.",
"strengths": "1. This paper analyzed VLMs from perspectives of visual complexity and attention entropy, finding several intuitive phenomena. With the findings, the authors presented the CARVE method to enhance the visual reasoning capability of VLMs.\n\n2. The proposed CARVE is training-free, the empirical results demonstrated its effectiveness on different VLMs and test sets, and model scales range from 3B to 13B, making it feasible for various visual reasoning scenarios.",
"weaknesses": "1. Numerous previous studies have already investigated VLMs’ attention mechanism and its relationship with hallucination or visual reasoning, such as [Devils, CVPR 2025], [EAH, EMNLP 2025], [Farsight, CVPR 2025], [TAME, ICLR 2025], [FastV, ECCV 2024], [Clearsight, CVPR 2025], and [SEE WHAT YOU ARE TOLD, ICLR 2025]. More importantly, there have been long Chain-of-Thought reasoning VLMs since OpenAI o1 and DeepSeek-R1(-0528), such as VLM-R^3, LlamaV-o1, Vision-R1, and Visual-O1, which all have strong visual reasoning capabilities on most benchmarks. It is necessary for the authors to clarify the exclusive contribution of the proposed method, explain the limitations of the prior works, and explicitly distinguish their proposed method from them.\n\n2. The “visual complexity” concept based solely on \"texture and color dimensions\" was poorly justified; there is no validation that these dimensions capture the relevant aspects, or discussion of why other visual aspects, such as object density or lighting variations, were ignored.\n\n3. The paper claimed to “prove” that contrastive attention scores enable decomposition of visual signals into semantic and noise components, but this claim lacked sufficient theoretical proof. For example, the core theoretical claim in Definition 1 that attention decomposes as Eq. 4.1 lacked rigorous justification, and Appendix A.2 didn’t interpret that clearly. The authors assumed this decomposition exists, but didn't establish why the specific multiplicative decomposition should hold, either from theoretical proofs or empirical validations. Apart from that, Eq. 4.3 and Eq. 4.4 are more like heuristic designs rather than theoretically grounded.\n\n4. The experimental evaluation of this work was limited. Results are primarily shown only for the TextVQA dataset, with no comprehensive testing on standard benchmarks like GQA or ScienceQA. 
Also, the paper requires proper baselines to demonstrate the superiority of the proposed CARVE; more comparisons need to be conducted, such as [Devils, CVPR 2025], [EAH, EMNLP 2025], [Farsight, CVPR 2025], [TAME, ICLR 2025], [FastV, ECCV 2024], [Clearsight, CVPR 2025], and [SEE WHAT YOU ARE TOLD, ICLR 2025]. \n\n5. The “75% improvement” claimed in the Abstract is misleading; even though this paper adopted a ratio metric to scale up the limited improvements, the results are only up to 71.83%, on the old LLaVA-1.5-7B. Additionally, there was no statistical significance testing for the empirical results, and no confidence or variance intervals, making the reported results random and unreliable.",
"questions": "1. Given the long Chain-of-Thought reasoning methods, such as OpenAI o1 and DeepSeek-R1(-0528) distilled LLMs and VLMs (e.g., LLaVA-o1, LlamaV-o1, VLM-R3), could the current short-CoT models outperform these o1-like VLMs? If not, what is the meaningfulness of this research?\n\n2. Why could general queries capture noise while specific queries capture semantic information? Is this assumption universally valid or theoretically grounded? And what if both general and specific queries produce similar attention patterns?\n\n3. All experiments were conducted with greedy decoding; how would CARVE perform under a common sampling setting?\n\n4. How could the proposed CARVE handle the circumstance where correct reasoning demands integrating information from multiple disparate regions of an input image? Also, this work assumed general instructions induce uniform semantic signals, but does that always hold? Some images might naturally focus on certain regions regardless of instructions.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T10:34:43",
"modification_date": "2025-11-12T11:16:47",
"review_url": "https://openreview.net/forum?id=LtTuAVkKoM&noteId=D2FX53HFbh",
"license": "CC BY 4.0"
},
{
"id": "flHIGZGYTP",
"forum": "LtTuAVkKoM",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4502/Reviewer_EEGZ",
"reviewer_name": "Reviewer_EEGZ",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper presents an empirical analysis showing a positive correlation between visual complexity and attention entropy, and a negative correlation between attention entropy and visual reasoning accuracy. Building on these findings, it introduces a training-free method, CARVE, which contrasts attention maps from a “general instruction” and a “task-specific question,” normalizes them to obtain semantically relevant attention, and then performs pixel-level masking, cropping, and magnification to suppress visual noise and focus on task-relevant regions. The paper evaluates CARVE on Qwen2.5-VL and LLaVA models across several typical VQA benchmarks, reporting substantial gains.",
"strengths": "1. The paper quantifies visual complexity via texture density and hue diversity, measures attention dispersion with Shannon entropy, and demonstrates a correlation chain linking complexity, attention entropy, and accuracy. A longitudinal inter-layer analysis further shows that deeper layers exhibit more concentrated attention but increased variance, reinforcing the method’s motivation.\n\n2. The approach requires only a single forward pass to extract two attention maps and take their contrast; no external segmenter or additional training is needed.\n\n3. Evidence spans models and datasets, includes ablations across layers and time steps, sensitivity studies on thresholds and the number of regions, and comparisons against SAM/YOLO/CLIP/ViCrop, which together provide a well-rounded empirical case.",
"weaknesses": "1. The paper instantiates complexity through two proxies: Canny edge detection and Hue distribution, but offers limited references and few independent validation studies. Moreover, in settings with comparable edge density and color statistics (e.g., Table VQA benchmarks), attributing failures to attention dispersion driven by these proxies remains debatable.\n\n2. In the exploration phase, the attention entropy H is computed on contrasted attention by default, yet the design choice is not thoroughly analysed. In the method, contrasted attention is further treated as an effective estimator of semantic localization to generate masks, but there is no alignment against external localization ground truth, nor a direct comparison with the native attention readily available from standard models.\n\n3. The use of contrasted attention assumes that attention under the general instruction primarily reflects task-agnostic visual factors like noise or background. The exploration section does not directly test or substantiate this premise.",
"questions": "NA",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T19:38:20",
"modification_date": "2025-11-12T11:16:47",
"review_url": "https://openreview.net/forum?id=LtTuAVkKoM&noteId=flHIGZGYTP",
"license": "CC BY 4.0"
}
] |
8UZpmrxoLG | https://openreview.net/forum?id=8UZpmrxoLG | Astra: General Interactive World Model with Autoregressive Denoising | 5 | 3 | [
6,
4,
6,
4
] | [
2,
3,
3,
4
] | 4 | [
"world model",
"video generation"
] | Recent advances in diffusion transformers have empowered video generation models to generate high-quality video clips from texts or images. However, world models with the ability to predict long-horizon futures from past observations and actions remain underexplored, especially for general-purpose scenarios and various forms of actions. To bridge this gap, we introduce Astra, an interactive general world model that generates real-world futures for diverse scenarios (e.g., autonomous driving, robot grasping) with precise action interactions (e.g., camera motion, robot action). We propose an autoregressive denoising architecture and use temporal causal attention to aggregate past observations and support streaming outputs. We use a noise-augmented history memory to avoid over-reliance on past frames to balance responsiveness with temporal coherence. For precise action control, we introduce an action-aware adapter that directly injects action signals into the denoising process. We further develop a mixture of action experts that dynamically route heterogeneous action modalities, enhancing versatility across diverse real-world tasks such as exploration, manipulation, and camera control. Astra achieves interactive, consistent, and general long-term video prediction and supports various forms of interactions. Experiments across multiple datasets demonstrate the improvements of Astra in fidelity, long-range prediction, and action alignment over existing state-of-the-art world models. | generative models | https://openreview.net/pdf?id=8UZpmrxoLG | 2025-09-17T23:16:34 | 4 | [
{
"id": "FS3A0rAUuF",
"forum": "8UZpmrxoLG",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9431/Reviewer_toMU",
"reviewer_name": "Reviewer_toMU",
"rating": 6,
"confidence": 2,
"soundness": 2,
"contribution": 3,
"presentation": 2,
"summary": "In this paper, the authors propose Astra, an interactive world model that extends pre-trained diffusion models for long-horizon, action-conditioned video prediction. \n\nThe core contribution lies in three components within an auto-regressive denoising framework: (1) an action-aware adapter that injects action signals into the latent space of a pre-trained diffusion model, (2) a noise-augmented history memory mechanism that balances temporal consistency and action responsiveness, and (3) a mixture of action experts that routes heterogeneous action modalities to specialized experts. \n\nAstra is evaluated on a self-proposed benchmark consisting of diverse datasets and demonstrates some improvements in long-range prediction stability compared to state-of-the-art models.",
"strengths": "**Strength (1)**: The paper is well-organized. Authors explain the core ideas with clear diagrams and concrete algorithmic descriptions. \n\n**Strength (2)**: Astra is a single model across multi-modal action spaces, covering camera poses, keyboard/mouse inputs, and robot poses. \n\n**Strength (3)**: The proposed solutions exhibit several elegant and practical design choices:\n- The action-free guidance mechanism offers a simple, original mechanism to amplify action effects without heavy architectural changes.\n- The noise-augmented history memory is an elegant, parameter-free training strategy to reduce “visual inertia” and force the model to rely more on action signals, improving responsiveness without modifying the backbone.\n\n**Strength (4)**: The authors conduct evaluations across multiple domains, including autonomous driving, egocentric, and robotic settings, showcasing reasonable coverage.",
"weaknesses": "**Weakness (1)**: The definition and formulation of action signals are insufficiently specified. The paper does not clearly describe how different types of actions (e.g., camera poses, keyboard/mouse inputs, robot poses, etc.) are represented, parsed, and projected into the action encoder.\n\n**Weakness (2)**: A comparable method, YUME [1], is not discussed in Section 2 (Related Work). The paper does not clearly articulate how Astra differs from or improves upon YUME, which weakens the presentation.\n\n**Weakness (3)**: Experimental validation of design choices is limited. For example, no ablation study isolates the contribution of the Mixture of Action Experts (MoAE). A comparison against a simpler variant without a gating network would clarify whether MoAE provides meaningful gains.\n\n**Weakness (4)**: Quantitative comparisons with existing world modeling methods are lacking. Although the authors cite several relevant works [2, 3, 4, 5], Astra is not evaluated against them, making it difficult to assess the model's relative performance and significance. \n\n**Weakness (5)**: Although Astra is positioned as a general interactive world model trained on a mixture of five datasets, all evaluations are conducted on held-out data drawn from these same domains. It remains unclear whether the model generalizes to unseen environments.\n\n**Weakness (6)**: The paper combines pose tracking with human evaluation to assess \"instruction following\" in Astra-Bench, but the metric definition, aggregation procedure, and scoring protocol are not clearly specified. This makes it difficult for future work to compare against it. \n\n[1] Mao, Xiaofeng, Shaoheng Lin, Zhen Li, Chuanhao Li, Wenshuo Peng, Tong He, Jiangmiao Pang, Mingmin Chi, Yu Qiao, and Kaipeng Zhang. \"Yume: An interactive world generation model.\" arXiv preprint arXiv:2507.17744 (2025).\n\n[2] Cen, Jun, Chaohui Yu, Hangjie Yuan, Yuming Jiang, Siteng Huang, Jiayan Guo, Xin Li et al. 
\"WorldVLA: Towards Autoregressive Action World Model.\" arXiv preprint arXiv:2506.21539 (2025).\n\n[3] Huang, Siqiao, Jialong Wu, Qixing Zhou, Shangchen Miao, and Mingsheng Long. \"Vid2World: Crafting Video Diffusion Models to Interactive World Models.\" arXiv preprint arXiv:2505.14357 (2025).\n\n[4] Bar, Amir, Gaoyue Zhou, Danny Tran, Trevor Darrell, and Yann LeCun. \"Navigation world models.\" In Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 15791-15801. 2025.\n\n[5] Bruce, Jake, Michael D. Dennis, Ashley Edwards, Jack Parker-Holder, Yuge Shi, Edward Hughes, Matthew Lai et al. \"Genie: Generative interactive environments.\" In Forty-first International Conference on Machine Learning. 2024.",
"questions": "**Question (1)**: In the evaluation of Table 1, Wan-2.1 [6] is used as a baseline. How is Wan-2.1 adapted to accept continuous action inputs during evaluation?\n\n**Question (2)**: YUME [1] also extends Wan-2.1 [6] for interactive video prediction. Could the authors explain why Wan-2.1 is chosen as the base model instead of YUME?\n\n**Question (3)**: Astra-Bench uses both MegaSaM [7] and human evaluations for “instruction following.” Could the authors clarify how the numerical values of \"instruction following\" in Tables 1 and 2 are computed?\n\n**Question (4)**: All experiments train on a dataset mixture across five domains. There is no zero-shot evaluation on held-out domains. Does the model generalize to completely new environment unseen during training?\n\n**Question (5)**: Authors claim that increasing the length of history improves temporal consistency but weakens responsiveness (Line 257). However, no supporting quantitative data are provided. Could the authors supply such evidence?\n\n**Typographical Error**: “Eperimental” → “Experimental” (Line 594).\n\n[6] Wan, Team, Ang Wang, Baole Ai, Bin Wen, Chaojie Mao, Chen-Wei Xie, Di Chen et al. \"Wan: Open and advanced large-scale video generative models.\" arXiv preprint arXiv:2503.20314 (2025).\n\n[7] Li, Zhengqi, Richard Tucker, Forrester Cole, Qianqian Wang, Linyi Jin, Vickie Ye, Angjoo Kanazawa, Aleksander Holynski, and Noah Snavely. \"MegaSaM: Accurate, fast and robust structure and motion from casual dynamic videos.\" In Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 10486-10496. 2025.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-07T12:17:45",
"modification_date": "2025-11-12T12:17:11",
"review_url": "https://openreview.net/forum?id=8UZpmrxoLG¬eId=FS3A0rAUuF",
"license": "CC BY 4.0"
},
{
"id": "5hwiiiZd76",
"forum": "8UZpmrxoLG",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9431/Reviewer_AW9Q",
"reviewer_name": "Reviewer_AW9Q",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper introduces Astra, a framework for interactive world modeling that generates long and temporal coherence video sequences across diverse scenarios. Astra enhances a pre-trained video model with an light-weight action-aware adapter for precise action conditioning, a noise-augmented history memory during training to ensure long-term consistency, and a mixture of action experts to effectively handle diverse action inputs.",
"strengths": "1. This paper uses a lightweight action-aware adapter for precise action conditioning.\n2. Astra achieves good responsiveness and is able to generate long, temporally coherent video sequences by employing a noise-as-mask strategy during training.\n3. Astra employs a mixture of action experts to effectively adapt to diverse scenarios and handle various types of action inputs.",
"weaknesses": "1. Mixture of action experts idea is similar to [1, 2] and action-aware adapter is similar to [3, 4]. Please provide a conceptual comparison with these reference.\n2. The paper does not thoroughly analyze the underlying reasons why the noise-as-mask strategy enables the generation of long, temporally coherent video sequences.\n3. The paper does not explain why the router network performs so well across diverse scenarios and with various types of action inputs.\n\n[1] Mixture of Action Expert Embeddings: Multi-Task ACT\n\n[2] DriveMoE: Mixture-of-Experts for Vision-Language-Action Model in End-to-End Autonomous Driving\n\n[3] Long-Context Autoregressive Video Modeling with Next-Frame Prediction\n\n[4] Epona: Autoregressive Diffusion World Model for Autonomous Driving",
"questions": "1. In Section 3.3, the explanation of the types of random noise and blur used is unclear. Please provide a more detailed description.\n2. It was not properly analyzed the lightweight action-aware adapter model complexity. Please provide a more detailed description and comparison.\n3. In Figure 6, compared to YUME, the results from Astra appear to exhibit some color shift. Could the authors explain the cause of this phenomenon?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-06T12:41:58",
"modification_date": "2025-11-12T12:17:12",
"review_url": "https://openreview.net/forum?id=8UZpmrxoLG¬eId=5hwiiiZd76",
"license": "CC BY 4.0"
},
{
"id": "vz6piu3U8j",
"forum": "8UZpmrxoLG",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9431/Reviewer_ynyx",
"reviewer_name": "Reviewer_ynyx",
"rating": 6,
"confidence": 3,
"soundness": 4,
"contribution": 3,
"presentation": 3,
"summary": "The paper addresses a key limitation in current video generation and world modeling approaches which are lack of interactivity and long-horizon consistency. While diffusion-based models are able to generate high-fidelity videos from text or images, they often produce short, self-contained clips, fail to respond dynamically to user actions or control signals, and struggle with error accumulation in long rollouts. The paper addresses this by building a general-purpose, interactive world model that can simulate realistic futures across diverse domains (e.g., driving, robotics, exploration) while maintaining responsiveness to actions and temporal coherence. It proposes a lightweight module named as ACT-Adapter that injects action signals directly into the latent space of a pre-trained video diffusion backbone, a training strategy of noise-augmented history memory training which corrupts historical frames to reduce over-reliance on visual context and improve responsiveness., and mixture of action experts that handles multiple action modalities of camera, robot pose, and keyboard/mouse. Astra-bench is the benchmark suite used for evaluation which spans across multiple datasets to evaluate the visual quality and instruction-following performance.",
"strengths": "1. The paper addresses the limitation of passive video generation by showing interactive world modeling where video synthesis is conditioned on external actions .\n2. The framework proposes a single, general-purpose model by training on a diverse datasets of driving, robotics, exploration and handles heterogeneous action types via a Mixture of Action Experts.\n3. The paper proposes a noisy memory training strategy which forces the model to reply on action signals and not over-rely on past visual information.",
"weaknesses": "1. The ACT-Adapter seems be to showing a minimal performance improvement in Table 2. The ablation study in Table 2 shows it provides a score of 0.669 on Instruction Following, while a cross attn. adapter achieves 0.642, suggesting the performance gain of the new adapter is relatively small.\n2. The comparison to baseline methods in Table 1 does not reflect a fair comparison.\na). Since Wan2.1 is the pre-trained backbone of Astra, it would be an ablation of the paper instead of baseline.\nb). MatrixGame and YUME are described as domain-specific (\"game-specific\" and \"walking-specific\"). Since Astra is trained on a general mixture of data from multiple domains (driving, robotics, walking), the comparison of domain-specific with a general model does not seem to show a fair comparison. It creates a question that is it the data that is giving a good performance or is it the architecture and training strategy helping with the performance.",
"questions": "There seems to be some contradiction in training details. Section 4.1 mentions that the number of target frames is fixed to 33. Appendix A.2 mentions that the number of target frames is set to 32. Can the authors please clarify this?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T09:10:14",
"modification_date": "2025-11-12T12:17:12",
"review_url": "https://openreview.net/forum?id=8UZpmrxoLG¬eId=vz6piu3U8j",
"license": "CC BY 4.0"
},
{
"id": "XoVIR0wFho",
"forum": "8UZpmrxoLG",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission9431/Reviewer_qwc8",
"reviewer_name": "Reviewer_qwc8",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "The paper introduces Astra, a new framework for building a general-purpose, interactive world model. The central problem it addresses is that existing video generation models (like diffusion transformers) can create high-fidelity clips but lack true interactivity—they cannot generate long, coherent videos that dynamically and precisely respond to external user actions (e.g., camera controls, robot actions, or vehicle movements).",
"strengths": "The paper's primary strength lies in its proposal of a Astra framework, which skillfully combines the powerful generative capabilities of pre-trained diffusion models with an autoregressive, action-conditioned paradigm, effectively bridging the gap between high-fidelity video generation and real-time interactivity. The authors' core contributions are embodied in three innovations: (1) A lightweight ACT-Adapter for efficiently injecting action signals into the pre-trained model while preserving its knowledge; (2) An innovative \"noise-augmented memory\" strategy to overcome the \"visual inertia\" problem, forcing the model to prioritize responding to actions rather than simply repeating historical frames; (3) The introduction of a Mixture of Action Experts (MoAE) module, which enables flexible handling of heterogeneous action inputs from different domains, enhancing the model's generality. Experimental results validate the effectiveness of this design, with the model performing exceptionally well on the key \"Instruction Following\" metric.",
"weaknesses": "**W1** The authors constructed a new benchmark, Astra-Bench, for evaluation. According to the paper, this benchmark is \"comprising 20 held-out samples from each dataset\". This scale is extremely small and likely insufficient to robustly evaluate the model's generalization capabilities, which could lead to biased evaluation results. \n**W2** The paper's title claims it is a \"General Interactive World Model\". It is questionable whether this is sufficient to support such a broad \"general\" claim. Out-of-domain scenarios with different physics or interaction types (e.g., fluid dynamics, complex object stacking, multi-agent interaction) are needed.\n**W3** Can interaction modalities from different domains all be mapped to control signals of the same dimension? For example, the degrees of freedom (DoF) of an embodied agent are far greater than the degrees of freedom of autonomous driving.\n**W4** The first embodied agent interaction in the appendix not only has blurry artifacts, but also exhibits incorrect affordances and false interactions, which are challenges that remain to be addressed.\n\nTypos:\nThere is a clear spelling error in the title of Appendix A: \"A MORE EPERIMENTAL DETAILS\".",
"questions": "See in Weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T20:06:40",
"modification_date": "2025-11-12T12:17:12",
"review_url": "https://openreview.net/forum?id=8UZpmrxoLG¬eId=XoVIR0wFho",
"license": "CC BY 4.0"
}
] | |
2EQPpEZtEK | https://openreview.net/forum?id=2EQPpEZtEK | DiSTAR: Diffusion over a Scalable Token Autoregressive Representation for Speech Generation | 3.333333 | 3.666667 | [
4,
4,
2
] | [
3,
4,
4
] | 3 | [
"text-to-speech",
"residual vector quantization",
"masked diffusion model",
"autoregressive language model"
] | Recent attempts to interleave autoregressive (AR) sketchers with diffusion-based refiners over continuous speech representations have shown promise, but they remain brittle under distribution shift and offer limited levers for controllability. We introduce DiSTAR, a zero-shot text-to-speech framework that operates entirely in a discrete residual vector quantization (RVQ) code space and tightly couples an AR language model with a masked diffusion model, without forced alignment or a duration predictor. Concretely, DiSTAR drafts block-level RVQ tokens with an AR language model and then performs parallel masked-diffusion infilling conditioned on the draft to complete the next block, yielding long-form synthesis with blockwise parallelism while mitigating classic AR exposure bias. The discrete code space affords explicit control at inference: DiSTAR produces high-quality audio under both greedy and sample-based decoding using classifier-free guidance, supports trade-offs between robustness and diversity, and enables variable bit-rate and controllable computation via RVQ layer pruning at test time. Extensive experiments and ablations demonstrate that DiSTAR surpasses state-of-the-art zero-shot TTS systems in robustness, naturalness, and speaker/style consistency, while maintaining rich output diversity. Audio samples are provided on \url{https://anonymous.4open.science/w/DiSTAR_demo}. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=2EQPpEZtEK | 2025-09-19T15:56:23 | 3 | [
{
"id": "oBwROuronm",
"forum": "2EQPpEZtEK",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16779/Reviewer_mwhZ",
"reviewer_name": "Reviewer_mwhZ",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "DiSTAR is a method involving the recently popular paradigm of combining the benefits of autoregressive decoder-only LMs with diffusion models. In this specific instance, the entire architecture relies only on discrete tokens from an RVQ codec-based audio tokenizer, which is unlike previous work (DiTAR) where continuous latents are used. Consequently, the diffusion process is now a masked diffusion model. \n\nThe DiSTAR architecture involves aggregated patch-wise tokens fed to the AR model which “sketches” the next patch. The MDM refines the aggregate token into the RVQ tokens conditioned on previous token predictions. The method involves training tricks like dropping out RVQ layers that help the model remain robust across a wide range of bitrates.\n\nComparing with other state-of-the-art TTS models shows that the DiSTAR achieves comparable quality across various metrics.",
"strengths": "The main strength of the paper is the fact that it shows how to apply the AR + Diffusion paradigm to TTS using multi-level RVQ discrete audio tokens which helps remove the need for separate duration predictors and stop predictor; simply predicting [eos] tokens in the AR step is enough. This approach of patch-wise AR prediction mitigates some of the error-accumulation issues since the finer RVQ tokens are being generated in the masked diffusion sampling stage. The results also look good, with both subjective and objective metrics showing comparable results against strong baselines.",
"weaknesses": "Weaknesses and questions:\n- The authors use some embedding initialization trick but do not cite any existing work or ablate the design to prove it is effective.\n- Similarly the utility of stochastic layer truncation is not cited/ablated. I believe the DAC (descript audio codec) paper does use this technique in the training of DAC but the authors are using it in the training of the LM and MDM on top of the RVQ codec. Will this still be needed if the authors used DAC or the RVQ decoder is already trained with quantizer dropout?\n- The authors mention in the abstract that AR+Diffusion models on continuous latents are brittle under distribution shifts but do not really run any experiments that compare DiTAR vs DiSTAR under such a setting.\n- The claims in the abstract regarding surpassing state-of-the-art for speaker similarity seem a little exaggerated based on the results shown in the paper. SMOS is very close to E2TTS with a wide spread, and the objective SIM metric is also lower than some other baselines.",
"questions": "Please see the weakness section.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T14:42:54",
"modification_date": "2025-11-12T13:53:41",
"review_url": "https://openreview.net/forum?id=2EQPpEZtEK¬eId=oBwROuronm",
"license": "CC BY 4.0"
},
{
"id": "5KZcUseHun",
"forum": "2EQPpEZtEK",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16779/Reviewer_tPVS",
"reviewer_name": "Reviewer_tPVS",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "In this paper, authors proposes DiSTAR, a zero-shot TTS framework that works entirely in a discrete residual vector quantizatio code space, coupling an autoregressive language model (sketcher) with a masked diffusion Transformer (refiner). The approach avoids forced alignments and duration predictors, instead using blockwise parallelism where the AR model drafts RVQ token sketches for each patch and the diffusion model performs parallel masked infilling to complete the block. Using DiSTAR discrete latent space can be directly used for controllability, supports a variety of decoding strategies, and allows inference-time bitrate and compute control by pruning RVQ layers. The system is evaluated on standard zero-shot TTS benchmarks. DiSTAR demonstrates improvements over recent baselines in robustness, naturalness, and speaker consistency.",
"strengths": "(a) The paper presents a integration of an autoregressive LM and a diffusion model operating on RVQ discrete tokens. This combination addresses weaknesses of purely-AR or purely-diffusion approaches. Furthermore, the idea of iterative discrete demasking (inspired by LLaDA) is technically interesting and new in the TTS domain.\n\n(b) The empirical results back up the claims of improved robustness, high naturalness, and better speaker consistency (SMOS) across unseen voices.\n\n(c) Eliminating the need for forced aligners, duration models, or external text-speech alignment is a another practical strength of the proposed work.",
"weaknesses": "(a) The proposed appraoch is the combination of different techniques, each individual component draws on previously known ideas, so the perceived novelty is Incremental. DiSTAR’s core innovation is applying masked diffusion in the discrete RVQ domain, which is new, but conceptually it parallels prior AR+refinement pipelines and the LLaDA diffusion LM approach in NLP.\n\n(b) The method is complex and lack in clarity. Furthermore the system involves multiple components and a non-trivial training procedure, which are not fully transparent in the description .For example i cannot understand clearly how the AR hidden sketch is defined and used. Is it generating one coarse codebook stream, a fused embedding per frame, or something else?\n\n(c) The results claim comparable inference speed to a baseline (DiTAR), but since it still relies on an iterative diffusion process for each patch, which may be a bottleneck.\n\n(d) Couple of relevant baselines are absent. In particular, there is no direct comparison to a pure AR discrete token model of comparable size on the same data. Without an explicit AR-only baseline, it’s hard to isolate how much the diffusion refiner helps beyond a standard AR approach\n\n(e) I think an ablation where the diffusion module is removed (i.e. the AR alone generates all codebooks) would be insightful. Does the diffusion mainly help with fine detail, or also with stability (WER)?\n\n(f) The authors mention that DiSTAR is less sensitive to high-frequency artifacts in the reference prompt than others, attributing better speaker cloning to this. However, it’s unclear why ? Is the diffusion refiner helps ignore prompt noise.? There is a need to evaluate the robustness under prompt domain shift.",
"questions": "(a) How are the AR drafter and diffusion refiner trained jointly or sequentially? It is implied in the paper that a shared token space allowing end-to-end optimization , but can you clarify if you train the AR LM and the diffusion Transformer simultaneously or in stages ?\n\n(b) Did you test scenarios beyond the lengths in the benchmark (e.g., generating several minutes of speech concatenating multiple paragraphs)? Does the model maintain speaker identity and prosody consistently in truly long sequences?\n\n(c) Could you provide more details on pruning? For example, if you drop the top $k$ codebooks at inference (using only the first $L-k$ RVQ layers), how does it impact MOS or WER?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T23:00:17",
"modification_date": "2025-11-12T13:53:41",
"review_url": "https://openreview.net/forum?id=2EQPpEZtEK¬eId=5KZcUseHun",
"license": "CC BY 4.0"
},
{
"id": "ojQvbXl8X8",
"forum": "2EQPpEZtEK",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16779/Reviewer_rLfL",
"reviewer_name": "Reviewer_rLfL",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper extends DiTAR by replacing continuous code with RVQ codes and use a LLaDA style masked diffusion transformer to predict the next code patch.",
"strengths": "1. There is some limited novelty in combining LLaDA style diffusion transformer with DiTAR approach.",
"weaknesses": "1. The paper is a bit difficult to read with some grammar issues. If possible, I suggest the authors to seek help from native English speakers to make the paper more reader friendly.\n2. The authors claim the model to be SOTA in robustness, speaker similarity and naturalness but the results in Table 1 seems to indicate otherwise? The Speaker SIM and UTMOS scores are lower than competitors.\n3. The evaluation section seems a bit sketchy overall. Why are the models compared in Table 2 different from Table 1? Subjective evaluations are missing key details (e.g. number of evaluators, number of samples per evaluator). The ablation study only covers decoding strategies but not other design choices.",
"questions": "See weakness.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-23T05:04:37",
"modification_date": "2025-11-12T13:53:42",
"review_url": "https://openreview.net/forum?id=2EQPpEZtEK¬eId=ojQvbXl8X8",
"license": "CC BY 4.0"
}
] | |
M84KJx6oCx | https://openreview.net/forum?id=M84KJx6oCx | SPARK: Synergistic Policy And Reward Co-Evolving Framework | 4 | 4 | [
6,
2,
4,
4
] | [
4,
4,
4,
4
] | 4 | [
"RLVR",
"RLHF",
"LLM",
"LVLM"
] | Recent Large Language Models (LLMs) and Large Vision-Language Models (LVLMs) increasingly use Reinforcement Learning (RL) for post-pretraining, such as RL with Verifiable Rewards (RLVR) for objective tasks and RL from Human Feedback (RLHF) for subjective tasks.
However, RLHF incurs high costs and potential reward–policy mismatch due to reliance on human preferences, while RLVR still wastes supervision by discarding rollouts and correctness signals after each update. To address these challenges, we introduce the Synergistic Policy And Reward Co-Evolving Framework (SPARK), an efficient, on-policy, and stable method that builds on RLVR. Instead of discarding rollouts and correctness data, SPARK recycles this valuable information to simultaneously train the model itself as a generative reward model. This auxiliary training uses a mix of objectives, such as pointwise reward score, pairwise comparison, and evaluation conditioned on further-reflection responses, to teach the model to evaluate and improve its own responses. Our process eliminates the need for a separate reward model and costly human preference data. SPARK creates a positive co-evolving feedback loop: improved reward accuracy yields better policy gradients, which in turn produce higher-quality rollouts that further refine the reward model. Our unified framework supports test-time scaling via self-reflection without external reward models and their associated costs. We show that SPARK achieves significant performance gains on multiple LLM and LVLM models and multiple reasoning, reward models, and general benchmarks. For example, SPARK-VL-7B achieves an average 9.7\% gain on 7 reasoning benchmarks, 12.1\% on 2 reward benchmarks, and 1.5\% on 8 general benchmarks over the baselines, demonstrating robustness and broad generalization. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=M84KJx6oCx | 2025-09-04T12:49:17 | 4 | [
{
"id": "ffSL10iEQi",
"forum": "M84KJx6oCx",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission1899/Reviewer_R8hV",
"reviewer_name": "Reviewer_R8hV",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper introduces SPARK, an on-policy framework that trains a single model to be both the policy and the judge. Instead of discarding rollouts in RL with verifiable rewards (RLVR), the method recycles the n-best candidates to build on-policy supervision for pointwise judgments, pairwise comparisons, and reflection. The unified model then uses this judging ability at test time for self-reflection–style TTS (no external reward model). On Qwen2.5-VL-7B, the authors report average gains of +9.7% on seven math benchmarks and +12.1% on two reward benchmarks, with smaller but consistent improvements on broader multimodal tasks. The paper argues this reduces RM cost/complexity while improving stability and data efficiency.",
"strengths": "1. Unified loop that wastes less signal. Recycling RLVR outcomes into pointwise/pairwise/reflection supervision for the same model is neat and practical; it cuts one model class out of the stack and removes frequent RM calls.\n2. On-policy supervision. Using current behavior to create judgment/reflection data reduces distribution shift versus offline RM datasets and explains why TTS helps SPARK but hurts baselines.\n3. Consistent wins. The +9.7% (math) and +12.1% (reward) numbers on VL-7B are solid; the smaller general-domain bump is still directionally positive.\n4. Reasonable ablations. Clear separation of answer vs. CoT data and a TTS study that highlights why a weak judge can degrade performance, whereas a trained judge helps.",
"weaknesses": "1. Efficiency/accounting is light. The paper claims cost wins over RM-based RL, but lacks hard numbers: wall-clock hours, tokens/sec, GPU memory/FLOPs, and verifier runtime (#unit tests per sample, pass rate). Table-style qualitative comparisons are helpful but not enough for practitioners.\n\n2. Verifier dependence. Rewards are binary and rule-based; the paper doesn’t probe robustness to noisy or partial verifiers (very common in code/math). A noise-injection or partial-credit ablation would make the claim more convincing.\n\n3. Self-confirmation risk. Policy and judge live in the same model. The KL to a reference helps, but there’s no quantitative analysis of judge calibration (ECE/Brier) or safeguards against over-confident self-approval during TTS.\n\n4. Repro details. Core knobs for TTS (max reflection rounds, acceptance rule, early stopping), prompt formats, and the exact n-best sampling policy should be surfaced in the main text.",
"questions": "1. Compute & throughput. Could you report end-to-end wall-clock, effective tokens/sec, and GPU hours for SPARK vs. (i) GRPO Policy-Only, (ii) GRPO Policy&Reward, and (iii) an RM-based pipeline? Also break out verifier cost per batch (pass rate, retries). This would substantiate the cost argument beyond Table 7.\n\n2. Judge–policy coupling. Did you try decoupled heads or stop-gradient tricks so the “judge” pathway can drift a bit from the “policy” during data generation? Even light dropout/temperature on the judge might reduce confirmation bias.\n\n3. TTS protocol. Please specify maximum reflection rounds and acceptance criteria (first judged-correct vs. best-of-k). In Table 5, can you attribute the baseline degradation to specific judge errors over rounds?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T06:50:08",
"modification_date": "2025-11-12T10:52:14",
"review_url": "https://openreview.net/forum?id=M84KJx6oCx¬eId=ffSL10iEQi",
"license": "CC BY 4.0"
},
{
"id": "7XuFpnaaY6",
"forum": "M84KJx6oCx",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission1899/Reviewer_vEEq",
"reviewer_name": "Reviewer_vEEq",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 2,
"summary": "This paper proposes a method called “SPARK”. Its major contribution is to jointly optimize the RL policy and the reward model. It uses the RLVR-derived correctness scores to train the model itself to become a generative reward model. The proposed method is verified on three categories of benchmarks. Experimental results show that SPARK achieves significant performance gains on multiple LLM and LVLM models and multiple reasoning, reward models, and general benchmarks.",
"strengths": "1. The motivation to get both the optimized RL policy and reward model is good.\n2. The adoption of the reflection mechanism in both training and testing is helpful.\n3. The experimental results are supportive.",
"weaknesses": "1. The idea of co-training the policy with the reward model will result in divergence. Without a well-trained and fixed reward model, the RL policy will lose the target to optimize. Indeed, a stable target is the priority in optimization. For example, the Deep Q Network, it uses the target network, which is a delayed version of the network to be optimized, as the evaluation network, just to keep the optimization target fixed during a period of time. On the contrary, this paper’s reward model (optimization target) is dynamically changing. Very likely, in the very beginning, the reward model is naive, and the RL policy will not get useful information from it. The RL policy will collapse, and as a result, the reward model itself will not be optimized. Finally, both the reward model and the RL policy will not be improved during the training process.\n2. The reflection process can be improved. The idea of reflection is helpful, but simply using the LLM to directly reflect on itself may cause overfitting, which can limit the improvement.\n3. As far as I comprehend, this paper attempts to improve the RL with verifiable reward (RLVR) framework by proposing the co-training strategy. It didn’t improve the reward limitation on objective tasks of RLVR, nor does it have a direct relationship with RLHF. The advantage of requiring no human preference data is inherited from the vanilla RLVR. Therefore, the purpose of depicting the limitations of those two methods in the description section (Paragraph 2) is confusing.\n4. The manuscript needs polishing. For example, grammar errors like “Our key insight is to recycle the rollouts and correctness data to…”, “reward&reflection”; It is not clear what the “reference model” refers to in Equation 4; It is not clear what “\\box{}” is.",
"questions": "NA.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T21:39:49",
"modification_date": "2025-11-12T10:52:15",
"review_url": "https://openreview.net/forum?id=M84KJx6oCx¬eId=7XuFpnaaY6",
"license": "CC BY 4.0"
},
{
"id": "5OgKAyTg8X",
"forum": "M84KJx6oCx",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission1899/Reviewer_JEkG",
"reviewer_name": "Reviewer_JEkG",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "SPARK proposes a reinforcement learning framework that jointly evolves the policy and reward model within a single LLM/LVLM. Built on RL with Verifiable Rewards (RLVR), SPARK recycles correctness signals and rollouts that are normally discarded to train the same model as a generative reward model. This co-evolutionary mechanism reduces reliance on human preference data and external reward models, improving efficiency, stability, and test-time self-reflection.",
"strengths": "1.\tElegant unification of policy and reward training—reduces cost and improves stability.\n2.\tAddresses reward-policy mismatch, a key issue in RLHF pipelines.\n3.\tDemonstrated improvements across reasoning and reward benchmarks (+9.7% / +12.1%).\n4.\tConceptually aligns with scalable self-reflective AI trends.",
"weaknesses": "1.\tIncomplete technical specification:\nThe paper lacks full detail on how co-training signals are balanced or stabilized (e.g., gradient separation, EMA targets). Without this, it is hard to reproduce or verify convergence.\n2.\tPotential circularity problem:\nTraining a model to generate and simultaneously evaluate its own responses risks self-confirmation. The authors claim that the verification step prevents collapse, but empirical or theoretical backing is weak.\n3.\tLimited ablation studies:\nThe contribution of each component (e.g., reflection, recycling, policy iteration) to the overall gain is unclear. Ablations would strengthen causal claims.\n4.\tGenerality of results:\nAll experiments rely on Qwen-family models. It remains uncertain whether SPARK generalizes to other architectures like Llama, Gemini, or GPT-style systems.\n5.\tLack of qualitative failure analysis:\nThe paper focuses on positive results but does not explore where SPARK underperforms—e.g., in ambiguous reward conditions or low-confidence verification.\n6.\tPresentation clarity:\nWhile conceptually sound, some notation and flow between RLVR and SPARK updates are dense and under-explained. Figures could better illustrate the co-evolution process.",
"questions": "1.\tHow do you prevent reward drift or self-confirmation when both policy and reward share parameters?\n2.\tWhat stability techniques (e.g., target networks, KL penalties) are employed to ensure learning convergence?\n3.\tHow often is the verifier updated relative to the policy loop?\n4.\tCan SPARK operate on preference data when available, or is it strictly designed for verifiable signals?\n5.\tHow would SPARK handle tasks without binary verifiability (e.g., open-ended generation)?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-25T19:08:43",
"modification_date": "2025-11-12T10:52:15",
"review_url": "https://openreview.net/forum?id=M84KJx6oCx¬eId=5OgKAyTg8X",
"license": "CC BY 4.0"
},
{
"id": "nwFvAWuc5s",
"forum": "M84KJx6oCx",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission1899/Reviewer_Ubvd",
"reviewer_name": "Reviewer_Ubvd",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 1,
"presentation": 3,
"summary": "This paper introduces a method (SPARK) that jointly trains the LM to solve tasks and judge its own generated response, by \"recycling\" the rollouts generated during RL with Verifiable Rewards (RLVR). It also bakes self-reflection into the inference time, by utilizing its own judgement to prompt for reflection when mistake is detected.",
"strengths": "The experiments over Math related domains are comprehensive and show improvements compared to baseline. The model is also ablated with Policy-only and Reward-only objective.",
"weaknesses": "On the methodology, I am not quite sure if I understand the necessity of baking generation and reward modeling together. \n\n1. the task is already verifiable with rule-based calculation, the benefit of incorporating GRM is not obvious. What about other preference task where GRMs are more useful?\n2. No experiments on tasks that are non-verifiable to verify the effectiveness of proposed method. In my opinion, the “co-evolving” framework will likely result in RM overfit or model collapse when the learned judgement is not correct (in RLVR task, the learning signal is guaranteed to be correct for the RM). The generalizability of the setup is not verified.\n3. The experiment is not compared with setting that **trains a separate reward model** to help with test-time scaling. The experiments design should stress the difference between (a) LM + a pre-trained and fixed capable RM (b) the proposed co-evolving framework, but lacks such evidence.\n4. I’m not sure if comparison between the ablated version and proposed method is fair (i.e., whether judgement and self-reflection are both applied during test-time) but I might be wrong. Please see my question for detail.",
"questions": "1. For your evaluation (e.g., Table 1), can you clarify the setting a bit on how SPARK-VL-7B is evaluated? Is test-time scaling with judgement and self-reflection used? My understanding is YES. Please correct me if I am wrong.\n2. Then for your ablated version Qwen2.5-VL+GRPO + Policy&Reward, can you explain in more detail how it’s trained? Is it first trained on original data, then trained on crafted preference data for reward modeling, and then evaluated with judgement and self-reflection step as well? Because from the current description (line 318-323), I don’t know if self-reflection is applied during test-time.\n3. Do you have experiments that show results using two systems (a LM trained to generate CoT and solve problems, another LM trained on the collected rollout for reward modeling, then a combination of both during test time + self-reflection)?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-21T10:31:15",
"modification_date": "2025-11-12T10:52:16",
"review_url": "https://openreview.net/forum?id=M84KJx6oCx¬eId=nwFvAWuc5s",
"license": "CC BY 4.0"
}
] | |
1EyqJNvVlh | https://openreview.net/forum?id=1EyqJNvVlh | Anchors Aweigh! Sail for Optimal Unified Multi-Modal Representations | 3 | 3.75 | [
2,
4,
4,
2
] | [
3,
4,
4,
4
] | 4 | [
"Multimodal alignment",
"mutual information"
] | A unified representation space in multi-modal learning is essential for effectively integrating diverse data sources, such as text, images, and audio, to enhance efficiency and performance across various downstream tasks.
Recent binding methods, such as ImageBind, typically rely on a single, fixed anchor modality for aligning multi-modal data. We mathematically analyze these fixed anchor binding methods and uncover significant limitations: (1) over-reliance on the choice of the anchor modality, (2) inadequate capture of intra-modal information, and (3) failure to account for cross-modal correlation among non-anchored modalities.
To address these issues, we propose the need for adaptive anchor binding methods, exemplified by our framework CentroBind.
The proposed method uses adaptively adjustable centroid-based anchors generated from all available modalities, leading to a balanced and rich representation space.
We theoretically demonstrate that our approach captures three critical properties of multi-modal learning---intra-modal learning, inter-modal learning, and multi-modal alignment---while constructing a unified representation that spans all modalities. Experiments on both synthetic and real-world datasets show that adaptive anchor methods such as CentroBind consistently outperform fixed anchor binding methods, verifying our analysis. | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=1EyqJNvVlh | 2025-09-20T13:27:21 | 4 | [
{
"id": "MJVLlUkv3p",
"forum": "1EyqJNvVlh",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23589/Reviewer_Yya1",
"reviewer_name": "Reviewer_Yya1",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper proposes CentroBind, an adaptive anchor binding method for multi-modal representation learning. \nThe authors argue that fixed-anchor binding (FABind) methods suffer from over-reliance on a single anchor modality, loss of intra-modal information, and failure to capture shared information among non-anchor modalities. \nCentroBind replaces the fixed anchor with a centroid-based anchor computed from all available modalities. \nThe method is evaluated on synthetic and real-world datasets and is shown to outperform FABind and several other baselines in tasks such as cross-modal retrieval and classification.",
"strengths": "1. The paper is clearly written and motivated by a relevant problem in multi-modal learning — the bias and inefficiency of fixed-anchor alignment.\n2. The proposed centroid-based adaptive anchor idea is simple, easy to implement, and potentially applicable to other multi-modal frameworks.\n3. Experiments are conducted on both synthetic and real-world datasets, covering multiple modalities and tasks.",
"weaknesses": "1. The theoretical novelty is relatively weak. \n\n(1) The key idea—constructing a centroid anchor from multiple modalities—is a minor variation of existing multi-modal alignment formulations. \n\n(2) Similar concepts of adaptive or learned anchors have been discussed in OmniBind[1] and UNIALIGN[2]. \n\n(3) The mathematical derivations in Section 3 mostly restate standard InfoNCE lower-bound properties without offering new theoretical insights or proofs that go beyond prior work. The formal results (Theorem 3.1, Propositions 2.2–2.3) repackage well-known properties of mutual information and data-processing inequality.\n\n2. While the authors evaluate on several datasets, the experimental validation is not convincing for a top-tier conference.\n\n(1) Baselines are not sufficient. The comparisons include FABIND, UniBind, AudioCLIP, and ViT-Lens, but exclude more recent and stronger baselines such as OmniBind [1] and UNIALIGN [2], which are both highly relevant and directly comparable.\n\n(2) Lack of large-scale evaluation. All reported results are on small or medium datasets (MUStARD, AudioSet subsets). For a method claiming general multi-modal unification, evaluations on larger benchmarks are expected.\n\n3. CENTROBIND introduces extra computation for per-batch centroid construction and additional InfoNCE terms. \nHowever, the paper does not provide complexity analysis or runtime comparison with strong baselines such as ViT-Lens and UniBind. \nWithout this, it is unclear whether the method scales to high-dimensional multi-modal tasks.\n\n4. The manuscript contains several minor formatting and reference errors (e.g., “Figure ??” on page 6, line 273).\n\n5. Appendix references are frequently cited for essential content (algorithms, proofs, and ablations), making it difficult to assess the main claims within the body of the paper.\n\n[1] Lyu, Yuanhuiyi, Xu Zheng, Dahun Kim, and Lin Wang. 
\"Omnibind: Teach to build unequal-scale modality interaction for omni-bind of all.\" arXiv preprint arXiv:2405.16108 (2024). \n\n[2] Zhou, Bo, Liulei Li, Yujia Wang, Huafeng Liu, Yazhou Yao, and Wenguan Wang. \"UNIALIGN: Scaling Multimodal Alignment within One Unified Model.\" In Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 29644-29655. 2025.",
"questions": "1. Could the authors include direct experimental comparisons with OmniBind [1] and UNIALIGN [2]?\n2. Have the authors considered learning the centroid weights (e.g., modality-dependent coefficients) instead of computing a simple mean?\n3. What happens if modalities are partially missing during training or inference — can the adaptive anchor still be constructed robustly?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T23:25:10",
"modification_date": "2025-11-12T18:18:42",
"review_url": "https://openreview.net/forum?id=1EyqJNvVlh¬eId=MJVLlUkv3p",
"license": "CC BY 4.0"
},
{
"id": "jPNoLGQdaM",
"forum": "1EyqJNvVlh",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23589/Reviewer_EEA3",
"reviewer_name": "Reviewer_EEA3",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper presents CentroBind, a framework that adaptively determines the anchor modality for performing multimodal alignment. The paper provides a theoretical analysis showing that this approach can capture intra-modal, inter-modal, and multi-modal alignment components, and the empirical experiments demonstrate that CentroBind outperforms FABind baselines and other recent multi-modal alignment methods on retrieval and zero‐shot classification tasks.",
"strengths": "- Clear and Well-Structured: The paper is well-organized, with detailed explanations of the preliminary, intuition, and methodology.\n\n- Superiority in Alignment: The experimental results demonstrate that the proposed method achieves the best performance on the cross-modal retrieval and classification tasks compared to the baselines.",
"weaknesses": "- The paper provides clear intuition but presents the preliminary and methodological sections in an overly complex manner. I suggest that the authors reorganize the presentation flow to enhance readability and logical coherence. From my perspective, it is not necessary to include too many theoretical derivations or formal statements in the main text—these could be moved to the appendix, while keeping the main body focused on the core ideas, motivation, and empirical insights. \n\n- If some modalities’ encoders are significantly stronger or weaker, then the centroid might be dominated by high-quality modality embeddings, thus implicitly reintroducing an anchor bias. While the authors mention weighted aggregation as a workaround, empirical quantification of this phenomenon is minimal.\n\n- The technique introduced in this paper is relatively trivial and does not address the essential issue of anchor bias. While the proposed modification may bring marginal improvements, it fails to fundamentally eliminate the dependence on specific modalities or to provide a principled mechanism for balancing heterogeneous modality contributions.\n\n- The paper does not include experimental comparisons with other recent multimodal alignment methods, such as TRIANGLE [1] and GRAM [2]. Including these baselines would provide a stronger empirical validation of the proposed method's effectiveness. \n\n\n[1] A TRIANGLE Enables Multimodal Alignment Beyond Cosine Similarity, NeurIPS 2025\n\n[2] Gramian multimodal representation learning and alignment, ICLR 2025",
"questions": "See Weaknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T15:30:44",
"modification_date": "2025-11-12T18:18:43",
"review_url": "https://openreview.net/forum?id=1EyqJNvVlh¬eId=jPNoLGQdaM",
"license": "CC BY 4.0"
},
{
"id": "Qnvo7z9Tlb",
"forum": "1EyqJNvVlh",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23589/Reviewer_6D21",
"reviewer_name": "Reviewer_6D21",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper is motivated by a problem in “bind-everything-to-images” style models (ImageBind, LanguageBind, etc.): if you always align other modalities to one fixed anchor (usually vision), the final joint space can only be as rich as that anchor, so any information that lives mostly in audio, text, or another modality is suppressed. Even worse, correlations between two non-anchor modalities (say, audio↔text) are never explicitly optimized, because the loss only looks at anchor↔others. So the authors ask: can we build a unified space that is co-defined by all modalities, not dictated by just one?\n\nTheir method, CENTROBIND, replaces the fixed anchor with an adaptive, batch-wise anchor: for each training batch, they compute a centroid from the available modalities embeddings and use that as the “anchor” everyone aligns to. Then they apply a contrastive / InfoNCE-style objective that pulls every modality toward this centroid, so the shared space is shaped by what actually appears in the data in that batch, not by a single handpicked modality. Because the center is recomputed each time, it can drift to accommodate different modality combinations and preserve modality-specific signals.\n\nEmpirically, on both synthetic and real multimodal setups, CENTROBIND outperforms fixed-anchor baselines, matching the theory that multi-modal alignment should be symmetric (all-to-one adaptive center) rather than asymmetric (all-to-one fixed image space).",
"strengths": "- Theoretical justification for the problem is solid, in standard multimodal contrastive learning the choice of anchor modality imposes a fixed ceiling\n\n- The propose CENTROBIND method is simple: compute a per-batch centroid and align to it, no additional architecture, so in principle you can drop it into existing multimodal contrastive setups",
"weaknesses": "- Assumption that a single centroid per batch is a good proxy for the \"true\" shared semantics.\n- The method is also batch-dependent: the quality and stability of the anchor will depend on what modalities are present and how balanced the batch is.\n- Empirical evaluation is very limited, only consisting of results on a synthetic dataset and some limited set of tasks like sarcasm and speaker classification, dreambooth (image editing ?). Audioset is the only result comparable to prior baseline papers.\n\n- Some relevant references to related work are missing and should be discussed in paper:\n1. Humam Alwassel, Dhruv Mahajan, Bruno Korbar, Lorenzo Torresani, Bernard Ghanem, and Du Tran. Self-supervised learning by cross-modal audio-video clustering. NeurIPS 2020.\n2. Brian Chen, Andrew Rouditchenko, Kevin Duarte, Hilde Kuehne, Samuel Thomas, Angie Boggust, Rameswar Panda, Brian Kingsbury, Rogerio Feris, David Harwath, et al. Multimodal clustering networks for self-supervised learning from unlabeled videos. ICCV 2021.\n3. Sirnam Swetha, Mamshad Nayeem Rizve, Nina Shvetsova, Hilde Kuehne, and Mubarak Shah. Preserving modality structure improves multi-modal learning. ICCV 2023.",
"questions": "Please provide results for all the datasets used in the ImageBind paper which is the main baseline. I am willing to raise my rating if comprehensive evaluation is demonstrated. Also add more recent results for models like LanguageBind and InternVideo2.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T12:09:21",
"modification_date": "2025-11-12T18:18:43",
"review_url": "https://openreview.net/forum?id=1EyqJNvVlh¬eId=Qnvo7z9Tlb",
"license": "CC BY 4.0"
},
{
"id": "fG8tvm5sFR",
"forum": "1EyqJNvVlh",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission23589/Reviewer_Ux9F",
"reviewer_name": "Reviewer_Ux9F",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper revisits the challenge of constructing a unified representation space across multiple modalities (e.g., text, image, audio) for multimodal learning. Existing methods, such as ImageBind, rely on a fixed anchor modality (e.g., images or text) as the target space into which all other modalities are aligned, thus leading to reliance on the anchor modality and loss of intra-model interaction. To overcome these issues, the paper proposes CENTROBIND by constructing a centroid-based, dynamically updated anchor embedding to act as the anchor for all modalities. Empirical results on synthetic and real-world datasets demonstrate consistent improvements over FABIND baselines.",
"strengths": "- This paper studies a practical problem where the fixed anchor could be limited in some cases.\n- A theoretical framework is proposed to support the method.",
"weaknesses": "- The anchor generation strategy in CentroBind, which averages modality centroids, may not be robust when different modalities exhibit varying information densities. Modalities containing more or less discriminative information could disproportionately influence the centroid, potentially leading to biased or unbalanced representations.\n- The proposed method relies on high independence of modalities, which is not true in the real world. When modalities are highly correlated or exhibit strong synergy, the centroid could be overly biased toward certain modalities. Such correlation might hinder optimization, preventing the model from capturing true multimodal relationships. The paper does not clearly address how the optimization process mitigates these dependencies.\n- The combined use of multiple InfoNCE losses may introduce convergence instability or slow training. The effects of varying temperature parameters or weighting schemes among loss terms are not discussed, limiting understanding of the loss optimization dynamics and robustness.\n- The theoretical analysis appears to assume ideal conditions, leaving unclear the boundary cases where CentroBind may underperform—such as under uneven data distributions, severe modality imbalance, or non-independent modality structures. The generalizability of the theoretical claims thus remains uncertain.",
"questions": "Please see the weaknesses part.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T10:00:01",
"modification_date": "2025-11-12T18:18:43",
"review_url": "https://openreview.net/forum?id=1EyqJNvVlh¬eId=fG8tvm5sFR",
"license": "CC BY 4.0"
}
] | |
qR2TjMZ10B | https://openreview.net/forum?id=qR2TjMZ10B | On the Representation Degradation in Vision-Language-Action Models | 5 | 3.75 | [
4,
6,
6,
4
] | [
4,
4,
3,
4
] | 4 | [
"robot policy learning",
"vision-language-action models",
"representation learning"
] | Vision-Language-Action (VLA) models have become a promising paradigm for robotic decision-making, yet their application remains limited by generalization bottlenecks. In this paper, we conduct a layer-wise representation analysis and uncover a previously overlooked phenomenon of representation degradation: deeper layers tasked with action generation exhibit diminished generalization to both semantic information and environmental dynamics. To mitigate this issue, we introduce hidden Space WOrld modeLing (SWOL), a lightweight but efficient approach that aligns degraded deep-layer features with more generalizable mid-layer representations extrapolated from future observations. SWOL enforces temporally consistent, action-grounded representations without modifying model architecture or inference procedures. Extensive experiments in simulation and real-world settings demonstrate that SWOL alleviates representation degradation, leading to improved policy effectiveness and stronger generalization across modalities of vision, language, and dynamics. | applications to robotics, autonomy, planning | https://openreview.net/pdf?id=qR2TjMZ10B | 2025-09-19T21:18:36 | 4 | [
{
"id": "NRspVBjDft",
"forum": "qR2TjMZ10B",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18448/Reviewer_iHK4",
"reviewer_name": "Reviewer_iHK4",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The paper finds a layer-wise representation degradation phenomenon in fine-tuned Vision-Language-Action (VLA) models, losing task semantics and dynamics information in the deep layers. Then, this paper proposes SWOL (Hidden Space World Modeling), training an alignment between deep-layer features to mid-layer features of the next observation with a simple MLP predictor. SWOL has no additional inference cost, and yields consistent gains on CALVIN and in real-robot manipulation tasks.",
"strengths": "- Generalization of VLA models is an important research problem.\n- The paper empirically observes representation degradation and low-complexity intervention that efficiently solves the problem with improvements in simulated and real world settings.\n- Results span multiple VLA backbones, low-data regimes, long-horizon tasks, and real-robot experiments.\n- The method has no inference overhead, making it attractive and easy for applied use.",
"weaknesses": "- Unclear necessity of correcting representation degradation: Many VLA architectures condition action decoding on all intermediate features of VLM. If the action expert can access earlier semantically rich features, it is not obvious why degradation in some deep layers must be corrected since there could be semantics agnostic behavior in the deep layers such as precise refinement of actions. Performance gain could be purely from integration of dynamics in the world model objective.\n\n- Insufficient comparison to prior world-modeling baselines: From prior works [1, 2], performance gains from predictive auxiliary objectives like implicit or explicit next-state prediction is not that surprising. This work does not convincingly separate SWOL’s benefits from these approaches. Direct empirical comparisons to at least a couple of simple representative baselines are necessary.\n\n[1] Zhao, Qingqing et al., CoT-VLA: Visual Chain-of-Thought Reasoning for Vision-Language-Action Models.\n\n[2] Zheng, Ruijie et al., FLARE: Robot Learning with Implicit World Modeling.",
"questions": "- How well does representation semantics and dynamics experiments perform with pretrained VLM weights? Discrete action tokens tend to less disturb LLM’s representation space but also appear in discrete action models, so representation degradation could be related to biased behavior from the pretrained weight.\n- In ablation, using the first layer as alignment target shows best performance. Using the first layer as target is nearly equivalent to CoT-VLA, except output tokens are from the perception tokens and targets are from layer 1 (which is close to input representation), questioning the need of mid-layer alignment where task semantics and dynamics are theoretically upper bounded by that of input representations.\n- The result of target layer 5 in Table 3 appears missing.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T12:18:42",
"modification_date": "2025-11-12T14:15:44",
"review_url": "https://openreview.net/forum?id=qR2TjMZ10B¬eId=NRspVBjDft",
"license": "CC BY 4.0"
},
{
"id": "e7ckVOXOtm",
"forum": "qR2TjMZ10B",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18448/Reviewer_Dbfu",
"reviewer_name": "Reviewer_Dbfu",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper discovers that semantic and dynamic representations degrade with network depth in VLA models, reducing generalization. To verify this, the authors use task intent classification for semantic generalization and inverse dynamics prediction for dynamic generalization. They propose SWOL, a simple yet effective approach, which makes deep-layer features predict mid-layer representations from future observations, helping to recover lost information and mitigate degradation. The authors conduct extensive experiments on the CALVIN simulation benchmark and ARX5 robotic platform, testing $\\pi_0$, $\\pi_0$-fast, and OpenVLA-OFT and showing SWOL's superiority. Ablation studies analyze key design choices, and further analysis of semantic and dynamic prediction results confirms SWOL's effectiveness. The paper's main contributions are identifying representation degradation as a key issue in VLA models and proposing the plug-and-play SWOL method to address it.",
"strengths": "1. This paper rigorously identifies the representation degradation phenomenon in fine-tuned VLA models, which is an overlooked issue in prior VLA representation research.\n\n2. The authors design two well-targeted evaluation protocols: semantic intent classification and inverse dynamics regression, systematically measure the distribution of semantic and dynamic information across layers, clearly revealing that mid-layers maintain strong representational quality while deeper layers suffer from significant degradation.\n\n3. The proposed SWOL method is innovative in its design. By performing future mid-layer representation prediction in the hidden space, it avoids the computational inefficiency and appearance sensitivity of raw visual-space future prediction, while forcing the model to re-learn degraded representations. It's plug-and-play, enabling seamless integration with various existing VLA models.",
"weaknesses": "1. The paper lacks quantitative comparative experiments with model-based methods discussed in the Related Works section.\n\n2. Typo: The fourth legend in Figure 5 should be \"Dyn. w. SWOL\"",
"questions": "1. The paper posits that direct visual-space modeling is highly susceptible to appearance variations. However, it lacks robust quantitative evidence demonstrating the superiority of SWOL. Are there comparative experiments showcasing SWOL's performance edge over conventional visual future prediction methods? Specifically, tests should involve varying critical factors such as lighting conditions, object textures, and background clutter to comprehensively validate the robustness claims.\n\n2. Given that SWOL adds an auxiliary loss during training, does it introduce any unintended side effects, such as overfitting to mid-layer representations or compromising the original action generation capability of VLA models? If so, how are these trade-offs managed?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T14:34:13",
"modification_date": "2025-11-12T14:15:45",
"review_url": "https://openreview.net/forum?id=qR2TjMZ10B¬eId=e7ckVOXOtm",
"license": "CC BY 4.0"
},
{
"id": "Z1MoKELnIT",
"forum": "qR2TjMZ10B",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18448/Reviewer_5mNv",
"reviewer_name": "Reviewer_5mNv",
"rating": 6,
"confidence": 3,
"soundness": 4,
"contribution": 3,
"presentation": 3,
"summary": "The paper “On the Representation Degradation in Vision-Language-Action Models” presents an in-depth study of how internal representations evolve across layers in Vision-Language-Action (VLA) models. The authors uncover a consistent and concerning trend: representation degradation, where deeper layers, responsible for generating actions, lose generalization capacity to both semantic and dynamic aspects of the environment.\nTo address this, the paper proposes SWOL (Hidden Space World Modeling), a lightweight auxiliary objective that aligns degraded deep-layer features with mid-layer representations from future observations. SWOL effectively introduces a self-supervised “world modeling” signal in hidden space without architectural modifications or inference overhead.\nExtensive experiments on CALVIN (simulation) and real-world robotic tasks (Aloha/ARX5 platform) show that SWOL improves policy generalization, particularly in low-data settings, enhancing both semantic grounding and dynamic awareness of VLAs.",
"strengths": "- The discovery of representation degradation fills an important analytical gap. By decomposing the forward pass and probing layer-wise generalization, the authors provide valuable interpretability into how semantic and dynamic information dissipates through depth.\n\n- SWOL stands out for its conceptual clarity: it uses mid-layer representations from future observations to “rejuvenate” degraded deep-layer embeddings. This plug-and-play auxiliary loss is elegant, general, and requires no changes to model architecture or inference.\n\n- The authors test SWOL across multiple VLA architectures (π₀, π₀-fast, OpenVLA-OFT), different data regimes (1%, 10%, 100%), and both simulated and real-world environments. The consistent improvements across setups strengthen the empirical claim.\n\n- Unlike many representation analysis papers confined to simulation, this work extends experiments to real manipulation tasks like folding a towel, plugging in a cable, and cleaning a table. These practical gains significantly bolster the contribution’s impact.\n\n- The study includes careful ablations on loss weight, target layer, and predictor architecture, along with visualization of improved deep-layer representation quality. This rigor demonstrates strong experimental maturity.",
"weaknesses": "- The connection between SWOL and formal notions of world modeling remains intuitive rather than mathematically grounded. The paper could benefit from a clearer theoretical explanation of why aligning to future mid-layer features enhances generalization beyond empirical observation.\n\n- All experiments focus on imitation learning scenarios. While the method should, in principle, generalize to reinforcement learning or planning-based agents, this is not tested or discussed in detail.\n\n- Although the authors claim “no inference overhead,” SWOL roughly increases training GPU hours by 25–30%. This discrepancy could be better contextualized.\n\n- Similar auxiliary consistency losses (e.g., temporal or cross-view prediction) have been explored in visual representation learning. The paper’s differentiation from these paradigms could be more explicit.\n\n- Aligning deep features to mid-layer targets risks encouraging representational homogeneity. While empirical results show gains, a discussion on possible over-regularization effects is missing.",
"questions": "- How does SWOL compare against standard temporal consistency or contrastive predictive coding baselines in representation learning?\n\n- Does the improvement persist if the target mid-layer is randomly sampled instead of fixed (e.g., layers 5–9)?\n\n- Could SWOL interfere with the diversity of action representations by enforcing excessive alignment across time?\n\n- Could the authors provide qualitative visualization (e.g., t-SNE) of how representation structure changes before vs. after SWOL?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T05:26:58",
"modification_date": "2025-11-12T14:15:46",
"review_url": "https://openreview.net/forum?id=qR2TjMZ10B¬eId=Z1MoKELnIT",
"license": "CC BY 4.0"
},
{
"id": "fWkEFtLKRv",
"forum": "qR2TjMZ10B",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18448/Reviewer_by5U",
"reviewer_name": "Reviewer_by5U",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The work isolates the issue of representation degradation within vision-language action models, where deeper layer fail to carry rich information (semantic and dynamical) that is useful for generalization. These metrics are defined in this context and three VLA models are analyzed to exhibit these shortcomings. The solution proposed to this issue, SWOL, consists in encouraging the deeper perceptual features to match mid-level features from the next observation in time. This is done as to force some sort of world model representation learning at those deeper layers. This is evaluated in simulation as well as in the real world and some performance benefits are shown.",
"strengths": "The paper topic is very relevant to the current approaches to robot learning, and the problem highlighted is clearly identified.\n\nThe metrics to evaluate the degradation phenomenon are clear and the case made for this being an issue is sound and convincing.\n\nThe design with the self-supervised loss aligning representations between timesteps is quite elegant.\n\nThe experiments run are adequate to test the hypothesis made and solution suggested.\n\nThe performance gains, though not miraculous, are sufficiently beneficial for this to be an interesting result.",
"weaknesses": "**Insight and implementation**\n- The authors resort to world modeling as the general idea behind the solution for mitigating representation degradation in deep VLA layers. Just in terms of presentation, the insight should not be a question (lines 231-232) rather an observation.\n- The insight on its own is insufficient to directly lead to the solution proposed: current deep layer features aligned with future mid layer features. This warrants more explanation as to why this specifically is the best way and not just one way to do things.\n- The mid level representations seems to also vary substantially in quality both within the same architecture (e.g. pi_0's 8th layer does very poorly on semantic classification) and among VLAs (Very noisy for openVLA while very clean for pi_0-Fast). Admitting that the observation is general and empirical, this is still not discussed to a sufficient extent, and the solution does not seem to directly account for this.\n- The presentation of the partitioning of the hidden features into perceptual and action parts is not clearly presented. It is unclear to the reader why such a partitioning is taken as a postulate. The authors cite Gandelsman et al., 2024, but the text is in no way self-contained in presenting this decomposition method and relating it to the policy architectures considered.\n- Along these lines, the update rules (lines 112-124) are hard to decipher both because of the above point, but also due to the cumbersome notation. I would suggest rewriting this section as well as creating a more technical figure that exhibits the decomposition and update rules in a clear fashion, even should it be in the appendix.\n\n**Presentation**\n- I did not find figure 1 very useful, especially considering the area/information ratio.\n- In table 1 the best per data fraction and per task score should be in bold as it is fatiguing on the eyes to decipher such a big block of numbers. 
This is done for the average length but I suspect practitioners care more about success rate.\n- The paper fails to report a reproducibility statement as well as a statement on the use of LLMs which I believe are required",
"questions": "- Why does it make sense to do deep-layer to mid-layer of next step matching?\n\n- What is the \"target\" mid-layer selection protocol?\n\n- Why is the average length an interesting/meaningful metric (table 1) ?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-20T23:32:19",
"modification_date": "2025-11-12T14:15:46",
"review_url": "https://openreview.net/forum?id=qR2TjMZ10B¬eId=fWkEFtLKRv",
"license": "CC BY 4.0"
}
] | |
XU2STJa1Fi | https://openreview.net/forum?id=XU2STJa1Fi | Mechanistic Detection and Mitigation of Hallucination in Large Reasoning Models | 4 | 3.5 | [
6,
4,
2,
4
] | [
3,
4,
4,
3
] | 4 | [
"Reasoning",
"Hallucination",
"Mechanistic Interpretability"
] | Large Reasoning Models (LRMs) have shown impressive capabilities in multi-step reasoning tasks. However, alongside these successes, a more deceptive form of model error has emerged—**Reasoning Hallucination**—where logically coherent but factually incorrect reasoning traces lead to persuasive yet faulty conclusions. Unlike traditional hallucinations, these errors are embedded within structured reasoning, making them more difficult to detect and potentially more harmful. In this work, we investigate reasoning hallucinations from a mechanistic perspective. We propose the **Reasoning Score**, which quantifies the depth of reasoning by measuring the divergence between logits obtained from projecting late layers of LRMs to the vocabulary space, effectively distinguishing shallow pattern-matching from genuine deep reasoning. Using this score, we conduct an in-depth analysis on the ReTruthQA dataset and identify two key reasoning hallucination patterns: early-stage fluctuation in reasoning depth and incorrect backtracking to flawed prior steps. These insights motivate our **R**easoning **H**allucination **D**etection (**RHD**) framework, which achieves state-of-the-art performance across multiple domains. To mitigate reasoning hallucinations, we further introduce **GRPO-R**, an enhanced reinforcement learning algorithm that incorporates step-level deep reasoning rewards via potential-based shaping. Our theoretical analysis establishes stronger generalization guarantees, and experiments demonstrate improved reasoning quality and reduced hallucination rates. | We propose a Reasoning Score grounded in mechanistic interpretability to detect and mitigate reasoning hallucinations in LRMs, introducing RHD for detection and GRPO-R for mitigation via step-level rewards. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=XU2STJa1Fi | 2025-09-11T13:08:37 | 4 | [
{
"id": "59VyOqWit5",
"forum": "XU2STJa1Fi",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3914/Reviewer_umeb",
"reviewer_name": "Reviewer_umeb",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper studies reasoning hallucinations in large reasoning models, where models produce coherent but incorrect reasoning. It introduces a Reasoning Score based on internal layer divergences to measure reasoning depth and distinguish shallow pattern matching from real reasoning. Using this metric, the authors identify three hallucination patterns and propose the Reasoning Hallucination Detection framework and GRPO-R reinforcement learning method, which together improve reasoning accuracy and reduce hallucinations across multiple benchmarks.",
"strengths": "1. The work analyzes internal layer dynamics. It also bridges interpretability and reasoning reliability.\n2. The work combines analytical diagnosis (RHD) with actionable intervention (GRPO-R) into a full pipeline.",
"weaknesses": "1. The work lacks a validation/ablation study of the later-layer divergence correlating with reasoning depth.\n2. Layer-wise JSD across all tokens is computation-intensive, which may be limited when scaling to larger models.\n3. The work only focuses on Qwen series models. There is no study on other model series, such as the Llama.",
"questions": "See weaknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T19:55:21",
"modification_date": "2025-11-12T11:11:35",
"review_url": "https://openreview.net/forum?id=XU2STJa1Fi¬eId=59VyOqWit5",
"license": "CC BY 4.0"
},
{
"id": "TqZDWAnvBF",
"forum": "XU2STJa1Fi",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3914/Reviewer_zVVx",
"reviewer_name": "Reviewer_zVVx",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "This paper presents Reasoning Score (RS) to tackle the problem of reasoning hallucinations in Large Reasoning Models (LRMs). RS can quantify reasoning depth by analyzing divergence in late-layer logits, then distinguish shallow pattern-matching from deeper reasoning. By applying RS to the ReTruthQA dataset, two hallucination patterns are identified, including early fluctuations in reasoning depth and incorrect backtracking to flawed prior steps. With these findings, the authors propose Reasoning Hallucination Detection (RHD) framework to achieve state-of-the-art performance, and a GRPO-R approach that can integrate step-level reasoning rewards for better generalization and reduced hallucination rates.",
"strengths": "1. The problem tackled in this paper, i.e., reasoning hallucinations, is crucial in modern language models and corresponding downstream tasks.\n\n2. The patterns identified in this paper are important to address the hallucination issues.\n\n3. The presented results demonstrate the effectiveness of the proposed approach.",
"weaknesses": "1. The experimental results are mainly collected from the DeepSeek series, which cannot well demonstrate the generalizability of the proposed method.\n\n2. Many hallucinations can be made by the lack of factuality of language models, and there has been some previous work investigating this topic. Technically speaking, these approaches also adopt (supervised) fine-tuning plus GRPO-like algorithms to solve the problem. Compared with them, what are the advantages possessed by the proposed method?\n\n3. Some notations, e.g., $R_{final}$, are not explained. I suggest the authors list all parameters and notations in a Table in the appendix.\n\n4. How does the hyperparameter, $\\gamma$, influence the performance of the proposed method?\n\n5. How do models refined by RHD perform in reasoning on OOD domains or datasets?",
"questions": "1. How does RHD perform when applied to reasoning models other than R1?\n\n2. Many hallucinations can be made by the lack of factuality of language models, and there has been some previous work investigating this topic. Technically speaking, these approaches also adopt (supervised) fine-tuning plus GRPO-like algorithms to solve the problem. Compared with them, what are the advantages possessed by the proposed method?\n\n3. How does the hyperparameter, $\\gamma$, influence the performance of the proposed method?\n\n4. How do models refined by RHD perform in reasoning on OOD domains or datasets?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T14:13:06",
"modification_date": "2025-11-12T11:11:35",
"review_url": "https://openreview.net/forum?id=XU2STJa1Fi¬eId=TqZDWAnvBF",
"license": "CC BY 4.0"
},
{
"id": "SoljfPyLCv",
"forum": "XU2STJa1Fi",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3914/Reviewer_Sw2r",
"reviewer_name": "Reviewer_Sw2r",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 1,
"presentation": 2,
"summary": "The authors propose a simple extension of information theorectic metrics to measure hallucination in reasoning outputs. Particularly these use the Jensen Shannon Divergence between the vocab distributions at intermediate and final layer. The authors show some validations on how this score correlates with accuracy, perplexity and shows how this score performs better than baselines based on Entropy/EigenScore or PRMs.",
"strengths": "The motivation and general writing in the paper is clear & it is a relevant problem to solve",
"weaknesses": "- The main hallucination score seems to be overtly simple. I feel this is more like the standard information theoretic measures Like entropy being rehashed for reasoning outputs (which typically have a sequence of steps). And like entropy, perplexity etc I feel these kinds of token level metrics would be noisy and would suffer from capacity bottleneck and robustness issues (I mean in many cases reasoning fallacy or logical inconsistency may not be associated with a high JS Divergence at the token level. This would probably only capture some of the simpler more obvious hallucinations. \n\n- Also, I feel computing such a token level score at each intermediate layer of the model can be even more noisy. The only analysis done on the reliability of these scores in Table 3 and 4 does not seem enough.\n - Validation of GSM-Noop - Introducing Noops are one of the simplest kinds of injected hallucinations, \n - Validation on Stable/Rising sets: Not clear what is the size of the sets. This shows overall there is some correlation between the score, the accuracy and the perplexity but that can be from the fact that it is capturing some of the lower hanging hallucinations. \n\n- The main hallucination scoring is also somewhat heuristical. There are multiple hyperparameters involved (one in Attention Score, 4 in the final hallucination score, \\tau etc). This makes it more tricky to be practically used. The authors need to explicitly show through their experiments whether it is sensitive to these hyperparameters or not.\n\n- How can we definitely say that GRPO-R “encourages deep—but not excessive—reasoning during RL fine-tuning” — the R-score no matter what is a noisy proxy. I would assume this scoring would be way more noisy and also varying (not robust), sensitive to minor changes in token space. The experiment results in table 2 are also not convincing. Esp for Qwen. 
Doing this on the small model 1.5B may not be enough\n\n- Why are GRPO related experiments and R-Score based RHD experiments done on separate benchmarks (ReTruthQA and GPQA). This makes the conclusion a bit disjoint from each other. Why can’t the detection and mitigation be applied on the same benchmarks. \n\n- Overall the scoring seems like a slightly better strategy on avg than the baselines but whether it is really generalizable or robust, it’s not clear from the experimental results. Practically speaking too many hyperparameters would make usage of this too unstable and complex - so it is important for authors to show how sensitive it is to hyperparams.",
"questions": "See the weakness section",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T11:47:05",
"modification_date": "2025-11-12T11:11:36",
"review_url": "https://openreview.net/forum?id=XU2STJa1Fi¬eId=SoljfPyLCv",
"license": "CC BY 4.0"
},
{
"id": "sNE2TewQt9",
"forum": "XU2STJa1Fi",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3914/Reviewer_MJhw",
"reviewer_name": "Reviewer_MJhw",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "This paper tackles the challenge of detecting and preventing reasoning hallucinations, which can be described by factually wrong but logically coherent/convincing chain-of-thought traces typically produced by large reasoning models (LRMs). First, a reasoning score is proposed based on the assumption that hallucinations relate to shallow reasoning, which can be characterized by static activations (i.e., low entropy) in the late Transformer layers. Based on empirical analysis, we see that the reasoning score alone cannot predict hallucinating steps/traces. To this end, different patterns are characterized that aim to describe different hallucination effects, such as shallow pattern matching and consecutive verification (#1), incorrect backtracking to earlier hallucinated steps (#2), and overthinking steps with both high reasoning score and perplexity (#3). The final reasoning hallucination detection (RHD) algorithm combines all derived metrics. Finally, the reasoning score is included in GRPO to mitigate hallucinations (dubbed GRPO-R). Experimental results on a novel benchmark (ReTruthQA) show superior performance in detecting hallucinations using RHD. Moreover, applying RL with GRPO-R improves overall performance on reasoning benchmarks (MATH500, AIME, GPQA).",
"strengths": "Overall, this paper aims to address a crucial and indeed difficult problem in LRMs. I see the main strengths in:\n\n1. The paper covers quite a large scope, starting from introducing a novel benchmark (ReTruthQA), over to an atomic reasoning score, a derived reasoning hallucination detection algorithm (RHD), and finally a method to mitigate reasoning hallucinations (GRPO-R). This is also reflected in the very comprehensive appendix. \n2. The approach seems to be effective in both detecting hallucinations and improving the reasoning performance. \n3. The paper is well written.",
"weaknesses": "My main reservations are concerned with ambiguities in defining and quantifying hallucinations. \n\n1. Potentially ambiguous evaluation. As there are apparently no datasets available on reasoning hallucinations, this paper proposes a self-generated one (ReTruthQA). As described in appendix D, traces are labelled based on final reasoning outcomes, GPT-4o-Mini, and two human validators. No labeling accuracy is provided (e.g., based on a subset that has been even more thoroughly labelled with more models and human validators). This makes both the justification of the reasoning score and the comparison to other methods difficult. For example, GPT-4o as an LLM-as-Critic (LCM) has clearly non-perfect AUC in Table 1. However, it is used as a labeling method in section 3 (Figures 3 and 4). \n2. Heuristic detection mechanism. Clearly, the proposed reasoning score alone is not sufficient to detect hallucinations (is a high or low score desirable?). Hence, derivatives (statistical measures, relationships to attention and perplexity) had to be introduced, requiring many hyperparameters. While the reasoning score seems not to be effective, it is still included in the Reasoning Hallucination Detection (RHD). It is questionable how much the average reasoning score can help. Indeed, it is often not even activated by setting $\\alpha_1$=0. The questionable impact of the average reasoning score is also shown in Table 5 (sometimes better scores without it in R1-14B) and Figure 8 (the highest drop with increasing weight in the reasoning score). One could even question why we don’t use the inverse of the reasoning score as well. \n3. Task-, model-, and metric-specific hyperparameters. As described in Appendix J, the proposed RHD uses specific hyperparameters for every task, model, and even metric (AUC, MC1-3). Did a similar hyperparameter tuning go into the baselines? \n4. Unclear gain of GRPO-R. 
While GRPO-R seems to improve the accuracy on the standard task, an analysis of the actual reasoning hallucination reduction is missing. It is not clear where the gain in accuracy comes from.",
"questions": "I would appreciate if the rebuttal could address the individual weaknesses. Besides, Ref. (Valmeekam et al.) is missing the date.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-24T19:14:42",
"modification_date": "2025-11-12T11:11:36",
"review_url": "https://openreview.net/forum?id=XU2STJa1Fi¬eId=sNE2TewQt9",
"license": "CC BY 4.0"
}
] |
SLhLUdlaqc | https://openreview.net/forum?id=SLhLUdlaqc | Parameter-Efficient Reinforcement Learning using Prefix Optimization | 4.5 | 3.75 | [
4,
4,
8,
2
] | [
4,
4,
3,
4
] | 4 | [
"reinforcement learning with verifiable rewards",
"parameter efficient tuning"
] | Reinforcement Learning with Verifiable Rewards (RLVR) is a leading approach for tuning language models on mathematical reasoning tasks. However, it remains unclear whether RLVR's gains stem from genuine reasoning improvements or simply from steering the model toward answer formats that already appear in the reference distribution. Inspired by recent evidence \citep{zhao2025echo,yue2025does}, we study this question by optimizing only the first $k$ tokens (e.g. $k=32$) of each solution, generating the remainder of the response from the reference model. We study two methods for prefix optimization, using a naive algorithm that clusters prefixes and selects the best prefix (Prefix Clustering), and a method that optimizes the prefix by finetuning a lightweight adapter model with RL (Prefix-RL). We show that tuning only the first $k$ tokens can significantly improve the accuracy on math, suggesting that at least some of the gains from RL are due to upweighting a preferable solution strategy. Our results suggest that simple prefix optimization methods can provide an efficient alternative to RL, delivering substantial improvements across different models and benchmarks for a tiny fraction of the compute required for standard RL. | Optimizing just the first k tokens with a small RL-tuned adapter (“Prefix-RL”) or a Prefix Clustering approach steers a frozen LLM’s solution strategy, recovering much of full RL’s math gains at a tiny compute cost. | reinforcement learning | https://openreview.net/pdf?id=SLhLUdlaqc | 2025-09-20T04:04:25 | 4 | [
{
"id": "krSR5K41rl",
"forum": "SLhLUdlaqc",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20975/Reviewer_hLfX",
"reviewer_name": "Reviewer_hLfX",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces a methodology called _Prefix-RL_ to algorithmically identify ways for an LLM to start its response given a user input so that it is more likely to correctly answer math questions. The work uses this methodology to fine-tune both LLama and Qwen models on the mathematical reasoning benchmarks MATH, AIME, AMC23, and Minerva. The results show similar gains to direct RL finetuning.",
"strengths": "* The paper's primary strength is the ~3000x reduction in FLOPs and the 4x reduction in GPU requirements during training. This approach to RL fine-tuning is much more accessible to research labs than tuning the full model.\n\n* The core idea is based on an interesting insight and is also practical to implement. The insight that prefixes index into parts of the training data that are useful for answering certain math questions is potentially a fruitful idea for inspiring more works.\n\n* This method is demonstrated on FP8-quantized models, which, as noted in Section 3.2, was previously difficult to do. It seems to make progress on the performance gap between quantized Llama-8B and its full-precision counterpart, which is impressive and useful.",
"weaknesses": "*Weaknesses*\n* The work is posed as a form of parameter-efficient RL, but only compares against a standard RL baselines. A more fair comparison would consider other techniques for parameter-efficient RL such as LoRA [Hu et al., 2022] (or QLoRA [Dettmers et al., 2023] for quantized models) or Adapters [Houlsby et al., 2019]. It would also be nice to compare against prefix-tuning, as mentioned in the related work, given that it can be directly trained on the same labels generated for RL as a supervised signal. \n\n* It is unclear why the baseline method of prefix clustering was selected with k=16, whereas the experimental method was tested at k=32 and k=64. I believe this is a bit of an unfair comparison, as it is possible that k=16 is simply not enough tokens to meaningfully index into the parts of the LLMs training data that are “good” for solving math questions, which is the core insight of this work. It would be better to have a comparison that has equal numbers of k values across all approaches.\n\n* Additionally, selecting k doesn’t seem to be very clear-cut. In Table 1, it appears that having k=32 seems to work well for some models/benchmarks (e.g., the Qwen-72B model or the Minerva Benchmark), whereas k=64 works better for others. It seems difficult to know ahead of time what value should be selected for k to achieve good performance. This mitigates some of the benefits of efficiency, because implementing this approach now requires an engineer to search over the k-values that work the best.\n\n* The paper appears to report results from a single training run for each experiment. The training curves (e.g., in Figure 4) seem to be quite noisy, with high variance between steps. Selecting the \"best checkpoint\" from a single, noisy run is not super robust to initial conditions. 
The paper should report the mean and standard deviation over multiple runs (e.g., 3-5) with different random seeds to establish statistical significance, as is the standard in the RL literature.\n\n* This approach seems to rely on having some objective way of calculating “correctness” of an answer. It is not clear how well this does under different kinds of label noise that is common in RLHF. Having a “correct answer” is a large assumption for problems that LLMs are typically used for, such as creative writing, open-ended dialogue, or exploratory information retrieval. \n\n*Minor Edits*\n* The section on Prefix clustering is a bit confusing. Is it one prefix for all evaluation examples, or is it the nearest cluster’s prefix? It seems like the former, but a more reasonable baseline would be the latter.\n* It is not immediately clear why setting g_theta to the same size and architecture of the reference model implies that “improvement from RL upweights existing strategies” as claimed in Sec 2 under the subheading “Prefix-RL”. This could be better elaborated on.\n* In Section 3.1, the authors state the MATH training split has 7,500 examples. This dataset is appears to have 12,500 examples. Your subsequent filtered number of 8,888 examples also suggests the starting number was larger than 7.5k. Please double-check and clarify this dataset statistic.\n\nReferences:\nHu, Edward J., Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. \"Lora: Low-rank adaptation of large language models.\" ICLR 1, no. 2 (2022): 3.\n\nDettmers, Tim, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. \"Qlora: Efficient finetuning of quantized llms.\" Advances in neural information processing systems 36 (2023): 10088-10115.\n\nHoulsby, Neil, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 
\"Parameter-efficient transfer learning for NLP.\" In International conference on machine learning, pp. 2790-2799. PMLR, 2019.",
"questions": "The questions below correspond to the number of bullet points of the weaknesses.\n\n* 1.1: How does this technique compare to other techniques for parameter-efficient RL such as LoRA [Hu et al., 2022] (or QLoRA [Dettmers et al., 2023] for quantized models) or Adapters [Houlsby et al., 2019]?\n* 1.2: What are the benefits of this approach of using RL with automatically calculated labels vs. supervised fine-tuning with automatic labels?\n\n* 2.1: How does prefix clustering at k=32 and k=64 compare to the proposed approach?\n\n* 3.1: how can one determine a k-value for their problem?\n* 3.2: what is the worst-case number of evaluations to make for k?\n\n* 4.1: how consistent are these results across different training runs?\n\n* 5.1: how sensitive is this approach to noise in labels?\n* 5.2: how applicable is this approach to other problems that are common uses of LLMs?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-08T06:49:20",
"modification_date": "2025-11-13T10:13:13",
"review_url": "https://openreview.net/forum?id=SLhLUdlaqc¬eId=krSR5K41rl",
"license": "CC BY 4.0"
},
{
"id": "xCxET35dGk",
"forum": "SLhLUdlaqc",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20975/Reviewer_y1zy",
"reviewer_name": "Reviewer_y1zy",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes a method for performing RL with low computational resources. The approach optimizes a small model to generate the beginning portion of responses, after which a large model completes the remaining decoding. The authors experiment with Llama3.1 and Qwen2.5 series models on several mathematical reasoning tasks. Results show that prefix-RL can achieve most of the performance gains of standard RL using relatively little computational resources.",
"strengths": "1. This paper proposes a method for RL under low computational resources that can achieve most of the performance gains with far less computational cost than conventional RL.\n2. The proposed method does not require full access to the target model; it only needs inference access. Therefore, it is applicable not only to open-source models but also to closed-source models.\n3. The paper designs a Prefix Clustering experiment to verify the importance of the beginning portion of the response for performance gains, and further proposes the main method of this work, Prefix-RL.",
"weaknesses": "1. The method in this paper is limited by the need to use models from the same family, which may cause it to perform poorly or even fail in settings involving closed-source models.\n2. Experiments in this paper were conducted only on mathematical reasoning tasks, so the method's applicability to other RLVR tasks remains unknown.",
"questions": "1. The upper-right subplot of Figure 4 shows an anomalous behavior of Prefix Clustering on the Qwen model; in L375–L376 the paper explains this as \"Qwen’s preferred openings are more input-dependent.\" Could a clearer example be provided to substantiate this point?\n2. If the target model were used directly as the adapter model, what kind of performance could be expected? This approach seems to potentially eliminate the need for an additional model and avoid the restriction that the method requires a smaller model within the same family. Theoretically, such performance should lie between the current method and Full-RL, and it would also allow a clearer comparison of the stylistic and performance differences between the \"prefixes\" obtained by this method and those obtained in the paper.\n3. Has Prefix-RL shown gains on OOD tasks? For example, can we observe that the adapter model produces more guiding responses in tasks other than mathematical reasoning?\n4. In L460–L461 it is mentioned that \"cross-family configurations lead to performance degradation.\" Is there any data that can visually show the extent of these performance drops? Also, note that open-source models versus closed-source models would also count as \"cross-family.\"",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-04T19:48:06",
"modification_date": "2025-11-13T10:13:14",
"review_url": "https://openreview.net/forum?id=SLhLUdlaqc¬eId=xCxET35dGk",
"license": "CC BY 4.0"
},
{
"id": "IOzPAnY6zT",
"forum": "SLhLUdlaqc",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20975/Reviewer_Yk1Z",
"reviewer_name": "Reviewer_Yk1Z",
"rating": 8,
"confidence": 3,
"soundness": 3,
"contribution": 4,
"presentation": 4,
"summary": "This paper investigates whether the performance gains observed in RLVR for mathematical reasoning are due to genuine improvements in reasoning ability, or primarily from shifting the model toward high-accuracy solution strategies already present in the base distribution. To answer this, the authors propose prefix optimization: only the first k tokens of a generated solution are optimized, while the remainder is completed by a frozen reference model.\n\nTwo methods are evaluated:\n1. Prefix Clustering — selects a fixed prefix via k-means clustering of sampled candidate prefixes and uses it for all inputs.\n2. Prefix-RL — trains a small adapter using PPO to generate an input-conditional prefix conditioned on the question.\n\nDespite modifying only a tiny fraction (first 16–64 tokens) of the sequence, both methods yield substantial accuracy improvements on math benchmarks such as MATH-500, AIME, AMC, Minerva, OlympiadBench, often recovering a large share of full RL gains. Prefix-RL is compute-efficient, works with quantized models, avoids catastrophic forgetting, and requires inference-only access to the main model. Improvements are most pronounced when the adapter and target share a model family.\n\nOverall, the work argues that strategy selection and formatting, not deep reasoning skill, may explain a substantial portion of RL gains.",
"strengths": "1. A simple enough method, especially Prefix Clustering, not only brings significant improvements on downstream tasks, but also unveils that the high-quality solution already learned in the pre-training distribution. It offers another profound insight into the origin.\n2. Highly compute-efficient;\n3. Could work with closed-weight models;",
"weaknesses": "1. Lack of direct comparison to full RL at a large scale\n2. Lack of comparison to other parameter-efficient RL methods.\n3. Generalization beyond math remains uncertain;\n4. Prefix clustering harms Qwen but helps Llama, suggesting architectural or data-distribution differences worth deeper investigation. I think more analysis on why Qwen behaves differently needs to be conducted.",
"questions": "Same to Weakness.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T00:36:00",
"modification_date": "2025-11-13T10:13:13",
"review_url": "https://openreview.net/forum?id=SLhLUdlaqc¬eId=IOzPAnY6zT",
"license": "CC BY 4.0"
},
{
"id": "OcbHBbiOIP",
"forum": "SLhLUdlaqc",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20975/Reviewer_UfLZ",
"reviewer_name": "Reviewer_UfLZ",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper investigates parameter-efficient reinforcement learning (RL) for math reasoning by optimizing only the first k generated tokens (the “prefix”) and letting a frozen, larger target model complete the solution. \n\nTwo approaches are explored: (1) Prefix-Clustering, which selects a single fixed prefix by clustering candidate prefixes from the reference model and choosing the best on a training set; and (2) Prefix-RL, which RL-finetunes a small 1B adapter model to generate the first k tokens while the large target model remains frozen. \n\nUsing verifiable rewards (answer correctness), the authors show consistent gains on math benchmarks (e.g., MATH-500, AIME, Minerva) across Qwen and Llama families, including FP8-quantized ones, with significantly lower training compute than full-model RL. The empirical results suggest that a substantial share of RL gains arises from steering toward effective formats/strategies rather than improving token-by-token reasoning across the entire sequence.",
"strengths": "- This paper introduces a compute-lean adapter-based RL setup where only k initial tokens are learned, separating strategy choice from long-horizon generation.\n- The pipeline is well-illustrated (adapter emits prefix; target completes; reward computed on final answer).",
"weaknesses": "1. The paper implicitly assumes early tokens determine the solution strategy that the model will keep following. This may not hold for reflective/iterative solvers (e.g., o3-like, DeepSeek-R1, Qwen-Thinking) that backtrack, revise, or branch mid-solution. The generality of prefix steering under multi-pass reflection remains untested.\n\n2. Prefix-Clustering protocol seems train-set-dependent and of unclear inference value. The method traverses MATH-train to choose a single best fixed prefix, which may not be practically meaningful at inference time (and risks train-set over-selection).\n\n3. To support the efficiency claim, add a direct baseline: “1B Prefix-RL + Large Target” vs. “Full RL on the Large Target” under matched or budget-normalized compute and matched data. Without this, the efficiency–performance trade curve is hard to judge.\n\n4. Some figure narratives (e.g., the 1B self-completion plot analogous to Fig. 3) could better articulate what hypothesis each figure specifically tests (e.g., how much of full-RL gain is recovered by prefix control?)\n\n5. The paper states this is the first demonstration of RL finetuning applied to quantized models yet the method does not RL-update the quantized target’s weights—only the small adapter is updated while the FP8 model is used for inference-only completion. As worded, this can be misleading.",
"questions": "See weakness.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T11:56:09",
"modification_date": "2025-11-13T10:13:13",
"review_url": "https://openreview.net/forum?id=SLhLUdlaqc¬eId=OcbHBbiOIP",
"license": "CC BY 4.0"
}
] |
O02qsgSUtY | https://openreview.net/forum?id=O02qsgSUtY | STEDiff: Revealing the Spatial and Temporal Redundancy of Backdoor Attacks in Text-to-Image Diffusion Models | 5 | 3.5 | [
4,
6,
4,
6
] | [
4,
3,
3,
4
] | 4 | [
"Diffusion Models; Backdoor Attacks; Backdoor Defense; AI Security"
] | Recently, diffusion models have been recognized as state-of-the-art models for image generation due to their ability to produce high-quality images. However, recent studies have shown that diffusion models are susceptible to backdoor attacks, where an attacker can activate hidden biases using a specific trigger pattern, causing the model to generate a predefined target. Fortunately, executing backdoor attacks is still challenging, as they typically require substantial time and memory to perform parameter-based fine-tuning. In this paper, we are the first to reveal the **spatio-temporal redundancy** in backdoor attacks on diffusion models. **Regarding spatial redundancy**, we observed the *enrichment phenomenon*, which reflects the abnormal gradient accumulation induced by backdoor injection. **Regarding temporal redundancy**, we observed a marginal effect associated with specific time steps, indicating that only a limited subset of time steps plays a critical role in backdoor injection. Building on these findings, we present a novel framework, *STEDiff*, comprising two key components: *STEBA* and *STEDF*. *STEBA* is a spatio-temporally efficient accelerated attack strategy that achieves up to **15.07×** speedup in backdoor injection while reducing video memory usage by **82%**. *STEDF* is a detection framework leveraging spatio-temporal features, by modeling the enrichment phenomenon in weights and anisotropy across time steps, which achieves a backdoor detection rate of up to **99.8%**. Our code is available at: [https://anonymous.4open.science/r/STEDiff-9E9F/](https://anonymous.4open.science/r/STEDiff-9E9F/). | In this paper, we are the first to reveal the spatio-temporal redundancy in backdoor attacks on diffusion models. We present a novel framework, STEDiff, including a novel backdoor attack strategy and a reliable backdoor defense framework. 
| alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=O02qsgSUtY | 2025-09-17T15:11:39 | 4 | [
{
"id": "gQtRdjksSv",
"forum": "O02qsgSUtY",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8636/Reviewer_ct6g",
"reviewer_name": "Reviewer_ct6g",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper \"STEDiff\" introduces a unified attack and defense framework that uncovers spatio-temporal redundancies in backdoor attacks on diffusion models . The authors identify two key phenomena: the enrichment effect (spatial redundancy in weight updates) and the marginal effect of timesteps (temporal redundancy in backdoor training). Building on these findings, they propose STEBA, an efficient attack method that reduces GPU memory and training time, and STEDF, a real-time defense mechanism that detects backdoors by monitoring behavior in diffusion dynamics. Their method significantly improves attack efficiency while maintaining high attack success rates. The study demonstrates that both attack and defense can be optimized by focusing on key weights and critical timesteps, reducing overhead while enhancing robustness.",
"strengths": "1. Significant ASR Improvement with Lower Compute Cost: The Paper proposes a computationally-efficient backdoor attack on diffusion models. Demonstrate the backdoor attack on diffusion models can be achieved by controlling a few timesteps.\n2. Novel Insight – Redundancy in Backdoor Training: This work is the first to pinpoint spatial and temporal redundancies in diffusion model backdoor attacks . Identifying that only a small fraction of model parameters and diffusion steps are truly responsible for the backdoor is a fresh and important insight.\n3. Highly Effective Defense: The defense component, STEDF, demonstrates near-state-of-the-art detection performance. It can detect backdoor-compromised models with Backdoor Detection Rates ~98–100% across a wide range of trigger types, while maintaining very low false positive rates (often 0–2%)",
"weaknesses": "1. Heuristic Methodology: The paper doesn't provide theoretical explaination for the method. Also lack of the analysis of various hyperparameters choosing.\n2. Unclear Temporal Selection for STEBA: It's not surprise that diffusion models has reduntant steps because close timesteps have almost identical score or velocity field. The paper should include more detailed temporal selection algorithm and various amount of chosen timestep. It should also demostrate the results for such different settings. A better investigation should further cover the changing temporal dynamic across various amount of chosen timestep.\n3. Unclaer Sampler Choice: It's trivial if only train on the timestep used by the specific sampler and achieve good FID and ASR on the identical sampler. For example, evaluate with 50 steps DDIM and backdooring on the these 50 step used by DDIM. The paper should cover a more comprehensive experiments to demostrate the generalization or failure on various samplers and sampling steps, including DDIM, DPM-Solver, PNDM, and UniPC while backdoored on fewer effective timestep.\n4. Ignore the Usage of LoRA in VillanDiffusion: The paper doens't recognize the usage of LoRA in VillanDiffusion, which might be the root cause of enrichment effect.\n5. Not Clarify the Contribution in Comparing to Previous HIdden-Activation-Based Backdoor Detection: Existing works have identify that activations in the hidden layers can pose strong signal for bnackdoor actication, like [Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering](https://ceur-ws.org/Vol-2301/paper_18.pdf). However, the paper doesn't emphasize the main contribuition and difference between this paper and prior works.",
"questions": "1. How to choose the effective timestep for STEBA? Can you provide pseudo code and details? What if choosing different strategies and timesteps?\n2. For each backdoored diffusion models, can ¥ou demostrate the utility and the ASR on various samplers and sampling stepsd? including DDIM with 100 steps, DPM-Solver with 20 steps, UniPC with 20 steps, and PNDM with 20 steps, which align with VillanDiffusion settings.\n3. It looks like the experiment in section 9.3 and 9.5 don't recognize the usage of LoRA in VillanDiffusion. Can you conduct an experiment to demostrate if enrichment effect exists without LoRA? What's the consequence with and without LoRA?\n4. Please survey the prior works on Backdoor Detection via Network Activation.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-04T12:01:58",
"modification_date": "2025-11-12T12:07:41",
"review_url": "https://openreview.net/forum?id=O02qsgSUtY¬eId=gQtRdjksSv",
"license": "CC BY 4.0"
},
{
"id": "uwqD18B9l1",
"forum": "O02qsgSUtY",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8636/Reviewer_mF24",
"reviewer_name": "Reviewer_mF24",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This work reveals the novel phenomenon of text-to-image backdoor attack methods, denoted as spatial and temporal redundancy. They argue that the existing abnormal gradient accumulation brought by backdoor injection is regarded as spatial redundancy, and the subset of time steps that impact backdoor injection is considered as temporal redundancy. After recognizing two types of redundancy, this research proposes a novel framework for attack and detection in the field. The experimental results present their roadmap for discovering the phenomenon and the following framework, which provides a new perspective for this field.",
"strengths": "1. The observations of the UNet, as well as the Transformer for the enrichment phenomenon (spatial redundancy), are good to demonstrate the existing defect of the VillanDiffusion.\n2. The observations of the time steps correlation with ASR/FID are good to know that VillanDiffusion still has space for improvement.\n3. For the defense, the authors provide more types of triggers to check the generalization (scope) of their proposed detection framework.",
"weaknesses": "1. **Unclear base attack methods for analysis.** Based on Table 1, I conjecture that your analysis from Sec 2 to Sec 4 is based on the VillanDiffusion. Is that right?\n2. **Comparisons with other attack methods.** However, several attack methods have been proposed today, such as BadT2I (as you mentioned in Sec 2.2), EvilEdit (1), and PaaS (2) mentioned in the survey paper (3). In my experience, EvilEdit and PaaS also rely on a few resources for consumption. Could you provide the comparisons with these methods? If not, please give convincing reasons.\n3. Follow 2., as I know the attack behaviors in (1) and (2) are different from VillanDiffusion and RickRolling in their cross-attention maps, which makes me concerns about the generalization of your observation of the enrichment phenomenon. Could you provide more theoretical or empirical explanations about the enrichment phenomenon?\n4. **Unclear about the analysis of the marginal effect in timesteps.**During the earlier timesteps, how do you obtain the images for calculating ASR and FID? ** Do you estimate the final image $x_0$?\n5. **Unclear about the Trigger patterns.** Could you please provide the details of the trigger patterns in Tables 2 and 4? I might miss this part in the main article and in the Appendix.\n6. The authors sometimes refer to $M_{be}$ as the benign model (Line 241) or baseline model (Line 247). **I suggest that the authors make the call consistent or clarify that it has the same meaning. **\n\n- (1) Wang, H., Guo, S., He, J., Chen, K., Zhang, S., Zhang, T., & Xiang, T. (2024, October). Eviledit: Backdooring text-to-image diffusion models in one second. In Proceedings of the 32nd ACM International Conference on Multimedia (pp. 3657-3665).\n- (2) Huang, Y., Juefei-Xu, F., Guo, Q., Zhang, J., Wu, Y., Hu, M., ... & Liu, Y. (2024, March). Personalization as a shortcut for few-shot backdoor attack against text-to-image diffusion models. 
In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, No. 19, pp. 21169-21178).\n- (3) Lin, W., Zhou, N., Wang, Y., Li, J., Xiong, H., & Liu, L. (2025). BackdoorDM: A Comprehensive Benchmark for Backdoor Learning in Diffusion Model. arXiv preprint arXiv:2502.11798.",
"questions": "1. I wonder about the choice of the diffusion model 'Realistic Vision V4.0'. Is there any reason? What is the structure of this UNet or DiT model?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T13:55:19",
"modification_date": "2025-11-12T12:07:42",
"review_url": "https://openreview.net/forum?id=O02qsgSUtY¬eId=uwqD18B9l1",
"license": "CC BY 4.0"
},
{
"id": "wyqzq4eVft",
"forum": "O02qsgSUtY",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8636/Reviewer_YB7w",
"reviewer_name": "Reviewer_YB7w",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "The paper observes redundancies in the backdoor attacks against diffusion models. The authors propose an attack method, STEBA, and a defense method, STEDF, based on the observations.",
"strengths": "1. The observations seem intuitve and supported by experiments.\n2. STEBA achieves high attack sucess rate with reduced computational cost in the paper's setting.",
"weaknesses": "1. There lacks necessary understanding into the STEBA methods. It is unclear to me whether the observations about redundancy are confirmed by the optimizaiton results of STEBA. And whether the distribution of the most important parameters/time-steps follows certain rules.\n\n2. The evaluation of STEDF is flawed. It is not mentioned what attack method was used, and what are the configurations of the attack and the baseline. It feels that the evaluation is weak since the baseline already achieves over 90% accuracy. Besides, it is unclear whether STEDF can transfer between different attack methods/datasets etc.\n\n3. What does 'MSE' mean in Figure 4? Fonts in Figure 5(b) are too small to see.",
"questions": "Please see weekness.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-24T20:30:49",
"modification_date": "2025-11-12T12:07:42",
"review_url": "https://openreview.net/forum?id=O02qsgSUtY¬eId=wyqzq4eVft",
"license": "CC BY 4.0"
},
{
"id": "NWa8nTpxP4",
"forum": "O02qsgSUtY",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8636/Reviewer_XGH7",
"reviewer_name": "Reviewer_XGH7",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper presents two key findings. First, it identifies an enrichment phenomenon where backdoor injection causes abnormal gradient accumulation in a few critical weight parameters. Second, it reveals that only a small subset of timesteps significantly influences backdoor injection.\n\nBuilding on these insights, the authors propose two frameworks:\n1. STEBA – a backdoor attack that targets optimization on key weights and crucial diffusion timesteps. It achieves a 15.07× speedup in injection and reduces *ideo memory usage by 82%.\n2. STEDF – a detection framework that monitors spatio-temporal feature dynamics across timesteps to halt malicious generations mid-process, achieving up to 99.8% detection accuracy.",
"strengths": "The paper is overall clear and well-organized. It provides insights into the mechanisms of backdoor injection in diffusion models, in the enrichment of gradients in key weight parameters and the temporal sparsity of critical timesteps. \n\nTheir proposed methods, STEBA and STEDF, shows strong practical values, improving the efficiency of backdoor insertion and providing an effective mechanism for detection.\n\nThe experimental evaluation spans three widely used diffusion models (Stable Diffusion v1.5, v2.1-base, and Realistic Vision v4.0) and multiple trigger types, which reinforces the robustness and generality of the findings. Overall, the work offers meaningful contributions to understanding and mitigating backdoor vulnerabilities in diffusion models.",
"weaknesses": "The study could be further strengthened by extending experiments to a broader range of diffusion model families to better assess generalizability. Also, it would be useful to evaluate STEDF under adaptive attacker scenarios to understand its resilience against under this adaptive threat model.",
"questions": "For STEBA, \n- How does various parameters such as top-k and thresholds affects the attack effectiveness? \n\nFor STEDF, \n- Which timestep found to be the most effective for detection? \n- What is the average compute savings in diffusion steps?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-22T13:37:42",
"modification_date": "2025-11-12T12:07:42",
"review_url": "https://openreview.net/forum?id=O02qsgSUtY¬eId=NWa8nTpxP4",
"license": "CC BY 4.0"
}
] |
82IUMx3yRJ | https://openreview.net/forum?id=82IUMx3yRJ | Equivariant Flow Matching for Point Cloud Assembly | 5 | 3.5 | [
2,
6,
4,
8
] | [
4,
3,
4,
3
] | 4 | [
"flow matching",
"point cloud assembly",
"equivariant model"
] | The goal of point cloud assembly is to reconstruct a complete 3D shape by aligning multiple point cloud pieces. This work presents a novel equivariant solver for assembly tasks based on flow matching models. We first theoretically show that the key to learning equivariant distributions via flow matching is to learn related vector fields. Based on this result, we propose an assembly model, called equivariant diffusion assembly (Eda), which learns related vector fields conditioned on the input pieces. We further construct an equivariant path for Eda,
which guarantees high data efficiency of the training process. Our numerical results show that Eda is highly competitive on practical datasets, and it can even handle the challenging situation where the input pieces are non-overlapped. | learning on graphs and other geometries & topologies | https://openreview.net/pdf?id=82IUMx3yRJ | 2025-09-19T21:20:28 | 4 | [
{
"id": "HTIQIVkmkk",
"forum": "82IUMx3yRJ",
"review_number": 5,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18461/Reviewer_9ojc",
"reviewer_name": "Reviewer_9ojc",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces an equivariant solver for assembly tasks based on flow matching. Theoretically, the authors demonstrate that learning equivariant distributions via flow matching requires learning corresponding equivariant vector fields. Building upon this result, this paper proposes the Equivariant Diffusion Assembly (EDA) model, which learns these vector fields conditioned on the input pieces. Furthermore, they construct an equivariant sampling path for EDA, a design that ensures high data efficiency during training.",
"strengths": "1.\tThe motivation is clear.\n2.\tMathematical descriptions are sufficient.",
"weaknesses": "1. Inadequate literature review on key related works, especially patch-based registration and non-overlap registration methods.\n2. Unsubstantiated claim of solving the multi-piece problem, as the method and experiments primarily focus on two-piece problem with one experiment for multi-piece problem on BB dataset.\n3. Unclear novelty and contribution, as the method heavily builds upon established components without a clear clarification.\n4. Incomplete experimental validation, due to an insufficient number of compared methods, limited datasets, and a lack of rigorous testing on multi-piece cases.",
"questions": "1.\tThis paper claims to address the multi-piece assembly. There are lots of patch-based point cloud registration methods that have not been carefully discussed: [1] Zhao, T., Tian, T., Zou, X., Yan, L., & Zhong, S. (2025). Robust Point Cloud Registration via Patch Matching. IEEE Transactions on Geoscience and Remote Sensing. [2] Zhao, T., Li, L., Tian, T., Ma, J., & Tian, J. (2023). Patch-guided point matching for point cloud registration with low overlap. Pattern Recognition, 144, 109876. [3] Qin, Z., Yu, H., Wang, C., Guo, Y., Peng, Y., Ilic, S., ... & Xu, K. (2023). Geotransformer: Fast and robust point cloud registration with geometric transformer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(8), 9806-9821. \n2.\tThis paper is also targeted at non-overlap assembly, which is also not a new problem. For example, the following strengths and weaknesses should also be discussed: [1] Xu, J., Dai, H., Hu, X., Fan, S., & Ke, T. (2024). SCREAM: Scene rendering adversarial model for low-and-non overlap point cloud registration. IEEE Transactions on Geoscience and Remote Sensing. [2] Xu, J., Zhang, Y., Zou, Y., & Liu, P. X. (2023). Point cloud registration with zero overlap rate and negative overlap rate. IEEE Robotics and Automation Letters, 8(10), 6643-6650. \n3.\tThis method seems to be constructed on the basis of Ryu et al. (2024). The main difference between this method and Ryu et al. lies in the Brownian diffusion on SO(3), while the method proposed by Ryu et al. solves the more general multi-piece problem.\n4.\tThe related work is not organized very well; readers can not catch the differences between this paper and existing methods. 
It is recommended to separate this section into several related subsections.\n5.\tThe preliminaries section is too long; it is suggested to shorten it and only put the important information into the main text.\n6.\tAlthough this paper claims the use of multiple pieces, the definition and experiments mainly focus on a two-piece problem with one dataset to validate the multiple pieces problem.\n7.\tSO(3)-equivariant networks are also widely used in 3D. The introduction of this network in the main text needs to be shortened.\n8.\tMoreover, the vector field in flow matching is also a well-known definition, whose introduction also needs to be shortened.\n9.\tIf all experiments are conducted on only one multi-piece problem, then it is not very suitable to claim that solving a multiple-piece problem.\n10.\tEquivariant flow is widely used in 2D computer vision and 3D molecular generation. It is hard to find the main contribution when compared with existing equivariant flows. It is recommended to re-emphasize your contribution in Section 4.2.\n11.\tSampling with the RUNGE-KUTTA method is also not a novel technique in flow matching.\n12.\tIn the implementation, the vanilla Transformer is employed. Why not use a point transformer with permutation-equivariant or a transformer with SO(3)equivariant in recent years? They might obtain better performance. Most importantly, there are lots of innovations in point cloud process. Building your methods on existing practices will be better.\n13.\tThere are also existing blocks employed in your architecture. The main contributions can not be clearly understood. Moreover, it is not recommended to rename self-attention and cross-attention with a new name croco block. They can be easier to understand than giving a new name,\n14.\tAs for experiments, there are lots of pages for introducing existing details. 
Therefore, fewer pages are left for experiments, which leads to incomplete experimental validation.\n15.\tIn Figure 3, if the 8-piece assembly process is to be displayed, 8 different colors should also be given. It is highly suggested to validate your method on multiple piece problems.\n16.\tMoreover, the compared methods in point cloud registration are not comprehensive enough. It is a widely studied area, and more up-to-date methods should be compared.\n17.\tImportantly, only two datasets are limited. There are also many datasets related to point cloud registration.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-07T19:35:53",
"modification_date": "2025-11-12T14:15:54",
"review_url": "https://openreview.net/forum?id=82IUMx3yRJ¬eId=HTIQIVkmkk",
"license": "CC BY 4.0"
},
{
"id": "WnegAobEc9",
"forum": "82IUMx3yRJ",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18461/Reviewer_xdq7",
"reviewer_name": "Reviewer_xdq7",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "The paper proposes Eda, an equivariant flow-matching framework for assembling 3D point cloud fragments. It combines E(3)-equivariant layers with a flow-based architecture to enable efficient SE(3)-equivariance learning . Experiments on 3DMatch, 3DLoMatch, and Breaking Bad show over 50% lower rotation error than baselines. While theoretically elegant and empirically strong, the paper lacks ablation on equivariance, efficiency analysis, and tests on noisy real-world point clouds.",
"strengths": "- The paper provides a solid theoretical foundation for framing point cloud assembly as a flow matching problem. \n- On 3DMatch and 3DLoMatch, Eda achieves >50% lower rotation errors than GEO/ROI/AMR baselines. It also handles non-overlapping fragments (3DZeroMatch) where correspondence-based methods fail entirely.\n- The paper provides a good ablation study on varying different settings.",
"weaknesses": "- How does the method work on an untrained category of assembly?\n- How does the method work if equivariance is ablated? The author might want to consider comparing with the same architecture but only lack of equivariance for paper completeness. \n- While the theory side is useful, a better native like intuitive diagram would help aid readability of the paper. \n- While the paper claims that learning related vector fields provides a more efficient alternative to full equivariant flow modeling, the evidence remains largely qualitative. The only quantitative indicator is a reduction in assembly runtime (≈ 19 minutes per object versus ≈ 34 minutes for diffusion-based baselines). A more detailed analysis of the computational efficiency would benefit the paper’s completeness (e.g. training convergence, FLOPs, memory footprint, scalability curve, etc.)",
"questions": "- How does the method generalize to real world noisy point clouds? - As one of the benefits of flow base method is its generalization ability to messy real world applications.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T03:03:56",
"modification_date": "2025-11-12T14:15:55",
"review_url": "https://openreview.net/forum?id=82IUMx3yRJ¬eId=WnegAobEc9",
"license": "CC BY 4.0"
},
{
"id": "LNff90v6oc",
"forum": "82IUMx3yRJ",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18461/Reviewer_QEHc",
"reviewer_name": "Reviewer_QEHc",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper proposes Eda (Equivariant Diffusion Assembly), a novel correspondence-free, multi-piece point cloud assembly model built on equivariant flow matching. The key theoretical contribution is to show that learning equivariant distributions can be reduced to learning related vector fields, provided the initial noise distribution is invariant. Building on this, the authors design an SE(3)^N-equivariant flow-matching framework where the equivariance of the learned distribution is guaranteed by construction. Eda parametrizes vector fields through an equivariant neural network and introduces an equivariant path construction that improves data efficiency during training. Experiments on 3DMatch, BB, and KITTI demonstrate strong quantitative improvements over state-of-the-art baselines, including robust performance even on non-overlapping fragments (3DZeroMatch). The results support both theoretical soundness and empirical effectiveness.",
"strengths": "- The paper provides a solid theoretical framework by reducing equivariant distribution learning to learning related vector fields. The derivations appear rigorous and consistent, although I did not verify every step in detail.\n\n- The use of E3NN-based equivariant attention and Croco blocks makes the approach practical.",
"weaknesses": "- Although the paper provides theoretical guarantees for SO(3)^N-equivariance, the empirical validation is mostly indirect. The ablation studies show performance drops when removing the equivariant backbone or path, suggesting that equivariance helps, but this only demonstrates effectiveness, not faithful equivariant behavior. A more rigorous validation would involve explicitly applying controlled rotations to input fragments (right-multiplication in SO(3)^N), or global rotations (left-multiplication), and verifying whether the predicted poses transform accordingly. Such experiments would directly confirm that the learned flow v_X(g) satisfies the claimed equivariance relation v_{rX}(rg)=r\\,v_X(g).\n\n- The authors claim that their model achieves permutation equivariance; however, no direct experiment is provided to validate this claim. It remains unclear how the predicted poses change when the input order of point clouds is permuted.\n\n- Weak Experimental Validation. The experimental section is relatively weak and limited in scope. The first experiment focuses on pairwise registration, which does not align with the paper’s stated goal of multi-piece point cloud assembly. The comparison set is also narrow and excludes strong, widely recognized metrics commonly adopted in the pairwise registration literature and also missing many prevalent pairwise methods such as FCGF, Predator and BUFFER. The multi-piece assembly evaluation is further constrained, only 2–8 fragments on synthetic datasets, which makes it difficult to assess the method’s scalability or robustness in realistic settings. Fig. 4 further indicates limited generalization capacity, performance degrades notably on unseen fragment lengths, revealing the model’s fragility. 
Including results on more comprehensive datasets such as Fantastic Breaks or FRACTURA would significantly strengthen the empirical claims.\nFinally, the KITTI experiment appears loosely connected to the main task, and its relevance to point cloud assembly is not clearly justified, leaving the overall empirical validation unconvincing.\n\nOverall, the experimental validation is quite weak and does not convincingly demonstrate the real effectiveness of the proposed Equivariant Flow Matching. The experiments are limited in scope, and key claims, such as equivariance and invariance, are only indirectly supported. These issues collectively make me lean toward rejecting this paper at its current stage.",
"questions": "See weaknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T20:00:48",
"modification_date": "2025-11-12T14:15:55",
"review_url": "https://openreview.net/forum?id=82IUMx3yRJ¬eId=LNff90v6oc",
"license": "CC BY 4.0"
},
{
"id": "RgXLRwmXxF",
"forum": "82IUMx3yRJ",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18461/Reviewer_zRLa",
"reviewer_name": "Reviewer_zRLa",
"rating": 8,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This work proposes an equivariant flow matching framework for multi-piece point cloud assembly tasks. The key idea is to employ a vector field parameterized by equivariant networks on an invariant base distribution to ensure the output distribution is equivariant to SO(3) rotations and permutations. Additionally, the training efficiency is enhanced by considering modified samples and random noises with minimum distance across all possible rotations. Overall, the experimental results show that the proposed framework achieves better results than existing baselines, and the ablation studies well validate the effectiveness of the proposed network.",
"strengths": "- This work proposes a clear formulation of the base distribution and vector field network assumptions to ensure the output distribution is equivariant to the group. The formulation of the base distribution ($\\left(U_{\\mathrm{SO}(3)} \\otimes \\mathcal{N}(0, \\omega I)\\right)^N$) and equivariant layers appear to be well-suited for these assumptions.\n- The experimental results demonstrate strong performance of the proposed framework, achieving better results in both pair-wise registration and multi-piece assembly.\n- Additionally, the manuscript validates the proposed components through the ablation study in Table 4, which further justifies the need for rotation correction and equivariant networks.",
"weaknesses": "There is only one minor concern in the current manuscript:\nRegarding the ablation study, it is mentioned that the proposed equivariant network is replaced with a non-equivariant counterpart. It would be beneficial to provide more details on the non-equivariant network and describe the exact changes in the Appendix to ensure the experiment's fairness.",
"questions": "See the weakness section.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T06:59:44",
"modification_date": "2025-11-12T14:15:56",
"review_url": "https://openreview.net/forum?id=82IUMx3yRJ¬eId=RgXLRwmXxF",
"license": "CC BY 4.0"
}
] | |
MYqAKKsjF9 | https://openreview.net/forum?id=MYqAKKsjF9 | LifelongAgentBench: Evaluating LLM Agents as Lifelong Learners | 2 | 3.666667 | [
2,
2,
2
] | [
5,
3,
3
] | 3 | [
"lifelong learning",
"continual learning",
"incremental learning",
"LLM agent"
] | Lifelong learning is essential for intelligent agents operating in dynamic environments. Current large language model (LLM)-based agents, however, remain stateless and unable to accumulate or transfer knowledge over time. Existing benchmarks treat agents as static systems and fail to evaluate lifelong learning capabilities. We present LifelongAgentBench, the first unified benchmark designed to systematically assess the lifelong learning ability of LLM agents. It provides skill-grounded, interdependent tasks across three interactive environments—Database, Operating System, and Knowledge Graph—with automatic label verification, reproducibility, and modular extensibility. Extensive experiments reveal that conventional experience replay has limited effectiveness for LLM agents due to irrelevant information and context length constraints. We further introduce a group self-consistency mechanism that significantly improves lifelong learning performance. We hope LifelongAgentBench will advance the development of adaptive, memory-capable LLM agents. | We propose a unified benchmark to evaluate the lifelong learning ability of LLM-based agents under diverse environments. | datasets and benchmarks | https://openreview.net/pdf?id=MYqAKKsjF9 | 2025-09-20T15:33:34 | 3 | [
{
"id": "49lwiGbZjH",
"forum": "MYqAKKsjF9",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24174/Reviewer_eERx",
"reviewer_name": "Reviewer_eERx",
"rating": 2,
"confidence": 5,
"soundness": 2,
"contribution": 1,
"presentation": 1,
"summary": "This work introduces LifelongAgentBench, the first unified benchmark for evaluating LLM agents’ lifelong learning across databases, operating systems, and knowledge graphs. It features task dependency, verifiable labels, reproducibility, and modularity. Experiments show traditional experience replay is limited, while a grouped self-consistency mechanism boosts performance. Experience quality matters more than quantity, and model architecture and task difficulty strongly affect replay effectiveness.",
"strengths": "1. The benchmark is highly reliable and flexible, easy to use, and readily extensible.\n2. The grouped self-consistency mechanism effectively mitigates memory and inference overhead in large-scale experience replay.",
"weaknesses": "1. The paper shows limited novelty. Among the four claimed innovations, Task Dependency is a common method for constructing tasks and does not clearly differ from prior work. Label Verifiability and Reproducibility are basic requirements for a benchmark, while Modularity relates to usability. Only Task Dependency contains some technical content, and the others cannot be considered true innovations.\n2. The evaluation of lifelong learning is incomplete because it only considers rapid adaptation to new tasks and does not assess whether new experiences cause forgetting on previously learned tasks. LifelongAgentBench does not measure the impact of new experiences on old tasks.\n3. The benchmark includes too few agent environments, which limits the generality of the conclusions.\n4. Many tasks in the database and operating system environments are generated by DeepSeek-R1, making them synthetic and potentially misaligned with real-world human task distributions.\n5. The paper defines lifelong learning narrowly, essentially by adding past experiences to the context, which resembles few-shot learning. Observed results, such as small gains for strong base models or performance improvement with more experiences, are common across tasks and not specific to agent settings.\n6. The writing and focus of the paper are problematic because it emphasizes lifelong learning while devoting most of the content to engineering details rather than conceptual or methodological contributions.",
"questions": "The conclusion mentions that adding experience to a high-performing base model yields little improvement, and can even be detrimental. What, then, are the challenges faced by such strong base models in lifelong learning, and how can they be addressed?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T02:43:07",
"modification_date": "2025-11-12T18:22:53",
"review_url": "https://openreview.net/forum?id=MYqAKKsjF9¬eId=49lwiGbZjH",
"license": "CC BY 4.0"
},
{
"id": "PuTIAeUjm2",
"forum": "MYqAKKsjF9",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24174/Reviewer_PjtQ",
"reviewer_name": "Reviewer_PjtQ",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 1,
"presentation": 1,
"summary": "This paper introduces LifelongAgentBench, a benchmark intended to evaluate the lifelong learning capabilities of LLM agents. The authors posit that existing benchmarks fail to assess knowledge accumulation over time. The benchmark provides a sequence of skill-grounded tasks in three environments (Database, OS, Knowledge Graph). The paper's primary evaluation focuses on in-context experience replay (ER), finding that replaying relevant experiences is superior to replaying recent ones. It also proposes \"group self-consistency\", a voting method, to manage the context-length limitations of this replay strategy.",
"strengths": "Important Problem: The paper's core motivation is strong. Evaluating the ability of agents to learn continuously is a critical, timely, and under-studied problem in the field of LLM-based agents.\n\nBenchmark Artifact: The creation of a dedicated, open-source benchmark with containerized environments and automatic verification is a non-trivial engineering effort. This infrastructure could, in principle, be a useful tool for the community.",
"weaknesses": "Unclear Definition of \"Lifelong Learning\": The paper fails to provide a precise and operational definition of lifelong learning. In Section 3, the problem formulation is presented as a generic sequential POMDP, which does not capture any distinctive characteristics of lifelong tasks. No explicit statement is given to clarify what “lifelong” means in this context, and how it affects the benchmark design.\n\nOverstated Novelty and Weak Analysis: The claimed methodological contribution appears minor and is overstated. The concept of group self-consistency (Section 6.5) seems to be a straightforward rebranding of standard self-consistency methods (e.g., Wang et al., 2023), without a clear theoretical or methodological differentiation. The “systematic analysis” is limited, focusing only on the comparison between experience replay and group self-consistency, which does not provide a comprehensive understanding of the proposed approach in relation to the broader body of methods.\n\nPoor Clarity and Presentation: The paper is difficult to follow due to unclear exposition of core concepts. Terminology such as \"skill concurrency\", \"skill-grounded\", \"label verification\" (is this equivalent to \"label validation\"?), \"parallel execution\", etc., is introduced without proper definitions or contextual examples. Figures and tables suffer from poor readability (extremely small font sizes; Figure 1 is overly cluttered). The paper does not convincingly justify what makes its task dependencies uniquely \"lifelong\", as similar setups could be replicated using existing benchmarks with replay-based agents. Table 1 lists differences, but it is unclear why these differences provide specific advantages under a lifelong learning scenario. Further explanation is necessary. Also, Table 1 is inconsistent with the “four key innovations” described later in the text, very confusing. 
Several citations are incorrect (e.g., VisualWebArena, AgentBench, wrong authors, wrong links), are they AI generated?",
"questions": "I highly doubt that some major parts are written by llms without careful checking. I strongly recommend that the authors carefully review these paragraphs and thoroughly refine the wording to improve clarity and precision. Due to the poor readability of many parts, I may have overlooked some of the paper’s potential contributions. A significant improvement in writing quality would positively influence my evaluation and may result in a higher score.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T16:02:25",
"modification_date": "2025-11-12T18:22:53",
"review_url": "https://openreview.net/forum?id=MYqAKKsjF9¬eId=PuTIAeUjm2",
"license": "CC BY 4.0"
},
{
"id": "Wqod02lcIQ",
"forum": "MYqAKKsjF9",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24174/Reviewer_mS98",
"reviewer_name": "Reviewer_mS98",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper proposes LifelongAgentBench, the first unified benchmark framework specifically designed to evaluate the capabilities of LLM-based agents in lifelong learning scenarios. Unlike previous evaluation methods that treat agents as static systems, this framework emphasizes assessing an agent’s ability to accumulate, retain, and transfer knowledge within continuous, interdependent task sequences.\n\nMain contributions include:\n- Systematic analysis of the effects of experience replay: Identifies limitations of traditional approaches in LLM-based agents, such as interference from irrelevant information and context length constraints.\n- Proposing the group self-consistency mechanism: Improves decision quality by grouping historical experiences and using voting, significantly alleviating memory and reasoning overhead issues.",
"strengths": "- This work is the first to propose a benchmark specifically targeting the lifelong learning capability of LLM-based agents, with a novel problem definition that fills a gap in existing evaluation frameworks.\n- The proposed grouped self-consistency mechanism represents an improvement over traditional experience replay methods, demonstrating methodological innovation.\n- The work offers an extensive suite of well-defined and verifiable agent tasks, enabling performance evaluation and experiments.",
"weaknesses": "There are flaws in the experimental aspect:\n1. On line 054, table 1 only includes a few agent-related benchmarks for comparison. Examples like osworld and browsecomp were not taken into consideration.\n2. On line 328, table 2 intends to express the effectiveness of replay, but it only uses one model.\n3. Line 435, Table 3 only measured DB and KG. Additionally, the number of models used for DB and KG was different. If a model fails in KG, then DB should not be included either, as it has no significance.\n4. On line 270, fig 3 is the only experiment that used a closed-source model. Why wasn't it presented in a table?\n\nOverall, as a benchmark, it fails to provide sufficient evaluation results using both open-source and closed-source models. The types and quantities of models used in each experiment are very arbitrary. There was no appropriate ablation study for the proposed replay and vote methods.",
"questions": "See the \"Weaknesses\" section.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-24T15:18:22",
"modification_date": "2025-11-12T18:22:54",
"review_url": "https://openreview.net/forum?id=MYqAKKsjF9¬eId=Wqod02lcIQ",
"license": "CC BY 4.0"
}
] |
5F2XfLe7An | https://openreview.net/forum?id=5F2XfLe7An | SINQ: Sinkhorn-Normalized Quantization for Calibration-Free Low-Precision LLM Weights | 4 | 4.333333 | [
6,
4,
2
] | [
5,
4,
4
] | 3 | [
"LLM",
"quantization",
"integer",
"INT4",
"inference",
"W4A16",
"uniform quantization",
"calibration-free quantization"
] | Post-training quantization has emerged as the most widely used strategy for deploying large language models at low precision. Still, current methods show perplexity degradation at bit-widths $\leq 4$, partly because representing outliers causes precision issues in parameters that share the same scales as these outliers. This problem is especially pronounced for calibration-free, uniform quantization methods. We introduce SINQ to augment existing post-training quantizers with an additional second-axis scale factor and a fast Sinkhorn–Knopp–style algorithm that finds scales to normalize per-row and per-column variances, thereby minimizing a novel per-matrix proxy target for quantization: the matrix imbalance. Our method has no interactions between layers and can be trivially applied to new architectures to quantize any linear layers. We evaluate our method on the Qwen3 model family and DeepSeek-V2.5. SINQ improves WikiText2 and C4 perplexity significantly against uncalibrated uniform quantization baselines and can be further enhanced by combining it with calibration and non-uniform quantization levels. Code is available in the supplementary. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=5F2XfLe7An | 2025-09-19T19:57:51 | 3 | [
{
"id": "SZ8EsdGIeu",
"forum": "5F2XfLe7An",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18032/Reviewer_uY9X",
"reviewer_name": "Reviewer_uY9X",
"rating": 6,
"confidence": 5,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "This paper presents a post-training weight quantization method on a dual-scale matrix quantization scheme. \nComparison to existing calibration-free PTQ methods including rotation-based methods is conducted empirically.",
"strengths": "+ Clearly presented idea. \n+ Potentially significant practical value.",
"weaknesses": "- Practical overhead of dual-scaling on actual HW is not comprehensively discussed, except for memory efficiency.",
"questions": "* I am not sure the comparison against Hadamard rotation, etc. is also based on dual-scale scheme or not--it should be for a fair comparison. \n* Random rotation is purported to mix channels and thereby eliminate outliers--this seems to be doing similar things as dual-scale. Could you do an ablation study with (1) Hadamard + single-scale, (2) Hadamard + dual-scale, and (3) SINQ?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T11:20:27",
"modification_date": "2025-11-12T14:10:18",
"review_url": "https://openreview.net/forum?id=5F2XfLe7An¬eId=SZ8EsdGIeu",
"license": "CC BY 4.0"
},
{
"id": "GMTwXPJPlv",
"forum": "5F2XfLe7An",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18032/Reviewer_Eu9q",
"reviewer_name": "Reviewer_Eu9q",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The paper proposes SINQ, a weight-only PTQ scheme that applies dual scaling one scale per row and one per column, to each weight tile. The aim is to mitigate outliers along both dimensions and make ≤4-bit uniform quantization easier. The authors also introduce a proxy metric, matrix imbalance (the ratio of the largest to smallest row/column standard deviations), and a Sinkhorn–Knopp–style iteration that alternately normalizes row and column standard deviations to reduce this imbalance prior to quantization.",
"strengths": "- **Simple, calibration-free** recipe that improves perplexity compared to strong uniform PTQ baselines at 3–4 bits across model sizes.\n- Clear ablations comparing **imbalance** vs **kurtosis** as proxies for quantization difficulty.\n- **Competitive results**, outperforming HQQ, GPTQ and AWQ in many settings.",
"weaknesses": "- **Hardware evidence.** There are no end-to-end **inference throughput/latency** results or **kernel-level utilization** measurements; only **quantization-time** is reported. Without runtime data on common backends, deployment value is hard to assess.\n- **Weight-only scope.** Although activation quantization is discussed, the experiments are weight-only. It remains unclear how dual scaling interacts with **W×A** low-precision matmuls and whether common kernel fusions remain intact.\n- **Baselines.** Since the focus is weight quantization, a head-to-head with **codebook/rotation** approaches (e.g., **QuIP#**, **QTIP**) would strengthen the empirical case; these are mentioned in related work but not featured in the main tables.\n- **Further empirical results.** Results would be more convincing with additional families (e.g., **LLaMA**, **Phi**) to demonstrate generality.\n- CrossQuant appears closely related but is missing from the citations; it would help to explain the methodological differences and compare performance.\n\nCrossQuant: A Post-Training Quantization Method with Smaller Quantization Kernel for Precise Large Language Model Compression., Liu, Wenyuan, et al. (2024).",
"questions": "check the weakness, \n\n1- how would dual scaling be implemented in hardware for W×A low-precision matrix multiplications, and what impact would it have inference speed?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T09:07:18",
"modification_date": "2025-11-12T14:10:20",
"review_url": "https://openreview.net/forum?id=5F2XfLe7An¬eId=GMTwXPJPlv",
"license": "CC BY 4.0"
},
{
"id": "51Z72eTcdD",
"forum": "5F2XfLe7An",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18032/Reviewer_jhSj",
"reviewer_name": "Reviewer_jhSj",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper presents SINQ, a post-training quantization method for large language models that uses dual-axis scaling and a modified Sinkhorn–Knopp algorithm to minimize \"matrix imbalance,\" improving perplexity on models like Qwen3 and DeepSeek-V2.5 while being compatible with mainstream paradigms such as NF4 and AWQ. However, it suffers from critical flaws—including unclear experimental details, unproven core metrics, limited innovation, and missing key comparisons—resulting in a \"Reject\" score, though addressing these issues could lead to reconsideration.",
"strengths": "1. The necessity of dual-axis scaling has not been verified: The paper fails to compare the performance of \"row-only scaling\", \"column-only scaling\", and \"dual-axis scaling\". This makes it impossible to prove the advantage of \"dual-axis scaling\" — for instance, if column-only scaling can achieve similar performance, the additional complexity of dual-axis scaling becomes meaningless. \n2. It is a non-isolated solution and can be combined with mainstream quantization paradigms to expand application scenarios (e.g., NF4, AWQ).",
"weaknesses": "1. Lack of experimental details: For the SINQ algorithm, the \"specific selection rule for σmin\" and \"threshold setting for early-stopping\" are not provided. In A-SINQ, the calibration dataset for AWQ (e.g., sample count, source) is not mentioned. While AWQ typically relies on 128–512 samples from the C4 dataset, the paper does not confirm consistency with this practice, nor does it explain whether calibration samples affect the optimization results of SINQ. \n\n2. The rationality of using matrix imbalance as a surrogate metric is unproven: The paper defines matrix imbalance as $I(W)=\\sigma_{min}(W)/\\sigma_{max}(W)$ (the ratio of the minimum to maximum standard deviations of rows and columns) and claims that minimizing $I(W)$ improves quantization accuracy. However, it does not establish a mathematical connection between \"matrix imbalance\" and \"quantization error\" — for example, why a smaller $I(W)$ leads to lower post-quantization MSE or perplexity. The paper only observes through Figure 2 that \"minimizing $I(W)$ reduces kurtosis\", but fails to analyze the relationship between kurtosis and quantization error (e.g., whether reduced kurtosis necessarily decreases distribution overlap under low-bitwidth conditions), leaving the core assumption without theoretical support. \n\n3. The paper modifies the standard Sinkhorn-Knopp algorithm to normalize row and column standard deviations, but does not prove the convergence of the modified algorithm (e.g., whether iterations enter cycles or if a unique fixed point exists). Additionally, it does not explain the basis for selecting the number of iterations $n_{iter}$ (e.g., why a fixed number is chosen instead of dynamic stopping based on the convergence threshold of $I(W)$), casting doubt on the algorithm’s stability. 
While Figures 2(a)(b) show that $I(W)$ stabilizes after 10 iterations, the paper does not explain \"why 10 iterations are optimal\" nor conduct ablation studies on the impact of $n_{iter}$ on performance. \n\n4. The paper adopts the sequence \"SINQ normalization → AWQ scaling → quantization\" (Section 2.2.2). However, AWQ’s core lies in \"activation-aware weight scaling\", which relies on the distribution characteristics of original weights (e.g., correlation between activations and weights). Prior SINQ normalization alters the weight distribution, potentially disrupting the correlation relied on by AWQ. The paper does not compare the performance of alternative sequences such as \"AWQ first, then SINQ\" or \"joint optimization of SINQ and AWQ\", making it impossible to verify the rationality of the current sequence. It also fails to decompose contributions from \"SINQ alone\", \"AWQ alone\", and \"their combination\", leaving uncertainty about whether performance gains stem from dual-axis scaling or AWQ. \n\n5. SINQ’s innovations are more akin to \"combinatorial optimization of existing technologies\" with limited breakthroughs: SINQ merely expands the scaling dimension from \"weight-activation\" or \"single-axis weight\" to \"dual-axis weight\", which is essentially an extension of the scaling target rather than a fundamental innovation. The standard Sinkhorn-Knopp algorithm normalizes row and column sums; SINQ only replaces the target with \"row and column standard deviations\" while retaining the algorithm framework (alternating iterative normalization), making this a routine modification rather than an innovative design. \n\n6. Key comparative methods are missing: Representative methods such as FlatQuant and OSTQuant are not included in the comparisons. \n\n7. The necessity of dual-axis scaling has not been verified: The paper fails to compare the performance of \"row-only scaling\", \"column-only scaling\", and \"dual-axis scaling\". 
This makes it impossible to prove the advantage of \"dual-axis scaling\" — for instance, if column-only scaling can achieve similar performance, the additional complexity of dual-axis scaling becomes meaningless. \n\n8. For common datasets (HellaSwag, PIQA, MMLU), accuracy is the standard evaluation metric, while \"Flip rates\" are uncommon. The paper provides no justification for selecting this metric. \n\nBased on the above weakness, I would assign a Reject score. I look forward to the authors addressing the aforementioned problems in future revisions, and I would be happy to reconsider and raise my score accordingly.",
"questions": "See \"Weaknesses\"",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-16T00:32:29",
"modification_date": "2025-11-12T14:10:21",
"review_url": "https://openreview.net/forum?id=5F2XfLe7An¬eId=51Z72eTcdD",
"license": "CC BY 4.0"
}
] | |
DKOIADzbtM | https://openreview.net/forum?id=DKOIADzbtM | EchoVLM: Measurement-Grounded Multimodal Learning for Echocardiography | 5 | 3.75 | [
6,
6,
4,
4
] | [
4,
5,
3,
3
] | 4 | [
"Echocardiography",
"vision-language model",
"ultrasound"
] | Echocardiography is the most widely used imaging modality in cardiology, yet its interpretation remains labor-intensive and inherently multimodal, which requires view recognition, quantitative measurements, qualitative assessments, and guideline-based reasoning. While recent vision–language models (VLMs) have achieved broad success in natural images and certain medical domains, their potential in echocardiography has been limited by the lack of large-scale, clinically grounded image–text datasets and the absence of measurement-based reasoning central to echo interpretation. We introduce EchoGround-MIMIC, the first measurement-grounded multimodal echocardiography dataset, comprising 19,065 image–text pairs from 1,572 patients with standardized views, structured measurements, measurement-grounded captions, and guideline-derived disease labels. Building on this resource, we propose EchoVLM, a vision–language model that incorporates two novel pretraining objectives: (i) a view-informed contrastive loss that encodes the view-dependent structure of echocardiographic imaging, and (ii) a negation-aware contrastive loss that distinguishes clinically critical negative from positive findings. Across five types of clinical applications with 36 tasks spanning multimodal disease classification, image–text retrieval, view classification, chamber segmentation, and landmark detection, EchoVLM achieves state-of-the-art performance (86.5\% AUC in zero-shot disease classification and 95.1\% accuracy in view classification). We demonstrate that clinically grounded multimodal pretraining yields transferable visual representations and establish EchoVLM as foundation model for end-to-end echocardiography interpretation. We will release EchoGround-MIMIC and data curation code, enabling reproducibility and further research in multimodal echocardiography interpretation. | applications to physical sciences (physics, chemistry, biology, etc.) | https://openreview.net/pdf?id=DKOIADzbtM | 2025-09-04T11:49:15 | 4 | [
{
"id": "uqTActq3Nq",
"forum": "DKOIADzbtM",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission1886/Reviewer_zREj",
"reviewer_name": "Reviewer_zREj",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "The paper presents a vision-language model, CLIP-style, for echocardiography. VLMs for echocardiography suffer from internal challenges in accurate measurement predictions, and sparse and unfocused data of image-text pairs. The proposed model combines image and text encoders trained on the novel EchoGround-MIMIC dataset (19,065 measurement-grounded image-text pairs). The dataset comes from preprocessing and organizing existing public repos (MIMIC-ECHO), and could be valuable for further research—so far there is no similar open-source data, therefore the data is useful. The model uses two specialized contrastive losses: a view-informed contrastive loss (same-view positives, different-view negatives) and a negation-aware contrastive loss for distinguishing negative vs. positive clinical findings in text. However, these losses appear to offer limited technical novelty. For downstream tasks like segmentation and landmark detection, task-specific heads are added to the pre-trained encoder and fine-tuned on benchmark datasets.",
"strengths": "-- Data: EchoGround-MIMIC (~20K measurement-grounded image-text pairs) - first open-source dataset of its kind for echocardiography. Data Processing Innovation -- Successfully integrates MIMIC-IV-ECHO imaging with MIMIC-IV-Note reports. \n\n\n-- Clinical Relevance: Addresses critical gap between free-text narratives and quantitative measurements essential for guideline-based echo diagnosis.\n\n-- Comprehensive Evaluation Framework: 36 tasks across 5 clinical application types (classification, retrieval, segmentation, landmark detection) - would be valuable if released as a benchmark.\n\n-- Community Value: Fills significant resource gap for medical AI research in echocardiography.",
"weaknesses": "-- Limited Technical Novelty: View-informed loss is just constrained negative sampling; negation-aware loss potentially similar to existing work (e.g., MICCAI 2025, \"EchoViewCLIP: Advancing Video Quality Control through High-performance View Recognition of Echocardiography\")\n\n-- Evaluation Methodology Issues: Primary comparison against unreleased EchoApex (weights/data are not released, based on reported results) instead of available EchoPrime (weights are open to download) raises reproducibility concerns, and if all models were trained in the same manner. \n\n-- Technical Details: Frame vs. video level processing unclear; mathematical formulation of negation-aware loss may lack sufficient innovation\n\n-- Algorithmic Contributions Questionable: Technical contributions may not meet novelty bar for top-tier venues - relies heavily on dataset contribution rather than methodological innovation",
"questions": "Given that EchoPrime is publicly available while EchoApex is unreleased, why not use EchoPrime as the primary baseline? This would enable reproducible comparisons and address potential selection bias concerns.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T22:17:47",
"modification_date": "2025-11-12T10:52:10",
"review_url": "https://openreview.net/forum?id=DKOIADzbtM¬eId=uqTActq3Nq",
"license": "CC BY 4.0"
},
{
"id": "bBKkZlmV0W",
"forum": "DKOIADzbtM",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission1886/Reviewer_Frhu",
"reviewer_name": "Reviewer_Frhu",
"rating": 6,
"confidence": 5,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper introduces the EchoGround-MIMIC dataset, a set of image/text paired datasets for echocardiography. Specifically, the authors use the MIMIC-IV-ECHO dataset and extract numerical measurements using OCR-based methods. Second, the authors propose a CLIP-based contrastive learning framework and evaluate EchoVLM on 5 different clinical applications with 36 clinical tasks.",
"strengths": "- The authors propose a needed dataset for echocardiography. Most of the papers working on VLMs for echo are constrained to private datasets, limiting their applicability and contribution.\n\n- The paper comprehensively details different procedures taken to obtain the final dataset from the raw original MIMIC-IV-ECHO.\n\n- The negation-aware contrastive objective for CLIP, along with diverse ablation studies.",
"weaknesses": "- The main weakness of the paper, to me, is its limited architectural novelty. Although introducing the new dataset is needed for the community working on echocardiography, the proposed Echo-VLM is similar to prior works originally CLIP and also its variants Echo-CLIP.\n\n- Measurements are cropped from overlays and transcribed via an LLM, along with the captions and guideline labels. Although this is acknowledged in the paper and despite manual checks, parsing errors may introduce label noise as mentioned. To what extent is this labelling noise mitigating? Were there cardiologists involved in the process?\n\n-",
"questions": "- Can the authors elaborate on their novelty in terms of the architecture design, as opposed to prior works like EchoCLIP?\n\n- A main concern of mine is whether the dataset is really going to be open-sourced. I understand that the authors mention this; however, based on my experience in this field, I have seen many papers in top-tier conferences that mention they will open-source the code/data, but they don't. This is particularly evident in many echo papers. Ideally, the authors could share an anonymised GitHub repo containing the code/data of the paper. \n\n- Can the authors clarify more on the manual checks performed on the outputs of LLMs? How trustful is the outputs of the OCR algorithtm and the LLM-generated captions?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T05:09:08",
"modification_date": "2025-11-12T10:52:10",
"review_url": "https://openreview.net/forum?id=DKOIADzbtM¬eId=bBKkZlmV0W",
"license": "CC BY 4.0"
},
{
"id": "9U10uzyrto",
"forum": "DKOIADzbtM",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission1886/Reviewer_v1CK",
"reviewer_name": "Reviewer_v1CK",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces EchoGround-MIMIC, a measurement-grounded multimodal echocardiography dataset, which includes standardized views, structured measurements, measurement-grounded captions, and guideline-derived disease labels. The authors also propose EchoVLM, a CLIP-style vision-language model, which is pretrained with two novel contrastive loss functions (view-informed and negation-aware).",
"strengths": "1. A significant contribution of this work is the design of a comprehensive data processing pipeline. This pipeline successfully extracts and aligns a complex, multimodal dataset—comprising images, standardized views, quantitative measurements, measurement-related reports, and disease labels—from the MIMIC-IV-ECHO and MIMIC-IV-Note databases.\n\n2. The paper proposes two novel and clinically-motivated pretraining objectives: view-informed contrastive learning and negation-aware contrastive learning. The utility and effectiveness of these objectives are well-supported by the provided ablation studies.\n\n3. The proposed model (EchoVLM) is thoroughly evaluated on a diverse set of five downstream application types (36 tasks in total), demonstrating its generalizability and strong performance across both multimodal and vision-only benchmarks.",
"weaknesses": "1. Disconnect between \"Measurement-Grounded\" Narrative and Methodology: The paper's core theme is \"measurement-grounded multimodal learning.\" However, there appears to be a significant disconnect between this narrative and the technical implementation. The structured measurements (e.g., JSON-formatted values like \"EF: 45%\"), which are a key highlight of the new dataset, are not directly utilized as an input during the model's training phase. The model is only trained on the captions derived from these measurements. Furthermore, the two novel optimization objectives (L_view and L_neg) are independent of the structured measurements. In fact, these objectives appear to be entirely separable from the 'grounded' nature of the data pipeline: L_view is a vision-only objective, while L_neg is a text-only objective that could be applied to any positive/negative caption pair, whether it is measurement-grounded or not.\n\n2. Lack of Ablation on the \"Grounded\" Data Pipeline: While the \"measurement-grounded\" nature of the captions is presented as a core advantage, the paper lacks a direct ablation study to quantify the benefit of this complex and costly data curation process. Specifically, there is no experiment comparing the performance of a model trained on these curated \"measurement-grounded captions\" against a baseline model trained on the original, complete, and noisy \"non-measurement-grounded\" reports (e.g., the full text from MIMIC-IV-Note). This makes it difficult to assess the true value added by the grounding pipeline.",
"questions": "1. Given the paper's core theme of \"measurement-grounded\" learning, could the authors elaborate on the design rationale for not directly utilizing the extracted structured measurements as an input during pretraining (e.g., as explicit tokens or an auxiliary regression loss)? Why was this quantitative data, a key part of the new dataset, only used as an intermediate tool for caption generation?\n\n2. The paper provides a simple example for negation generation (e.g., \"no regurgitation\" from \"mild regurgitation\"). For more complex quantitative statements, such as \"Quantitative biplane left ventricular ejection fraction is 45 %,\" could the authors clarify what form the corresponding \"clinical semantic negation\" takes? \n\n3. Considering that echocardiography is an inherently dynamic (video-based) modality, what are the specific advantages or benefits of the proposed frame-based approach when compared to existing video-based solutions (such as EchoPrime mentioned in the related work)?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T17:16:35",
"modification_date": "2025-11-12T10:52:11",
"review_url": "https://openreview.net/forum?id=DKOIADzbtM¬eId=9U10uzyrto",
"license": "CC BY 4.0"
},
{
"id": "GKQAd1tRQD",
"forum": "DKOIADzbtM",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission1886/Reviewer_qUxR",
"reviewer_name": "Reviewer_qUxR",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "The paper introduces EchoVLM, a measurement-grounded VLM for echocardiography, and EchoGround-MIMIC, a multimodal dataset explicitly linking echocardiographic images with structured measurements, measurement-grounded captions, and guideline-aligned disease labels. EchoVLM extends the CLIP framework with two clinically motivated objectives: a view-informed contrastive loss that models the view-dependent nature of echocardiographic images, and a negation-aware contrastive loss that distinguishes positive and negative clinical statements. Trained on 19,065 image–text pairs, EchoVLM achieves state-of-the-art results across 36 tasks, including zero-shot disease classification (AUC = 86.5%), view classification (95.1% accuracy), and competitive segmentation and landmark detection on public datasets. The results demonstrate strong cross-modal transfer and clinically meaningful visual representations.",
"strengths": "- New measurement-grounded dataset: EchoGround-MIMIC provides the first large-scale, structured dataset pairing echo images with quantitative measurements, standardized views, and guideline-derived labels.\n\n- The proposed view-informed and negation-aware contrastive losses directly encode clinical reasoning patterns, improving both visual coherence and semantic discrimination.\n\n- Extensive validation on 36 tasks across five applications shows consistent superiority over domain and generalist baselines (e.g., +7.2 AUC vs. EchoCLIP, +0.9% precision over EchoApex).",
"weaknesses": "- EchoGround-MIMIC originates from a single institution (MIMIC-IV-ECHO), which constrains demographic, hardware, and acquisition variability, potentially limiting generalization.\n\n- OCR-extracted measurements and LLM-generated captions introduce potential noise; limited manual validation may not fully prevent systematic errors.\n\n- While effective, the new contrastive objectives are empirically motivated with limited theoretical analysis of their convergence or interaction with the CLIP loss.\n\n- EchoVLM operates on single frames rather than sequences, missing dynamic cardiac context critical for echocardiographic interpretation.",
"questions": "- How would EchoVLM perform when trained or tested on multi-institutional data with varying imaging protocols, demographics, and ultrasound vendors?\n\n- Could the framework be extended to incorporate video-level dynamics, given that echocardiography interpretation heavily depends on temporal motion?\n\n- What proportion of the automatically generated measurement-grounded captions were manually verified, and how sensitive are downstream results to errors in this supervision?\n\n- How robust are the view-informed and negation-aware losses to their respective λ parameters when scaling to larger datasets or different imaging domains?\n\n- What are the major failure cases observed in zero-shot classification, e.g., misinterpretations of negations or confusion between anatomically adjacent views?\n\n- What are the computational requirements and latency of EchoVLM inference in a real-time clinical setting, and how might model compression or distillation affect its performance?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-18T02:08:08",
"modification_date": "2025-11-12T10:52:11",
"review_url": "https://openreview.net/forum?id=DKOIADzbtM¬eId=GKQAd1tRQD",
"license": "CC BY 4.0"
}
] | |
J04D9xBUCi | https://openreview.net/forum?id=J04D9xBUCi | Bridging the Preference Gap: Post-Training Input Rewriting with Large Language Models | 3 | 3.75 | [
4,
2,
4,
2
] | [
3,
5,
3,
4
] | 4 | [
"textual entailment",
"natural language inference"
] | Pre-trained language models, such as BERT and RoBERTa, have achieved remarkable performance in semantic classification tasks. Yet, their effectiveness varies with different textual expressions due to inherent preferences developed during training. To address this limitation, we propose a framework that leverages large language models (LLMs) to rewrite input texts in ways that better align with a target classifier's preferences, thereby enhancing its performance. To achieve this, we introduce a training process for the LLM and an automated method for constructing training data that encapsulates the classifier-specific preferences. Furthermore, we present a multi-sampling and filtering strategy to address instability in LLM outputs. Empirical evaluations on semantic classification datasets demonstrate that our framework significantly improves classifier’s performances. | other topics in machine learning (i.e., none of the above) | https://openreview.net/pdf?id=J04D9xBUCi | 2025-09-20T18:52:14 | 4 | [
{
"id": "USRJ9ysNgw",
"forum": "J04D9xBUCi",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission25233/Reviewer_oBL7",
"reviewer_name": "Reviewer_oBL7",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper proposes a post-training framework that uses LLMs to rewrite classification inputs according to a classifier’s inherent preferences. It includes SFT, DPO, and a filtering stage based on classifier embeddings. Experiments on GLUE show small but consistent gains.",
"strengths": "1. The shift from \"eliminating preferences\" to \"adapting to preferences\" is interesting.\n2. The three-stage training from SFT, DPO to Filter, is technically sound in isolation.\n3. In the current experiments, the effectiveness on the selected baselines and benchmarks is demonstrated, and through ablations, the authors effectively show the necessity of each component.",
"weaknesses": "1. The approach is conceptually interesting but not a strict or theoretically grounded post-training method. Training stability and convergence are not statistically validated.\n\n2. The selection of datasets and baselines is limited, lacking generalization analysis.\n\n3. Empirical improvements are minor; significance not verified.\n\n4. The observed performance improvements might stem from the LLM memorizing task-specific linguistic patterns rather than capturing genuine preference alignment. I suggest adding evaluations on out-of-domain datasets to verify the robustness of the proposed method.\n\n4. Minor presentation errors – e.g., (a.1) and (a.4) are mentioned but missing in Figure 2.",
"questions": "1. How stable is the DPO-based training process? Have you observed consistent convergence across multiple random seeds?\n\n2. Could the filter generalize across classifiers, or must it be retrained for each new target model?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T14:33:54",
"modification_date": "2025-11-12T18:29:34",
"review_url": "https://openreview.net/forum?id=J04D9xBUCi¬eId=USRJ9ysNgw",
"license": "CC BY 4.0"
},
{
"id": "0w7ZWcnELN",
"forum": "J04D9xBUCi",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission25233/Reviewer_ShwC",
"reviewer_name": "Reviewer_ShwC",
"rating": 2,
"confidence": 5,
"soundness": 2,
"contribution": 1,
"presentation": 2,
"summary": "This paper presents a framework that employs large language models (LLMs) to rewrite input texts to align with the preferences of a target classifier. The authors propose a training paradigm for the LLM, accompanied by an automated data construction pipeline that encapsulates classifier-specific characteristics. A multi-sampling and filtering strategy is further introduced to mitigate the inherent instability of LLM-generated outputs. Empirical evaluations on semantic classification datasets demonstrate that the proposed framework yields improvements in classifier performance.",
"strengths": "1. This paper empirically validates the distinction of model preferences from human linguistic cognition, demonstrating that traditional text complexity metrics, such as sentence length and lexical rarity, cannot reliably predict model behavior.\n2. This paper proposes a method to train LLMs to capture the preferences of the Classifier, along with a technique for the automated construction of training data.\n3. This paper presents a method that combines multiple sampling with a filtering strategy to address the issue of instability in the outputs generated by the LLM.",
"weaknesses": "1. The practicality of the approach appears limited. In the title of the paper, it seems that the authors propose an empirical paradigm; however, the scope of the study is restricted to text classification tasks, specifically the MRPC, MNLI, and SST-2 datasets. These benchmarks have already achieved near-saturated performance (e.g., 96.20% accuracy with RoBERTa-Large on SST-2). Hence, doubts remain about the meaningfulness of the work, and it is unclear whether the proposed framework can generalize to more diverse and complex scenarios, such as text generation or reasoning tasks.\n2. Although the authors claim that \"*Empirical evaluations on semantic classification datasets demonstrate that our framework **significantly** improves classifiers' performances,*\" the reported performance gains are relatively small and may fall within the range of random variation (e.g., +0.45% on QNLI with BART-Base). To substantiate the claim of significant improvement, multiple experimental runs and statistical significance tests should be conducted. Moreover, the efficiency of the proposed approach is questionable, as the framework requires multiple sampling runs to obtain the final results, which could substantially increase computational overhead.\n3. The experimental evaluation lacks an ablation study or a detailed analysis of the proposed filtering module. It remains unclear how much the filtering component performs and how it compares to the conventional RoBERTa classification. A more comprehensive examination of this module would strengthen the empirical validity of the work.",
"questions": "See \"Weaknesses.\"",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T12:24:49",
"modification_date": "2025-11-12T18:29:34",
"review_url": "https://openreview.net/forum?id=J04D9xBUCi¬eId=0w7ZWcnELN",
"license": "CC BY 4.0"
},
{
"id": "B60OVHR2lj",
"forum": "J04D9xBUCi",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission25233/Reviewer_7n88",
"reviewer_name": "Reviewer_7n88",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This work propose a framework that trains an LLM to rewrite input texts to assist a classifier to make better classification in semantic classification tasks according to its preference. The LLM rewriter is trained via SFT and DPO with data labeled by the classifier to align with the classifier's preference. A filter module is also trained using the classifer's embeds to filter inferior rewrites from the LLM rewriter's multiple samplings. Experiments show that the framework helps improve the performance of bert-based classifiers and LLMs on GLUE benchmark.",
"strengths": "1. This paper propose a framework that leverage classifier's inherent preference to improve its accuracy rather than forcefully correct the preference, providing a new prospective for enhancing the performance of classifiers.\n2. The training process is concise and easy to conduct, which enhances the applicability of the framework.",
"weaknesses": "1. The experiment lacks a comparison with the effect of directly fine-tuning the classifier with diverse rewriting formats, which is necessary to prove that leveraging the preference of classifier is better than directly correcting it.\n2. The research field of this work is limited. This work focuses only on the semantic classification tasks, and GLUE is a relatively simple benchmark for current models. Since the authors said their work \"focuses on unlocking the upper capability boundaries of task models through preference-guided input rewriting\" in the conclusion, it is necessary to validate this method on more tasks, such as instruction following.\n3. The base models selected in the experiments are out-dated. Stronger baselines should be taken into account, such as reasoning models like Qwen3-8B and other LLM-based classifiers.",
"questions": "See weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T13:29:24",
"modification_date": "2025-11-12T18:29:35",
"review_url": "https://openreview.net/forum?id=J04D9xBUCi¬eId=B60OVHR2lj",
"license": "CC BY 4.0"
},
{
"id": "nXqRHGZknh",
"forum": "J04D9xBUCi",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission25233/Reviewer_53au",
"reviewer_name": "Reviewer_53au",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 1,
"presentation": 3,
"summary": "This paper proposes a post-training input rewriting framework that leverages large language models (LLMs) to rewrite input texts at inference time, aligning them with the “preferences” of a fixed downstream classifier (e.g., RoBERTa, BART) to boost its performance on GLUE. The method involves: (1) automatically constructing preference-aligned training data, (2) fine-tuning the LLM via SFT and DPO, and (3) applying a multi-sampling + classifier-embedding-based filtering strategy during inference.",
"strengths": "- The experimental pipeline is relatively complete, including ablation studies, generalization analysis, and comparisons with hand-crafted prompts. \n- It presents a post-hoc, inference-time approach that avoids retraining the classifier. \n- The paper empirically challenges the common assumption that reducing textual complexity improves model performance.",
"weaknesses": "- **Motivation is weak and conceptually muddled**: The term “preference” is used loosely without clear definition or causal analysis. The paper conflates model limitations (e.g., poor generalization to paraphrases) with inherent “preferences,” then proposes a complex workaround instead of addressing root causes. \n- **Performance gains are marginal**: The method improves RoBERTa-Large by only **+1.08** GLUE points and BART-base by **+0.72**. The large gain on MRPC (+3.4) is isolated and likely stems from dataset noise rather than a robust mechanism, as noted by the authors themselves. \n- **Excessive engineering complexity**: The pipeline requires SFT + DPO training of an LLM, construction of a separate classifier-embedding-based filter, and multi-sample inference—yet yields sub-1% gains. This makes the approach impractical for real-world deployment. \n- **Missing key baselines**: No comparison with standard test-time augmentation (TTA), self-consistency decoding, or even simple prompt ensembling—methods that are far cheaper and often equally effective. \n- **Limited scientific insight**: The work offers no analysis of what linguistic features the classifier actually “prefers” or how the LLM learns them. It remains a black-box performance patch with little theoretical or practical generalizability.",
"questions": "1. How do you formally define “model preference”? Can you provide evidence that it is a stable, intrinsic property of the classifier—not an artifact of training data bias or insufficient robustness? \n2. Why not compare against standard test-time augmentation or ensemble-based inference, which are simpler and more widely adopted? \n3. Are the reported gains (e.g., +1.08 on GLUE for RoBERTa) statistically significant? Were multiple random seeds or runs used to rule out variance? \n4. Given the high computational and latency overhead of your pipeline (LLM rewriting + filtering), how do you justify its practical utility over simply fine-tuning the classifier further or using a stronger base model?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-15T16:29:43",
"modification_date": "2025-11-12T18:29:35",
"review_url": "https://openreview.net/forum?id=J04D9xBUCi¬eId=nXqRHGZknh",
"license": "CC BY 4.0"
}
] | |
RAs8XzpNzQ | https://openreview.net/forum?id=RAs8XzpNzQ | A solvable model of inference-time scaling | 3 | 3 | [
2,
4,
4,
2
] | [
3,
2,
3,
4
] | 4 | [
"test time compute",
"inference time compute",
"scaling law",
"higher dimensional statistics"
] | Recent developments in large language models have shown advantages in reallocating a notable share of computational resource from training time to inference time. However, the principles behind inference time scaling are not well understood. In this paper, we introduce an analytically tractable model of inference-time scaling: Bayesian linear regression with a reward-weighted sampler. We study this problem in the high-dimensional regime, where the deterministic equivalents dictate a closed-form expression for the posterior predictive mean and variance. We analyze the generalization error when training data are sampled from a teacher model. We draw $k$ inference-time samples and select via softmax at a temperature applied to a quadratic reward.
When the reward is not too different from the teacher, the generalization error decreases monotonically with increasing inference time samples $k$. However, the specific reward that optimizes inference-time selection generally differs from the teacher. In contrast, substantial reward misspecification induces a finite optimal $k$ beyond which more sampling can increase the generalization error, consistent with recent empirical observations. Furthermore, for fixed $k$ there exists an optimal sampling temperature. In the “best-of-$k$” limit with the teacher as reward, we prove that the generalization error decays as $\Theta(1/k^2)$ and determine the leading coefficient via extreme value theory. These formulas delineate domains where scaling inference-time computation is provably preferable to collecting more data. Finally, we demonstrate that when task difficulty increases, the previously mentioned advantage of inference-time compute degrades. | We propose a solvable model of inference-time scaling. | learning theory | https://openreview.net/pdf?id=RAs8XzpNzQ | 2025-09-19T19:30:47 | 4 | [
{
"id": "oTyMnGi4GT",
"forum": "RAs8XzpNzQ",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17868/Reviewer_k4BJ",
"reviewer_name": "Reviewer_k4BJ",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper proposes a solvable model of inference-time scaling based on Bayesian linear regression with reward-weighted sampling, deriving analytic expressions for generalization error under different temperatures, reward alignments, and sample counts. The analysis connects these behaviors to patterns reported in recent LLM work.",
"strengths": "- Clean and rigorous theoretical development.\n- The model is mathematically elegant and yields interpretable predictions (e.g., optimal k and temperatures).\n- The paper provides useful intuition about how reward quality influences inference-time compute.",
"weaknesses": "- **No experiments on any real model.** All empirical results come from the same synthetic linear teacher–student setup used in the derivations. There is no validation on actual neural networks or LLM inference-time sampling. As a result, none of the claims about LLM behaviors are verifiable, and the practical relevance of the theory remains untested. This is the major flaw of this paper.\n\n- The experimental section is minimal and does not explore settings beyond the analytic assumptions. The paper seems overstate the connection between this toy model and real LLM inference dynamics.",
"questions": "NA",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-09T17:33:35",
"modification_date": "2025-11-12T14:08:13",
"review_url": "https://openreview.net/forum?id=RAs8XzpNzQ¬eId=oTyMnGi4GT",
"license": "CC BY 4.0"
},
{
"id": "StzQ51VYq5",
"forum": "RAs8XzpNzQ",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17868/Reviewer_hztQ",
"reviewer_name": "Reviewer_hztQ",
"rating": 4,
"confidence": 2,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces an analytically tractable model of inference-time scaling using Bayesian linear regression with reward-weighted sampling, deriving closed-form expressions for generalization error in the high-dimensional limit. The authors prove that when the reward model is well-aligned with the teacher, error decreases monotonically with inference samples $k$ (scaling as $\\Theta(1/k^2)$ in the best-of-k limit), but substantial reward misspecification induces a finite optimal $k$ and optimal temperature. The theory delineates parameter regimes where scaling inference-time compute is provably more effective than collecting additional training data, though this advantage degrades as task difficulty increases.",
"strengths": "1. The model provides closed-form solutions for generalization error that can be directly computed and verified, unlike most existing work that relies purely on empirical observations.\n\n2. The paper identifies concrete conditions (optimal temperature, optimal k, reward alignment thresholds) that practitioners can actually use when designing inference-time systems.\n\n3. The theoretical framework quantifies when to invest compute in inference versus training, addressing a key resource allocation question that lacks prior rigorous analysis.",
"weaknesses": "1. Oversimplified model: The paper only studies linear regression with quadratic rewards and Gaussian assumptions, while real LLMs involve highly nonlinear neural networks, complex reward models, and non-Gaussian data distributions.\n\n2. No validation on real LLMs: All experiments use synthetic linear regression data, and the theoretical insights (optimal temperature, optimal k) are not verified on actual language models, with connections to LLM phenomena relying mainly on citations rather than direct evidence.\n\n3. The paper focuses on best-of-k and reward-weighted sampling but does not provide theoretical analysis for majority voting or meta-voter aggregation schemes, which are commonly used in practice.\n\nThe paper's core contribution is providing an analytically tractable toy model, but it remains far from explaining inference-time compute behavior in real LLMs. It serves more as a proof of concept, demonstrating that certain phenomena (e.g., non-monotonic k, optimal temperature) can be theoretically understood in simplified settings, but significant follow-up work is needed to bridge the gap between the theoretical model and practical systems before it can truly guide real-world applications.",
"questions": "Please see above.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T19:47:05",
"modification_date": "2025-11-12T14:08:13",
"review_url": "https://openreview.net/forum?id=RAs8XzpNzQ¬eId=StzQ51VYq5",
"license": "CC BY 4.0"
},
{
"id": "PqQd0zitHb",
"forum": "RAs8XzpNzQ",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17868/Reviewer_TCYV",
"reviewer_name": "Reviewer_TCYV",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper introduces an analytically solvable model to theoretically investigate the principles of inference-time scaling. The authors model this problem using Bayesian linear regression in a high-dimensional, teacher-student framework. At inference time, k candidate predictions are sampled and then selected using a softmax function over a quadratic reward, controlled by a temperature parameter T. The paper derives closed-form expressions for the generalization error and analyzes its dependence on the number of samples (k), temperature (T), and the quality of the reward model (i.e., its alignment with the true data-generating \"teacher\" model).",
"strengths": "1. The community has observed many empirical phenomena about inference-time compute (e.g., \"best-of-k\", \"self-consistency\"), but a clear theoretical understanding is lacking. This paper fills that gap\n\n2. Despite its simplicity, the model successfully reproduces several non-trivial behaviors seen in massive, complex models like LLMs. This paper provides a simple, intuitive reason: an imperfect reward model will eventually favor samples that are \"good\" according to its flawed criteria but bad according to the true objective, and more samples increase the chance of finding such a \"trap\" sample.\n\n3. The theoretical derivations (based on high-dimensional statistics and deterministic equivalents) are thoroughly validated against numerical simulations of the model itself (e.g., Figures 2 and 4). The extremely close match between the \"D.E.\" (theory) and \"Expt.\" (simulation) lines gives high confidence that the mathematical analysis is correct.",
"weaknesses": "1. The model is based on linear regression, whereas modern applications use highly non-linear Transformer architectures.\n\n2. The reward is a simple quadratic function. Real-world reward models (often used for RLHF) are complex neural networks trained to predict human preferences. The data is assumed to be Gaussian, which is very different from the structured, discrete nature of language.\nThis gap means the specific quantitative results (e.g., the exact formula for the 1/k² decay) may not transfer directly to LLMs. However, the qualitative insights and intuitions derived are still extremely valuable.\n\n3. This paper does not contain experiments on real-world datasets or with actual LLMs. The validation is purely \"internal\" (checking the theory against simulations of the same theoretical model). While this is standard for a purely theoretical paper, it leaves the question of how well these insights generalize to practice open. An ideal follow-up work would be to test if the principles derived here (e.g., the relationship between reward model quality and optimal k) hold up in experiments with a real LLM.",
"questions": "n/a",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T10:00:39",
"modification_date": "2025-11-12T14:08:14",
"review_url": "https://openreview.net/forum?id=RAs8XzpNzQ¬eId=PqQd0zitHb",
"license": "CC BY 4.0"
},
{
"id": "hSPwsm5yE4",
"forum": "RAs8XzpNzQ",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission17868/Reviewer_QdH3",
"reviewer_name": "Reviewer_QdH3",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper proposes a Bayesian linear regression model to study the performance of best-of-k sampling in inference-time scaling. The model involves elements such as teaching model, reward modeling, and sampling temperature. The theoretical results provide insights into the relationship between the model performance with (i) the temperature, (ii) the number of samples k, and (iii) the goodness of the reward model.",
"strengths": "The paper is well-written, and the theoretical analysis is thorough and accompanied with discussions and intuitions. \n\nIn particular, \n- The analysis captures the key factors in inference scaling: temperature, k, and reward model.\n- The resulting curves (monotonic and non-monotonic) are both observed in practice. So, to some extent, the implications of the theoretical frameworks match the empirical observations.",
"weaknesses": "My main concern is whether the model provides insights into the real practical usage of the inference-time scaling.\n- As I mentioned above, the resulting curves derived from the theoretical framework match empirical observations in practice. However, the insights from the paper on the choices of the parameters, such as k and temperature, provides no guidance on their practical choice. The actual behavior of the inference-time scaling might not be able to be captured by a linear regression setup. \n- There is no numerical experiment on running inference-time scaling for real LLM models in this paper, which makes the results less convincing. In this light, the paper is more like a thorough analysis of a (Bayesian) linear regression model but its technical contribution doesn't go beyond that. \n\nFrom the modeling viewpoint:\n- Can the current framework capture techniques such as beam-search-based generation and the self-consistency approach in inference-time scaling?",
"questions": "See above weaknesses. Also, I think the authors should think more about how to make the framework more realistic to capture the real usage of inference-time scaling, or, how to convince the practioners using inference-time scaling that the framework can guide their daily practice.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T06:25:49",
"modification_date": "2025-11-12T14:08:15",
"review_url": "https://openreview.net/forum?id=RAs8XzpNzQ¬eId=hSPwsm5yE4",
"license": "CC BY 4.0"
}
] |
nU4Fv2yXN1 | https://openreview.net/forum?id=nU4Fv2yXN1 | Understanding Subpopulation Shifts through a Unified Lens of Separability | 5 | 3.5 | [
4,
4,
6,
6
] | [
4,
4,
3,
3
] | 4 | [
"Subpopulation shift",
"distribution shift",
"spurious correlation"
] | Subpopulation shifts have been a major challenge for deploying machine learning algorithms. The shift in subgroup proportions between training and test data always leads to a significant performance drop or suboptimal performance in certain groups, therefore limiting the broader or more reliable usage of machine learning methods. We present a unified theoretical framework to characterize a broad range of subpopulation shifts, including but not limited to well-studied shifts such as spurious correlation, under-representation, and class imbalance. Within this framework, we derive the performance of the Bayesian optimal classifier fitted on skewed data. The evaluation of thorough subpopulation shifts provides a quantitative tool to guide dataset collection. Our analysis further highlights the critical role of the feature separability assumption in our modeling, which explains the effectiveness of recent shift-mitigation methods and enabled principled comparison of encoders. Overall, this framework offers a unified perspective on evaluating subpopulation shifts and provides practical guidance on future work in both data collection and training strategies. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=nU4Fv2yXN1 | 2025-09-18T22:31:48 | 4 | [
{
"id": "Bk91danVFQ",
"forum": "nU4Fv2yXN1",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12139/Reviewer_tAiR",
"reviewer_name": "Reviewer_tAiR",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The authors theoretically study the problem of learning under subpopulation shift (spurious correlation, under-representation, and class imbalance). They assume a Gaussian features model, and then derive a closed form solution for the overall and per-group accuracies. Of crucial importance is the feature separability of the invariant and spurious features. The authors show that their theory can be used to estimate performance on real-world data by inferring the parameters of their model.",
"strengths": "1. The paper presents a solid connection between theory and previously observed empirical phenomenon.\n2. The authors are able to derived closed-form solutions, and the insights from the paper are compelling.",
"weaknesses": "1. The novelty of this work over Wang and Wang (2024) is rather limited. In particular, the modelling assumptions and notation are almost identical, with I believe the test-set accuracy from the prior work being the same as the adjusted accuracy from this work. The prior work also notes the importance of feature separation (denoted there as $m_{inv}$ and $m_{spur}$). Though I understand that this work explores the subpopulation shift setting more generally, I am not convinced that the contribution over prior work is significant.\n\n2. The assumptions made in the paper are rather strong. In addition to the mixture of Gaussians assumption, it is further assumed that the covariance has a block-diagonal structure, and that the single invariant attribute is perfectly informative (equals the feature) which rules out any label noise. Further, the most salient analyses are presented for the case of only two features. All of these assumptions reduce the practical applicability of the theory.\n\n3. The connection between adjusted accuracy and WGA is interesting. Can the authors derive a theory showing the relation between these two, e.g. a closed form expression relating the two metrics?\n\n4. In all of the empirical results, under-representation does not seem to affect the adjusted accuracy. \n\n5. The authors should provide more detail in the main paper on how the estimated performance in Figure 4 is calculated. In particular, assuming $\\mu$ and $\\Sigma$ are estimated from data, are the embeddings actually mixtures of Gaussians? Further, the authors state that they solve Theorem 1 by nonlinear optimization; can the authors comment on the existence and uniqueness of the solution?",
"questions": "Please address the weaknesses above.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T16:57:59",
"modification_date": "2025-11-12T12:51:39",
"review_url": "https://openreview.net/forum?id=nU4Fv2yXN1¬eId=Bk91danVFQ",
"license": "CC BY 4.0"
},
{
"id": "7R9Rs8e9RM",
"forum": "nU4Fv2yXN1",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12139/Reviewer_uHDz",
"reviewer_name": "Reviewer_uHDz",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper studies the problem of subpopulation shifts in classification problems, encompassing well-studied areas like spurious correlations, class imbalance, and under-representation. The main framework for the theoretical results is a binary classification setting where training data points are Gaussian parameterized by attributes. Under this model, the authors first provide a characterization of the group accuracy for the Bayes-optimal linear classifier than maximizes the overall test accuracy. This is then used as motivation for two empirical studies. First, on synthetic datasets where the subpopulation shift can be controlled, it is shown that the group performance predicted by the theory (using some estimated parameters) aligns well with the empirical performance. Secondly, on standard subpopulation shift datasets, it is shown that the empirical separability (estimated as the distance between feature clusters) correlates with the group-adjusted accuracy.",
"strengths": "- The authors provide a characterization of the Bayes-optimal error for linear classifiers in terms of subpopulation shift parameters. The formulation applies to a wide variety of important problems like spurious correlations and class imbalance. It is not altogether surprising to me that the Bayes optimal solution can be derived in this way, but it does require a decent amount of work to go through the details and get the final result.\n- From a theoretical perspective, an understanding of the Bayes optimal solution is an important starting point for comparing existing approaches and can serve as a testbed for future work on subpopulation shifts\n- In the setting of datasets with \"flexible subpopulation configurations\" and two attributes, the parameters needed to compute the Bayes optimal error (specifically, the feature separability parameters) can be estimated from two sets of data with different subpopulation configurations. Assuming a well-trained linear model (that is close to the Bayes solution), this allows for prediction of the group errors, which can be useful with the important task of encoder selection",
"weaknesses": "I outline my main concerns below:\n1. Feasibility of using the theory for estimating the expected performance on real problems: From my understanding, this requires first estimating the feature separability for each attribute (side question: this only works for 2 attributes?) based on two sets of data with different subpopulation configurations. Outside of the synthetic settings considered in the paper, this seems like quite a strong assumption, since if such datasets were available, we could probably do better by just using this extra data to improve performance in the first place. I would appreciate some more discussion about the feasibility of this assumption\n2. The main point of Figure 1 seems to be that a well-trained linear classifier is close to Bayes-optimal. This does not seem to be surprising, given the simplicity of the synthetic datasets used in these experiments. In more realistic settings, the empirical and Bayes-optimal solution may differ more significantly, so the theoretical results might have less usefulness overall\n3. Writing concerns: I found the writing in several parts to be a bit unclear/vague, especially regarding the terms \"data aspect\" and \"model aspect\", which show up in many parts of the paper. For example, \"our aim to analyze from the model aspect\" (pg. 8). This terminology seems imprecise and I wasn't able to fully understand what is meant by these terms",
"questions": "- It seemed to me like Tables 1 and 2 have little do with the main argument of the paper, since the theory would only capture the \"ERM\" method that does not actively try to mitigate shifts. How do these results connect with the insights from the theory?\n- I did not fully understand the claim that this framework can aid as \"a practical tool for dataset design\". Is the idea that you could estimate the group accuracy from a dataset in order to determine what groups to collect more data for?\n- What data is used to estimate the empirical separability in Section 5.3? \n\nSmall comments/fixes\n- Section 3.2 - \"randomly parameterized\". I assume this result is for a *fixed* w,b, and not a random (i.e., stochastic) choice?\n- How is the empirical classifier trained in the experiments (this is in the Appendix, but I think it deserves to be mentioned in the main paper for clarity)",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T03:39:24",
"modification_date": "2025-11-12T12:51:39",
"review_url": "https://openreview.net/forum?id=nU4Fv2yXN1¬eId=7R9Rs8e9RM",
"license": "CC BY 4.0"
},
{
"id": "4mftt4bc0B",
"forum": "nU4Fv2yXN1",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12139/Reviewer_DAn4",
"reviewer_name": "Reviewer_DAn4",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper presents a theoretical framework to describe different type of distributions shifts that arise in data -- including class imbalance, under-representation, and spurious correlations. This framework is built on assumptions that the data can be modeled via Gaussian functions, following advances made by prior work. After presenting how the data is modeled, a series of theoretical results build up to a main theorem given the form of the Bayesian optimal classifier as a function of the dataset parameters. From this, synthetic data can be studied to understand how the effect of data separability affects performance under different domain shifts. Commonly used benchmark datasets from the domain shift literature are used to test the applicability of this theoretical framework to \"real\" data. To my knowledge, the proposed framework is novel and helpful, especially in that it is general enough to cover different types of distribution shifts with one model. I have some questions/concerns on (i) how realistic this data model is and (ii) connection of the experimental results with the formal results. If the authors could please answer my questions and address my concerns, I will be happy to consider updating my score accordingly.",
"strengths": "1. To the best of my knowledge, the proposed framework and the theoretical results are novel. This bridges an important gap in modeling and characterizing model performance under different types of distribution shifts.\n2. Formal statements are clear, and to large degree well explained in the text surrounding them. Due to time constraints I was not able to fully check the proofs.\n3. Empirical results connect the formal results to datasets that are curated from real images to have distribution shift. I found the results in Figure 4 and Figure 5 most compelling in this regard, as they show the trends predicted by the framework tend to hold up in practice (though the correlation in Figure 5 is a bit noisy, it does look like AA is increasing with increasing separability across all three datasets).",
"weaknesses": "1. The connection between table 1 and 2 and the framework was not clear to me. The authors could make it clearer in the text how the results in those tables contribute to the overall aims of the paper. Specifically, \n2. It was not clear to me the degree to which the modeling assumptions make sense for \"real\" data -- for example, modeling $\\Sigma = \\textrm{diag}(\\Sigma_1, \\ldots, \\Sigma_N)$, would that imply that the covariance of features corresponding to different attributes $a_n$ have 0 covariances (and why would that make sense in practice)?\n\n\nSmaller things:\n1. Adjectives like \"holistic\", \"comprehensive\", etc. do not really need to be used when describing this framework, are vague, and leave the paper's contributions open to scrutiny. Better to be specific about what you mean (this framework allows us to study different types of distribution shift with one data model). \n2. Missing a \\ for an \\in in line 360.",
"questions": "The data modeling setup in 3.1 to me. To make sure I'm understanding, I ask the following:\n1. In addition to the section on Gaussian data modeling in the appendix, could the authors give some intuition for what type of data will/will not be modeled well by the problem setup in section 3.1?\n2. Is the Bayes optimal classifier guaranteed to be linear because of the assumptions made in section 3.1? This seems to be taken for granted in Lemma 1 onward, and perhaps it's a direct result of the modeling assumptions, but that could be better explained.\n\nOther questions:\n3. In section 5.2, it is stated \"Thus, our framework ... is capable of estimating the expected performance to guide the dataset collection for the afterward robust model training.\" In figure 4 it looks like the optimal training is always around 0.5, a 50/50 split. Is there some other reason the model would prefer a 50/50 split (lots of reasonable models would...)? This experiment alone does not seem enough to really evaluate the claim in the quote, but perhaps I'm missing something. \n4. I had some questions that I put in the weaknesses section as well to better contextualize why I am asking them.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T02:38:49",
"modification_date": "2025-11-12T12:51:40",
"review_url": "https://openreview.net/forum?id=nU4Fv2yXN1¬eId=4mftt4bc0B",
"license": "CC BY 4.0"
},
{
"id": "NkbwGhvnV0",
"forum": "nU4Fv2yXN1",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12139/Reviewer_Zda8",
"reviewer_name": "Reviewer_Zda8",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes a unified theoretical framework to analyze subpopulation shifts, positing that phenomena like spurious correlation (SC), under-representation (UR), and class imbalance (CI) can be understood through the single lens of \"feature separability\". The authors model features as Gaussian mixtures controlled by invariant (label) and non-invariant attributes.",
"strengths": "The attempt to unify SC, UR, and CI under a single, simple geometric principle (separability) is elegant and provides a novel conceptual lens for the community. The core idea—that robustness relies on making the true feature easy to separate ($m_1$) and the spurious feature hard to separate ($m_2$)—is highly intuitive and explanatorily powerful.",
"weaknesses": "Despite the paper's strengths, its claims are built on a foundation with several significant and largely unaddressed limitations. The validity of the theoretical derivations is not the primary issue; rather, it is the applicability of their underlying assumptions to real-world deep learning.\n\nThe Gaussian Feature Assumption: The entire theoretical framework, from Lemma 1 to Theorem 1, rests on the strong assumption that features $Z_n$ are Gaussian8. The authors state that this analysis \"generalizes well to complex models and real-world data\"9, but this claim is not sufficiently substantiated. Features extracted from deep networks like ResNets are known to be non-Gaussian. The paper provides no discussion or analysis on why a theory built on this premise holds for complex, high-dimensional, non-Gaussian features. This is a critical omission that undermines the generality of the theoretical claims.\n\nThe Linear Classifier Limitation: The theory derives an optimal linear classifier (Lemma 2, Theorem 1). The experiments explicitly adhere to this, training only a linear layer on top of a pre-trained (and presumably frozen) encoder11. This setup does not reflect the dominant paradigm of end-to-end finetuning. In a finetuning scenario, the encoder is dynamic, meaning the feature distributions ($\\mu_n, \\Sigma_n$) and their separability ($m_n$) are constantly changing. The paper's static framework cannot model this, severely limiting its relevance to how most SOTA models are actually trained for robustness.\n\nThe analysis and experiments are strictly confined to binary classification ($Y \\in \\{\\pm1\\}$) and binary attributes ($A \\in \\{\\pm1\\}^N$). This is a significant simplification. 
The paper offers no discussion on how this framework (e.g., the definition of $m_n$ and the 3-simplex visualization) would extend to multi-class classification or attributes with more than two values (e.g., 10 different types of backgrounds, not just \"water\" and \"land\").\n\nThe proposed quantitative tool (Contribution 1) relies on the availability of \"data variants\". This procedure is only feasible for synthetic datasets (like Waterbirds- $\\zeta$) where the authors can control the subpopulation ratios. This tool is not applicable to fixed, \"in-the-wild\" datasets where such variants cannot be generated. This limitation on the tool's practical utility should be stated far more explicitly.\n\nComparison to Augmentation-Based Methods: The authors may need cite and contrast their geometric framework with algorithmic approaches based on data interpolation. E.g., Umix: Improving importance weighting for subpopulation shift via uncertainty-aware mixup, Improving out-of-distribution robustness via selective augmentation.",
"questions": "As shown in Weaknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-25T21:57:01",
"modification_date": "2025-11-12T12:51:41",
"review_url": "https://openreview.net/forum?id=nU4Fv2yXN1¬eId=NkbwGhvnV0",
"license": "CC BY 4.0"
}
] | |
Eu25AOvORb | https://openreview.net/forum?id=Eu25AOvORb | UniOD: A Universal Model for Outlier Detection across Diverse Domains | 6 | 3.5 | [
6,
6,
6,
6
] | [
4,
2,
4,
4
] | 4 | [
"outlier detection"
] | Outlier detection (OD), distinguishing inliers and outliers in completely unlabeled datasets, plays a vital role in science and engineering. Although there have been many insightful OD methods, most of them require troublesome hyperparameter tuning (a challenge in unsupervised learning) and costly model training for every task or dataset. In this work, we propose UniOD, a universal OD framework that leverages labeled datasets to train a single model capable of detecting outliers of datasets with different feature dimensions and heterogeneous feature spaces from diverse domains. Specifically, UniOD extracts uniform and comparable features across different datasets by constructing and factorizing multi-scale point-wise similarity matrices. It then employs graph neural networks to capture comprehensive within-dataset and between-dataset information simultaneously, and formulates outlier detection tasks as node classification tasks. As a result, once the training is complete, UniOD can identify outliers in datasets from diverse domains without any further model/hyperparameter selection and parameter optimization, which greatly improves convenience and accuracy in real applications. More importantly, we provide theoretical guarantees for the effectiveness of UniOD, consistent with our numerical results. We evaluate UniOD on 30 benchmark OD datasets against 17 baselines, demonstrating its effectiveness and superiority. | A universal model that can be used for outlier detection on datasets with different feature dimension and heterogeneous feature space across diverse domains. | unsupervised, self-supervised, semi-supervised, and supervised representation learning | https://openreview.net/pdf?id=Eu25AOvORb | 2025-09-11T22:43:56 | 4 | [
{
"id": "dCs9aqAEXN",
"forum": "Eu25AOvORb",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4106/Reviewer_tKHD",
"reviewer_name": "Reviewer_tKHD",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "This paper introduces UniOD, a novel framework for outlier detection that is generalizable to diverse datasets. The authors address the inefficiency of conventional OD methods, which typically require per-dataset retraining and hyperparameter tuning. UniOD tackles this by leveraging a collection of historical labeled datasets to train a single, universal model. The framework's key innovation is its data unification pipeline, which transforms each dataset into a set of multi-scale similarity matrices to create graph-structured representations. It then employs Singular Value Decomposition (SVD) to generate uniformly dimensioned features. Outlier detection is thus reformulated as a node classification task on these graphs, tackled by a GNN-based model. Once trained, UniOD can be directly applied to new, unseen datasets without any further training, demonstrating superior performance against 17 baseline methods on a benchmark of 30 datasets.",
"strengths": "•\tA Novel Paradigm for OD: The primary strength is the innovative concept of a universal framework that eliminates the need for per-dataset retraining. This directly addresses a major bottleneck in the practical application of outlier detection.\n•\tElegant Unification of Heterogeneous Data: The use of multi-scale similarity matrices combined with SVD is a powerful and clever technique for creating a unified feature space from datasets with diverse dimensionalities and semantics.\n•\tStrong Empirical and Theoretical Backing: The claims are convincingly supported by comprehensive experiments on a large benchmark, which is further bolstered by a theoretical analysis of the model's generalization ability.\n•\tHigh Practicality and Efficiency: By decoupling training from testing, the framework is highly practical and computationally efficient at inference time, making it well-suited for real-world scenarios requiring rapid analysis of new data.",
"weaknesses": "•\tHeavy Reliance on Historical Data Composition: The model's success is fundamentally tied to the quality, scale, and diversity of the historical datasets. The paper lacks an investigation into the sensitivity of the model to the composition of this training pool.\n•\tPotential Scalability Bottlenecks: The methodology relies on constructing an n*n similarity matrix, which has a quadratic complexity (O(n²)) with respect to the number of samples. This could be computationally prohibitive for very large datasets.\n•\tLimited Exploration of Graph Construction: The framework exclusively uses a Gaussian kernel to build the similarity matrices. An investigation into different kernel functions or alternative graph construction techniques would have strengthened the paper's claims of robustness.",
"questions": "1.\tIt will strengthen the paper if providing an analysis of the model's sensitivity to the composition of the historical training data. For example, how does performance on a specific target domain (e.g., finance) change when all finance-related datasets are deliberately excluded from the training pool? This would clarify the practical requirements for curating the training set.\n2.\tThe O(n²) complexity for similarity matrix construction is a potential bottleneck. It will be helpful to explore more scalable graph construction techniques, such as those based on approximate nearest neighbors, to enhance the framework's applicability to datasets with millions of samples.\n3.\tIt will be helpful to provide some qualitative analysis on the structural patterns the GNN model learns to distinguish outliers. For instance, in the graph representation, are outliers typically identified as isolated nodes, or do they belong to small, dense clusters disconnected from the main graph component? This would provide valuable insight into the model's decision-making process.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T17:43:18",
"modification_date": "2025-11-12T11:12:58",
"review_url": "https://openreview.net/forum?id=Eu25AOvORb¬eId=dCs9aqAEXN",
"license": "CC BY 4.0"
},
{
"id": "n3XYu8rrxl",
"forum": "Eu25AOvORb",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4106/Reviewer_53cw",
"reviewer_name": "Reviewer_53cw",
"rating": 6,
"confidence": 2,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposed UniOD, a universal pretrained model for outlier detection. UniOD learns from various historical tabular datasets and can directly score the outliers for new datasets in a zero-shot manner, without re-training the model or hyperparameter tuning. The datasets are converted into point-wise kernel matrices at different scales and a GNN is trained on graph-structured datasets.",
"strengths": "1. Plug-and-play: this paper proposes the UniOD framework by pretraining on various datasets and conduct inference in a zero-shot manner, saving the deployment cost for OD tasks.\n2. Unified dataset representation: this paper utilizes the multi-scale similarity and SVD to produce unified node features, enforcing the generalizability of the model.\n3. The authors provides both comprehensive theoretical justification and extensive empirical analysis of the method. UniOD is well theoretical grounded.",
"weaknesses": "1. Dependence on historical datasets. UniOD requires labeled historical datasets, which can be unavailable in real-world applications. The limited historical datasets may impair the performance of UniOD on new datasets, especially when the historical datasets is limited to few domains.\n2. Generality concern: the effect of dataset variability is not fully discussed in the paper, as this will potentially influence the model's generality if the model hasn't encountered datasets from similar distributions.",
"questions": "See the weaknesses above.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T15:57:00",
"modification_date": "2025-11-12T11:12:59",
"review_url": "https://openreview.net/forum?id=Eu25AOvORb¬eId=n3XYu8rrxl",
"license": "CC BY 4.0"
},
{
"id": "WwSAHze8Xj",
"forum": "Eu25AOvORb",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4106/Reviewer_efRk",
"reviewer_name": "Reviewer_efRk",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes UniOD, a universal outlier detection model designed to address the core limitations of conventional methods, including the significant efforts required for hyperparameter tuning, the need for repetitive model training on new datasets, and the inability to leverage knowledge from historical datasets. The key contributions of this work are fourfold:\n\n1. **Problem Formulation**: It is the first to explicitly define and formalize the problem of universal outlier detection, which aims to \"train a single model that can directly generalize to any unseen tabular datasets from diverse domains without any retraining or hyperparameter tuning\".\n\n2. **Technical Framework**: It introduces a novel technical framework based on a \"graph reformulation\" process, which unifies heterogeneous datasets by constructing multi-scale similarity matrices, decomposing them into uniformly dimensioned features via SVD, and subsequently employing graph neural networks for node classification.\n\n3. **Theoretical Analysis**: It provides a theoretical generalization bound, offering mathematical guarantees for the effectiveness of the proposed method.\n\n4. **Experimental Validation**: Extensive experiments demonstrate that UniOD outperforms 17 baseline methods on 30 datasets while achieving faster inference speed.",
"strengths": "This paper presents several noteworthy strengths:\n\n1. **Paradigm-Shifting Contribution**\n - First explicit proposal and systematic formalization of \"universal outlier detection\"\n - Core innovation: single model generalizing across diverse domains without retraining or hyperparameter tuning\n - Represents fundamental reformulation of conventional outlier detection paradigm\n\n2. **Technical Innovation**\n - Elegant graph reformulation process transforms heterogeneous tabular data\n - Creates unified, structurally learnable representations from disparate datasets\n - Overcomes limitations of existing transfer learning approaches that require:\n - Extensive hyperparameter evaluation\n - Strong domain similarity assumptions\n\n3. **Theoretical Rigor**\n - Provides generalization error analysis beyond empirical demonstrations\n - Offers mathematical guarantees for method effectiveness\n - Enhances academic credibility and theoretical foundation",
"weaknesses": "While the proposed UniOD framework demonstrates compelling performance, several limitations warrant discussion for future improvement:\n\n1. **Scalability Challenges in Preprocessing** \nThe O(n²) computational and memory requirements for similarity matrix construction present practical constraints, as evidenced by the needed subsampling for datasets beyond 6,000 samples. Future work could explore approximate nearest neighbor techniques or sparse graph construction to enhance applicability to larger-scale datasets.\n\n2. **Dependence on Euclidean-Based Similarity** \nThe reliance on Gaussian kernel similarity (and consequently Euclidean distance) may limit performance in scenarios where this metric is suboptimal. Incorporating learnable or adaptive distance metrics could strengthen robustness across diverse data distributions.\n\n3. **Sensitivity to Historical Data Availability** \nWhile leveraging multiple labeled historical datasets is a strength, the framework's effectiveness in domains with extreme scarcity of labeled anomalies remains unverified. Investigating few-shot or semi-supervised adaptations would valuablely expand its applicability.",
"questions": "1. **Scalability and Computational Efficiency** \nThe paper indicates that datasets exceeding 6,000 samples required subsampling due to computational constraints. Have the authors explored more scalable graph construction alternatives, such as k-nearest neighbor sparse graphs or Nyström approximation methods, to avoid full similarity matrix computation? Could experimental results demonstrate whether these approaches maintain performance while handling datasets at larger scales (e.g., tens of thousands of samples), thereby improving practical applicability in big data scenarios?\n2. **Analysis of Performance Boundaries** \nWhile UniOD shows strong average performance, simpler methods outperform it on specific datasets (e.g., Cardiotocography, Pima). Could the authors provide further analysis identifying dataset characteristics where UniOD may underperform? For instance, are there correlations with meta-features like anomaly ratio, dimensionality, or cluster structure clarity? Defining such boundaries would offer valuable guidance for practical application.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T20:25:00",
"modification_date": "2025-11-12T11:12:59",
"review_url": "https://openreview.net/forum?id=Eu25AOvORb¬eId=WwSAHze8Xj",
"license": "CC BY 4.0"
},
{
"id": "pKjAULr9W9",
"forum": "Eu25AOvORb",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4106/Reviewer_v82N",
"reviewer_name": "Reviewer_v82N",
"rating": 6,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "The paper proposes UniOD, a universal outlier detection framework that trains a single model on a collection of labeled historical datasets to detect outliers in new, unseen tabular datasets without retraining or hyperparameter tuning. To achieve cross-domain generalization, UniOD constructs multi-scale similarity matrices from each dataset, factorizes them via SVD to obtain uniform-dimensional node features, and then employs a hybrid architecture of GINs and graph transformers to perform node-level binary classification (inlier vs. outlier). The method is evaluated on 15 datasets from ADBench.",
"strengths": "1.\tThe proposed method introduces a novel paradigm shift from dataset-specific to universal outlier detection, which is underexplored in the literature. \n2.\tThe methodology is well-motivated and technically sound. The integration of SVD-based feature unification, graph construction, and GNNs is carefully designed to handle varying feature dimensions and semantics. The theoretical analysis provides a nontrivial generalization bound that aligns with empirical findings.\n3.\tThe paper is generally well-written, with clear figures and a logical flow from problem statement to evaluation.",
"weaknesses": "1.\tThe authors state that experiments are conducted on 30 datasets from ADBench, split into two groups of 15 for cross-validation. However, ADBench actually contains 57 datasets, not just 30. The paper does not justify why only a subset was selected, nor does it clarify the criteria for partitioning. This raises concerns about potential cherry-picking. The authors should either (a) use the full ADBench tabular benchmark, or (b) explicitly state the selection rationale and provide results on a broader set to demonstrate robustness.\n2.\tThe paper partitions the 30 ADBench datasets into two fixed groups of 15 for training and testing, and performs a single cross-validation swap. While this demonstrates basic robustness, it does not adequately address how historical datasets should be chosen in practice or how performance varies under different selection strategies.\n3.\tThe claim of universality is compelling but narrowly validated. All experiments are on tabular data; it remains unclear whether UniOD can generalize to other modalities (e.g., images, time series) without significant architectural changes. Clarifying the scope of “universal” (i.e., universal across tabular domains only) would temper overstatement.",
"questions": "Could the authors clarify the criteria used to select these 30 datasets?\nCan the authors provide qualitative or quantitative analysis of when and why UniOD fails?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T11:55:12",
"modification_date": "2025-11-12T11:12:59",
"review_url": "https://openreview.net/forum?id=Eu25AOvORb¬eId=pKjAULr9W9",
"license": "CC BY 4.0"
}
] |
oHEaIwPv9s | https://openreview.net/forum?id=oHEaIwPv9s | Build-Bench: Benchmarking LLM Agents on Compiling Real-World Open Source Software | 4.5 | 3 | [
4,
4,
6,
4
] | [
3,
3,
3,
3
] | 4 | [
"Agent",
"Benchmark",
"Compilation",
"LLM"
] | Automatically compiling open-source software (OSS) projects is a vital, labor-intensive, and complex task, which makes it a good challenge for LLM Agents. Existing methods rely on manually curated rules and workflows, which cannot adapt to OSS that requires customized configuration or environment setup. Recent attempts using Large Language Models (LLMs) used selective evaluation on a subset of highly rated OSS, a practice that underestimates the realistic challenges of OSS compilation. In practice, compilation instructions are often absent, dependencies are undocumented, and successful builds may even require patching source files or modifying build scripts. We propose a more challenging and realistic benchmark, BUILD-BENCH, comprising OSS that are more diverse in quality, scale, and characteristics. Furthermore, we propose a strong baseline LLM-based agent, OSS-BUILD-AGENT, an effective system with an enhanced build instruction retrieval module that achieves state-of-the-art performance on BUILD-BENCH and is adaptable to heterogeneous OSS characteristics. We also provide detailed analysis regarding different compilation method design choices and their influence on the whole task, offering insights to guide future advances. We believe performance on BUILD-BENCH can faithfully reflect an agent’s ability to tackle compilation as a complex software engineering task, and, as such, our benchmark will spur innovation with a significant impact on downstream applications in the fields of software development and software security. | datasets and benchmarks | https://openreview.net/pdf?id=oHEaIwPv9s | 2025-09-20T06:36:19 | 4 | [
{
"id": "mcFejxteVT",
"forum": "oHEaIwPv9s",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission21758/Reviewer_y24p",
"reviewer_name": "Reviewer_y24p",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper BUILD-BENCH introduces a new benchmark for evaluating large language model (LLM) agents on the task of compiling real-world open-source C/C++ software. The authors developed BUILD-BENCH, a dataset of 148 diverse repositories, and proposed a multi-agent system called OSS-BUILD-AGENT that automatically retrieves build instructions and iteratively fixes compilation errors. Experiments show that this method outperforms both traditional rule-based and previous agentic approaches, achieving up to 71.8% success under flexible validation, demonstrating the strong potential of LLMs in automating complex software compilation tasks",
"strengths": "1. The BUILD-BENCH benchmark covers more realistic and complex open-source projects, enabling a more comprehensive evaluation of automated compilation capabilities.\n2. The proposed OSS-BUILD-AGENT leverages multi-agent collaboration and LLM-based retrieval to significantly improve compilation success rates and automatic error-repair capabilities.",
"weaknesses": "**Unspecified Computational Costs:** The paper lacks quantitative analysis of *time cost* (the total time required to complete the entire compilation process) and *economic cost* (the total cost incurred to complete the compilation tasks), as well as a horizontal comparison with baselines—particularly **CompileAgent**.\n\n**Insufficient Baseline Comparison:** It is recommended to include single-round data for both closed-source and open-source models: closed-source models such as Gemini-2.5-Flash, GPT-4o, and Claude-3.5-Sonnet, and open-source models such as Qwen3 235B, Qwen3 Coder 485B. Additionally, OSS-BUILD-AGENT w/o Retrieval should also include Claude 3.7-Sonnet, Gemini-2.5-Flash, Qwen3 235B, Qwen3 Coder 485B, and a RAG-based baseline.\n\n**Incomplete Task Information:** The paper lacks detailed information on the task set and its difficulty distribution, including details such as *project topic* and whether build instructions are **InRepo** (the repository contains a build guide), **NotInRepo** (the repository does not directly contain a build guide but external documentation is available), or **NoGuide** (the project completely lacks any build guide). The proportion of projects in each of these categories should be reported.\n\n**Unclear Methodology:** The implementation details and prompt design of the **LLM-Assisted Retrieval Module** are not clearly described—particularly how the system retrieves other files in the repository and accesses external web resources.\n\n**Incomplete Evaluation Metrics:** In Section 6.2, *Retrieval and Error Resolution*, the analysis of the retrieval module is overly simplistic. Evaluating retrieval success solely based on whether the retrieval module accessed the ground-truth URL that hosts the build instruction for the given repository is insufficient. The evaluation should also account for retrieval performance over other files in the repository.",
"questions": "1.Could you explain in detail the implementation of LLM-Assisted Retrieval?\n\n2.For lines 395–399, which describe the workflow that “*mimics a human engineer*,” could you provide a **case study** illustrating how this process is implemented in practice?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T02:51:53",
"modification_date": "2025-11-12T18:05:20",
"review_url": "https://openreview.net/forum?id=oHEaIwPv9s¬eId=mcFejxteVT",
"license": "CC BY 4.0"
},
{
"id": "p03Y5wcx8c",
"forum": "oHEaIwPv9s",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission21758/Reviewer_tMyd",
"reviewer_name": "Reviewer_tMyd",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper presents Build-Bench, a benchmark designed to evaluate LLMs on software build and compilation tasks. The benchmark collects real open-source projects with diverse build systems and automatically generates build tasks that simulate dependency errors, configuration failures, and compilation issues. Each task provides the model with build logs, dependency information, and target build commands as input, requiring it to output repair actions such as command corrections, dependency installations, or configuration updates. Tasks are automatically evaluated via sandbox compilation. Experiments on multiple LLMs show that while current models can handle simple dependency errors, they struggle with complex build environments and cross-language configurations.",
"strengths": "The benchmark extends the evaluation of LLMs from code understanding to practical software build and dependency repair tasks. Its design is systematic and well-documented, using reproducible environments and real-world build systems. The task formulation is clear and grounded in realistic developer workflows. This benchmark provides insights into the limitations of current models in dependency management, configuration understanding, and handling multi-language projects.",
"weaknesses": "1. Although the projects are sourced from real open-source repositories, the build failures are synthetically generated through automated error injection, which may reduce their realism. In actual development scenarios, build failures often arise from more complex and interdependent causes that maybe not fully represented in the benchmark.\n2. The experimental analysis lacks depth, with limited discussion of error categories or their influence on model behavior. A more detailed breakdown of failure types probably can reveal additional insights and lead to stronger conclusions.\n3. The evaluation metrics are relatively narrow and strictly binary, while build repair is inherently a gradual process. Fully successful repairs may also differ in their complexity or practical soundness, which the current evaluation framework does not capture.",
"questions": "See weakness please",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T02:40:51",
"modification_date": "2025-11-12T18:05:21",
"review_url": "https://openreview.net/forum?id=oHEaIwPv9s¬eId=p03Y5wcx8c",
"license": "CC BY 4.0"
},
{
"id": "tKNzBruVgT",
"forum": "oHEaIwPv9s",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission21758/Reviewer_Z2nX",
"reviewer_name": "Reviewer_Z2nX",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper presents BUILD-BENCH, a benchmark dataset derived from 385 randomly sampled C/C++ GitHub repositories, with 148 manually validated as compilable and annotated with ground-truth binary artifacts and documentation URLs. The authors propose OSS-BUILD-AGENT, a multi-agent LLM system featuring an LLM-assisted retrieval module and iterative error resolution. Evaluations against rule-based baselines (GHCC, Assemblage), single-turn LLM approaches, and CompileAgent using multiple frontier models demonstrate that the best configuration achieves 66.4% strict validated success rate, substantially outperforming existing methods on this more challenging and representative benchmark.",
"strengths": "BUILD-BENCH employs principled random sampling from 6.57M repositories, addressing the selection bias in COMPILEAGENTBENCH where 99.88% of real-world C/C++ projects have fewer than 500 stars. The benchmark's build system diversity (Make, CMake, Autotools, MSBuild, custom scripts) and ground-truth annotations enable rigorous evaluation. The experimental design is comprehensive, spanning multiple baselines and frontier LLMs (GPT-4o, o3-mini, Claude 3.7-Sonnet, Gemini 2.5, Qwen3), with transparency regarding stochasticity through repeated runs and pass@k analysis. The retrieval module comparison with CompileAgent provides actionable design insights, particularly regarding documentation-first traversal versus build-script-focused approaches.",
"weaknesses": "- The benchmark's scope is restricted to Linux C/C++ projects, with no evaluation on Windows, macOS, or mixed-language codebases, limiting generalizability claims. \n- Manual exclusion of \"uncompilable\" projects (criteria a-d in Section 2) may introduce selection bias toward better-documented repositories; the paper would benefit from publishing exclusion statistics and considering a \"hard mode\" supplemental dataset. \n- Critical practical metrics are absent: per-repository runtime, API token consumption, and monetary costs are essential for practitioners evaluating adoption trade-offs. \n- Ablation studies are insufficient—the paper does not isolate retrieval quality versus agent architecture, evaluate lightweight retrieval heuristics, or test retrieval module sensitivity to base LLM choice.",
"questions": "1. Can the authors provide comprehensive efficiency metrics—average runtime per repository, total API calls, token consumption, and estimated costs—for OSS-BUILD-AGENT compared to CompileAgent and rule-based baselines?\n2. What are the exclusion statistics for the 237 removed repositories? Specifically, what percentages were excluded due to cross-platform requirements, missing dependencies, broken builds, or other criteria?\n3. Have you conducted ablation studies on the retrieval module: (a) performance with different base LLMs, (b) comparison with lightweight heuristics (regex-based link extraction, TF-IDF), and (c) impact of iteration depth and link count limits?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T04:25:04",
"modification_date": "2025-11-12T18:05:21",
"review_url": "https://openreview.net/forum?id=oHEaIwPv9s¬eId=tKNzBruVgT",
"license": "CC BY 4.0"
},
{
"id": "Uh4YqUpJkg",
"forum": "oHEaIwPv9s",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission21758/Reviewer_atPJ",
"reviewer_name": "Reviewer_atPJ",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper presents Build-Bench, a benchmark on compiling open-source software project with LLM Agents. The authors propose more realistic challenges of OSS compilation, mainly targeting github repositories of C/C++ projects. The authors also propose a strong baseline for LLM agent which enhance build instructions based on a retrieval module.",
"strengths": "(1) The authors too greate efforts to enhance the data selection process for collecting Build-Bench problems. Especially the distribution of stargazer counts more aligns with data collected from the wild on github. Furthermore the dataset demonstrate a wide variety of build systems and tool chains.\n\n(2) A multi-agent LLM agentic baseline is proposed to solve Build-Bench. The first stage LLM are used to extract relevant instructions and access relevant links and files to generate compilation instruction. The second stage consists of a flow-based method, where a bash command generator compiles bash commands and the executor agent executes the command.",
"weaknesses": "(1) Marginal improvements over CompileAgent. It seems that CompileAgent with Retrieval on GPT-4o works quite well, and only a marginal improvement on flexible validated successes is presented.\n\n(2) Success metric is determined by compilation and existence of binary links, which seems too tailored to C/C++.",
"questions": "(1) How would the success method be extended to programming languages outsied of C/C++. Is using unit tests etc. a more rigourous method for evaluation?\n\n(2) Why solely focus on C/C++? Any plans to extend to other programming languages (such as Python etc.)?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T03:09:08",
"modification_date": "2025-11-12T18:05:21",
"review_url": "https://openreview.net/forum?id=oHEaIwPv9s¬eId=Uh4YqUpJkg",
"license": "CC BY 4.0"
}
] | |
DuPYSaCiep | https://openreview.net/forum?id=DuPYSaCiep | UDDETTS: Unifying Discrete and Dimensional Emotions for Controllable Emotional Text-to-Speech | 4.5 | 4 | [
6,
4,
2,
6
] | [
4,
4,
5,
3
] | 4 | [
"text-to-speech",
"LLM",
"dimensional emotion",
"ADV space",
"semi-supervised"
] | Recent large language models (LLMs) have made great progress in the field of text-to-speech (TTS), but they still face major challenges in synthesizing fine-grained emotional speech in an interpretable manner. Traditional methods rely on discrete emotion labels to control emotion categories and intensities, which cannot capture the complexity and continuity of human emotional perception and expression. The lack of large-scale emotional speech datasets with balanced emotion distributions and fine-grained emotional annotations often causes overfitting in synthesis models and impedes effective emotion control. To address these issues, we propose UDDETTS, a universal LLM framework unifying discrete and dimensional emotions for controllable emotional TTS. This model introduces the interpretable Arousal-Dominance-Valence (ADV) space for dimensional emotion description and supports emotion control driven by either discrete emotion labels or nonlinearly quantified ADV values. Furthermore, a semi-supervised training strategy is designed to comprehensively utilize diverse speech datasets with different types of emotional annotations to train the UDDETTS. Experiments show that UDDETTS achieves linear emotion control along three interpretable dimensions, and exhibits superior end-to-end emotional speech synthesis capabilities. Code and demos are available at: https://anonymous.4open.science/w/UDDETTS. | applications to computer vision, audio, language, and other modalities | https://openreview.net/pdf?id=DuPYSaCiep | 2025-09-19T03:40:27 | 4 | [
{
"id": "no6NHFG7Jj",
"forum": "DuPYSaCiep",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13886/Reviewer_bqm5",
"reviewer_name": "Reviewer_bqm5",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes UDDETTS, a LLM framework that unifies discrete and dimensional emotions for controllable emotional text-to-speech (TTS). The framework introduces an interpretable ADV space to describe dimensional emotions, supporting emotion control driven by discrete emotion labels or non-linearly quantized ADV values. Moreover, this paper designs asemi-supervised training strategy to fully utilize speech datasets with different emotion annotation types, experimental results show promising emotion controlablety for speech synthesis.",
"strengths": "1. The paper tackles a critical and timely problem in expressive speech synthesis, contious and dimensional control is a clear and important direction for the field.\n2. The semi-supervised learning strategy is an effective solution to extend the training to larger-scale dataset, while only part of the data is well labeled.",
"weaknesses": "1. Although this article compares many different baselines, the reasonableness of the comparison is still not clear to me. A more reasonable comparison would be to add the adv prediction and control modules to the corresponding frameworks, which would better illustrate the universality of the article's contribution.\n2. Some details are not very clear. For example, in Table 3, preference scores are given for two systems, but it is uncertain whether the same backbone is used for the corresponding systems, and it is also uncertain whether the baseline systems have been optimized with similar training methods using the same emotional data.\n3. The article mentions some other control schemes, such as EmoSphere-TTS, but they are not shown in the experimental results.",
"questions": "Besides the issues mentioned in the weakness,\n\n1. How sensitive is the model's performance to the ratio of ADV-annotated data versus label-only data?\n2. Has the accuracy of the ADV predictor been tested standalone?\n3. Was any experiment conducted where the ADV predictor and the emotional mixture encoder were integrated into a different LLM-based framework, such as IndexTTS2 or Spark-TTS, to measure the performance lift?",
"flag_for_ethics_review": [
"Yes, Privacy, security and safety"
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T19:15:33",
"modification_date": "2025-11-12T13:13:16",
"review_url": "https://openreview.net/forum?id=DuPYSaCiep¬eId=no6NHFG7Jj",
"license": "CC BY 4.0"
},
{
"id": "E5umWl7R4p",
"forum": "DuPYSaCiep",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13886/Reviewer_D694",
"reviewer_name": "Reviewer_D694",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "The paper proposes UDDETTS, a unified LLM-based framework for controllable emotional TTS that integrates both discrete emotion labels and dimensional emotions in the Arousal-Dominance-Valence (ADV) space. It addresses challenges including the sparsity and imbalance of emotional annotations, by introducing a semi-supervised training strategy and a nonlinear binning method for ADV quantization. The architecture includes a neural codec language model, an optimal-transport conditional flow matching (OT-CFM) module with an emotional mixture encoder, and a vocoder. An ADV predictor supports end-to-end synthesis from text alone. Trained on large-scale emotional and general speech datasets, UDDETTS demonstrates superior performance in label-controlled, ADV-controlled, and end-to-end TTS tasks, with linear control along ADV dimensions.",
"strengths": "1. The paper proposes a LLM-based TTS framework to explicitly unify discrete and dimensional emotions, addressing a key limitation in prior work of emotional TTS.\n \n2. Introducing the interpretable ADV space to LLM-based TTS is a meaningful step toward continuous, decoupled emotion control, addressing limitations of discrete-label methods. The nonlinear binning and semi-supervised fusion of annotations effectively tackle data imbalance and sparsity.\n\n3. Evaluations across three tasks use diverse metrics (e.g., MOS, ES, SRC/KW) and show consistent improvements over baselines. The visualization in Figure 4 effectively shows that the proposed techniques (nonlinear binning, semi-supervised training) increase the coverage of the ADV space.",
"weaknesses": "1. The novelty is limited. The work is built directly upon the architecture of models like Spark-TTS and CosyVoice. The addition of ADV control seems to be an incremental improvement rather than a novel framework.\n\n2. The core components lack detailed explanation. For ADV quantizer, the nonlinear binning based on clustering is a potential key innovation, but its derivation and relationship to solving sparsity/imbalance are unclear in the main text.\n\n3. The semi-supervised strategy for mixing spontaneous/elicited datasets with varying annotations is not sufficiently ablated. It's unclear if this fusion is mutually beneficial or merely a way to scale data volume.\n \n4. The experiments are not sufficient enough. Further justifications are required. \n* The baselines were not trained on the same datasets, making it difficult to determine whether the performance is influenced by model architecture or training data.\n* Comparisons with other dimensional emotion models (e.g., EmoSphere++) are missing. \n* The comparison against \"description-based baselines\" (Sec. 4.5) is potentially unfair, as these models are not designed for the specific prompt format used.\n* Custom emotional texts (Table 7) lack details on design/validation for bias (e.g., inter-annotator agreement or diversity checks), risking overfitting to specific prompts.",
"questions": "1. The nonlinear binning is central to handling sparsity. Can you provide a more intuitive explanation about how the clustering algorithm leads to a balanced and effective quantization? Why not alternatives like quantile-based or density-based methods? How sensitive is the coverage rate (89.35%) to bin count (m=14)?\n\n2. Why is a separate RoBERTa-based regression model with MSE loss used instead of integrating ADV token prediction directly into the LLM (using CE loss like sparkTTS)? Was the above latter alternative approach explored, and if so, how did its performance compare?\n\n3. For Sec. 4.5 comparisons, did you ensure baselines were prompted in alignment with their intended capabilities? The current setup may not fairly test them.\n\n4. How do you demonstrate that mixing dataset types (spontaneous vs. elicited) via semi-supervised training can benefit this task effectively, rather than just adding data volume? For example, ablate training only on fully labeled data (D_{S,AL}) vs. the full setup.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T16:06:42",
"modification_date": "2025-11-12T13:13:16",
"review_url": "https://openreview.net/forum?id=DuPYSaCiep¬eId=E5umWl7R4p",
"license": "CC BY 4.0"
},
{
"id": "FNI83Fa7G8",
"forum": "DuPYSaCiep",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13886/Reviewer_8NAV",
"reviewer_name": "Reviewer_8NAV",
"rating": 2,
"confidence": 5,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "This paper proposes UDDTTS, an LLM-based TTS using an ADV space to model emotional representations for expressive speech synthesis. While the idea is interesting, the paper lacks methodological clarity, strong experimental validation, and a comprehensive review of related LLM-based TTS work. The results also do not show clear advantages over existing methods.",
"strengths": "The work introduces a potentially useful direction for controllable emotional TTS by modeling ADV in LLMs.",
"weaknesses": "Your proposed UDDTTS does not outperform other approaches in terms of MOS, UTMOS, WER, SS, and STOI. I suggest further improving these metrics through more refined method design.\n\nMethodology is not well written. Please define the symbols before using them. I am confused with the method design. \n\nThe generated speech quality is not good with unclear pronunciations, which is not common in the existing TTS models. I am wondering if including ADV is the reason why the speech intelligence is getting worse. I would suggest improving the performance further with more advanced techniques.\n\nThe literature review on LLM-based TTS approaches is relatively limited, and a more comprehensive investigation is recommended.\n\nI am not fully convinced by how you disentangle complex emotions in the ADV space while addressing sparsity and imbalance issues. Could you provide experimental evidence to support this claim?\n\nIt is unclear why AB preference tests were not included, as they are commonly used to assess perceptual differences in TTS quality.",
"questions": "How did you prove that you capture the continuity of emotion distributions?\n\nHaving only 12 listeners for the subjective evaluation is insufficient for a comprehensive assessment of TTS models. For each listener, how many samples were evaluated? Were these samples randomly selected from the test set or manually chosen? \n\nHow did you create and process spontaneous emotion datasets and elicited emotion datasets?\n\nWhat does Z1 mean? \n\nWhy do you assume that Xspk can effectively represent the speaker embedding while excluding emotional representations? Could you elaborate on this assumption and provide evidence or verification?\n\nHow do you demonstrate that your proposed speech tokenizer captures rich emotional information? What are the key differences between your tokenizer and CosyVoice’s speech tokenizer?\n\nI am also unclear about the motivation, design, and working mechanism of the ADV predictor. Could you explain this in more detail? Is the speech signal considered in its process, or is it modeled solely based on textual emotion inputs? If speech is not incorporated, the emotional states between speech and text might differ. How did you address this issue?\n\nWill you release your testests?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T22:11:29",
"modification_date": "2025-11-12T13:13:17",
"review_url": "https://openreview.net/forum?id=DuPYSaCiep¬eId=FNI83Fa7G8",
"license": "CC BY 4.0"
},
{
"id": "3thfg5gIOB",
"forum": "DuPYSaCiep",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13886/Reviewer_2DR5",
"reviewer_name": "Reviewer_2DR5",
"rating": 6,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 4,
"summary": "This paper proposes a controllable emotional TTS framework that unifies discrete emotion labels with a dimensional Arousal–Dominance–Valence (ADV) space. Contributions can be summarized as:\n- a semi-supervised neural-codec LLM that can take either labels or quantized ADV tokens as control input;\n- a nonlinear binning scheme that discretizes the ADV space into controllable units;\n- an ADV predictor that infers pseudo-ADV from text for end-to-end, text-only emotional synthesis.",
"strengths": "- Unifying discrete and dimensional emotion control in LLM-based TTS is well-motivated and really novel for fine-grained affect control.\n- This paper presents solid and extensive experiments.\n- This paper provides code at an anonymous link.",
"weaknesses": "- ADV ranges are normalized to [1,7], and bins are chosen via a CLT-inspired heuristic plus clustering. It’s unclear how sensitive control linearity is to the chosen number of bins and cluster variability.\n- Although the paper aims to achieve fine-grained and interpretable emotional control through the continuous (ADV) space, the amount of training data with ground-truth ADV annotations appears to be very limited. Most emotional datasets only provide discrete emotion labels, while ADV values are available for a small subset. As a result, the ADV predictor and the overall controllability of the system rely heavily on pseudo-ADV values inferred from semi-supervised learning rather than real annotated data. This raises concerns about the precision and reliability of the ADV mapping, especially for subtle or compound emotions. The scarcity of reliable ADV-labeled samples might constrain the model’s ability to learn accurate continuous emotion representations, which somewhat contradicts the paper’s goal of achieving fine-grained control in the ADV space.\n- The system employs both an ADV predictor and a label predictor. However, the paper does not clearly explain how these two emotion sources interact or which one dominates when their predictions disagree. Since the final emotional output depends on the fusion of both, inconsistencies between the predicted ADV vectors and categorical labels could lead to unstable or conflicting emotional expressions. Moreover, no quantitative analysis (e.g., disagreement rate, calibration curve, or preference correlation) is provided to demonstrate whether the two predictors are aligned. This ambiguity raises concerns about the reliability and interpretability of the emotional control, which is central to the paper’s claimed contribution.",
"questions": "- How robust are the nonlinear bin boundaries across different training splits?\n- How often does the ADV predictor disagree with the LLM-predicted label, and which path dominates the final emotion?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T11:47:37",
"modification_date": "2025-11-12T13:13:17",
"review_url": "https://openreview.net/forum?id=DuPYSaCiep¬eId=3thfg5gIOB",
"license": "CC BY 4.0"
}
] | |
pBTXsu1i77 | https://openreview.net/forum?id=pBTXsu1i77 | VLM-SubtleBench: How Far Are VLMs from Human-Level Subtle Comparative Reasoning? | 5.5 | 3.25 | [
2,
6,
6,
8
] | [
3,
3,
3,
4
] | 4 | [
"Vision-language Models",
"Multimodal Large Language Models",
"Comparative Reasoning",
"Benchmark",
"Visual Question Answering"
] | The ability to distinguish subtle differences between visually similar images is essential for diverse domains such as industrial anomaly detection, medical imaging, and aerial surveillance. While comparative reasoning benchmarks for vision-language models (VLMs) have recently emerged, they primarily focus on images with large, salient differences and fail to capture the nuanced reasoning required for real-world applications. In this work, we introduce **VLM-SubtleBench**, a benchmark designed to evaluate VLMs on *subtle comparative reasoning*. Our benchmark covers ten difference types—Attribute, State, Emotion, Temporal, Spatial, Existence, Quantity, Quality, Viewpoint, and Action—and curate paired question–image sets reflecting these fine-grained variations. Unlike prior benchmarks restricted to natural image datasets, our benchmark spans diverse domains, including industrial, aerial, and medical imagery. Through extensive evaluation of both proprietary and open-source VLMs, we reveal systematic gaps between model and human performance across difference types and domains, and provide controlled analyses highlighting where VLMs’ reasoning sharply deteriorates. Together, our benchmark and findings establish a foundation for advancing VLMs toward human-level comparative reasoning. | datasets and benchmarks | https://openreview.net/pdf?id=pBTXsu1i77 | 2025-09-08T15:33:07 | 4 | [
{
"id": "ac3qBXAB6O",
"forum": "pBTXsu1i77",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3016/Reviewer_Waq7",
"reviewer_name": "Reviewer_Waq7",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "In this paper, the author presented VLM-SubtleBench, a benchmark to evaluate subtle comparative reasoning in VLMs across 10 difference types and 6 visual domains. The benchmark contains more than 11k image-pair QA samples and 1k human-annotated difference captions. In this paper, the authors also claim that existing such benchmarks focus on salient differences and lack domain diversity. They evaluated several current VLMs on the benchmark and found that they struggle with subtle differences compared to human performance.",
"strengths": "- In this paper, they introduced an increasingly relevant capability: subtle visual comparison between images, across multiple domains. It contains various difference types (attribute, temporal, viewpoint, etc.) and datasets beyond natural images (industrial, medical, etc.).\n- It also has a mix of real data and synthetic setups, to show controlled evaluation capabilities.\n- The paper is easy to read, though the figures could use more text to make them clearer.",
"weaknesses": "- The dataset seems to be largely a small increment of prior multi-image VLM benchmarks like MLLM-CompBench, ReMI. The claim of novelty in subtlety is only partially convincing. Subtle differences are defined via embedding cosine similarity (DINOv3), but this does not necessarily guarantee perceptual or semantic subtlety.\n- In Figure 3, it can be seen that when the catcher moved, the other player moved as well. As the paper claims to be fine-grained, I would be interested to know how the authors ensured that in the video, only the object/person in question moved, not the other parts.\n- Repetition of the same datasets across categories. MVTEC-AD reused across “attribute” and “state” categories. \n- “Overlap” and “subtraction” images are described vaguely. It is unclear to me whether pixel-level overlay vs feature-space subtraction is used. This step needs technical clarity and justification. [line 354]\n- No evaluation on two-image native models (e.g., LLaVA Next, Clip).\n- What is domain domain-specific performance difference for each model in the benchmark? Can the author provide some insights on: are the methods bad at medical or really good in synthetic for every type?\n- Typos: line 242, archange to are",
"questions": "- how subtlety was maintained for the types from frames were takes from videos, to be sure that only the object in question moved.\n- How do authors ensure repetition of the same datasets across categories represents distinct types [weakness 3]?\n- Insights about domain-specific evaluation and evaluating multi-image models on the benchmark [weakness 5].",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T15:58:58",
"modification_date": "2025-11-12T11:01:43",
"review_url": "https://openreview.net/forum?id=pBTXsu1i77¬eId=ac3qBXAB6O",
"license": "CC BY 4.0"
},
{
"id": "LvEsgp2dc5",
"forum": "pBTXsu1i77",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3016/Reviewer_chNu",
"reviewer_name": "Reviewer_chNu",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper proposes a benchmark to evaluate how well models are at discerning subtle changes between two images in form of visual question answering and captioning. The dataset covers instances from multiple domains (such as game, industrial, medical, etc) and has ten different types of differences (such as temporal, spatial, emotion, etc). Evaluation results are comprehensive including multiple popular open-source and proprietary models such as GPT-5, o3, Claude-sonnet-4, Gemini-2.4-pro along with different prompting strategies. Human evaluation is also conducted and current results indicate a gap between human performance (avg. performance 95.5) and current frontier models (best 77.8 avg performance).",
"strengths": "1. The paper focuses on an interesting problem setup of fine-grained changes between two images, and it is interesting how current frontier models struggle at these tasks.\n2. The paper adequately describes the dataset construction process, model evaluation setup, and experimental results. Overall, it is well written.\n3. The evaluation process studies multiple factors that can influence model performance -- such as how to combine the two images when feeding images, different prompting strategies and impact on model performance with diff. controllable percentage of change b/w 2 images.",
"weaknesses": "1. This task of subtle difference changes b/w two images has been previously explored in works such as Spot-the-Diff [1], Img-Diff [2] and MLLM-CompBench [3] as noted by authors. The primary novelty seems to be expansion to multiple domains, more question types and combination of multiple choice questions and captioning in a single benchmark. In this regard, novelty is a bit limited.\n\n2. There can be further baselines/prompting strategies considered such as:\n- Calculating regions of interest from the subtraction of 2 images, and then highlighting these regions in the 2 input images (through simple bounding boxes or masks) and feed them to VLM stating regions of interest are highlighted.\n- 2-step reasoning process -- first ask the VLM to describe differences b/w the 2 images with respect to answering the question, and then feed this output in addition to the 2 images and original question.\n\nRelatively minor:\n3. It would also be interesting to study whether this task can simply be solved by training models on a mix of synthetic and real samples as the task itself might be out-of-distribution. But I understand this may be out of scope for current paper.",
"questions": "Please see weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T09:21:38",
"modification_date": "2025-11-12T11:01:43",
"review_url": "https://openreview.net/forum?id=pBTXsu1i77¬eId=LvEsgp2dc5",
"license": "CC BY 4.0"
},
{
"id": "q7vllUHFOl",
"forum": "pBTXsu1i77",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3016/Reviewer_Qbbf",
"reviewer_name": "Reviewer_Qbbf",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 4,
"presentation": 3,
"summary": "The paper introduces VLM-SubtleBench, a benchmark targeting subtle comparative reasoning across 10 difference types (Attribute, State, Emotion, Temporal, Spatial, Existence, Quantity, Quality, Viewpoint, Action) and 6 domains (Natural, Game, Industry, Aerial, Medical, Synthetic), comprising 11.7k (image-pair, question, answer) triplets. It emphasizes minimally different pairs (e.g., pairs with high DINOv3 similarity) and augments MCQ with a captioning track. Systematic studies span open-source and proprietary VLMs, prompt/fusion strategies, and controlled synthetic stress tests. Human accuracy is ~95.5%, whereas the best proprietary model remains well below this, with particular weaknesses in temporal/spatial/viewpoint categories.",
"strengths": "- Substantive contribution via task definition & data curation. Clearly formalizing subtle comparative reasoning as an evaluation target and curating a benchmark dataset with transparent collection/validation protocols is, by itself, a meaningful research contribution.\n\n- Breadth + diagnostics. Coverage of 10 difference types and 6 domains with controlled synthetic factors (e.g., brightness deltas, object size, translation, object count) supports failure-mode analysis rather than aggregate scores only.",
"weaknesses": "- Data generation dependencies. Some Attribute pairs are created with Gemini-2.5 flash image preview (“nano-banana”) editing; Medical questions are refined by gpt-4o. This can introduce stylistic artifacts or distribution shifts that confound evaluation unless carefully audited. Please quantify any such effects (e.g., edited vs. non-edited subsets).\n\n- The paper identifies notable gaps in temporal/spatial/viewpoint (e.g., stable accuracy requiring ~160 px camera translation in synthetic tests), but a more explicit reporting of difficulty curves on natural data would help establish ecological validity.",
"questions": "- Your Figure 5 synthetic-control study probes failure modes by manipulating low-level factors (e.g., brightness/scale/count/translation). Could you extend this with a color-sensitivity axis inspired by VLM’s Eye Examination [1], which reports consistent green insensitivity across VLMs, to refine the Attribute subset? Concretely, consider hue sweeps with a green vs. non-green contrast and ΔE/brightness steps, and report performance as a function of hue as well as factor interactions (color × size/count/translation). This would test whether known perceptual color deficits compound subtle comparative reasoning failures.\n\n- You clearly motivate the benchmark with high-stakes domains (industrial anomaly detection, medical imaging, aerial surveillance). Could you provide evidence of transfer, e.g., correlations between SubtleBench category scores and downstream metrics on representative datasets in those domains, or a small-scale finetuning study showing measurable gains in real applications?\n\n[1] VLM’s Eye Examination: Instruct and Inspect\nVisual Competency of Vision Language Models, https://arxiv.org/pdf/2409.14759",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T20:09:29",
"modification_date": "2025-11-12T11:01:43",
"review_url": "https://openreview.net/forum?id=pBTXsu1i77¬eId=q7vllUHFOl",
"license": "CC BY 4.0"
},
{
"id": "p9jaPLs2z5",
"forum": "pBTXsu1i77",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission3016/Reviewer_WLtr",
"reviewer_name": "Reviewer_WLtr",
"rating": 8,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper introduces a new benchmark designed to evaluate whether multimodal LLMs can distinguish subtle differences between similar images (i.e., perform subtle comparative reasoning). The benchmark is comprehensive: it contains 11.7K triplets of image pairs, questions, and answers, covering ten types of differences (e.g., attribute or state) across five image domains, including natural, industrial, and medical. The authors evaluate multiple recent MLLMs, both open-source and proprietary, on this benchmark and show that subtle comparative reasoning remains a significant challenge for current models.",
"strengths": "- The benchmark is comprehensive, encompassing a large number of data points as well as diverse visual types and domains.\n\n- The paper identifies the remaining limitations of recent MLLMs, providing valuable insights into where the research community should focus to further improve these models.\n\n- The paper is well-written and easy to follow.",
"weaknesses": "- A more detailed analysis is needed. e.g., why do MLLMs struggle more with certain types of comparisons? Why do some models perform better than others?\n\n- The paper currently reports results but lacks discussion or suggestions on how to improve subtle comparative reasoning in MLLMs.",
"questions": "- Similar to the study (Table 3) in MLLM-CompBench, what would happen if the models were first asked to analyze two images separately and then compare them using a purely language-based question, instead of being given both images simultaneously?\n\n- Did annotators label all the data? How many annotators were involved in building this benchmark, and what were their backgrounds?\n\n- It seems that only a test set is provided. Do you also have training or validation sets? It would be interesting to see whether fine-tuning on such data could improve performance.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T01:30:09",
"modification_date": "2025-11-12T11:01:43",
"review_url": "https://openreview.net/forum?id=pBTXsu1i77¬eId=p9jaPLs2z5",
"license": "CC BY 4.0"
}
] | |
3PECod4ieb | https://openreview.net/forum?id=3PECod4ieb | Chronoberg: Capturing Language Evolution And Temporal Awareness In Foundation Models | 5 | 3.5 | [
6,
8,
4,
2
] | [
3,
3,
3,
5
] | 4 | [
"Large Language Models",
"Temporal Generalization",
"Lexical Semantic Change",
"Continual Learning",
"VAD Lexicons"
] | Large language models (LLMs) excel at operating at scale by leveraging social media and various data crawled from the web. Whereas existing corpora are diverse, their frequent lack of long-term temporal structure may however limit an LLM's ability to contextualize semantic and normative evolution of language and to capture diachronic variation. To support analysis and training for the latter, we introduce Chronoberg, a temporally structured corpus of English book texts spanning 250 years, curated from Project Gutenberg and enriched with a variety of temporal annotations. First, the edited nature of books enables us to quantify lexical semantic change through time-sensitive Valence-Arousal-Dominance (VAD) analysis and to construct historically calibrated affective lexicons to support temporally grounded interpretation. With the lexicons at hand, we demonstrate a need for modern LLM-based tools to better situate their detection of discriminatory language and contextualization of sentiment across various time-periods. In fact, we show how language models trained sequentially on Chronoberg struggle to encode diachronic shifts in meaning, emphasizing the need for temporally aware training and evaluation pipelines, and positioning Chronoberg as a scalable resource for the study of linguistic change and temporal generalization. $\\textcolor{red}{Disclaimer:}$ This paper includes language and display of samples that could be offensive to readers. $\\textcolor{blue}{Open Access:}$ Chronoberg will be available publicly on HuggingFace. | datasets and benchmarks | https://openreview.net/pdf?id=3PECod4ieb | 2025-09-18T16:47:06 | 4 | [
{
"id": "Ek69LlwRXx",
"forum": "3PECod4ieb",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10946/Reviewer_GhzC",
"reviewer_name": "Reviewer_GhzC",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper addresses the crucial problem that most Large Language Models (LLMs) are trained on temporally stationary data, limiting their ability to comprehend the long-term evolution of language, social norms, and semantics (diachronic variation). The introduction of Chronoberg, a large-scale (2.7B tokens) and temporally structured corpus of full-length English book texts (25,061 books from Project Gutenberg) spanning 250 years (1750–2000). Experiments using the dataset show that LLMs trained sequentially struggle significantly with catastrophic forgetting and generalization to future language, particularly concerning words that have undergone affective valence shifts. This positions Chronoberg as a necessary benchmark for temporal generalization, Continual Learning, and evaluating the temporal robustness of AI systems.",
"strengths": "1. The introduction of Chronoberg is a major contribution, filling a critical gap in LLM training and evaluation. It provides a large-scale (2.7B tokens) and long-horizon (250 years) temporally structured corpus of full-length texts. \n2. The systematic construction of temporally calibrated VAD lexicons for $\\sim$335,000 words is highly innovative.",
"weaknesses": "1. The temporal structure, which is the foundation of the dataset, relies on an inferred publication date from external sources (OpenLibrary), while validated, this process has an unavoidable Mean Absolute Error (MAE) of $\\pm 3.05$ years.\n\n2. The methodology for constructing the temporal VAD lexicons relies on selecting the Top-K nearest neighbors ($K=20$). However, the main text does not include a systematic ablation study demonstrating how the choice of $K$ (or the effect of the 50-year interval size) impacts the final, crucial results, such as the LLM perplexity gap between valence-stable and valence-shifting test sets.",
"questions": "1. How did you get the \"positive\" or \"negative\" in Table 1? It is still not clear to me. \n\n2. The paper uses Word2Vec and CADE alignment. Could the authors justify why this combination was chosen over more contemporary diachronic methods?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-07T13:19:25",
"modification_date": "2025-11-12T12:35:38",
"review_url": "https://openreview.net/forum?id=3PECod4ieb¬eId=Ek69LlwRXx",
"license": "CC BY 4.0"
},
{
"id": "PpsPsIZb4Q",
"forum": "3PECod4ieb",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10946/Reviewer_Vpi9",
"reviewer_name": "Reviewer_Vpi9",
"rating": 8,
"confidence": 3,
"soundness": 3,
"contribution": 4,
"presentation": 4,
"summary": "This paper proposes MetaDiff, a diffusion-based framework for few-shot adaptation of large vision-language models (VLMs). Traditional few-shot fine-tuning approaches—such as adapter tuning or prompt learning—are limited by overfitting and poor uncertainty estimation when data are scarce. MetaDiff reframes few-shot adaptation as a meta-generative prior learning problem: it learns a conditional diffusion model in the weight or latent embedding space of the VLM such that, given a few examples from a novel task, it can sample adapted model states that generalize better.",
"strengths": "1. The idea of using a diffusion process in the parameter or embedding space for meta-adaptation is highly creative. Unlike deterministic meta-learners, MetaDiff explicitly models the distribution over adapted models, allowing uncertainty-aware adaptation. This is both novel and well-motivated theoretically.\n2. The paper provides comprehensive experiments across diverse domains (classification, captioning, retrieval), showing that MetaDiff outperforms fine-tuning, adapter tuning, and standard meta-learning baselines under few-shot constraints. Ablations clearly indicate that the diffusion prior contributes significantly to performance.",
"weaknesses": "1. While the motivation for using diffusion models is strong, the paper does not provide a formal link between the learned generative prior and Bayesian meta-learning or PAC-Bayesian guarantees. A short theoretical justification (e.g., that MetaDiff approximates amortized inference under a hierarchical Bayesian model) would strengthen the contribution.\n2. MetaDiff’s meta-training stage is computationally heavy, involving thousands of diffusion steps across multiple tasks. While test-time sampling can be parallelized, a more detailed cost analysis or discussion on distillation into fewer diffusion steps would be valuable.",
"questions": "1. It would be useful to see how performance varies with the number of diffusion timesteps. Is MetaDiff’s success primarily due to stochastic regularization or due to the learned generative structure?\n2. When sampling multiple adapted models via diffusion, how diverse are their predictions? Are these samples genuinely capturing task uncertainty or just random noise around a single mode?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T17:40:22",
"modification_date": "2025-11-12T12:35:39",
"review_url": "https://openreview.net/forum?id=3PECod4ieb¬eId=PpsPsIZb4Q",
"license": "CC BY 4.0"
},
{
"id": "M7t73bl5Lf",
"forum": "3PECod4ieb",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10946/Reviewer_bh7T",
"reviewer_name": "Reviewer_bh7T",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 1,
"presentation": 2,
"summary": "This paper presents Chronoberg, a large diachronic English corpus (1750–2000, 2.7B tokens) with sentence-level affective annotations and temporally aligned Valence–Arousal–Dominance (VAD) lexicons. The authors detail a pipeline for dating and cleaning Project Gutenberg texts, generate time-specific VAD lexicons via aligned embeddings, and use the resource to test hate-speech detectors and temporal adaptation of LLMs (sequential fine-tuning, EWC, LoRA). Results show severe forgetting and limited handling of historically shifting word valence, highlighting Chronoberg’s potential as a benchmark for temporal robustness and continual learning.",
"strengths": "- Language temporal drift and historical semantics are important for ensuring LLM robustness.\n- The paper presents interesting insights on semantic shift.",
"weaknesses": "- Narrow domain coverage. Chronoberg is built entirely from Project Gutenberg books, spanning 1750–2000. This makes it a valuable literary resource but also limits generalization to modern or conversational language where current LLMs are deployed .\n\n- Unclear LLM evaluation setup. The paper compares several classifiers, including “OpenAI” models, in its hate-speech evaluation tables . However, it does not describe how those models were prompted or whether they were informed about the historical origin of the text. Without that context, it’s hard to interpret what the observed disagreements actually reflect.\n\n- Limited quantitative evaluation of temporal drift. Section 4.1 and the associated tables mostly illustrate qualitative examples of valence shifts and classifier disagreements. There is no formal metric (e.g., correlation or agreement score) quantifying alignment between the temporal VAD lexicons and model outputs .\n\n- Unsurprising claims without measurement. The authors hypothesize that modern LLMs “rely too heavily on surface-level keywords,” but this remains an intuitive explanation rather than an empirically tested one . Quantifying how much this reliance contributes to misclassification would make the finding stronger.\n\n- Vague notion of “contextualization.” The paper claims that temporally fine-tuned models “struggle with contextualization of historical content,” yet does not specify how contextual information (e.g., time metadata or retrieved examples) was provided during inference .\n\n- Ambiguous terminology. Phrases like “dissonance of ~85% between OpenAI and RoBERTa” are reported without a clear definition of what “dissonance” measures (e.g., disagreement rate or correlation gap) . This makes quantitative interpretation difficult.",
"questions": "- How do you expect findings from literary English (Chronoberg) to generalize to modern or conversational domains where current LLMs are applied?\n\n- For the “OpenAI” model evaluations, what exact prompts or instructions were used? Was the model told the historical period or context of the text?\n\n- Have you computed any quantitative metrics (e.g., correlation or agreement) between temporal VAD lexicons and classifier outputs, beyond qualitative examples?\n\n- Can you operationalize “contextualization of historical content” — does it mean awareness of time, lexical shifts, or broader discourse cues?\n\n- What precisely does the reported “∼85% dissonance” between OpenAI and RoBERTa measure? Is it disagreement rate, accuracy gap, or another metric?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T21:06:40",
"modification_date": "2025-11-12T12:35:39",
"review_url": "https://openreview.net/forum?id=3PECod4ieb¬eId=M7t73bl5Lf",
"license": "CC BY 4.0"
},
{
"id": "Kcao4iESMQ",
"forum": "3PECod4ieb",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10946/Reviewer_m7ZD",
"reviewer_name": "Reviewer_m7ZD",
"rating": 2,
"confidence": 5,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper presents a dataset for evaluating temporal changes of words, which is then used to evaluate LLMs. Temporal change detection is a task that has been researched extensively (see SemEval workshops on this topic over the years for example) but its progress has been hindered by the lack of large-scale time-annotated datasets. This paper fill this important gap.",
"strengths": "- The creation and release of a large-scale dataset for evaluating temporal changes of VAD of words is the main strength/contribution of this paper.\n- The paper then goes on to evaluate multiple foundation models on this dataset.",
"weaknesses": "- given the automatic nature of the annotation process, a manual verification (e.g. using a random sample) should be conducted.\n- books are a curated set of texts and might not necessarily express the sentiment held by the broader population. Social media on the other hand would have been a better source for capturing modern VAD of words but cannot be used for historical texts. What is the coverage in terms of the number of authors covered in the corpus? If this number is low then it might not be representing the view point of the general public but an elite and small group of authors. This dataset bias should be investigated further.\n- I am not sure whether all books in this corpus is suitable for this purpose of evaluating diachronic changes. For example, there could be fantasy books which do not reflect a social viewpoint but an imaginary one. Even if a book is written at a particular point in time, it might be handling a historical context that belongs to a different time period (e.g. a book written in 2020 on Edo period of Japan would not be reflecting the modern usage of Japanese language). I am not sure how these complications are handled in this corpus (or the authors are aware of such issues)?\n- Although I appreciate the extensive evaluations conducted in the paper using the CHRONOBERG dataset, those findings will only be valid to the extent of the accuracy of the dataset itself.\n- Moreover, I think this paper is more appropriate for the linguistic resources track at an NLP venue (e.g. LREC or xACL) rather than ICLR.",
"questions": "- Did you conduct any manual (even at a small scale) evaluation of the VAD scores computed using Eq. (2)?\n- What is the coverage in terms of the number of authors covered in the corpus? If this number is low then it might not be representing the view point of the general public but an elite and small group of authors. This dataset bias should be investigated further.\n- In Table 1, can you explain the shift from neg to pos for `infatuation` and `destinty` please?\n- What would the valency shift look like for a word such as `gay`, which is known to be used positively (happiness) in olden times, whereas more neutral? in modern usage?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-20T18:20:50",
"modification_date": "2025-11-12T12:35:39",
"review_url": "https://openreview.net/forum?id=3PECod4ieb¬eId=Kcao4iESMQ",
"license": "CC BY 4.0"
}
] | |
skR7tTT32C | https://openreview.net/forum?id=skR7tTT32C | KITINet: Kinetics Theory Inspired Network Architectures with PDE Simulation Approaches | 3.5 | 2.75 | [
2,
4,
4,
4
] | [
3,
2,
3,
3
] | 4 | [
"Physics Inspired Neural Network",
"Kinetic Theory"
] | Despite the widely recognized success of residual connections in modern neural networks, their design principles remain largely heuristic. This paper introduces KITINet (KInetics Theory Inspired Network), a way that reinterprets feature propagation through the lens of non-equilibrium particle dynamics and partial differential equation (PDE) simulation. We propose a new residual module that models feature updates as the stochastic evolution of a particle system, numerically simulated via a discretized solver for the Boltzmann transport equation (BTE). This formulation mimics particle collisions, enabling additional neuron-wise information propagation via physical interactions. Additionally, we reveal that this mechanism is an implicit regularization approach that induces network parameter condensation during training, where parameters progressively concentrate into a sparse subset of dominant channels. Experiments on large language modeling, image classification, scientific computation, and text classification show consistent improvements over classic network baselines, without additional inference cost. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=skR7tTT32C | 2025-09-15T16:43:38 | 4 | [
{
"id": "HVwTU3UemV",
"forum": "skR7tTT32C",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5656/Reviewer_gGkH",
"reviewer_name": "Reviewer_gGkH",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "This paper considers a modification to the standard residual layer by incorporating a new stochastic connection inspired by molecular kinetics with collision. The residual connection is a simplified version of a Monte-Carlo-based technique used for Boltzmann Transport equations. By replacing the residual connections in many architectures in different domains, the authors claim improved performance. The article also highlights the notion of \"network parameter condensation\" as a metric that is used to claim the improvement.",
"strengths": "The idea is interesting, the notion of network parameter condensation appears to be a new metric to consider. The approach of modifying the residual connection using the Direct Simulation Monte Carlo (DSMC) time-step appears widely applicable to generic tasks using residual connections.",
"weaknesses": "- It is not clear in the manuscript why the \"network parameter condensation\" is a useful measure of performance. \n- The authors say that the design principles are largely heuristic, but appears to simply consider the \"a different residual connections\" in a modular fashion applied to existing architectures. \n- The \"residual connection\" originally is a very simple operation, whereas the proposed new residual connection involves new parameters in the \"residual layer\" $l_\\theta$. This simply makes the terminology confusing because then this layer is no longer a \"residual layer\"? For example, you can add a residual layer on top of their \"DSMC-inspired module\" and calling this new module a residual connection simply causes confusion.\n- It is also not clear to this reviewer what Theorem 4.1 is attempting to achieve, the terms used in the statements were not introduced properly.\n- The improvements against existing models are very modest, and does not appear significant.",
"questions": "- Regarding the notion \"network parameter condensation\": Is there a way to connect the sparsity of the features to the performance of the task? Is this connection valid both for LLM-based tasks and Operator Learning tasks?\n- On the design principles: How is the design principle different than a ODE-based view? In a way the Boltzmann Transport Equation is being evolved in time and the proposed methods can be viewed as an ODE-based approach as well. \n- Regarding Theorem 4.1: What is this assumption that a neural network is \"under thermal equilibrium\" and what does the \"rapid convergence process\" mean here.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-04T00:48:34",
"modification_date": "2025-11-12T11:29:38",
"review_url": "https://openreview.net/forum?id=skR7tTT32C¬eId=HVwTU3UemV",
"license": "CC BY 4.0"
},
{
"id": "MEk3S7Cdsu",
"forum": "skR7tTT32C",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5656/Reviewer_V9j8",
"reviewer_name": "Reviewer_V9j8",
"rating": 4,
"confidence": 2,
"soundness": 2,
"contribution": 3,
"presentation": 2,
"summary": "This paper introduces KITINet, a novel, parameter-free module to replace the standard residual connection. The core idea is to reinterpret feature propagation through the lens of kinetic theory, treating feature channels as \"particles\" (with positions $x$ and velocities $v$). The module simulates the stochastic evolution of this particle system using a \"DSMC-inspired\" discretized solver for the Boltzmann transport equation (BTE). This physics-based collision simulation is active only during training and acts as an implicit regularizer. The authors claim this method induces \"network parameter condensation,\" where parameters consolidate into a sparse, dominant subset of channels. The module is turned off during inference, where it reverts to a standard residual addition, thus incurring zero additional parameters or computational cost at inference time. The authors demonstrate consistent performance improvements across a wide range of tasks.",
"strengths": "- The conceptual framework seems original. Linking residual learning to non-equilibrium particle dynamics and the Boltzmann transport equation is a novel approach.\n- The proposed module is a training-time-only regularizer that is free at deployment. This is a desirable trade-off, as it allows for a more robust and better-generalized model with no extra-test time computation.\n- Testing suite seems to show that the approach is quite general: the experiments span diverse domains including language modeling, image classification and PDE solving. The improvements on the PDE front seem rather substantial.",
"weaknesses": "- Although the paper mentions that KITINet reaches target accuracy \"approximately 20% fewer training steps,\" the actual computational overhead during training is not thoroughly analyzed. Would like to see some more training details and logs. Might be helpful to see more results on the FLOPs / training time per epoch / iteration. If we were to plot out the training FLOPs against the accuracy, can we expect to see the method beating current baselines?\n- The improvements on general-purpose benchmarks like LLMs and image classification seem rather modest. However, I'm not strongly familiar with benchmarks in that domain. On the front of PDEs, the selection of benchmarks could be strengthened. To fully evaluate the method's efficacy against the current state-of-the-art and ensure fair comparisons, it would be beneficial to test it on more recent, standardized benchmarks. The datasets from 'TheWell', for example, would be an excellent candidate for this.\n- The paper does not investigate why KITINet works better in some contexts than others. Does the \"particle collision\" metaphor universally helpful, or is it (as the PDE results suggest) particularly effective for problems that are already governed by diffusion or particle-like dynamics?\n- Does this module benefit all architectures equally? The paper applies it to both FNO and OFormer. A direct comparison is needed. Does the collision mechanism provide more of an advantage to the Transformer's attention mechanism or to the FNO's global convolution?",
"questions": "Please address the weaknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T14:45:34",
"modification_date": "2025-11-12T11:29:38",
"review_url": "https://openreview.net/forum?id=skR7tTT32C¬eId=MEk3S7Cdsu",
"license": "CC BY 4.0"
},
{
"id": "ssW6ey1KGv",
"forum": "skR7tTT32C",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5656/Reviewer_QJdT",
"reviewer_name": "Reviewer_QJdT",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper examines residual learning through the perspective of kinetic theory, non-equilibrium particle dynamics. existing residual modules are designed heuristically, neglecting the rich dynamics of particle interaction or energy exchange. existing dynamical systems perspective models residual networks as discretized odes, ignoring stochastic interaction.\n\nThe paper proposes a new residual module that models feature updates as evolution of a particle system, mimicking collisions and enabling information propagation through physical interaction. The evolution is simulated numerically where each channel is a particle whose interaction is simulated by a discretized PDE solver.\n\nAnalysis shows that the new mechanism is an implicit regularization approach. Experiments in multiple settings including language modeling, image classification, scientific computing show consistent improvement.",
"strengths": "The proposed module is novel with motivating theory, replacing the residual connection operation of addition as particle collision dynamics.\n\nThe writing is of good quality. \n\nExperiments are extensive and show improvement on a number of benchmarks. The experiments range of large language model pretraining to image classification to neural operator learning.\n\nParameter condensation is promoted by the method which may be an explanation of generalization",
"weaknesses": "Motivation for modeling the interaction as particles is not clear.\n\nThere is no discussion of related work in the main paper. There is some discussion in the appendix that should be in the main part.\n\nParts of the paper are confusing and not clearly written (say, lines 158-160). The preliminary section appears to have details not later used.\n\nThe description of the architectural components is also confusing. The description of the simulation in equations 2-5 is opaque and hard to understand.\n\nModeling collisions requires numerical solution of a PDE which may be more expensive.\n\nI am not familiar with parameter condensation. It would be useful to have a description of this phenomenon.",
"questions": "I am missing some part of the motivation. Why would kinetic theory and stochastic particle collision dynamics be a good model for residual connections in neural networks?\n\nFurthermore, in the introduction, what is the relation of physics inspired architectures to residual networks. As far as I can tell these are two unrelated things where physics inspired architectures embed physical principles into architectures which may or may not employ residual modules.\n\nIn line 44-45 it is said that dynamical systems as ODEs fails to account for stochastic, collision driven interaction. Is this kind of interaction present in residual networks? Have the authors (or prior work) shown this behavior in NNs that the ODE perspective fails to account for?\n\nThis leads to the question of whether this work is an explanation of residual modules? or a new physics-inspired architecture based on particle collisions?\n\nI understand that to model collisions the Boltzmann transport equation needs to be solved. How efficient is this? How does this effect the time complexity of the models in relation to the original models in the experiments?\n\nTable 4 compares with FNO in terms of root MSE. It is difficult to judge the raw MSE scores. What are the results in normalized error? \n\nline 61: What is the motivation for physically-grounded architectures for language or text classification?\nline 65: ‘heated phenomenon’\nIn line 86 I do not understand what batchnorm has to do with kinetic theory or particle dynamics",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T02:08:24",
"modification_date": "2025-11-12T11:29:39",
"review_url": "https://openreview.net/forum?id=skR7tTT32C¬eId=ssW6ey1KGv",
"license": "CC BY 4.0"
},
{
"id": "dWnpv3EIeN",
"forum": "skR7tTT32C",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5656/Reviewer_mfJA",
"reviewer_name": "Reviewer_mfJA",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The authors propose a new way of combining the residual stream and processed activations in residual layers. Inspired by Neural ODEs, they consider the residual stream as position $x$ and the processed activations as velocity $v$ for which they reshape the hidden feature dimension into a particle simulation that they simulate for one time step. This mechanism is only active during training. It falls back to simple residual-stream addition during inference. The authors argue that their method improves parameter condensation since particle collisions cause spatial distributions to be more dispersed. Since residual layers are common in most modern deep learning architectures, their extension can be monkey-patched into various networks for different tasks. The authors demonstrate this with GPT-like next-token prediction, ResNet-style image classification, PDE operator learning, and BERT-like text classification, for which they report improvements over the baseline.",
"strengths": "- The authors make an interesting connection to kinetic gas theory. \n- They use a wide range of experiments and can consistently demonstrate an improvement over the chosen baseline (even if it might not hold in the strictest statistical setting).\n- The modification they suggest seems plug-and-play and should allow for easy integration into existing architectures. While it does influence training time, at inference the network behaves as if it has a classical residual connection.",
"weaknesses": "- The authors motivate their method by claiming a certain \"feature-space distance\". It is unclear to me how they define this distance and why they end up with the value $L \\approx 3.29$. Moreover, I encourage the authors to add a reference for the valid region of the BTE.\n- Some results are not statistically significant. For example, in Table 1 most values when comparing between GPT2 and KITTI-GPT2 are within +-1.96 standard error range. Considering that there are no computational advantages (The authors note \"KITTINet consistently reaches target accuracy levels with approximately 20% fewer training steps than the baseline, which covers the overheads of its additional computation during training.\"), there seems to be little incentive to use KITTINet over the baseline method\n- The entire section 2 seems a bit unclear to me. The connection between kinetic gas theory and residual layers requires a stronger motivation. Moreover, this section breaks with the natural flow of the paper since it foreshadows quantities that are only introduced later (e.g., in line 85)",
"questions": "* Please fix the citations in the references to datasets, e.g., in line 308\n* The integral of Eq. (7) is missing the integrand $dt$ \n* In lines 401, the authors perform an ablation on varying the number of `collision_heads` for the FNO. Since FNOs typically use hidden feature sizes of $O(100)$, I wonder how the authors distribute this into $2^{10} \\approx 1000$ heads?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T01:05:36",
"modification_date": "2025-11-12T11:29:39",
"review_url": "https://openreview.net/forum?id=skR7tTT32C¬eId=dWnpv3EIeN",
"license": "CC BY 4.0"
}
] | |
BS0PhDOaJ7 | https://openreview.net/forum?id=BS0PhDOaJ7 | CUARewardBench: Benchmark for Evaluating Reward Models on Computer-using Agent Trajectories | 4.5 | 3.5 | [
4,
4,
2,
8
] | [
3,
3,
4,
4
] | 4 | [
"Computer-using Agent; Reward Models; Benchmark"
] | Computer-using agents (CUAs) enable task completion through natural interaction with operating systems and software interfaces. While script-based verifiers are widely adopted for evaluation, they suffer from limited scalability and inability to provide step-wise assessment. Reward models offer promising alternatives, but their effectiveness on CUA evaluation remains largely underexplored. To address this gap, we present CUARewardBench, comprising four key contributions: (1) First-ever Comprehensive CUA Reward Benchmark:* We introduce the first benchmark for evaluating both outcome reward models (ORM) and process reward models (PRM) on CUA tasks, enabling systematic assessment across trajectory-level and step-level evaluation. (2) Diverse and Representative Dataset: Our benchmark encompasses trajectories spanning 10 software categories and collected from 7 agent architectures with varying performance levels (25.9%-50.8% success rates), ensuring comprehensive coverage of CUA decision-making patterns. (3) Expert-Validated Annotations: All trajectories undergo rigorous expert annotation through carefully designed trajectory selection criteria, key step identification protocols, and systematic annotation standards. Expert annotations are validated through comprehensive cross-checking and quality control processes to ensure benchmark reliability and practical applicability. (4) Comprehensive Analysis and Insights: Through extensive experiments across 7 vision-language models and 3 prompt templates, we reveal critical limitations of current CUA RMs, including insufficient visual reasoning capabilities, knowledge deficiencies, and the superiority of general VLMs over specialized CUA models for reward evaluation.
Our findings provide practical guidance for future CUA RM development and highlight potentials for advancing evaluation of CUA models. | CUARewardBench: A Benchmark for Evaluating Reward Models on Computer-using Agent | datasets and benchmarks | https://openreview.net/pdf?id=BS0PhDOaJ7 | 2025-09-18T20:51:56 | 4 | [
{
"id": "sD519JfvYn",
"forum": "BS0PhDOaJ7",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11472/Reviewer_R33o",
"reviewer_name": "Reviewer_R33o",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces CUARewardBench, the first benchmark for evaluating both outcome reward models (ORM) and process reward models (PRM) on computer-using agents (CUAs). The benchmark contains 272 trajectories and 346 step annotations across 10 software categories, with data collected from 7 CUA architectures. The authors provide expert-validated annotations and analyze the performance of 7 vision-language models (VLMs) under 3 prompt templates. Key findings include that visual reasoning capability dominates reward model performance, general VLMs outperform CUA-specialized models, and current reward models still struggle on both trajectory- and step-level evaluations.",
"strengths": "+ The paper addresses an important and timely gap in evaluating reward models for computer-using agents (CUAs).\n\n+ The benchmark is well designed, covering both trajectory-level (ORM) and step-level (PRM) evaluation, and includes multiple software environments and agent types.\n\n+ The empirical analysis is thorough and reveals clear weaknesses in visual reasoning and consistency across current reward models.\n\n+ Writing and experimental presentation are clear and professional, enabling reproducibility.",
"weaknesses": "+ While the benchmark clearly reveals important deficiencies in existing reward models, the paper does not explore or propose mechanisms to mitigate these issues. For instance, after identifying reasoning and visual comprehension failures, no ablation or refinement strategy is suggested to improve such capabilities.\n\n+ The analysis remains largely diagnostic rather than prescriptive—it characterizes the current landscape effectively but stops short of translating the insights into new evaluation metrics or model improvements.\n\n+ The results, though informative, focus primarily on descriptive comparison (precision/recall across models and prompts). The paper would benefit from further discussion on how these empirical findings could guide the design of future reward modeling frameworks or hybrid evaluators combining visual and symbolic verification.",
"questions": "+ Since the benchmark conclusions rely heavily on observed precision and recall differences, how do the authors ensure that prompt sensitivity or label ambiguity did not confound these measurements?\n\n+ The analysis points out that current RMs lack visual reasoning consistency, but the paper does not test any ablation or control study. Would it be possible to quantify which visual features or interface elements contribute most to model failures?\n\n+ The study emphasizes the diagnostic role of CUARewardBench but not prescriptive validation. Could the authors provide additional empirical evidence (e.g., cross-correlation or regression analysis) showing that benchmark metrics meaningfully distinguish stronger from weaker reward models?\n\n+ The paper mentions that screenshots alone limit observability for verifying outcomes. How do the authors control for unobservable success conditions when labeling trajectory outcomes to ensure reliability of the ground-truth annotations?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T18:52:49",
"modification_date": "2025-11-12T12:42:56",
"review_url": "https://openreview.net/forum?id=BS0PhDOaJ7¬eId=sD519JfvYn",
"license": "CC BY 4.0"
},
{
"id": "ORpquSNHkJ",
"forum": "BS0PhDOaJ7",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11472/Reviewer_UTH3",
"reviewer_name": "Reviewer_UTH3",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces CUARewardBench, the first benchmark designed to evaluate reward models for computer-using agents operating in desktop environments. The authors construct a dataset of 272 trajectories, collected from 7 different agent architectures performing tasks across 10 diverse software categories. The benchmark facilitates a dual evaluation of both Outcome Reward Models for trajectory-level success and Process Reward Models for step-level correctness. Through extensive experiments on 7 vision-language models, the paper reveals several key findings, notably that general-purpose VLMs outperform specialized CUA models in the reward modeling task, and that strong visual reasoning capability is the most critical factor for success.",
"strengths": "Novel and Important Problem: This is the first work to systematically benchmark reward models for CUAs in diverse desktop environments. It successfully bridges the gap between agent execution environments like OSWorld and web-focused RM benchmarks like AgentRewardBench, addressing a critical need in the community.\n\nComprehensive Experimental Design: The evaluation is thorough, covering 7 different VLMs that span a spectrum of architectures (general-purpose, reasoning-focused, CUA-specialized) and testing them with 3 distinct prompt templates. This design allows for a robust and insightful analysis of the key factors that influence RM performance.",
"weaknesses": "Insufficient Scale and Statistical Generalizability: The primary weakness of this work is the small size of the dataset. With only 272 trajectories distributed across 10 software categories and 7 agent models, the data per condition is extremely sparse. For some categories, such as Thunderbird, there are only 16 trajectories in total. This small sample size makes it difficult to draw robust, generalizable conclusions and raises serious questions about whether the observed performance differences between models are statistically significant. The absence of any statistical significance testing is a major oversight for a paper introducing a new benchmark.\n\nUnverifiable Annotation Reliability: The paper's core contribution is its expert-annotated dataset, yet its reliability is not empirically demonstrated. The protocol for selecting \"key\" steps for PRM annotation is defined with subjective terms like \"as large as possible,\" which is not a reproducible scientific standard. More critically, the paper fails to report Inter-Annotator Agreement (IAA), the standard method for validating the consistency and reliability of human-generated labels. Without IAA, the quality of the ground truth remains unknown, which fundamentally undermines the validity of the entire benchmark and the conclusions drawn from it.",
"questions": "On Scale: Given the small number of trajectories per software category, could the authors perform statistical significance tests (e.g., bootstrapping or permutation tests) to validate that the performance differences reported between models are not simply due to chance? How do the authors reason about the generalizability of their conclusions from this limited dataset?\n\nOn Annotation Protocol: Could the authors provide a more concrete, operationalized definition for identifying \"key\" good/bad actions to improve reproducibility? Crucially, was any form of Inter-Annotator Agreement (IAA) calculated during the annotation process? If so, what were the scores? If not, how can the community be confident in the reliability of the benchmark's ground-truth labels?\n\nOn PRM Scope: The exclusion of redundant actions seems to significantly narrow the scope of PRM evaluation, limiting it to error detection rather than a holistic assessment of step quality that includes efficiency. Could the authors elaborate on the reasoning behind this decision and discuss how this limitation impacts the benchmark's utility for training agents via reinforcement learning, where rewarding efficiency is paramount?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T11:34:44",
"modification_date": "2025-11-12T12:42:56",
"review_url": "https://openreview.net/forum?id=BS0PhDOaJ7¬eId=ORpquSNHkJ",
"license": "CC BY 4.0"
},
{
"id": "FcXMhxhBsg",
"forum": "BS0PhDOaJ7",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11472/Reviewer_ABxH",
"reviewer_name": "Reviewer_ABxH",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 1,
"presentation": 1,
"summary": "This paper proposes CUARewardBench, a benchmark for evaluating reward models (RMs) on computer-using agents (CUAs), aiming to address the limitations of script-based verifiers in CUA evaluation. The benchmark covers both outcome reward models (ORM) and process reward models (PRM), includes trajectories from 10 software categories and 7 agent architectures, and relies on expert annotations for evaluation.",
"strengths": "⦁\tAs the first benchmark specifically targeting CUA reward models, it fills the gap of dedicated evaluation tools for desktop software-oriented CUA RMs.\n⦁\tThe expert annotation process incorporates cross-checking and quality control mechanisms, ensuring basic reliability of the benchmark’s annotation data for subsequent CUA RM evaluation.",
"weaknesses": "⦁\tLimited innovation with partial similarities to existing work: Core framework is similar to the existing ICML work 'Boosting Virtual Agent Learning and Reasoning: A Step-Wise, Multi-Dimensional, and Generalist Reward Model with Benchmark'—both construct process reward model benchmarks to address traditional outcome-based evaluation limitations. CUARewardBench only adjusts the scenario to desktop CUAs; it uses expert annotation instead of the ICML paper’s automatic annotation (MCTS-P) to reduce costs, may leading to annotation inefficiency.\n⦁\tPoor and awful figures and citation presentation: Extremely few visualizations—only one ambiguous and unaesthetic figure. More clear visualizations would facilitate readers' understanding. Additionally, the paper appears to exclusively use \\citet citation format, causing method names and authors to be conflated (e.g., \"OSWorld Xie et al. (2024)\"), which creates significant reading difficulties.\n⦁\tLimited and shallow experimental design: Evaluates few VLMs (mostly Qwen2.5VL series) and 3 prompt templates, missing comparisons with mainstream CUA models and powerful closed-source models (e.g., GPT-5, Claude 3.7) with strong agent capabilities.\n⦁\tUnjustified trajectory selection thresholds: Excludes trajectories with <1 or >7 successful agent configurations to control difficulty, but provides no rationale for this threshold (e.g., why 8+ successes are \"too easy\"). Also lacks analysis of how the threshold impacts the benchmark’s ability to distinguish RM performance.",
"questions": "⦁\tExplanation for experimental results: The experimental analysis only focuses on overall precision/recall, without in-depth exploration of performance differences across software categories (e.g., why Thunderbird has the lowest successful trajectory count but no analysis of underlying reasons) or model parameter scale vs. performance correlation (e.g., Qwen2.5VL-72B underperforming 32B but no in-depth technical explanation).\n⦁\tLack of annotation quality metrics: The paper claims to have \"rigorous expert annotation\" but does not provide key metrics such as inter-annotator agreement (e.g., Cohen’s kappa coefficient). Please explain whether the annotation quality meets the standards of top conference benchmarks, and provide specific data to prove the reliability of the annotation.\n⦁\tMore exploration of low-resource scenarios: The paper’s dataset only includes 272 trajectories, and it is highly recommended to explore the performance of RMs in low-resource scenarios (e.g., few-shot fine-tuning with limited trajectories). This is crucial for the practical application of the benchmark but is not mentioned at all.\n⦁\tExplanation for incomplete cross-software analysis: The paper covers 10 software categories, but the experimental results only focus on 5 categories (VS Code/GIMP/etc.) and relegate the rest to supplementary materials without analysis. Please explain why the performance differences across software categories (e.g., Thunderbird has the lowest successful trajectory count) are not discussed, and how these differences affect the benchmark’s generalization ability.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-25T19:01:54",
"modification_date": "2025-11-12T12:42:57",
"review_url": "https://openreview.net/forum?id=BS0PhDOaJ7&noteId=FcXMhxhBsg",
"license": "CC BY 4.0"
},
{
"id": "9FS5AxppHb",
"forum": "BS0PhDOaJ7",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission11472/Reviewer_sj9V",
"reviewer_name": "Reviewer_sj9V",
"rating": 8,
"confidence": 4,
"soundness": 3,
"contribution": 4,
"presentation": 4,
"summary": "This paper introduces the first benchmark designed to evaluate **reward models** for computer-use agents. It systematically annotates, categorizes, and analyzes the common failure cases of existing reward models when applied to complex, multi-step computer-interaction tasks. The benchmark provides a much-needed standardized evaluation setup for this emerging subfield of embodied and tool-using language models, offering metrics, data splits, and error analyses that can serve as a foundation for future work. \n\nSection 3.4, which studies the effects of prompt variations, and Section 4, which presents a detailed error analysis, are particularly strong and insightful contributions that deepen understanding of model behavior.\n\nIn sum, my judgement is that this is a good and needed paper and likely to become a reference point in the evaluation of reward models for computer use agents. The empirical analysis is thoughtful, the benchmark is well-motivated, and the contributions are meaningful. Addressing the statistical and presentation issues can further solidify its impact.",
"strengths": "Overall, this work is very good. I like it.\n\n1. **Originality and significance.** \n This is the first known benchmark focusing specifically on reward models for computer-use agents. The contribution is timely, as the community increasingly explores agents capable of manipulating real user interfaces or software environments.\n\n2. **Quality and depth of analysis.** \n The benchmark is carefully annotated and exhibits thorough evaluation of multiple open models. The inclusion of rich failure categorizations and detailed qualitative analyses demonstrates deep engagement with the data rather than superficial benchmarking. \n Section 4’s analysis of error modes helps readers understand why models fail. **I am quite surprised that there's no error coming from hallucination**, by the way. Is this not a problem at all here?\n\n3. **Clarity and interpretability.** \n The paper is overall well-organized and easy to follow. \n The visualizations help illustrate the impacts of prompt phrasing and context-window limitations.\n\n4. **Practical utility.** \n The benchmark will likely become a valuable diagnostic tool for researchers developing reward models for computer use agentic LMs. \n Its release could catalyze a line of work around better reward alignment for interactive agents.\n Section 3.4, which explores how prompt design affects reward-model outputs, is particularly instructive to me.",
"weaknesses": "This is a good paper and there are not many weaknesses. All weaknesses below are easy to solve.\n\n1. **Lack of closed-source baselines.** \n The evaluation currently excludes leading proprietary models. Including at least one closed-source model via API would strengthen claims of comprehensiveness and help contextualize open-weight performance. This is probably not that expensive, but would improve benchmark credibility and adoption.\n\n2. **Absence of statistical uncertainty.** \n The paper reports single-run results without confidence intervals or variance estimates. It is difficult to judge statistical significance. The authors should specify the number of evaluation runs per model and report confidence intervals in the tables. Otherwise, it's hard to interpret the numbers.\n\n3. **Overstated claims about scale versus training quality.** \n The paper claims (l. 350) that “training quality becomes more critical than parameter scale beyond 7B,” yet the largest model, GLM-4.5V-106B, also achieves the best performance overall, so the data are suggesting that **having more parameters does help a lot, doesn't it?** This is an example of the numerous claims made in section 3.3 that are simply too strong in tone. These are more like (valuable and useful) intuitions and not concrete conclusions supported by data. The authors should either provide additional evidence—such as matched-scale ablations or training-data comparisons—or moderate their claim to reflect the limits of their observations. My take is that the paper as it stands right now is strong enough, so just rewriting section 3.3 to make the tone more moderate is enough. You can do more experiments and perform serious hypothesis testing about the claims in 3.3 in the future.\n\n4. **Language and style issues.** \n There are various grammatical and typographical errors throughout. Please proofread and fix all of them. \n Examples include:\n - Line 266: extra space after “selection.” \n - Line 349: the comma should be a semicolon, connecting two complete clauses. \n - Quotation mark usage is wrong; the authors make the beginners' LaTeX mistake of never typing any left quotation marks. Please check how to write left quotation marks in LaTeX, or just use some package like `csquotes` that fixes this for you. \n\n5. **Benchmark reporting issues.**\n This is a good benchmark. Please follow the guidelines in figure 4 of Zhu et al 2025 *Establishing Best Practices for Building Rigorous Agentic Benchmarks* to properly and systematically describe and report this benchmark. Right now, from looking at their requirements, the current benchmark and the supplementary materials fall short in several aspects:\n - There is no README document explaining how to use the code.\n - A large portion of the code is in Chinese (not just the annotations, but also the descriptions of errors), so non-Chinese speakers can find it hard to understand or use the benchmark. \n - There is no analysis of potential flaws.\n - There's no reporting or calculation of statistical significance. \n - etc.\n\n The work is already high-quality, and it can be made more impactful and cleaner if the authors follow the best practices for benchmark reporting. \n\nI'm okay if you don't solve problem 1. Please solve problems 2, 3, 4, and 5",
"questions": "1. **Generalization to closed-source models.** \n Can the authors test whether the benchmark’s reward-model failure taxonomy also applies to closed models (e.g., GPT-4.1 or Claude-3.5)? \n Would such models show qualitatively similar failure types or novel ones?\n\n2. **Confidence interval reporting.** \n How many evaluation runs were used to compute the reported averages? \n Could the authors include confidence intervals to quantify robustness?\n\n3. **Claim moderation.** \n Could the authors clarify the empirical basis for claiming that training quality outweighs model size? \n Is there a controlled experiment or only observational correlation?\n\n4. **Is there no hallucination?**\n I am just surprised that the error analysis does not contain any of those. Do reward models never hallucinate?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-20T13:25:53",
"modification_date": "2025-11-12T12:42:57",
"review_url": "https://openreview.net/forum?id=BS0PhDOaJ7&noteId=9FS5AxppHb",
"license": "CC BY 4.0"
}
] |
LNilmuJmF0 | https://openreview.net/forum?id=LNilmuJmF0 | HEART-ViT: HESSIAN-GUIDED EFFICIENT DYNAMIC ATTENTION AND TOKEN PRUNING IN VISION TRANSFORMERS | 2.5 | 4.75 | [
4,
2,
2,
2
] | [
4,
5,
5,
5
] | 4 | [
"Vision Transformers (ViTs)",
"Dynamic pruning",
"Hessian-based sensitivity",
"Token and head pruning",
"Edge-efficient inference"
] | Vision Transformers (ViTs) deliver state-of-the-art accuracy but their quadratic attention cost and redundant computations severely hinder deployment on latency- and resource-constrained platforms. Existing pruning approaches treat either tokens or heads in isolation, relying on heuristics or first-order signals, which often sacrifice accuracy or fail to generalize across inputs. We introduce HEART-ViT, a Hessian-guided efficient dynamic attention and token pruning framework for vision transformers, which, to the best of our knowledge, is the first unified, second-order, input-adaptive framework for ViT optimization. HEART-ViT estimates curvature-weighted sensitivities of both tokens and attention heads using efficient Hessian–vector products, enabling principled pruning decisions under explicit loss budgets. This dual-view sensitivity reveals an important structural insight: token pruning dominates computational savings, while head pruning provides fine-grained redundancy removal, and their combination achieves a superior trade-off. On ImageNet-100 and ImageNet-1K with ViT-B/16 and DeiT-B/16, HEART-ViT achieves up to 49.4% FLOPs reduction, 36% lower latency, and 46% higher throughput, while consistently matching or even surpassing baseline accuracy after fine-tuning (e.g., +4.7% recovery at 40% token pruning). Beyond theoretical benchmarks, we deploy HEART-ViT on different edge devices, like AGX Orin, demonstrating that our reductions in FLOPs and latency translate directly into real-world gains in inference speed and energy efficiency. HEART-ViT bridges the gap between theory and practice, delivering the first unified, curvature-driven pruning framework that is both accuracy-preserving and edge-efficient. | optimization | https://openreview.net/pdf?id=LNilmuJmF0 | 2025-09-20T09:46:54 | 4 | [
{
"id": "d3WNWJJmSh",
"forum": "LNilmuJmF0",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission22556/Reviewer_8ymd",
"reviewer_name": "Reviewer_8ymd",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes a Hessian-guided pruning framework for ViTs, which jointly prunes both tokens and attention heads based on second-order sensitivity. The second-order information is gathered using the Hessian–vector product (HVP) method, for each token and head. \nEmpirical evaluations on ImageNet with ViT and DeiT show up to 49% FLOPs reduction and 36–43% latency improvements. Hardware benchmarking also shows realistic efficiency improvements.",
"strengths": "The second-order pruning criterion is well motivated.\nThe authors also made the effort to test on a real-world edge device to evaluate hardware efficiency.",
"weaknesses": "Although the authors mention the complexity analysis of the HVP calculation, the level of empirical overhead is still concerning, as it requires 2 backward passes.\nThe paper also lacks discussion of its differences from, and advances over, existing Hessian-based ViT pruning papers, e.g. LPViT [1], NViT [2].",
"questions": "1. There are too few comparisons with SOTA methods, especially missing more recent papers that also adopt Hessians in pruning, e.g. LPViT [1] (ECCV'24), NViT [2] (CVPR'23).\n2. Although the authors provided hardware benchmarking results, I wonder how they compare with SOTA methods.\n\nMinor problems:\nFigure 1 has some visual clarity issues.\n\n\nReference:\n[1] Xu, Kaixin, et al. \"Lpvit: Low-power semi-structured pruning for vision transformers.\" European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2024.\n[2] Yang, Huanrui, et al. \"Global vision transformer pruning with hessian-aware saliency.\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-09T15:14:34",
"modification_date": "2025-11-12T18:11:12",
"review_url": "https://openreview.net/forum?id=LNilmuJmF0&noteId=d3WNWJJmSh",
"license": "CC BY 4.0"
},
{
"id": "niPLzC0XZU",
"forum": "LNilmuJmF0",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission22556/Reviewer_oymU",
"reviewer_name": "Reviewer_oymU",
"rating": 2,
"confidence": 5,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "This paper introduces a unified token and head pruning algorithm for ViTs called HEART-ViT. HEART-ViT measures the loss perturbation caused by removing certain tokens or attention heads to identify redundant components. It also proposes a simplified formulation based on the second-order Taylor expansion of the converged model for an efficient measurement implementation. Experiments on different ViT backbones demonstrate its effectiveness.",
"strengths": "* The unified form of token and head pruning for ViT with mathematical proof is novel and interesting.\n\n* The visualizations effectively illustrate the pruning behavior and provide valuable interpretability.\n\n* Broad studies on model performance with different pruning ratios.",
"weaknesses": "* No comparisons are provided with state-of-the-art efficient ViT methods, particularly recent token or structural pruning and merging approaches [1,2,3,4].\n\n* The experiments are limited to ViT-B and DeiT-B, which share nearly identical architectures, leaving the method’s generalization to other ViT variants unverified.\n\n* The comparison with the backbone in Appendix Tables 3&4 is unfair, as the baseline is not finetuned for the same 100 epochs used by the proposed method.\n\n* The contribution appears marginal compared to AdaViT [5], which also unifies token, head, and block pruning while employing a simpler and more practical estimation strategy.\n\n[1] Bolya, Daniel, et al. \"Token merging: Your vit but faster.\" ICLR, 2023.\n\n[2] Yang, Huanrui, et al. \"Global vision transformer pruning with hessian-aware saliency.\" CVPR, 2023.\n\n[3] Kim, Minchul, et al. \"Token fusion: Bridging the gap between token pruning and token merging.\" WACV, 2024.\n\n[4] Wang, Hongjie, Bhishma Dedhia, and Niraj K. Jha. \"Zero-TPrune: Zero-shot token pruning through leveraging of the attention graph in pre-trained transformers.\" CVPR, 2024.\n\n[5] Meng, Lingchen, et al. \"Adavit: Adaptive vision transformers for efficient image recognition.\" CVPR, 2022.",
"questions": "* How do you explain the performance difference between symmetric and asymmetric pruning? Asymmetric pruning seems to be more flexible but introduces a worse trade-off between efficiency and performance in most cases. Does the ViT architecture prefer a certain type of pruning?\n\n* How do you explain the significant performance degradation after pruning, pre-finetuning? Notably, many state-of-the-art methods are finetuning-free.\n\n* Why do you specifically choose the second-order Taylor expansion rather than a more accurate k-th order Taylor expansion?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T10:21:10",
"modification_date": "2025-11-12T18:11:12",
"review_url": "https://openreview.net/forum?id=LNilmuJmF0&noteId=niPLzC0XZU",
"license": "CC BY 4.0"
},
{
"id": "UB72mR4lKr",
"forum": "LNilmuJmF0",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission22556/Reviewer_grLo",
"reviewer_name": "Reviewer_grLo",
"rating": 2,
"confidence": 5,
"soundness": 4,
"contribution": 2,
"presentation": 2,
"summary": "This paper proposes HEART-ViT, a joint token and head pruning method for efficient ViTs. HEART-ViT employs a second-order Taylor expansion on a converged ViT model to approximate the loss change induced by pruning. The resulting scoring criterion is simplified to rely solely on the curvature term, enabling efficient computation. The proposed method is evaluated on ViT-B and DeiT-B backbones, demonstrating good performance and efficiency.",
"strengths": "1. The mathematical motivation is solid. The proposed scoring strategy is sound.\n\n2. The experiments on backbone model are comprehensive.\n\n3. This work includes evaluation on edge devices, further demonstrating its practical effectiveness.",
"weaknesses": "1. __Insufficient empirical results:__ This paper lacks a bunch of experiments:\n\n * Comparisons to prior token and/or structural pruning methods;\n * Performance on different ViT architectures (e.g., Swin Transformer) and sizes (e.g., ViT/DeiT-Small)\n * Performance on downstream tasks after pruning\n\n Although the analytical studies on latency, pruning effectiveness, and layerwise similarity are informative, the absence of these fundamental experiments significantly undermines the empirical strength and overall significance of the work.\n\n2. __Massive finetuning demands:__ HEART-ViT requires 100-epoch finetuning to be effective, which is an essential drawback compared to state-of-the-art token pruning/merging methods that usually require a few finetune epochs or none at all. This substantially reduces the practical efficiency and applicability of HEART-ViT.\n\n3. __Poor presentation:__ The presentation quality is low:\n\n * Citation format does not align with the ICLR format\n * Overlapping between Figure 6 and Figure 5's caption\n * Overlapping between some formulae and surrounding texts\n * Some formulae have equation numbers while some others do not\n * Figure 1 is never referred to\n * Inconsistent reference formats (btw, ToMe should be published on ICLR but labelled CVPR)",
"questions": "Please refer to the weaknesses above",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T21:07:33",
"modification_date": "2025-11-12T18:11:13",
"review_url": "https://openreview.net/forum?id=LNilmuJmF0&noteId=UB72mR4lKr",
"license": "CC BY 4.0"
},
{
"id": "K8HuIBoW1m",
"forum": "LNilmuJmF0",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission22556/Reviewer_hz49",
"reviewer_name": "Reviewer_hz49",
"rating": 2,
"confidence": 5,
"soundness": 3,
"contribution": 2,
"presentation": 1,
"summary": "This paper proposes a Hessian-guided ViT pruning technique, which is applied to both tokens and attention heads. The biggest strength of the paper is its mathematical rigor, as the motivation, formulation and solution of the problem all follow a clear mathematical process, providing strong insight into the method's working mechanism. However, there are fundamental flaws in the paper's experiments, owing to the lack of comprehensive comparisons, and the presentation is poor, which renders the paper below the standard for an ICLR paper.",
"strengths": "- Very clear math behind the proposed method, all the way through the motivation, the formulation and the solution \n- Unified token and attention head pruning provides a novel angle for ViT pruning \n- Experimental results are competitive against vanilla baselines \n- The method supports both finetuned and non-finetuned modes",
"weaknesses": "- Overall, the quality of presentation is not great – this includes inconsistent spacing (e.g. Ln226,231), lack of proper explanation of symbols (ln178), and poor quality of figures (e.g. fig 1,2,3). The authors are encouraged to work vigorously on improving the presentation to match top-tier conference standards. \n- The biggest weakness of the paper is the experiment section. It generally lacks the comprehensive evaluations required of a solid study for a new efficient ViT method. The range of SotA baseline methods is lacking, as is the variety in model sizes, the diversity of downstream tasks (OD, seg, etc.), and the coverage of datasets (only ImageNet).",
"questions": "- I recommend replacing Table 1 with something that provides more information useful to the audience from this area. Table 1 is too verbose and contains many items that are very technique-specific.\n- Figure 1 and Figure 2 are good for teasers but poor choices for presenting the main results - tables with plain numbers are easier to read for the main results. Comparisons are clearer with tables too.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T19:07:17",
"modification_date": "2025-11-12T18:11:15",
"review_url": "https://openreview.net/forum?id=LNilmuJmF0&noteId=K8HuIBoW1m",
"license": "CC BY 4.0"
}
] | |
W8bKDPf1Ko | https://openreview.net/forum?id=W8bKDPf1Ko | Graph-Theoretic Intrinsic Reward: Guiding RL with Effective Resistance | 4.666667 | 2.666667 | [
8,
4,
2
] | [
2,
2,
4
] | 3 | [
"Reinforcement Learning",
"Intrinsic Motivation",
"Goal Conditioned RL",
"Effective Resistance"
] | Exploration of dynamic environments with sparse rewards is a significant challenge in Reinforcement Learning, often leading to inefficient exploration and brittle policies. To address this, we introduce a novel graph-based intrinsic reward using Effective Resistance, a metric from spectral graph theory. This reward formulation guides the agent to seek configurations that are directly correlated to successful goal reaching states. We provide theoretical guarantees, proving that our method not only learns a robust policy but also achieves faster convergence by serving as a variance reduction baseline to the standard discounted reward formulation. We perform extensive empirical analysis across several challenging environments to demonstrate that our approach significantly outperforms state-of-the-art baselines, demonstrating improvements of up to 59% in success rate, 56% in timesteps taken to reach the goal, and 4 times more accumulated reward. We augment all of the supporting lemmas and theoretically motivated hyperparameter choices with corresponding experiments. | We propose an intrinsic reward formulation using the notion of Effective Resistance based on spectral graph theory, for learning robust policies in sparse environments. | reinforcement learning | https://openreview.net/pdf?id=W8bKDPf1Ko | 2025-09-18T16:01:38 | 3 | [
{
"id": "21sJGJg7pw",
"forum": "W8bKDPf1Ko",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10814/Reviewer_FfQG",
"reviewer_name": "Reviewer_FfQG",
"rating": 8,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes a reward shaping methods orignated from spectral graph theory to tackle reward sparsity in reinforcement learning setting. By modifying its instrinsic reward, the agent needs to maintain a graph of its surrounding environment first through its sensors (e.g. LIDARs) every timestep, then calculates its effective resistance between itself and the goal, which will be used as part of its reward construction. They provide theoretical guarantees to prove they can learn a robust policy and also achieves faster convergence. Through experiments, they show their methods can beat state of the art baselines.",
"strengths": "- Good originality: abstracts objects into nodes in a graph, and designs rewards based on the constructed graph. A novel reward-shaping method that encourages the agent to get closer to the goal.\n- Quality: theoretically sound. Assumptions are set up completely and with proper citations. Proves that decreasing the effective resistance can also maintain connectivity on the graph. The paper also shows the advantage of its algorithm empirically with extensive experiments.",
"weaknesses": "- This method can only be applied to specific domains, for example the robotics navigation tasks in the paper, in which the robots have a suite of sensors that are assumed to be noise-free, have good localization capability, and can categorize or recognize objects as nodes in the map. \n- I would appreciate more explanation in the main text of how the graph is constructed and of what reducing effective resistance brings, but overall the paper is easy to follow.",
"questions": "- To increase the impact of this paper, can this method be applied to a more general MDP setting, e.g. continuous or tabular MDPs? How would you construct the graph in such MDPs? Specifically, what are the nodes/edges/weights in these settings?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-07T05:33:10",
"modification_date": "2025-11-12T12:33:40",
"review_url": "https://openreview.net/forum?id=W8bKDPf1Ko&noteId=21sJGJg7pw",
"license": "CC BY 4.0"
},
{
"id": "T0kMV7acov",
"forum": "W8bKDPf1Ko",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10814/Reviewer_Pwps",
"reviewer_name": "Reviewer_Pwps",
"rating": 4,
"confidence": 2,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces a novel intrinsic reward mechanism for reinforcement learning in sparse reward environments based on effective resistance from spectral graph theory. The key idea is to construct a time-evolving graph from the agent's observations (specifically LiDAR data) where nodes represent the agent, goal, and environmental objects, and edges encode proximity relationships. The intrinsic reward is defined as the negative change in effective resistance between the agent and goal nodes, encouraging the agent to seek configurations that improve structural accessibility to the goal.",
"strengths": "1. The application of effective resistance from spectral graph theory to RL is creative and theoretically grounded. \n2. The paper provides a comprehensive theoretical analysis with multiple lemmas and a main theorem.",
"weaknesses": "1. The method is specifically designed for environments where meaningful graph construction from observations is possible. The reliance on LiDAR data limits applicability to certain domains, and it's unclear how this would extend to other observation modalities or higher-dimensional state spaces.\n2. Algorithm 1 involves many design choices (clustering threshold τ, connectivity patterns, central node selection) that appear to require careful tuning. The sensitivity analysis (Section A.9) shows some robustness to τ, but the overall complexity raises concerns about generalizability.\n3. While the paper compares against several baselines, most are relatively older methods. More recent state-of-the-art intrinsic motivation methods could strengthen the comparison.\n4. While Section A.10 provides some runtime analysis, the computational cost of repeated graph construction and effective resistance computation could be prohibitive in real-time applications or larger graphs.",
"questions": "1. How does the method scale to environments with many more objects or higher-dimensional observation spaces? What is the computational complexity as a function of graph size?\n2. How does the method perform in environments with dense rewards? Does the intrinsic reward provide benefits or potentially interfere with learning in such settings?\n3. Beyond τ, how sensitive is the method to other hyperparameters like α and β? The theoretical guidelines (Corollary 1) provide bounds, but practical selection seems to require empirical validation.\n4. How does this approach compare to more recent intrinsic motivation methods like NGU, RND, or ICM on the same environments?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-03T15:26:00",
"modification_date": "2025-11-12T12:33:41",
"review_url": "https://openreview.net/forum?id=W8bKDPf1Ko&noteId=T0kMV7acov",
"license": "CC BY 4.0"
},
{
"id": "uQXtf0gQ2v",
"forum": "W8bKDPf1Ko",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission10814/Reviewer_nmWg",
"reviewer_name": "Reviewer_nmWg",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "This paper proposes a new metric termed effective resistance that can be used as an intrinsic motivation reward for goal-conditioned RL tasks. The paper hypothesizes that this reward is better than using one proportional to Euclidean distance to the goal. The paper also performs some analysis to show that the proposed intrinsic reward is almost unbiased with respect to the extrinsic reward, and that using the proposed intrinsic reward leads to improved sample complexity.\n\nThe empirical evaluation on a suite of environments called Safety-Gymnasium seeks to show that the theoretical guarantees proposed above hold, and that the proposed technique outperforms some reasonable baselines.",
"strengths": "Overall the idea is somewhat novel and might be a useful addition to the literature.\n* The proposed technique is interesting. Using ideas that consider graph flows to estimate how easy it is to navigate from one point to another has been seen to be useful in the past [1], and this modern revival of the technique could present some benefits.\n* Dynamic graph updates are very useful, allowing an environment that changes over time.\n* Theoretical analysis seems to be sound and gives some confidence that the proposed approach will not converge to a suboptimal solution and will help with some local exploration.\n* The evaluation methodology seems robust and statistically sound, and I especially appreciate the 1000 episode evaluations (5 training seeds and 200 episode evals per seed), which should capture variance in the policy and variance in the training.\n* The baselines that are used in the evaluation seem mostly reasonable. There are caveats here that I expand on in weaknesses.\n\n\n\n## References\n[1] Şimşek, Ö., Wolfe, A.P. and Barto, A.G., 2005, August. Identifying useful subgoals in reinforcement learning by local graph partitioning. In Proceedings of the 22nd international conference on Machine learning (pp. 816-823).",
"weaknesses": "There are some issues that keep this paper from being of a quality that I can confidently recommend for acceptance:\n* This paper is trying to suggest a new intrinsic motivation based on effective resistance in a graph, but seems to be specific to robotics-type problems with objects present in the environment causing navigational or manipulational difficulties. If specific to robotics problems, the setup and writing should clarify this and try to position the paper accordingly so that it will attract the same community of researchers. It also does not compare to other GCRL methods for exploration in the literature [1, 2, 3], or more up-to-date GCRL benchmarks like OGBench [4].\n* Part of the issue here is that the problem setup is specific to continuous state spaces and a 2-dimensional action space (Section 3.1).\n* My understanding is that the intrinsic reward is only calculated when the reward enters the agent's field of view. This seems like it will help mostly with local exploration instead of more generally.\n* The baselines proposed do not seem to be exploiting the structure of GCRL-based problems from what I can tell. One of [1] or [3] would be a great addition to show how effective the proposed intrinsic reward is in GCRL problems specifically.\n\n\n## References\n[1] Grace Liu, Michael Tang, and Benjamin Eysenbach. A single goal is all you need: Skills and exploration emerge from contrastive RL without rewards, demonstrations, or subgoals. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=xCkgX4Xfu0\n\n[2] Ma, Y.J., Yan, J., Jayaraman, D. and Bastani, O., 2022. How Far I'll Go: Offline Goal-Conditioned Reinforcement Learning via $ f $-Advantage Regression. arXiv preprint arXiv:2206.03023.\n\n[3] Durugkar, I., Tec, M., Niekum, S. and Stone, P., 2021. Adversarial intrinsic motivation for reinforcement learning. Advances in Neural Information Processing Systems, 34, pp.8622-8636.\n\n[4] Park, S., Frans, K., Eysenbach, B. and Levine, S., 2024. Ogbench: Benchmarking offline goal-conditioned rl. arXiv preprint arXiv:2410.20092.",
"questions": "* Could the authors clarify how more general exploration will be handled under the given scheme? Perhaps contrast with [3] from the weakness section, since that is an approach that handles more general exploration?\n* Could approaches like Quasimetric learning [1] also learn some metric like effective resistance?\n\n## References\n[1] Wang, T., Torralba, A., Isola, P. and Zhang, A., 2023, July. Optimal goal-reaching reinforcement learning via quasimetric learning. In International Conference on Machine Learning (pp. 36411-36430). PMLR.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-22T20:05:38",
"modification_date": "2025-11-12T12:33:41",
"review_url": "https://openreview.net/forum?id=W8bKDPf1Ko¬eId=uQXtf0gQ2v",
"license": "CC BY 4.0"
}
] |
kHhMs642rR | https://openreview.net/forum?id=kHhMs642rR | Evaluating SAE interpretability without generating explanations | 3.5 | 3.75 | [
2,
4,
4,
4
] | [
3,
5,
3,
4
] | 4 | [
"interpretability",
"explanation",
"sae",
"transcoder"
] | Sparse autoencoders (SAEs) and transcoders have become important tools for machine learning interpretability. However, measuring the quality of the features they uncover remains challenging, and there is no consensus in the community about which benchmarks to use. Most evaluation procedures start by producing a single-sentence explanation for each feature in the sparse coder. These explanations are then evaluated based on how well they enable an LLM to predict the activation of a feature in new contexts. This method makes it difficult to disentangle the explanation generation and evaluation process from the actual interpretability of the features in the sparse coder. In this work, we adapt existing methods to assess the interpretability of sparse coders, with the advantage that they do not require generating natural language explanations as an intermediate step. This enables a more direct and potentially standardized assessment of interpretability. Furthermore, we compare the scores produced by our interpretability metrics with human evaluations across similar tasks and varying setups, offering suggestions for the community on improving the evaluation of these techniques. | Instead of evaluating whether explanations match activating contexts, we evaluate how similar activating contexts are to one another. | interpretability and explainable AI | https://openreview.net/pdf?id=kHhMs642rR | 2025-09-19T01:24:48 | 4 | [
{
"id": "UhdnwsXEao",
"forum": "kHhMs642rR",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13305/Reviewer_VHFv",
"reviewer_name": "Reviewer_VHFv",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 1,
"presentation": 2,
"summary": "The authors introduce two novel methods for evaluating interpretability of SAE latents without the requirement for generative methods. The authors construct an intruder detection task as the first approach and compare performance of human and LLM detectors, showing a high correlation albeit at a small sample size of 56 latents. Example embedding scoring, on the other hand, measures proximity of positive and negative sentence samples in the latent space. Example embedding scoring reports a moderate correlation with human scores, which could be caused by the fact that sentence embedders might poorly reflect individual token relevances.\n\nThe problem of evaluating SAE interpretability is an important one, and the proposed methods have merit. The high correlation between human and LLMs on the intruder detection task is promising. The presentation of the paper, however could be improved upon. I find more extensive experiments lacking, such as adding more latents in the intruder detection task or performing subsequent analses on what causes the low correlation between human and example embedding scores — is the embedder quality a factor driving this gap? Furthermore, it is not clear to me where the data used for positive and negative SAE samples is sourced from, which is a crucial detail. An interesting question would also be also how do the methods fare across different data domains of source text?",
"strengths": "- The authors study an interesting problem of interpreting SAE latents without the use of generative LMs\n- The authors propose two methods, one of which exhibits a high correlation with human annotators",
"weaknesses": "- The presentation of the paper would benefit from improvement\n- Some important experimental details are missing: where is the data used as positive/negative samples for SAE latents sourced from? \n- Experimental limitations: increasing the number of latents, or analysing the cause of poor correlation between the example embedding method and human scoring would be interesting.",
"questions": "See above",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T18:57:44",
"modification_date": "2025-11-12T13:06:09",
"review_url": "https://openreview.net/forum?id=kHhMs642rR¬eId=UhdnwsXEao",
"license": "CC BY 4.0"
},
{
"id": "iAzccAF55A",
"forum": "kHhMs642rR",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13305/Reviewer_Bo6j",
"reviewer_name": "Reviewer_Bo6j",
"rating": 4,
"confidence": 5,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "This paper proposes two explanation-free methods for evaluating the interpretability of sparse autoencoders (SAEs): intruder detection and example embedding scoring. The authors test the proposed methods on SmolLM2 135M across 56 latents and find a strong correlation between human and LLM evaluators in intruder detection. The intruder detection method successfully bypasses natural language explanation generation while maintaining interpretability assessment; however, the embedding method shows limited correlation with human judgments. Higher activation deciles prove more interpretable across both methods, and the evaluation reveals that most SAE latents demonstrate interpretability without requiring explicit verbal descriptions.",
"strengths": "1. Figure 1 effectively illustrates the conceptual shift from explanation-based to activation-based evaluation, and the writing is generally accessible.\n\n2. The paper introduces evaluation methods that bypass natural language explanation generation, addressing a significant limitation in existing sparse autoencoders' interpretability assessment. This is a significant contribution that streamlines the evaluation pipeline and minimizes the impact of confounding factors.\n\n3. Example embedding scoring offers a computationally lightweight alternative using small embedding models, making large-scale SAE evaluation more feasible.",
"weaknesses": "1. The evaluation focuses exclusively on SmolLM2 135M across only 4 layers with 56 total latents. This narrow scope raises questions about generalizability to larger models, different architectures, or other SAE training approaches beyond TopK.\n\n2. Example embedding scores do not correlate as strongly with human intruder scores (r = 0.48), and AUROC are close to random, which limits the practical utility of the proposed scoring method. \n\n3. The paper lacks a discussion of failure modes or which types of latents are poorly captured by the proposed methods. \n\n4. Limited investigation of why LLMs consistently underestimate interpretability compared to humans",
"questions": "1. Why does the example embedding score not correlate as strongly with human intruder scores as it does with LLM intruder scores? Authors say example embedding scores tend to underestimate the interpretability of latents due to the small size of the embedding. Does the correlation improve when the embedding size is increased?\n\n\n2. How sensitive are the intruder detection results to the highlighting strategy? Have you tested alternative approaches, such as not highlighting any tokens or using attention-based highlighting to focus on the most relevant tokens?\n\n3. What is the rationale for randomly selecting a single decile and sampling all activating examples from it?\n\n4. Proposed SAEs use TopK activation with $k=32$. How do results change with different $k$ values, different activation functions, or different sparsity levels?\n\n5. Can you provide examples of latents that score poorly on intruder detection but might still be considered interpretable by other measures?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T13:15:02",
"modification_date": "2025-11-12T13:06:10",
"review_url": "https://openreview.net/forum?id=kHhMs642rR¬eId=iAzccAF55A",
"license": "CC BY 4.0"
},
{
"id": "BAEsBaGlad",
"forum": "kHhMs642rR",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13305/Reviewer_6pUU",
"reviewer_name": "Reviewer_6pUU",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "This paper proposes a novel evaluation approach to assess the interpretability of sparse autoencoders (SAEs). Instead of generating natural language explanations as an intermediate step, the authors introduce two explanation-free methods: intruder detection and example embedding scoring. The paper demonstrates that direct assessment of latent interpretability is viable and correlates well with human judgments when using an LLM-as-a-judge approach.",
"strengths": "- This paper demonstrates the feasibility of the proposed method intruder detection, achieving strong correlation between human and LLM assessments.\n- The methods used in this paper are straightforward and easy to understand.\n- The paper examines interpretability across different activation deciles, providing nuanced insights into how interpretability varies with activation strength.",
"weaknesses": "- However, the performance of embedding score is not promising. The AUROC scores are barely above random (0.5-0.7), and correlation with human judgments is weak (r=0.48). This undermines one of the paper's main contributions, as this method was proposed as a fast, scalable alternative.\n- Lack of direct performance comparison with traditional interpretability evaluation methods.\n- Results are presented on very small set of latents (56) and small models. So we don't know if this holds when dataset scales up.\n- The bottleneck of evaluation seems to be extensive data collecting process, why avoiding natural language explanation is a critical problem?\n- Despite claiming to simplify evaluation, intruder detection still relies heavily on LLM queries, contradicting the motivation of reducing computational costs.",
"questions": "- Line 34, the coefficients is not non-negative necessarily, if this refers to activation value of latents. Please verify with examples from Neuronpedia.\n- Line 44 - 46, the conclusion on natural language explanations introduced additional hyper parameters and prompts can be expanded further. It’s not very clear how they introduce additional parameters, which might refer to simulations. But it’s important to explain this clearly at the beginning of the paper. I feel the authors should spend more time polishing the introduction section to stand out their motivation and make it accessible. The last paragraph of the introduction is hard to follow. The introduction to their own methods is not clear at all.\n- Heatmap in Figure 2 is not very illustrative. What does \"All latents which have less than that 0.2 accuracy are considered non interpretable, and different degrees of interpretability are assigned to the other 4 bins of 0.2.\" mean? Would overlapping histogram be more illustrative here?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T04:42:58",
"modification_date": "2025-11-12T13:06:10",
"review_url": "https://openreview.net/forum?id=kHhMs642rR¬eId=BAEsBaGlad",
"license": "CC BY 4.0"
},
{
"id": "5Q918WG5rU",
"forum": "kHhMs642rR",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission13305/Reviewer_KYqC",
"reviewer_name": "Reviewer_KYqC",
"rating": 4,
"confidence": 4,
"soundness": 4,
"contribution": 3,
"presentation": 2,
"summary": "The paper introduces a new method for evaluating Sparse Autoencoders (SAEs). It argues that explaining latent directions in the SAE’s latent space through short textual descriptions is suboptimal for two main reasons. First, this approach complicates the evaluation process by adding hyperparameters and prompt-related variability. Second, a latent factor can be interpretable even if it cannot be concisely expressed in words.\n\nAs an alternative, the paper proposes an intruder detection framework. For each latent, four activating examples and one non-activating “intruder” example are sampled. Interpretability is then assessed based on how effectively humans, large language models (LLMs), and an embedding-based algorithm can identify the intruder. This approach emphasizes intuitive recognition of the pattern a latent does or does not encode.\n\nThe results show strong agreement between human and LLM performance in intruder detection, indicating that LLMs may be well-suited for automating SAE interpretability evaluation.",
"strengths": "Proposes new method for evaluating SAEs that is more permissive in the types of interpretability it allows for (interpretable, but not easily expressible in words). \n\nThe proposed method looks very promising, with LLM accuracies tracking those of humans.\n\nMultiple approaches toward the task are evaluated (LLM vs. embedding).",
"weaknesses": "I think the presentation could be significantly improved.\nOn line 155, it is explained that 'We randomly select one of the ten deciles of the activation distribution, then sample all of our activating examples from the same decile.', but this is then not at all motivated. I found it quite difficult to understand why we would want to do this, and it wasn't until re-reading some of the results section for the second time that I understand the point. Specifically, the paragraph on lines 295-304 goes into the different ways we might (not) assign meaning to the activation strengths of the latent. I think that goes a long way towards motivating why we care about deciles, but it appears in the results section, rather than in an earlier section, where I would expect it.",
"questions": "Looking at the interpretability of distributions of activations, the LLM's results are very far from symmetric: \nit is much better at detecting a low-activating intruder among highly-activating samples than vice versa.\nThis is something I could not have predicted, do you have any intuition for why this is? And, do you have any data on how symmetrical humans are?\n\nHave you compared your interpretability scores to the explanation-centered approach you contrast to in the introduction? Can you find examples of latents which would be deemed uninterpretable according to other methods, but are considered interpretable under your framework?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T22:01:13",
"modification_date": "2025-11-12T13:06:10",
"review_url": "https://openreview.net/forum?id=kHhMs642rR¬eId=5Q918WG5rU",
"license": "CC BY 4.0"
}
] |
gyXfJUcR72 | https://openreview.net/forum?id=gyXfJUcR72 | Memory-Efficient LLM Pretraining via Minimalist Optimizer Design | 5.5 | 3.75 | [
6,
4,
4,
8
] | [
4,
5,
3,
3
] | 4 | [
"LLM Training",
"Optimizer",
"Efficiency"
] | Training large language models (LLMs) typically relies on adaptive optimizers such as Adam, which introduce extra operations and require significantly more memory to maintain first- and second-order moments than SGD. While recent works such as GaLore, Fira and APOLLO have proposed state-compressed variants to reduce memory consumption, a fundamental question remains: *What are the minimum modifications to plain SGD needed to match state-of-the-art pretraining performance?* We systematically investigate this question using a bottom-up approach, and identify two simple yet highly (memory- and compute-) efficient techniques: (1) column-wise gradient normalization (normalizing the gradient along the output dimension), which boosts SGD performance without momentum; and (2) applying first-order momentum only to the output layer, where gradient variance is highest. Combining these two techniques leads to SCALE (Stochastic Column-normAlized Last-layer momEntum), a simple optimizer for memory-efficient pretraining. Across multiple LLaMA models (60M–1B), SCALE matches or exceeds the performance of Adam while using only 35–45\% of the total memory. It also consistently outperforms memory-efficient optimizers such as GaLore, Fira and APOLLO, making it a strong candidate for large-scale pretraining under memory constraints. For the LLaMA 7B model, SCALE outperforms the state-of-the-art memory-efficient methods APOLLO and Muon, in terms of both perplexity and memory consumption. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=gyXfJUcR72 | 2025-09-18T23:20:00 | 4 | [
{
"id": "fYToZ1qWt1",
"forum": "gyXfJUcR72",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12587/Reviewer_Cu6E",
"reviewer_name": "Reviewer_Cu6E",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper investigates the minimal modifications required to make SGD competitive with adaptive optimizers like Adam for large-scale LLM pretraining. Through a systematic ablation, the authors identify two key components: (i) column-wise gradient normalization and (ii) first-order momentum applied only to the last layer, and then integrate them into a new optimizer, SCALE (Stochastic Column-normalized Last-layer momEntum). SCALE matches or exceeds Adam-level performance while using only 35–45% of the memory across LLaMA models ranging from 60M to 7B parameters.",
"strengths": "- The paper addresses a well-defined and practically important question: identifying the minimal components required for memory-efficient yet high-performing LLM training.\n\n- The set of baselines covered is comprehensive, and the systematic investigation of essential design elements is solid and convincing.\n\n- SCALE demonstrates an outstanding memory–perplexity trade-off, effectively establishing a new Pareto frontier in optimizer efficiency.",
"weaknesses": "Since the proposed method mainly combines known techniques (normalization and partial momentum) rather than introducing new algorithmic concepts, it would strengthen the paper to provide a deeper analysis of the normalization component. For instance, the poor performance of row-wise normalization appears to stem from the LM head layer. If the last layer is excluded, it would be interesting to investigate whether different layers exhibit specific preferences for normalization granularity, such as normalizing along the smaller or larger dimension of the gradient tensor. Additionally, exploring block-wise normalization (e.g., aligned with attention heads?) or developing a variance-based, principled normalization criterion could yield more general insights. Such a deeper analysis would make the work more convincing and theoretically insightful.\n\nI am still somewhat unclear about why the combination of SGD, normalization, and partial momentum can achieve performance comparable to Adam. Could the authors provide deeper insights or theoretical intuition into this behavior? Is the observed effectiveness related to properties of the data distribution or gradient statistics?",
"questions": "Please refer to weakness",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T14:29:57",
"modification_date": "2025-11-12T12:57:15",
"review_url": "https://openreview.net/forum?id=gyXfJUcR72¬eId=fYToZ1qWt1",
"license": "CC BY 4.0"
},
{
"id": "UBGVDiG5Hl",
"forum": "gyXfJUcR72",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12587/Reviewer_75sN",
"reviewer_name": "Reviewer_75sN",
"rating": 4,
"confidence": 5,
"soundness": 3,
"contribution": 3,
"presentation": 4,
"summary": "This paper proposes SCALE (Stochastic Column-normalized Last-layer Momentum) — a minimalist, memory-efficient optimizer for LLM pretraining. Instead of modifying Adam or introducing compression subspaces like GaLore, Fira, or APOLLO, the authors take a bottom-up approach to identify the minimal ingredients necessary for high-performance pretraining under tight memory budgets. They find that (i) column-wise gradient normalization and (ii) applying first-order momentum only to the last layer are sufficient to achieve Adam-level performance while consuming only ~35–45% of the memory.\nExtensive experiments on LLaMA models (60M–7B) show that SCALE matches or surpasses Adam and state-of-the-art memory-efficient optimizers (APOLLO, Muon, SWAN) in perplexity–memory trade-offs.",
"strengths": "* **Clarity and thoroughness**: The methodology is clearly articulated,and easy to understand.\n* **Memory saving**: SCALE is lightweight, simple to implement, and achieves Adam-like performance at one-third the memory cost, even beat the SOTA APOLLO.\n* **Originality**: The paper clearly present how to ablate normalization and momentum mechanisms to find the essential ingredients that make adaptive optimizers effective. This is good since whether AdamW is good for high-dim LLM optimziation should be challenged.",
"weaknesses": "I have one major concern, which is the generality of the techniques in the paper:\n* **Potential overfitting to a single model family.**\nThe main design choices (column-wise normalization and last-layer momentum) are validated only on LLaMA-style architectures. It remains unclear whether these techniques generalize to other architectures. This is important as the paper's contribution is building a minimal optimizer via ablation; how the conclusion can be extended would be the main concern or flaw of this paper. This is my main reason to give a negative score now, but I am happy to see more results to show the generality of the techniques.",
"questions": "* It remains unclear whether these techniques generalize to other architectures.\n* Lack of fine-tuning or SFT experiments. The results focus exclusively on pretraining. Adding SFT experioments would be more convincing.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T13:45:33",
"modification_date": "2025-11-12T12:57:15",
"review_url": "https://openreview.net/forum?id=gyXfJUcR72¬eId=UBGVDiG5Hl",
"license": "CC BY 4.0"
},
{
"id": "GA1M6DQJCM",
"forum": "gyXfJUcR72",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12587/Reviewer_agWr",
"reviewer_name": "Reviewer_agWr",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This paper studies modifications of gradient descent to enable faster training for large language models. The analyze the importance of gradient normalizations and momentums, and how training is affected. Thereby, they introduce this new training scheme called SCALE, which includes column-wise gradient descent normalization, along with first-order momentum only to the final layer of the LLM. Using this method, it has been empirically shown that SCALE performs well, often better and more memory efficient than Adam optimizer on models with a large number of parameters.",
"strengths": "- The paper is easy to understand, the results are neatly written and presented.\n- There is an extensive comparison of several previously known optimization techniques along with the proposed ones.\n- The algorithm for SCALE is quite simple, yet it outperforms known optimization techniques.\n- The analysis of the ideas are also displayed well.",
"weaknesses": "- From my understanding, the ideas are not very novel as most of the techniques have been used before.\n- I think the proposed method is not very general. They chose to include momentum only for the last LLM layer since observed variance is high in the last layer. However, this might not always be the case, and a complete optimization scheme needs to be more adaptive to general architectures.\n- It will look good to write the final term for updating \\theta_t after equation 5.\n- Please define the norms ||.||_{a -> b}.\n- The notation is somewhat confusing, for instance, in line 383 of algorithm 1, g^t_l is computed and it's hard to tell where it's used since it doesn't appear anywhere else in the algorithm.\n- The proof of theorem 2.1 is hard to follow since it contains a series of equations. It will be helpful to give a larger overview of the proof and highlighting the steps to prove them, and the purpose of the lemmas. \n- Also, please mention an overview of the proofs of the lemmas, and how the inequalities follow from each other, since they are quite long and tedious to verify.",
"questions": "- Is variance of stochastic gradient computed over batches or individual training data?\n- Could you discuss the weaknesses above, especially giving proof intuitions?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T21:17:44",
"modification_date": "2025-11-12T12:57:16",
"review_url": "https://openreview.net/forum?id=gyXfJUcR72¬eId=GA1M6DQJCM",
"license": "CC BY 4.0"
},
{
"id": "wq8eaxE4np",
"forum": "gyXfJUcR72",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission12587/Reviewer_CaWc",
"reviewer_name": "Reviewer_CaWc",
"rating": 8,
"confidence": 3,
"soundness": 4,
"contribution": 3,
"presentation": 4,
"summary": "This paper tackles the problem of reducing GPU memory footprint in LLM pretraining by simplifying the optimizer. Standard adaptive optimizers like Adam require storing first and second moment statistics for each parameter which triples the memory requirement for \"persistent\" (i.e. non activation or gradient) tensors. The authors pose a central question: what are the minimum modifications to SGD needed to achieve SOTA pretraining performance? They pursue a bottom-up, minimalist design, systematically testing fundamental components (gradient normalization and momentum) to bridge the gap between SGD and Adam with minimal memory cost.\n\nthe paper identifies two key techniques that dramatically improve SGD: \n1. Column-wise gradient normalization (normalize gradients along the output dimension of each layer), which boosts training effectiveness with a simple closed-form scaling and no extra memory use\n2. First-order momentum applied only to the last layer, where gradient variance is highest, to stabilize and accelerate learning with negligible memory overhead. Combining these yields a new optimizer called SCALE, which uses roughly the same memory as vanilla SGD. Notably, SCALE achieves performance on par with Adam and other SOTA optimizers while using a fraction of the memory. For example, on a 1B-parameter LLaMA model, SCALE reaches similar final perplexity to Adam (and Muon) while using only 35–52% of the memory that Adam/Muon require. Compared to other recent memory-efficient optimizers (e.g. GaLore, Fira, Apollo), SCALE attains better perplexity with only about 59% of their memory cost on a 1B model.\n\nIn summary, the paper’s primary contributions are:\n1. Defining a minimalist optimizer design approach for LLM training under memory constraints, and executing a principled study to find the smallest necessary improvements to SGD\n2. 
Introducing SCALE, a simple two-component optimizer (column-normalized gradients + last-layer momentum) that is memory-efficient and performant\n3. Empirical validation across multiple LLM scales (60M, 130M, 350M, 1B, and a partial 7B) showing SCALE matches or exceeds Adam’s performance while drastically reducing memory usage\n4. Theoretical insight into why these minimal modifications suffice: the authors prove that momentum yields the most benefit on layers with high gradient variance, justifying concentrating momentum in the last layer. They also analyze different normalization schemes to explain why column-wise normalization is especially effective for stabilizing training",
"strengths": "1. Thorough experimental approach: the authors didn’t jump straight to proposing an algorithm. Instead, they earned the design of SCALE through systematic exploration. They ran extensive ablations to dissect what matters: for example, testing four different normalization strategies across multiple model sizes (60M -> 350M) and demonstrating that while all improve upon vanilla SGD, only column-wise or singular-value normalization come close to bridging the gap. Similarly, they evaluated momentum placement by trying momentum in the last layer and showing it dramatically boosts performance, especially for larger models where _\"both singular-value and column-wise normalization + last-layer momentum are matching Adam’s performance\"_. This methodical approach builds confidence that the chosen techniques are indeed the critical ones. The paper effectively rules out alternatives (e.g., it shows why row-wise normalization fails by tracing it to unstable gradient distributions), which strengthens the validity of their conclusions.\n2. The resulting SCALE optimizer is simple by design. It requires only minimal modifications to existing SGD/Adam implementations to achieves SOTA results. Practitioners can easily adopt SCALE without complex infrastructure changes or hyper-parameter gymnastics. The elegance of using one normalization and one localized momentum is that it’s easy to maintain and understand. The authors also connect this simplicity to existing ideas (showing how it relates to prior works but with fewer components) so the novelty doesn’t come at the cost of obscurity. \n3. The empirical results demonstrate that SCALE offers outstanding memory-perplexity trade-offs at scale. The strength here is the practical efficiency gain: large savings in memory without compromise in model quality. Also the method’s benefits increase with model size. The paper notes that as model size grows, SCALE matches or even exceeds the performance of Adam and others. 
For example, by 350M parameters, “column-wise + last-momentum” outperforms the tuned Adam (Stable-SPAM) baseline in perplexity. At 1B, SCALE ties with the best baseline, and on a 7B partial run, SCALE achieved a lower perplexity than both Muon and Apollo-Mini under the same training length.\n4. Despite introducing an extra normalization step each iteration, SCALE’s design keeps compute costs low. Appendix Table 7 shows that SCALE’s training speed in tokens/sec is essentially the same as Adam and even slightly higher in their setup. This is a significant strength because it means the memory savings do not come at the cost of slower training, which is a common pitfall for some compressed optimizers that spend extra cycles on state transformations.\n5. Beyond raw performance, the work provides insights that strengthen our understanding of optimization. The identification of the last layer’s gradient variance as a key issue is backed by both an empirical plot (variance curves) and a theoretical argument (Theorem 2.1). This explains why momentum should be applied only to the last layer. It suggests the approach is built on solid principles (variance reduction and convergence analysis) rather than just trial-and-error.",
"weaknesses": "While this paper is very strong, there are a few aspects that could be seen as weaknesses or areas for improvement:\n1. The optimizer does not introduce fundamentally new primitives beyond what’s known. One could argue that the contribution is more in the clever combination and insight rather than a brand-new algorithmic concept as it still builds upon gradient normalization and momentum.\n2. The experiments focus exclusively on transformer-based LLM pretraining (on C4 dataset). It remains unclear how well it generalizes to other domains or tasks such as vision models and post-training. This isn’t exactly a weakness of what’s done (the paper already has an impressive array of experiments), but it leaves an open question about generality.",
"questions": "1. The study identifies the last layer as having the highest gradient variance and thus focuses momentum there. Did the authors consider or experiment with applying momentum to the first (embedding) layer as well, since Figure 4a shows the embedding layer had the next-largest variance, while at a significantly smaller scale?\n2. By design, SCALE foregoes second-order moment. While the paper’s results suggest this isn’t needed for the tested scenario, are there cases where neglecting second-order adaptivity might hurt?\n3. Echoing 2nd point in Weaknesses section, while the results for language model pretraining are strong, do the authors have any preliminary observations on how SCALE performs in other settings?\n4. It's mentioned in appendix that SCALE in practice uses Adam (or full adaptivity) for some \"vector\" parts of the model that are small (and possibly critical like embeddings?). Could the authors elaborate on this choice? For example, did they notice any degradation if even those vector parameters were trained with the simplified optimizer? And are the embedding word vectors treated as “matrix” (since they are large) or “vector” in this context?\n5. (Minor) the authors might consider releasing pseudocode or a snippet illustrating the few lines of change needed to implement SCALE in a standard training loop to encourage adoption",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T06:05:31",
"modification_date": "2025-11-12T12:57:16",
"review_url": "https://openreview.net/forum?id=gyXfJUcR72¬eId=wq8eaxE4np",
"license": "CC BY 4.0"
}
] | |
roYDAg8Hve | https://openreview.net/forum?id=roYDAg8Hve | How private is diffusion-based sampling? | 4 | 3.333333 | [
6,
4,
2
] | [
4,
3,
3
] | 3 | [
"differential privacy",
"diffusion-based sampling",
"gaussian differential privacy",
"EDM"
] | Diffusion models have emerged as the foundation of modern generative systems, yet their high memorization capacity raises privacy concerns. While differentially private (DP) training provides formal guarantees, it remains impractical for large-scale diffusion models. In this work, we take a different route by analyzing privacy leakage during the sampling process. We introduce an empirical denoiser that enables tractable computation of per-step sensitivities, allowing each denoising step to be interpreted as a Gaussian mechanism. Building on this perspective, we apply Gaussian Differential Privacy (GDP) to derive tight privacy bounds. Furthermore, we identify critical windows in the denoising trajectory—time steps where salient semantic features emerge—and quantify how privacy loss depends on stopping relative to these windows. Our study provides the first systematic characterization of privacy guarantees in diffusion sampling, offering a principled foundation for designing privacy-preserving generative pipelines beyond DP training. | We provide a systematic privacy analysis of diffusion sampling by modeling each step with Gaussian DP and analyzing their total privacy composition. | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=roYDAg8Hve | 2025-09-13T05:38:07 | 3 | [
{
"id": "K9Ka8N3sG6",
"forum": "roYDAg8Hve",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4582/Reviewer_Ln5k",
"reviewer_name": "Reviewer_Ln5k",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper tackles the important problem of obtaining differential privacy in diffusion models by incorporating Gaussian noise in the sampling step. It identifies windows in the reverse process where semantic features emerge and appropriately utlize this for limiting privacy loss and also introduce an empirical denoiser to enable the computation of per-step sensitivities. Experiments are shown to match neural and empirical denoisers and their performance on sampling, privacy loss with respect to critical windows, and batch size for denoising.",
"strengths": "* Analyzes the privacy loss with respect to the sampling steps and appropriately identifies critical regions where semantic information is generated and deals with it appropriately. The final steps are dealt with by using public datasets. \n* Utilized Gaussian diffusion process to identify the per-step sensitivities of the sampling and utilized central limit theorem to do noise accounting over multiple steps. \n* Experiments are shown to show that the neural and empirical denoisers match each other in the critical windows while diverging in the later steps (which are replaced by public data). The size of the batch size going from full dataset to subsampled is shown and it trades-off privacy with the quality of the generation as the empirical denoiser performance goes down.",
"weaknesses": "(a) It does early stopping which limits the quality of the generated data. It is unclear how much data is needed for the public denoisers. \n(b) It only tackles the continuous version of the diffusion process and would be interesting to see how it compares to discrete diffusion where similar regimes are detected for privacy leakage (*).\n(c) The connection from empirical denoiser to neural denoiser is not rigorous and shown with empirical experiment.\n\n* On the inherent privacy properties of discrete denoising diffusion models. https://arxiv.org/abs/2310.15524",
"questions": "(1) It seems that to obtain high quality denoised samples, we need to have public denoisers: (a) does it need to be from the same domain as the original private dataset? How much data i\ns typically required to train the final steps of the denoising process? \n(2) Does it help to replace the mini-batch sampling with an importance-weighted sampler to enable a good estimate for the denoiser?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-05T23:15:25",
"modification_date": "2025-11-12T11:17:33",
"review_url": "https://openreview.net/forum?id=roYDAg8Hve¬eId=K9Ka8N3sG6",
"license": "CC BY 4.0"
},
{
"id": "xzEAtyENea",
"forum": "roYDAg8Hve",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4582/Reviewer_sn8e",
"reviewer_name": "Reviewer_sn8e",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper investigates privacy leakage during the sampling process of diffusion models, proposing an alternative approach to differentially private (DP) training. The authors introduce an **empirical denoiser** that replaces the intractable neural denoiser, enabling computation of per-step sensitivities in the denoising process. By framing each denoising step as a Gaussian mechanism, they apply **Gaussian Differential Privacy (GDP)** theory to derive tight privacy bounds through composition. The analysis reveals that privacy loss is non-uniform across the sampling trajectory, with critical windows emerging where semantic features materialize. The paper explores both full-batch and mini-batch (subsampled) settings, demonstrating that subsampling provides substantial privacy amplification. Experiments on CIFAR-10 validate the framework and propose a hybrid strategy using public denoisers for non-critical timesteps to preserve privacy while maintaining generation quality.",
"strengths": "- Analyzing privacy at the sampling stage rather than training is an interesting and underexplored angle, particularly relevant for proprietary models where only outputs are accessible.\n- Properly applying GDP composition to multi-step stochastic processes is technically non-trivial, and the subsampling analysis (Section 4) adds value.\n- The identification of non-uniform privacy loss across timesteps and the proposed hybrid strategy (Section 3.4) are potentially useful.\n- Effective use of figures (especially Figures 4-5 showing ε-δ curves alongside generated samples)\n- Well-structured progression from single-step to multi-step analysis",
"weaknesses": "- The fundamental assumption—that the empirical denoiser $\\hat{\\mathbb{E}}[x|x_t; D]$ adequately approximates the neural denoiser $D(x_t, t; \\theta(D))$—is insufficiently validated. While Figure 1 shows cosine similarity convergence at later timesteps, this does not guarantee that privacy bounds derived from the empirical denoiser translate to the neural case. The bias-variance argument (Section 3.3) is heuristic and relies on questionable assumptions (e.g., equal MSE between estimators). **This gap undermines the paper's central claim** that the analysis characterizes real-world privacy leakage.\n- The authors acknowledge (end of Section 4) that GDP CLT requires no single mechanism to dominate, yet their late-stage denoising steps contribute disproportionately due to reduced noise scales. While they suggest early stopping as mitigation, this doesn't resolve the theoretical inconsistency—the CLT-based bounds may not be valid for the full trajectory.\n- There is no empirical validation (e.g., through membership inference attacks on actual neural denoisers) to confirm that the derived privacy bounds reflect real privacy risks. The paper provides mathematical analysis but no evidence that ε ≈ 90 (full-batch, σ_min = 0.2) corresponds to actual vulnerability.",
"questions": "- Can you provide formal bounds on $|\\mathbb{E}[x|x_t] - \\hat{\\mathbb{E}}[x|x_t; D]|$ that would translate to error bounds on the privacy parameters?\n- Have you considered Lipschitz-based sensitivity analysis for neural denoisers, even if looser, to validate the empirical denoiser bounds?\n- Why not test membership inference attacks on neural-denoiser-generated samples and compare measured privacy leakage to your predicted ε values?\n- Can you show examples where the empirical denoiser produces samples violating the predicted ε bound?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T01:57:14",
"modification_date": "2025-11-12T11:17:34",
"review_url": "https://openreview.net/forum?id=roYDAg8Hve¬eId=xzEAtyENea",
"license": "CC BY 4.0"
},
{
"id": "bKCZVt1hOO",
"forum": "roYDAg8Hve",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission4582/Reviewer_KyYx",
"reviewer_name": "Reviewer_KyYx",
"rating": 2,
"confidence": 3,
"soundness": 1,
"contribution": 1,
"presentation": 1,
"summary": "This paper studies privacy leakage during the sampling process of diffusion models. It replaces the intractable neural denoiser with an empirical denoiser by dataset average so that each reverse step can be written as a Gaussian mechanism with sensitivity controlled by clipping. This allows per-step \\mu-GDP accounting and composition across timesteps. The paper also argues that privacy loss concentrates in “critical windows” when semantics emerge; it further suggests a hybrid pipeline that switches to a public (non-private) denoiser outside those windows.",
"strengths": "- The “critical window” framing offers a coherent qualitative perspective on where privacy loss concentrates along the sampling trajectory.\n- A GDP-based accounting is presented, leveraging the exact $\\mu$-composition property for Gaussian mechanisms.",
"weaknesses": "1. **Lack of experimental baselines and validation.**\n - No comparison to other DP methods.\n - Image quality is not measured (the paper explicitly avoids FID/IS), so practical impact on generation quality is unclear.\n\n2. **“Critical window” claims lack an operational detector.**\n - The idea is qualitative only; no quantitative rule (e.g., change-point in $\\mu_t$, SNR threshold, or semantic-classifier stability) is provided or evaluated.\n\n3. **Clarity and notation issues.**\n - Line ~163: What is $\\Delta$?\n - Lines ~185–186: What is $C$? Is this the clip norm used to bound sensitivity?\n - Line ~201: What is $\\mu_{t_i}$ and why is it defined that way?\n - Line 460: “Figure 3” is referenced but not linked/connected.",
"questions": "- Can you provide baselines against DP-trained diffusion (e.g., DP-SGD) at matched privacy budgets, and report FID/IS (or CLIP-based metrics) to quantify utility. \n- Can you provide a quantitative detector for the “critical window” (e.g., a change-point on per-step or cumulative \\mu, an SNR threshold, or a classifier-stability metric), with ablations?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T05:52:09",
"modification_date": "2025-11-12T11:17:34",
"review_url": "https://openreview.net/forum?id=roYDAg8Hve¬eId=bKCZVt1hOO",
"license": "CC BY 4.0"
}
] |
3NLF20wthr | https://openreview.net/forum?id=3NLF20wthr | 3S-Attack: Spatial, Spectral and Semantic Invisible Backdoor Attack Against DNN Models | 4 | 4 | [
6,
4,
4,
2
] | [
4,
4,
3,
5
] | 4 | [
"Artificial intelligence Security",
"Backdoor attack",
"Deep neural network",
"DCT transform"
] | Backdoor attacks implant hidden behaviors into models by poisoning training data or modifying the model directly. These attacks aim to maintain high accuracy on benign inputs while causing misclassification when a specific trigger is present. While existing studies have explored stealthy triggers in spatial and spectral domains, few incorporate the semantic domain. In this paper, we propose 3S-attack, a novel backdoor attack which is stealthy across the spatial, spectral, and semantic domains. The key idea is to exploit the semantic features of benign samples as triggers, using Gradient-weighted Class Activation Mapping (Grad-CAM) and a preliminary model for extraction. Then we embedded the trigger in the spectral domain, followed by pixel-level restrictions in the spatial domain. This process minimizes the distance between poisoned and benign samples, making the attack harder to detect by existing defenses and human inspection. And it exposes a vulnerability at the intersection of robustness and semantic interpretability, revealing that models can be manipulated to act in semantically consistent yet malicious ways. Extensive experiments on various datasets, along with theoretical analysis, demonstrate the stealthiness of 3S-attack and highlight the need for stronger defenses to ensure AI security. | This paper proposes a novel backdoor attack that is stealthy in spatial, spectral, and semantic domains against DNN models | alignment, fairness, safety, privacy, and societal considerations | https://openreview.net/pdf?id=3NLF20wthr | 2025-09-01T23:35:06 | 4 | [
{
"id": "8PoeupbHwC",
"forum": "3NLF20wthr",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission461/Reviewer_2KfJ",
"reviewer_name": "Reviewer_2KfJ",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 1,
"summary": "This paper introduces **3S-Attack**, a novel backdoor attack that achieves stealthiness across **spatial, spectral, and semantic domains**. The core idea is to leverage **Grad-CAM** to extract semantically important regions from benign samples, use **Discrete Cosine Transform (DCT)** to identify and manipulate stable frequency components (those with magnitude differences below a threshold), and apply **pixel-level restrictions** to ensure imperceptibility. The attack operates solely via data poisoning without access to the training process. Experiments on five datasets (MNIST, GTSRB, CIFAR-10/100, Animal-10) demonstrate high attack success rates (ASR), high PSNR/SSIM values, and strong resistance to spatial, spectral, and semantic domain defenses (e.g., STRIP, FTD, Grad-CAM, and Fine-Pruning).",
"strengths": "**Comprehensive multi-domain stealth design**\n\n* The attack unifies spatial, spectral, and semantic concealment, which no prior method achieves simultaneously.\n* This cross-domain formulation exposes new security blind spots where standard single-domain defenses fail (Sec. 4.4).\n* The modular design (Grad-CAM → DCT → pixel restriction) makes the idea easily reproducible and adaptable.\n\n**Novel use of Grad-CAM for trigger extraction**\n\n* Grad-CAM is used not for defense but to identify salient semantic regions to build the trigger (Sec. 3.3; Fig. 2).\n* This inverts interpretability tools into attack mechanisms, revealing a nuanced vulnerability in semantic attention consistency.\n* It also allows transferability without model access — a realistic and underexplored threat model.\n\n**Clear motivation and background integration**\n\n* The introduction logically connects Grad-CAM’s interpretability with attack invisibility (pp. 2–3).\n* Figures 2–3 effectively depict the two key stages of the attack—trigger selection based on Grad-CAM saliency and trigger injection through frequency-domain modification—demonstrating the transition from clean to poisoned samples.\n\n**Defense-resistance demonstration**\n\n* Section 4.4 compares 3S-Attack with BadNets using STRIP, Grad-CAM, and FTD (Fig. 6–7).\n* The near-overlap of saliency maps between benign and poisoned samples supports semantic stealth.\n* These results highlight the inadequacy of current interpretability-based defenses.",
"weaknesses": "**Limited theoretical rigor in frequency-domain reasoning**\n\n* The choice of the “Frequency Selection Threshold” (Sec. 3.3) is heuristic; no explicit formula or derivation links magnitude difference and model sensitivity.\n* There is no analysis of how DCT component manipulation affects semantic embeddings or classification confidence (no equations in Sec. 3.3–3.5).\n\n**Ambiguity in semantic transferability across models**\n\n* The method assumes the saliency from a surrogate model approximates that of the target one (Sec. 3.2).\n* No quantitative measure is given for semantic alignment between surrogate and victim Grad-CAM maps.\n\n**Insufficient ablation and interpretability analysis**\n\n* The contribution of each step (semantic extraction, spectral embedding, pixel restriction) is not isolated.\n* For instance, an ablation showing Grad-CAM vs. random region selection would clarify semantic importance.\n* Similarly, removing pixel restriction would test the necessity of that safeguard (Sec. 3.5).\n\n**Unclear visualization and ambiguous labeling in Figure 4**\n\n* The red circles highlighting artifacts are too thin to be noticeable, and the diagram’s directional flow is ambiguous.\n* The figure should explicitly label the two sides (e.g., Before Restriction → After Restriction) above the arrows and use thicker or more vivid annotations to highlight the changed regions.",
"questions": "**Frequency selection rationale**\n\n* Clarify why frequencies with small DCT magnitude differences between the original and Grad-CAM–weighted images are assumed to represent semantically stable components? \n* A more explicit justification or sensitivity analysis for the threshold δ would strengthen the methodological soundness.\n\n**Semantic transferability across models**\n\n* How consistent are the Grad-CAM saliency maps between the surrogate and victim models? Quantitative evidence (e.g., overlap or similarity metrics) would clarify whether the semantic trigger generalizes across architectures.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-05T09:40:04",
"modification_date": "2025-11-12T10:45:24",
"review_url": "https://openreview.net/forum?id=3NLF20wthr¬eId=8PoeupbHwC",
"license": "CC BY 4.0"
},
{
"id": "JrGtu0TrwJ",
"forum": "3NLF20wthr",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission461/Reviewer_jcso",
"reviewer_name": "Reviewer_jcso",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "The authors propose a novel backdoor attack against DNN models, called 3S-attack.It extracts the semantic features of benign samples with a preliminary model as triggers and embed the trigger in the spectral domain. Finally, it restricts the poisoned images in the spatial domain. Thus, the authors successfully make the attack stealthy across the spatial, spectral, and semantic domains.",
"strengths": "1. Novelty — first black-box semantic-stealth attack: \nDemonstrates the first attack that achieves semantic stealth without access to the victim model or training pipeline, filling a notable gap in threat modeling. \n2. Rigorous multi-axis evaluation: \nEmpirically validates stealth across semantic, spatial, and spectral defenses, showing the attack’s robustness against diverse defense paradigms.",
"weaknesses": "1. Surrogate data dependence: The attack's reliance on a clean surrogate model and its sensitivity to distributional mismatch remain unexplored. Labeling ambiguity: The labeling strategy for poisoned samples is unclear and inconsistent with the stated attack objective.\n2. Incomplete reporting: Benign accuracy (BA) and post-defense results are missing for some datasets, weakening claims of minimal performance drop.\n3. Labeling ambiguity: The labeling strategy for poisoned samples is unclear and inconsistent with the stated attack objective.",
"questions": "1. The attack pipeline is unclear regarding poisoned-label assignment. For each poisoned image, what label is used during poisoning: the original (benign) label or the attacker's target class? If the poisoned images retain original labels, how does this reconcile with the stated goal of misclassifying target images into the attack class? \nPlease clarify the labeling strategy and the exact optimization objective.\n2. Grad-CAM extracts regions the model attends to for its decision (i.e., activations most contributive to the target class). Is it right to interpret these regions as semantic features? Please clarify how you distinguish true semantic attention from spurious cues (e.g., learned background correlations), and provide any analysis that demonstrates the highlighted regions correspond to semantically meaningful object parts rather than dataset artifacts.\n3. 3S-attack requires pretraining a clean surrogate model. What are the requirements on the surrogate’s training data and distribution? If the attacker’s data distribution differs from the victim’s (e.g., attacker has cat images while victim trains on birds), what is the expected impact on attack efficacy? Please quantify sensitivity to distributional mismatch.\n4. The authors state BA drops ≤2% and therefore omit BA in Table 2, yet Figure 8 shows notably low BA on GTSRB and Animal10. Please add the per-dataset BA values to Table 2 and report the post --FP-defense attack success (or other relevant metrics) on additional datasets. This will substantiate the claim of minimal",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T12:15:06",
"modification_date": "2025-11-12T10:45:24",
"review_url": "https://openreview.net/forum?id=3NLF20wthr¬eId=JrGtu0TrwJ",
"license": "CC BY 4.0"
},
{
"id": "eCpg5QSOwC",
"forum": "3NLF20wthr",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission461/Reviewer_MTci",
"reviewer_name": "Reviewer_MTci",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 2,
"summary": "The paper presents an interesting attempt to achieve multi-domain stealth in backdoor attacks, supported by extensive experiments. However, the methodological novelty is moderate, the semantic analysis remains qualitative, and presentation quality can be improved.",
"strengths": "The paper proposes a unified backdoor attack that simultaneously achieves stealthiness across spatial, spectral, and semantic domains, which is an underexplored but meaningful direction.\n\nExperiments are performed on multiple datasets and models, showing the generality of the method.\n\nThe paper provides clear algorithmic descriptions, ablation studies, and defense-resistance analyses, enhancing reproducibility and technical depth.",
"weaknesses": "Although the paper claims semantic invisibility, the evaluation mainly relies on Grad-CAM visualization and AC/NC detection. More quantitative semantic similarity metrics (e.g., feature-space distance, neuron activation overlap) would strengthen the claim.\n\nSome compared methods are relatively dated. Including more recent backdoor attacks would make comparisons more convincing.\n\nThe manuscript is lengthy, with excessive large figures and overlapping content between the main text and appendix, which reduces readability. Condensing and summarizing figures would improve clarity.",
"questions": "Although the paper claims semantic invisibility, the evaluation mainly relies on Grad-CAM visualization and AC/NC detection. More quantitative semantic similarity metrics (e.g., feature-space distance, neuron activation overlap) would strengthen the claim.\n\nSome compared methods are relatively dated. Including more recent backdoor attacks would make comparisons more convincing.\n\nThe manuscript is lengthy, with excessive large figures and overlapping content between the main text and appendix, which reduces readability. Condensing and summarizing figures would improve clarity.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-28T23:04:32",
"modification_date": "2025-11-12T10:45:25",
"review_url": "https://openreview.net/forum?id=3NLF20wthr¬eId=eCpg5QSOwC",
"license": "CC BY 4.0"
},
{
"id": "NgTaS4ZRUk",
"forum": "3NLF20wthr",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission461/Reviewer_6F7W",
"reviewer_name": "Reviewer_6F7W",
"rating": 2,
"confidence": 5,
"soundness": 2,
"contribution": 2,
"presentation": 1,
"summary": "This paper proposes a backdoor attack called 3S-Attack, which aims to achieve stealthiness across three dimensions: spatial, spectral, and semantic. The core idea of the attack is to use the semantic features of benign samples as the trigger. Experimental results show that the attack achieves a high ASR on multiple datasets while maintaining high PSNR and SSIM values.",
"strengths": "1. The paper addresses stealthiness across three different domains, as previous attacks often focused on only one or two domains.\n\n2. The comparison of spatial and spectral residuals in Figure 1 provides an intuitive visual demonstration of the attack's stealthiness.",
"weaknesses": "1. The effectiveness of the attack on high-resolution datasets, such as ImageNet, is not explored. I'm interested in the performance of 3S attack on such datasets.\n\n2. The baseline attacks used for comparison are relatively old (all from 2022 or earlier). A comparison with more recent and advanced backdoor attacks better highlights the paper's contribution.\n\n3. The presentation of experimental results lacks clarity in several key areas:\n\n- Although the authors state in the caption of Table 2 that \"The benign accuracy is not displayed because in each experiment the benign accuracy never drop more than 2%,\" I believe the Benign Accuracy should still be explicitly reported in the table for a clear and direct comparison.\n\n- Table 2 compares ASR, PSNR, and SSIM for different attacks, but it does not specify the poison rate used to obtain these results.\n\n4. Although the paper claims stealthiness in all three domains, the actual effectiveness against defenses is not ideal. The abstract states the attack is harder to detect by existing defenses, but the experiments in the Appendix show that 3S-Attack can still be detected by AC and NC. Many other state-of-the-art stealthy backdoor attacks can already bypass these defenses, which diminishes the claimed novelty and contribution of this work.\n\n5. The claim that \"3S-Attack is also the first semantic-domain stealthy backdoor attack that operates purely through poisoned samples...\" appears to be inaccurate. There are already some existed semantic backdoor attacks.",
"questions": "1. Can the authors elaborate on why AC succeeds in detecting the attack (i.e., what traces does 3S-Attack leave at the activation level)? Furthermore, how would the \"activation-aligned poisoning\" mentioned in the \"Limitations and Future Work\" section (Appendix A.4) be implemented to evade AC?\n\n2. The paper does not state the initial Benign Accuracy for the model trained on Animal10. I observed in Figure 8b that the BA drops below 60% at a very low neuron pruning rate. Is this because the original model's classification performance on this dataset was poor, or is this rapid drop caused by the Fine-Pruning process itself?\n\n3. The authors selected different models for different datasets (e.g., VGG/ResNet for CIFAR-10, WRN for CIFAR-100) instead of showing comprehensive results on a consistent set of models. This experimental setup is not ideal and complicates comparison with other backdoor attack papers. I recommend the authors provide more experiments on consistent model architectures to avoid any suspicion of cherry-picking results.\n\n4. What is the time cost for generating the poisoned samples? The attack relies on training a \"preliminary model,\" which seems to imply that the poison generation time could be even longer than the victim's model training time itself. This high computational cost for preparation may affect the practical feasibility of the attack.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-23T17:58:26",
"modification_date": "2025-11-12T10:45:25",
"review_url": "https://openreview.net/forum?id=3NLF20wthr¬eId=NgTaS4ZRUk",
"license": "CC BY 4.0"
}
] |
yhEi1aeWCQ | https://openreview.net/forum?id=yhEi1aeWCQ | NUMBER REPRESENTATIONS IN LLMS: A COMPUTATIONAL PARALLEL TO HUMAN PERCEPTION | 5.333333 | 3.333333 | [
4,
8,
4
] | [
3,
4,
3
] | 3 | [
"Natural Logarithmic",
"Number line",
"LLM",
"representations",
"embeddings"
] | Humans are believed to perceive numbers on a logarithmic mental number line, where smaller values are represented with greater resolution than larger ones. This cognitive bias, supported by neuroscience and behavioral studies, suggests that numerical magnitudes are processed in a sublinear fashion rather than on a uniform linear scale. Inspired by this hypothesis, we investigate whether large language models (LLMs) exhibit a similar logarithmic-like structure in their internal numerical representations. By analyzing how numerical values are encoded across different layers of LLMs, we apply dimensionality reduction techniques such as PCA and PLS followed by geometric regression to uncover latent structures in the learned embeddings. Our findings reveal that the model’s numerical representations exhibit sublinear spacing, with distances between values aligning with a logarithmic scale. This suggests that LLMs, much like humans, may encode numbers in a compressed, non-uniform manner. | interpretability and explainable AI | https://openreview.net/pdf?id=yhEi1aeWCQ | 2025-09-16T14:58:47 | 3 | [
{
"id": "sogLT89O9T",
"forum": "yhEi1aeWCQ",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7044/Reviewer_W9st",
"reviewer_name": "Reviewer_W9st",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper challenges the prevailing linear representation hypothesis, arguing instead that LLMs encode numerical values on a compressed, logarithmic number line. To test this, the authors extract hidden states for numerals, project them using dimensionality reduction, and then measure monotonicity and spacing using a novel Scaling Rate Index. Their analysis consistently reveals strong sublinear trends across various LLM families, indicating a compressed, non-uniform geometry. The authors validate this finding through causal interventions that modulate next-number predictions and demonstrate its robustness in real-world settings while showing its absence in non-numerical controls.",
"strengths": "1. The authors tackled the novel and underexplored problem of how LLMs internally represent and manipulate numeric values.\n2. The relevant papers and theories are well-introduced in the manuscript, which helps the reader to easily follow the logic of the proposed method.\n3. Two properties, monotonicity and scaling, are well defined and analyzed with appealing metrics.",
"weaknesses": "1. The authors experimented with a single prompt, which is very unlikely to appear in a real-world scenario. Therefore, it is unclear how the experimental findings in this paper can be generalized to real-world data. To improve generalizability, the authors should consider using simple prompts with single numbers (e.g., \"2047\"), varying the number of preceding in-context examples (e.g., i) \"2047=2047 104=?\", ii) \"2047=2047 104=104 37=?\"), or experimenting with numbers from grade-school math QA pairs such as GSM8K, ASDiv, or other datasets. By conducting the same analysis on diverse prompts and reporting the results with statistical analysis, the authors' contribution would be more concrete.\n\n2. The models used in the paper are limited in their numerical reasoning capabilities. Even if the authors are not able to utilize commercial, closed-source models, open-source and light-weight reasoning models such as Qwen, DeepSeek, GPT-OSS, and any other model families that show better results on numerical reasoning are available. I believe that including models with stronger numerical reasoning abilities would help reveal whether the poor understanding of numbers is a general limitation of LLMs. Moreover, including bidirectional language models (e.g., diffusion LMs) might be beneficial. Because the number is always placed at the end of the prompt in the probe's current design, this could introduce positional bias. In bidirectional LMs, the position of the number can be varied, which would be free from such unintended bias.\n\n3. Since PCA and PLS convey different analysis results, I wonder how robust the analysis results proposed in the paper are. For example, if another dimensionality reduction method were utilized, would the analysis results change? If so, how can we say that PCA and PLS are the most effective dimensionality reduction algorithms for analyzing number representations?",
"questions": "1. In Table 1, why do the authors compare results from different layers? For LLaMA-3.1-8B, comparing the monotonicity metric and sublinearity coefficient from the same layer seems fairer. If this is not the case, it would be better if the authors elaborated on the reason for this in the manuscript.\n\n2. Why were only LLaMA models used for experiment 2?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T11:01:28",
"modification_date": "2025-11-12T11:47:20",
"review_url": "https://openreview.net/forum?id=yhEi1aeWCQ¬eId=sogLT89O9T",
"license": "CC BY 4.0"
},
{
"id": "h13MMkcs5s",
"forum": "yhEi1aeWCQ",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7044/Reviewer_9VTM",
"reviewer_name": "Reviewer_9VTM",
"rating": 8,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This manuscript investigates how LLMs internally represent numerical values. The authors find that LLMs encode numbers on a compressed, logarithmic number line, rather than a linear one as commonly assumed. By extracting hidden states corresponding to numerals and projecting them onto one-dimensional manifolds, the study evaluates two metrics: Spearman’s ρ to assess monotonicity and a new Scaling Rate Index (β) to characterize the spacing pattern as sublinear, linear, or superlinear.",
"strengths": "1. The target question is interesting. If number-related questions in the area of LLMs can be thoroughly understood, it may lead to further significant advancements.\n\n2. The proposed methodology is well-motivated and interpretable.\n\n3. The conclusion is highly inspiring, as it establishes a connection between cognitive psychology and representational analysis in LLMs.",
"weaknesses": "1. The paper shows that compression exists, but not why (e.g., is it due to token frequency, positional embeddings, or training distribution?).A deeper analysis of architectural causes or training data statistics would strengthen the work.\n\n2. It’s unclear how logarithmic compression affects numerical reasoning benchmarks or real-world performance (e.g., arithmetic, scale extrapolation).\n\n\n3. Although multiple runs and controls are used, confidence intervals are sometimes narrow, and cross-model consistency could be better quantified.",
"questions": "1. What is the causal origin of the logarithmic compression? Could it be attributed to the frequency distribution of numbers in the training corpora, which approximately follows a power-law?\n\n2. Have you compared the compression patterns across different tokenization schemes (e.g., digit-based vs. word-based numerals) to rule out tokenizer-level artifacts?\n\n3. Have the authors examined whether a similar logarithmic compression of numerical magnitude also emerges in MLLMs that jointly encode visual and textual inputs? For instance, do vision-language models exhibit comparable sublinear spacing when representing quantities inferred from visual stimuli? Investigating this direction could help determine whether logarithmic number representation is a general emergent property of large-scale multimodal learning, rather than a phenomenon restricted to linguistic models.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T10:51:24",
"modification_date": "2025-11-12T11:47:21",
"review_url": "https://openreview.net/forum?id=yhEi1aeWCQ¬eId=h13MMkcs5s",
"license": "CC BY 4.0"
},
{
"id": "GvtDiagfRC",
"forum": "yhEi1aeWCQ",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission7044/Reviewer_jbmZ",
"reviewer_name": "Reviewer_jbmZ",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 4,
"summary": "This paper considers whether LLMs encode numbers using logarithmic rather than linear representations. Specifically, the authors pair both a synthetic dataset and some simple real-world data with PCA and PLS to test for order preservation and sublinear compression of numeric representations. Their interpretation of their results is that LLMs do encode numbers using logarithmic representations, in a way that is consistenct with the human mental number line.",
"strengths": "The question is interesting and well motivated. The writing is clearly structured and easy to follow. Experiments are simple but clean. Finding parallels between human numeric cognition and LLM numeric cognition can be illuminating into both systems.",
"weaknesses": "My main concern with this paper is the lack of impact. Even though the topic and questions are interesting to me, I do not believe that the broader ICLR audience will find it impactful.\n\n- The results on the natural data seem fairly weak and the use of birth years to study something analogous to mental number lines is not very compelling. Although birth years are numbers, they do not necessarily symbolize quantity in the way that most other numbers do. For example, '1984' in the input 'Katy Perry was born in 1984' is perhaps more similar to 'California' in the input 'Katy Perry was born in California' than it is to '1984' in the input '1700 + 284 = 1984'. Put another way, the numeric interpretation of '1984' seems far more meaningful in the latter case than the former case. The way that the birth years are presented the real data task therefore do not seem to lend themselves quite as naturally to some notion of having an internal number line.\n- Building on the previous point, it is not clear why the real data experiments use factual questions that only have one correct numeric answer. This also seems to be a fairly unnatural context in which to consider mental number lines, since the model can just memorize that single number without needing to resort to any kind of numeric representation.",
"questions": "- It is not clear how I should interpret specific values of ρ and β beyond some vague notion of 'higher is better'. Is there a concrete baseline that these values can be benchmarked against?\n- In table 2, is 'llama' being misspelled as 'llamba'?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T12:55:07",
"modification_date": "2025-11-12T11:47:21",
"review_url": "https://openreview.net/forum?id=yhEi1aeWCQ¬eId=GvtDiagfRC",
"license": "CC BY 4.0"
}
] | |
4MTFyYOsWJ | https://openreview.net/forum?id=4MTFyYOsWJ | High Probability Streaming Lower Bounds for $F_2$ Estimation | 4 | 3.75 | [
4,
4,
4,
4
] | [
3,
4,
4,
4
] | 4 | [
"sketching",
"streaming",
"dimensionality reduction"
] | A recent paper of Braverman and Zamir [BZ'24] gave a lower bound of $\Omega(\frac{1}{\epsilon^2}\log n)$ for estimating the $F_2$ moment of a stream to within $1 \pm \epsilon$ multiplicative error, resolving the complexity of $F_2$ estimation for constant failure probability $\delta$ in the insertion-only model. We show that their argument can be adapted to achieve tight dependence on the failure probability $\delta$. Our key step is to replace the "Exam Set Disjointness" problem used in [BZ24] with a robust version that we call "Exam Mostly Frequency" (EMostlyFreq). This is the exam version of the communication problem underlying the high-probability analysis introduced in [Kamath, Price, Woodruff '21]. We prove a tight lower bound of $\Omega(\frac{1}{\epsilon^2} \log(\frac{\epsilon\sqrt{n}}{\log(1/\delta)}) \log(1/\delta))$ for $F_2$ estimation. | other topics in machine learning (i.e., none of the above) | https://openreview.net/pdf?id=4MTFyYOsWJ | 2025-09-20T02:05:58 | 4 | [
{
"id": "namxX3MRJo",
"forum": "4MTFyYOsWJ",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20365/Reviewer_zBJb",
"reviewer_name": "Reviewer_zBJb",
"rating": 4,
"confidence": 3,
"soundness": 4,
"contribution": 3,
"presentation": 2,
"summary": "This work studies the space complexity of estimating $F_2$ in the data stream model. The main contribution is an optimal lowerbound on the space complexity: $ \\Omega(1/\\epsilon^2\\log n \\log 1/\\delta)$, where $\\epsilon$ is the approximation error and $\\delta$ is the probability error. \nEarlier work established a lowerbound of $ \\Omega(1/\\epsilon^2\\log n)$, the main contribution of this work is to improve this to a bound that includes $\\delta$ parameter.",
"strengths": "Estimating $F_2$ over data streams has been a well-studied problem, at least for the past 3 decades. The known upperbound is $O(1/\\epsilon^2\\log n \\log 1/\\delta). Thus, this work settles the space complexity of this problem.",
"weaknesses": "The main weakness is the presentation and over-reliance on the proof of BravermanZamir24. It is nearly impossible to follow the proof and details unless one is completely familiar with the work of BravermanZamir24. I understand that the authors had to strike a balance between repeating the claims and technicalities of BravermanZamir24 and conveying their contributions. But in the current form, it is difficult to appreciate the work. There are many phrases/notions that are left undefined, and the reader is forced to infer what they mean. In my view, the paper is not ready to be published, in the current form,",
"questions": "Few questions/concerns\n1. Line 158-1599: This seems to be a critical ingredient of the proof and yet is stated very informally.\n2. Lemma 1: The notion \"No-instance distribution\" is not introduced earlier, though one can infer what it means.\n3. Proposition 1. Not sure what \"as above\" if refering to. What is $t$ ?\n4. Line 227: What is the choice for |U|?\n5. Corollary: What is \"total set size\", I assume it is the size of all sets together?\n6. Line 255: What is super-element size?\n7. As far as I can see, the proof relies on the distribution $\\mu_p$. So, what role does the hard distribution from definition 3 play?\n8. At many places, the authors state \"as in proof of BravermanZamir24\". Is it possible to give few more details?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T22:44:10",
"modification_date": "2025-11-12T15:50:16",
"review_url": "https://openreview.net/forum?id=4MTFyYOsWJ¬eId=namxX3MRJo",
"license": "CC BY 4.0"
},
{
"id": "mwGTFP0i4U",
"forum": "4MTFyYOsWJ",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20365/Reviewer_czhm",
"reviewer_name": "Reviewer_czhm",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "The paper studies the classical problem of estimating the second frequency moment $F_{2}=\\sum_{i=1}^n f_i^2$ in the insertion-only streaming model, where $f_i$ is the frequency of element $i$ in the stream. The well-known AMS algorithm, an $(\\varepsilon, \\delta)$ algorithm for estimating $F_2$, has a space complexity of $O({1\\over \\varepsilon^2}\\cdot \\log n \\cdot \\log ({1\\over \\delta})$. In a recent work, Braverman--Zamir (2024) showed that this indeed is tight in general with respect to $n$ and $\\varepsilon$: for constant failure probability $\\delta$, any such algorithm will require $\\Omega({\\log n \\over \\varepsilon^2})$ space. However, in Braverman--Zamir (2024), the dependency of failure probability $\\delta$ on space complexity is missing. The current paper builds on this work to obtain *tight dependence on the failure probability* $\\delta$ in the lower bound. They show that for $\\delta \\geq {1\\over 2^{\\varepsilon \\sqrt{n}}}$, $\\log({1\\over \\delta})$ multiplicative factor in the space complexity is necessary. For example if $\\delta = 1/n$ (a natural setting), the lower bound matches the upper bound asymptomatically in all three parameters. \nThey also identify special regimes (bounded-frequency streams and sparse streams) in which the general lower bound does not apply, and give algorithms with improved space complexity.",
"strengths": "The complexity of $F_{2}$ estimation is fundamental in data streaming. Precisely characterizing dependence on all parameters is valuable. So the results are definitely worth publishing and is useful to know for those working in streaming algorithms theory. Technically the paper appears good (although I did not check the proofs carefully due to time constraint to know how far it is different from earlier work).",
"weaknesses": "The result, while technically strong and interesting, is highly specialized. It may appeal mainly to researchers in streaming complexity and communication complexity, making it somewhat narrow for ICLR’s broader audience of machine learning researchers. The paper would benefit from significant proofreading. There are several typographical and formatting issues. Examples: in the statement of the main theorem (Theorem 2, which repeats Theorem 1 -- what is $1n-\\delta$?). There is mention of $\\delta$ in the abstract without defining it. \nTheorems 1,2, and 3 are all the same, why have different numberings?",
"questions": "It will be nice to explicitly state what regimes do the lower bounds match the known upper bounds? Can this be stated explicitly in a discussion after theorem statement? Could you provide some open problems arising from this line of work? For instance, are there natural cases where the gap between lower and upper bounds remains for other moment estimation? A related work section will be useful to understand the context of your work.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T02:25:34",
"modification_date": "2025-11-12T15:50:17",
"review_url": "https://openreview.net/forum?id=4MTFyYOsWJ¬eId=mwGTFP0i4U",
"license": "CC BY 4.0"
},
{
"id": "lonaOvJjjm",
"forum": "4MTFyYOsWJ",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20365/Reviewer_WDpk",
"reviewer_name": "Reviewer_WDpk",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 2,
"summary": "Consider the following problem.\nSuppose we are in the streaming setting.\nLet $U$ be the universe of size $n$ and $x_i$ be the number of occurrence of $i$ in the data stream for $i\\in U$.\nWe would like to estimate the second frequency moment which is defined as $\\sum_{i\\in U} x_i^2$ within a $(1+\\epsilon)$ multiplicative factor with probability at least $1-\\delta$.\nOur goal is to minimize the space complexity for achieving this estimation.\nThe celebrated AMS sketch gives an upper bound of $\\frac{1}{\\epsilon^2}\\cdot \\log (\\frac{1}{\\delta}) \\cdot \\log n$ space.\nOn the other hand, a recent result by Braverman and Zamir showed the lower bound of $\\frac{1}{\\epsilon^2}\\cdot \\log n$ space for constant $\\delta$.\nTherefore, the regime of dependence on $\\delta$ is still missing and the paper provides the tight lower bound of $\\frac{1}{\\epsilon^2}\\cdot \\log (\\frac{1}{\\delta}) \\cdot \\log n$ space.\n\nThe main idea is to reduce the problem from Exam Mostly Set Disjointness (EMostlyDISJ) which is defined as follows.\nSuppose there are $t$ players and one referee.\nLet $U$ be the universe.\nEach player $i$ has a subset $S_i$ of $U$ and the referee has an element $j$ of $U$.\nIt is guaranteed that $S_1,\\dots, S_t$ satisfy one of the two cases: (i) $S_1,\\dots,S_t$ are $M$-almost-disjoint or (ii) there is a unique element $j_0$ in $U$ that is common to at least $ct$ of the sets for some constant $c \\in (0,1)$. 
\nThe communication is one-way that only player $i$ can send messages to player $i+1$ for $i=1,\\dots, t-1$ and player $t$ can send messages to the referee.\nWith failure probability $\\delta$, the referee needs to decide whether the sets $S_1,\\dots,S_t$ satisfy (i) or (ii) and if (ii) then decide whether $j_0 = j$.\nFinally, the authors prove that the lower bound of this communication game to be $\\frac{1}{\\epsilon^2}\\cdot \\log (\\frac{1}{\\delta}) \\cdot \\log n$.\n\nAdditionally, the authors give the results for different variations of the problem.",
"strengths": "- The paper studies a classical data streaming problem and this allows readers to have a chance of visiting traditional perspectives of learning theory.\n\n- The result provides a tight lower bound for a classical data streaming problem.\nIt provides new insights into the fundamental limitations.",
"weaknesses": "- The paper may need a good amount of work to improve its presentation.\nJudging from the proofs, it seems multiple lemmas are from the previous result and it may need some work to first introduce the idea from the previous work.\nI still have a hard time on understanding the reduction from EMostlyDISJ to the second frequency moment problem.\nIt may be helpful to describe the reduction more rigorously.",
"questions": "- Theorem 1, 2 and 3: I think they are the same.\nThere is a typo $1n-\\delta$.\n\n- Definition 1 and 2: I think they are the same.\nShould it be ``... the referee must decide if the input is an instance of case (ii) ...'' because case (ii) is the case where there is a unique common $j_0$?\nIt may be helpful to move the definition of M-almost-disjoint in line 147 before Definition 1.\n\n- Line 100: What is a super-item?\n\n- Line 102: ``... more than in the NO instance ...''\n\n- Line 160-161: It may be helpful to define the $F_{ct,t}$ problem more rigorously.\n\n- Lemma 1: There is an extra . at the beginning of the lemma.\n\n- Lemma 2: There is a missing . at the end of the first line.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T18:10:54",
"modification_date": "2025-11-12T15:50:17",
"review_url": "https://openreview.net/forum?id=4MTFyYOsWJ¬eId=lonaOvJjjm",
"license": "CC BY 4.0"
},
{
"id": "CYFEx82Llz",
"forum": "4MTFyYOsWJ",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission20365/Reviewer_Vduz",
"reviewer_name": "Reviewer_Vduz",
"rating": 4,
"confidence": 4,
"soundness": 1,
"contribution": 3,
"presentation": 2,
"summary": "The paper studies the streaming lower bound for estimating $F_2$ in the insertion-only model. It claims that any one-pass streaming algorithm achieving an $\\epsilon$-additive error with failure probability $\\delta$ requires $\\Omega(\\log \\frac{\\epsilon\\sqrt{n}}{\\log(1/\\delta)}\\cdot\\frac{1}{\\epsilon^2}\\cdot \\log\\frac{1}{\\delta})$ size of memory. Conceptually, this extends the constant-$\\delta$ tight lower bound of Braverman-Zamir (2024) to regime where $\\log(1/\\delta)\\leq \\sqrt{n}\\epsilon$. Technically, this paper follows from Braverman-Zamir framework but replaces \"Exam Set Disjointness\" with an \"Exam Mostly Set Disjointness\" variant, inspired by Kamath–Price–Woodruff (2021).",
"strengths": "Tight streaming lower bound for estimating $F_2$ is very important and notoriously challenging; pushing from constant $\\delta$ to very small $\\delta$ is a meaningful target. Even if largely building on Braverman-Zamir (2024), a clean and correct extension to small $\\delta$ would be of interest to the streaming algorithm community.",
"weaknesses": "My main concern is the proof soundness and the presentation clarify. Many proof steps are stated without sufficient detail to verify. For example, in the proof of Lemma 2, the authors write \"Using the chain rule for mutual information and the derivation involving D as in the proof of Lemma 4.3 in Braverman-Zamir (2024), we have\", but without providing further details about Lemma 4.3. A detailed revision is needed to make the argument self-contained and checkable.\n\nBesides, the writing has plenty of issues. Specific examples:\n\n1. The citation commands \\citep and \\citet are not used appropriately.\n2. The reference should be updated (e.g. Braverman-Zamir (2024)) and more comprehensive. I suggest adding a brief survey of recent upper and lower bounds for $F_2$ estimation across various streaming models (insertion-only, turnstile, random-order, multi-pass).\n3. Theorems 2 and 3 are identical; so are Definitions 1 and 2. They should be merged and restated once.\n4. In Definition 1, line 91: should \"an instance of case (i) with\" be \"an instance of case (ii) with\"?\n5. Line 95: \"In n order to\" => \"In order to\".\n6. Line 135: \"$1n-\\delta$\" => \"$1-\\delta$\".\n7. $M$ is used both for the memory state (line 144) and for the overlap parameter (line 148); these should be separated.\n8. In the displayed equation in Line 186: $D$ => $P$?\n9. Proposition 1: Clarify how the parameter $p$ enters the bound, and specify whether the random sample is performed with or without replacement.",
"questions": "Could the approach be adapted to the random-order insertion-only setting? Can it be extended to the multi-pass setting?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-23T09:59:04",
"modification_date": "2025-11-12T15:50:17",
"review_url": "https://openreview.net/forum?id=4MTFyYOsWJ¬eId=CYFEx82Llz",
"license": "CC BY 4.0"
}
] | |
wVBVa09JVV | https://openreview.net/forum?id=wVBVa09JVV | Don't Guess the Future, Find the Bottleneck: Spectral Subgoals for Offline Goal-Conditioned RL | 4.5 | 4 | [
4,
6,
4,
4
] | [
5,
3,
4,
4
] | 4 | [
"offline goal conditional reinforcement learning"
] | Offline goal-conditioned RL (OGCRL) learns to reach arbitrary goals from an offline dataset, but long-horizon performance hinges on crossing a handful of hard-to-cross bottlenecks. These bottlenecks not only dictate the feasible paths toward the goal but also act as critical keypoints, marking the transitions between adjacent regions and providing the agent with essential directional guidance. Prior hierarchical methods pick subgoals by time or short-horizon value heuristics, which do not localize the bottleneck; as a result, the agent loses the clear guidance that bottlenecks could provide about where to pass next. We instead model long-horizon planning as “cross the next bottleneck”: we apply Laplacian spectral clustering to the offline dataset to expose bottlenecks and then identify trajectories from the offline dataset that cross these boundaries, and the intersections are defined as keypoints (KPs).
Then the most representative KPs are automatically selected and a directed KP reachability graph $\mathcal G_{\mathrm{KP}}$ is constructed based on the selected KPs.
We then restrict high-level choices to these bottleneck states and use a pluggable low-level controller to execute the short transitions between them.
We provide theory showing that the next bottleneck is the optimal one-step subgoal and that Laplacian spectra recover bottlenecks with high overlap. Thus, Laplacian spectral clustering can discover approximately optimal subgoals. Empirically, the same pattern holds: across D4RL and OGBench, our method achieves state-of-the-art results on a broad set of navigation and manipulation tasks and across diverse dataset regimes, for example, **96.5\%** on **AntMaze** and **84.5\%** on **Franka-Kitchen**. | reinforcement learning | https://openreview.net/pdf?id=wVBVa09JVV | 2025-09-19T14:50:51 | 4 | [
{
"id": "eqyrqwTqSG",
"forum": "wVBVa09JVV",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16443/Reviewer_PGk3",
"reviewer_name": "Reviewer_PGk3",
"rating": 4,
"confidence": 5,
"soundness": 3,
"contribution": 2,
"presentation": 2,
"summary": "* This paper proposes BASS (Bottleneck-Aware Spectral Subgoaling), an offline goal-conditioned reinforcement learning (OGCRL) framework that identifies bottleneck states which connect metastable regions in the state space and utilizes them as subgoals for hierarchical planning.\n\n* BASS argues that the primary difficulty in long-horizon planning within OGCRL environments arises from hard-to-cross bottlenecks, rather than short-term value estimation. It formulates the subgoal selection problem as a spectral graph problem, leveraging the low-frequency structure of the Laplacian operator computed from offline data.\n\n* BASS achieves superior performance compared to prior methods on D4RL and OGBench benchmarks. However, the method has only been validated on numeric-state environments, and the subgoal features appear to have been manually specified, which could further limit its applicability.",
"strengths": "* The paper introduces a novel spectral perspective on hierarchical goal-conditioned reinforcement learning by framing subgoal discovery as a bottleneck identification problem in the state space. BASS leverages the low-frequency eigenstructure of the Laplacian operator derived from offline data to reveal metastable regions and their connecting bottlenecks.\n\n* The paper demonstrates theoretical analysis (Theorem 1 & 2) with a coherent algorithmic design that ties Laplacian representation learning, spectral clustering, and keypoint graph construction into a unified pipeline.\n\n* By grounding subgoal discovery in topological structure rather than temporal heuristics, BASS offers a principled approach that could influence future research in hierarchical and representation-driven RL",
"weaknesses": "1. The proposed method appears to be applicable only to numeric-state environments, and all experiments were conducted exclusively under such low-dimensional state settings. It remains unclear whether the approach can extend to high-dimensional visual-state environments (e.g., image-based observations), and whether the Laplacian-based embedding would still effectively identify bottlenecks in such cases.\nClarifying the applicability to more complex observation modalities would strengthen the generality of the contribution.\n\n2. The paper represents each keypoint (KP) as (IΔ, vΔ), constraining only a subset of the state coordinates. However, the process for determining IΔ that is, which dimensions to include (e.g., x,y) appears to rely on manual or environment-specific heuristics. A more principled or learnable mechanism for feature selection (e.g., through information bottleneck criteria or gradient-based relevance analysis) would improve both generality and reproducibility.\n\n3. In the GENERALIZATION ACROSS ENVIRONMENTS experiment, it is unclear what meaningful insight is gained by swapping keypoint graphs between AntMaze-Stitch and AntMaze-Explore. These two datasets share the same map and the same navigation objective, implying that their keypoint graphs should not differ substantially. Moreover, the reported cross-domain transfer between PointMaze and AntMaze is difficult to interpret: if subgoals are represented solely by (x,y) coordinates and the low-level controller also relies on these coordinates as its input, then the successful transfer is almost inevitable by design, rather than demonstrating genuine generalization.\n\n4. This hand-crafted use of (x,y) as subgoal features may also undermine the fairness of comparison with prior baselines such as HIQL and Diffuser, which employ full-state embeddings (including joint angles or high-dimensional latent representations) for subgoal conditioning.\n\n5. 
While the bottleneck-based subgoal concept is conceptually appealing, the paper lacks an in-depth comparison with temporal distance representation (TDR)-based approaches such as HILP, QRL, and GAS. TDR methods learn representations that reflect optimal time distances—allowing farther movement in open regions and shorter transitions in constrained areas like corners. Since these works also utilize temporally grounded structure for subgoal selection or graph construction, a quantitative comparison would help clarify the specific advantages of BASS’s spectral formulation.\n\n6. Finally, although BASS demonstrates improvements on numeric-state benchmarks, the paper does not analyze failure cases.\nIt remains ambiguous whether the failures arise from (i) imperfect keypoint extraction or graph connectivity, (ii) limited capability of the low-level controller, or (iii) excessive distances between consecutive keypoints. For instance, in AntMaze-giant-stitch and AntMaze-large-explore, the success rates drop significantly. It would be insightful to analyze these cases in light of concurrent work such as GAS (Baek et al., 2025), which achieves strong performance using a TDR-based graph representation. Such analysis could reveal whether BASS’s limitations stem from the keypoint discovery mechanism itself or from broader challenges in long-horizon offline control.",
"questions": "Please provide responses to the weaknesses mentioned above.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T11:41:27",
"modification_date": "2025-11-12T13:49:15",
"review_url": "https://openreview.net/forum?id=wVBVa09JVV¬eId=eqyrqwTqSG",
"license": "CC BY 4.0"
},
{
"id": "ZozrqhEaIU",
"forum": "wVBVa09JVV",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16443/Reviewer_gT9f",
"reviewer_name": "Reviewer_gT9f",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposed a novel algorithm for generating a planning graph for the offline goal-conditioned reinforcement learning setting. The method works by discovering bottlenecks via a Laplacian Spectral Clustering Algorithm (specifically, ALLO). With this, they have a novel mechanism for identifying the optimal bottleneck to user as the next subgoal in a planning algorithm. The plan of subgoals is that leveraged with a diffusion planner or an MLP for generating actions using subgoals generated by a planner. The key contribution is a mechanism for generating nodes in a graph based on \"keypoints\" derived from ALLO, along with theoretical results for why these are the \"best\" nodes for planning.",
"strengths": "The theoretical foundation and the algorithm are very clearly explained. \n\nThe writing is clear as well.\n\nThe results are strong with the baselines they consider. They have many baselines, which is good. They use 3 environments: AntMaze, FrankaKitchen, and Maze2D, which is good. They show generalization results across environments, which is good. Presence of ablations are also good.",
"weaknesses": "It’s unclear what the specific contribution of this paper is regarding the use of Laplacian graph clusters for subgoal identification compared to prior work. The authors claim that “none of the existing methods have been used to significantly enhance goal-conditioned decision-making,” but that doesn’t seem accurate. There are numerous papers that leverage Laplacian-based representations for goal-conditioned reinforcement learning (RL), where Laplacian bottlenecks are explicitly used to discover meaningful subgoals. Such representations have already been shown to benefit RL. Is it that most of those focused on the benefits for exploration, whereas this paper focuses on offline RL?\n\nIf the authors are arguing that their setting is distinct because it involves transferring from an offline dataset to multiple goals simultaneously, that distinction needs to be clarified. It’s also not clear from the evaluation description how generalization is being tested for each offline dataset---are we evaluating on 1 goal, 4 goals, 10 goals?\n\nFinally, an important missing ablation is a simple baseline using ALLO with cluster means as subgoals. Since the proposed method appears to extend ALLO with a more sophisticated subgoal planning strategy, this baseline would help isolate the contribution of the new approach.",
"questions": "Novelty:\n- Is the main novelty of your method that you have a new way to generate nodes for a planning algorithm based on a Laplacian Spectral Clustering Algorithm? How does this compare to other methods for deriving nodes from Laplacian Representation Learning algorithms? I believe using low-frequency eigencomponents is something that has been done, e.g. [1,2]\n- Confirming that your use of a diffusion planner is not novel?\n\nExperiments:\n- I don't think I understand why you should expect generalization if you swap keypoint graphs across domains?\n- How do I interpret Table 3? What is the baseline comparison? It's just your method, which is confusing without references.\n- How do I interpret Table 4? Same question for baseline comparisons? Should I just be comparing to your method when trained on the original target environment? There isn't a size dimension for Table 2 as far as I understand. The transfer here is really good. All above 95%. How do comparison methods do?\n- For the visualization of keypoints, how do other methods for detecting keypoints work? This is relevant for Figure 3.\n- A naive bottleneck discovery method would just use cluster means. I don't see that as a comparison method. This seems important, as the main contribution of this paper, as I understand it, is that you choose the \"optimal\" subgoal based on properties of Laplacian spectral clustering.\n\n[1] Proto-value Functions: A Laplacian Framework for Learning: Representation and Control in Markov Decision Processes\n[2] A Laplacian Framework for Option Discovery in Reinforcement Learning",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T03:18:34",
"modification_date": "2025-11-12T13:49:15",
"review_url": "https://openreview.net/forum?id=wVBVa09JVV&noteId=ZozrqhEaIU",
"license": "CC BY 4.0"
},
{
"id": "PeXEIBs0xx",
"forum": "wVBVa09JVV",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16443/Reviewer_Ne7X",
"reviewer_name": "Reviewer_Ne7X",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "The paper proposes an offline goal-conditioned reinforcement learning (OGCRL) method that learns to expose bottlenecks as subgoals and discover trajectories to cross these subgoals. Specifically, key points (KPs) on the cluster boundaries are revealed as bottlenecks, and a KP graph is constructed for further route planning. Experimental results on D4RL and OGBench show improved performance of the proposed method compared with representative offline methods.",
"strengths": "1. The proposed framework is clear and easy to follow.\n2. Leveraging state transitions to select boundaries is interesting, as the key points could indicate hard-to-cross regions.\n3. Experimental results show that the proposed method achieves an improved success rate compared with previous works.",
"weaknesses": "1. I have several concerns regarding the selection of key points (KPs):\n- First of all, KP selection is highly dependent on the clustering performance. As discussed in Section 5.4, reducing the number of clusters results in coarse discovered boundaries. In addition, increasing the number of clusters might introduce route complexity during planning. Although Section 5.4 includes an ablation study, the range of cluster numbers evaluated is, in my opinion, insufficient. Since the peak success rate is achieved when K=26, only K=28 is evaluated as an “extra cluster,” and the drop in success rate (6.7%) is non-trivial. I would recommend evaluating more cluster numbers. Furthermore, as route complexity may increase with more KPs, adding trajectory steps could strengthen the experiment.\n- Secondly, in practice, the offline dataset is often noisy, where the agent may reach unexpected states and get stuck (e.g., corners in AntMaze). In such cases, complex yet unnecessary KPs may appear, causing additional complexity during planning.\n\n2. The idea of exposing cluster boundaries as subgoals is interesting, but a similar idea has been studied in this paper (https://arxiv.org/pdf/2411.01396).\n\n3. Some questions regarding the claim that “KPs are optimal subgoals”:\n- To sufficiently support this claim, I would recommend adding trajectory steps in the experimental results for comparison with the baselines.\n- In addition to Section 5.4, a comparison of trajectory steps with high-quality demonstrations or expert-annotated routes could further strengthen the evidence.\n\n4. Minor Issues:\n- The visualization of KPs is interesting, but the visualization of trajectories seems not to be shown in Figure 3.\n- KP discovery in Section 4.1 is an essential and complex step of the proposed method. Including pseudocode could strengthen the readability.",
"questions": "Please answer the questions in weakness.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T05:01:53",
"modification_date": "2025-11-12T13:49:16",
"review_url": "https://openreview.net/forum?id=wVBVa09JVV&noteId=PeXEIBs0xx",
"license": "CC BY 4.0"
},
{
"id": "AeChXoLo2r",
"forum": "wVBVa09JVV",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission16443/Reviewer_3mmM",
"reviewer_name": "Reviewer_3mmM",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper proposes a new approach to hierarchical OGCRL. It first learns a representation of the state space and then applies K-means clustering to partition the space based on the learned representations. A graph is subsequently constructed to identify key points that must be traversed to reach the goal (although it is not entirely clear how these key points are selected as subgoals). Finally, the method trains a low-level policy to sequentially reach these key points, ultimately achieving the desired goal. The proposed approach demonstrates strong performance in long-horizon navigation tasks.\n\nAs suggested by the AC, I have added a list of the key claims made in the paper and briefly assessed their validity:\n\n1. The optimal one-step subgoal is the next bottleneck (claimed in line 65). I would suggest being cautious with the use of the word \"optimal\". I can show a counterexample: in AntMaze, the ant can reach the goal through the inner circle of the branch rather than the bottleneck in the middle of the branch.\n\n2. The proposed method shows consistent bottleneck recovery. This seems reasonable as shown in Figure 3, but it would be more convincing to include results from different environments rather than only navigation tasks.\n\n3. The proposed method brings performance gains. I would agree, as there are significant improvements over the baselines (Table 2).\n\n4. The method generalises well to different but similar environments. This is supported by the high-level and low-level transfer experiments, which are both interesting and convincing.\n\n5. The claim on the impact of the number of clusters $K$ is not thoroughly verified, since the experiments are conducted in only one environment.",
"strengths": "Overall, this is a good paper with a clear presentation, a novel idea, and strong empirical performance.",
"weaknesses": "Compared to other work in OGCRL, the proposed method appears somewhat complicated, although this is not necessarily a drawback. The authors should provide the hyperparameters used in their experiments to demonstrate the robustness or potential vulnerability of the proposed approach. Furthermore, additional ablation studies should be included in the paper, as suggested in Question 5.",
"questions": "1. The sentence \"don’t guess the future, find the bottleneck\" is somewhat ambiguous. From my reading, I understand that the authors mean “we do not rely on remaining time or value guidance; instead, we propose to follow the bottleneck.” Therefore, the authors may consider revising the title and the statement in line 62 for greater clarity.\n\n2. OGCRL typically employs a state-to-goal mapping, often denoted as $\\phi$ in related literature. For example, the state $s$ in AntMaze includes the full dynamics of the ant (such as its location), while $\\phi(s)$ represents a slice of $s$, capturing only the ant’s location. In this paper, BASS learns a Laplacian encoder $\\phi_\\theta$ and uses the resulting latent code for state clustering (via K-means, as mentioned in line 199). My question is whether this representation learning step is necessary. Could we instead directly use $\\phi(s)$ for clustering? Based on my experience, this might already yield a similar structure to what is shown in Figure 1.\n\n3. What does $D$ denote in line 213? The state coordinate dropping requires further clarification.\n\n4. There is a duplicated sentence in lines 243–245. Within that sentence, the meaning of goal residual is unclear. It is also difficult to understand how the authors select the next KP. Do the authors plan the shortest sequence of KPs from $s_0$ to $s_{goal}$? If so, does this shortest sequence in the graph correspond to the shortest path from $s_0$ to $s_{goal}$?\n\n5. The proposed method is relatively more complicated than other OGCRL approaches; therefore, it is necessary to conduct more thorough ablation studies. For example, they could examine the impact of the representation learning step discussed in Question 2, the hyperparameter $\\tau$ in line 202, and the number of K-means clusters. Although the authors analysed the effect of the number of clusters $K$, conducting this analysis in only one environment is insufficient to fully support their claim.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T23:53:06",
"modification_date": "2025-11-12T13:49:17",
"review_url": "https://openreview.net/forum?id=wVBVa09JVV&noteId=AeChXoLo2r",
"license": "CC BY 4.0"
}
] | |
yzwSzhqLpH | https://openreview.net/forum?id=yzwSzhqLpH | Entropy-Guided Dynamic Tokens for Graph-LLM Alignment in Molecular Understanding | 4 | 4 | [
2,
4,
6,
4
] | [
4,
4,
5,
3
] | 4 | [
"Multimodal Modeling",
"Graph–LLM Alignment",
"Molecule Understanding",
"Backbone-Free Tuning"
] | Molecular understanding is central to advancing areas such as science and drug discovery, yet large language models (LLMs) struggle to understand molecular graphs effectively. Existing graph–LLM bridges often adapt a Q-Former–style connector with fixed-length static tokens originally designed for vision tasks. These designs overlook stereochemistry and substructural context and typically require costly LLM-backbone fine-tuning, limiting efficiency and generalization. We introduce EDT-Former, an Entropy-guided Dynamic Token Transformer that generates tokens aligned with informative molecular patches, preserving both local and global structural features for molecular graph understanding. Beyond prior approaches, EDT-Former enables alignment between frozen graph encoders and LLMs without tuning the LLM backbone, resulting in computationally efficient fine-tuning, and it achieves state-of-the-art results on the MoleculeQA and Mol-Instructions benchmarks, underscoring its effectiveness for scalable and generalizable multimodal molecular understanding. | EDT-Former: entropy-guided dynamic query tokens map molecular graphs to LLMs, capturing local and global structure features for comprehensive understanding and reasoning with backbone-free, connector-only training. | applications to physical sciences (physics, chemistry, biology, etc.) | https://openreview.net/pdf?id=yzwSzhqLpH | 2025-09-19T12:09:39 | 4 | [
{
"id": "N3GDvggGsY",
"forum": "yzwSzhqLpH",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15761/Reviewer_y4JF",
"reviewer_name": "Reviewer_y4JF",
"rating": 2,
"confidence": 4,
"soundness": 1,
"contribution": 2,
"presentation": 2,
"summary": "The paper introduces EDT-Former, for aligning molecular graphs with large language models (LLMs) under frozen-backbone settings. The model addresses two issues common in molecular graph–language alignment: (1) loss of structural fidelity caused by fixed-length, Q-Former-style connectors; and (2) inefficient fine-tuning due to large backbone updates. EDT-Former proposes two main mechanisms—Entropy-Guided Patching and a Dynamic Query Transformer—to generate substructure-aware dynamic tokens and integrate them with static anchors for efficient alignment. The corresponding EDT-Former shows promising results on the evaluated benchmarks, including MoleculeQA, Mol-Instructions (molecule captioning and property prediction), Pampa, and BBBP.",
"strengths": "- Good motivation\n - This work is well-motivated by the need for a dynamic length graph token that captures molecular substructure information.\n- Methodological novelty\n - The proposed tokenization is novel, based on entropy-guided segmentation for molecules based on uncertainty peaks from a next-atom predictor, offering a data-driven and deterministic patching mechanism. This design appears to be suitable for learning the representation of molecular functional groups, considering the dynamic size of the molecule.\n- Connector-only alignment design\n - The Dynamic Query Transformer establishes a modular bridge between frozen molecular encoders and frozen LLMs, requiring no or minimal gradient updates to the LLM. This contributes to computational efficiency and strong performance scalability, aligning with the authors’ focus on low-cost, high-fidelity multimodal integration.",
"weaknesses": "Overall, I find the proposed method interesting and reasonable (have a different opinion regarding NAP, though); my concerns primarily relate to the experimental setting and the demonstration of the author's hypothesis.\nI hope these are properly addressed in the rebuttal phase.\n\n- Ambiguity in the Description of Experimental Settings and Results\n - (Major) In line 312, they mention evaluating with Direct, Reasoning, and Rich Instructions prompting to reduce prompt sensitivity, but there is no information in the main body about whether these 3 prompting strategies align with the instructions used for finetuning each baseline model. This is a factor that could largely vary each model's performance and is easy to miss without background knowledge of each model's experimental setting. This descriptive gap makes it difficult to understand if the experiments were conducted fairly.\n - (Major) Mol-LLaMA is missing from the comparison baselines in Table 4, yet Mol-LLaMA also provides evaluation results for molecule captioning and property prediction in Mol-instructions (Table 17 from Mol-LLaMA), so including it seems appropriate. In line 371, the authors claim Q1, but Mol-LLaMA's experimental results appear to show higher performance, which needs to be addressed. The following is Table 17 from Mol-LLaMA. Please let me know if I have misunderstood anything.\n| Models | BLEU-2 | BLEU-4 | ROUGE-1 | ROUGE-2 | ROUGE-L | METEOR | MAE |\n|--------------------------------|--------|--------|----------|----------|----------|---------|---------|\n| Mol-LLaMA (LLaMA-2) | 0.478 | 0.425 | 0.761 | 0.698 | 0.750 | 0.701 | 0.0035 |\n| Mol-LLaMA (LLaMA-3) | 0.476 | 0.426 | 0.767 | 0.708 | 0.759 | 0.707 | 0.0039 |\n\n - In line 301, the authors state they used Mol-Instructions' evaluation benchmark; however, the provided experimental results used only 2 tasks (molecule captioning and property prediction) out of the total 17 tasks in Mol-Instructions. 
This could be misunderstood as results spanning the entire Mol-Instructions dataset.\n - In many sections including Abstract line 20, Introduction lines 114, 119, etc., they describe LLMs as fully frozen, yet in C.3 line 1238, they state that the LLM embedding layer was tuned. Which explanation is correct?\n- (Major) Fairness of Experimental Settings\n - Fair comparison of baseline models in zero-shot evaluation\n - As mentioned above, when measuring the zero-shot performance of finetuned models in Table 2, prompting strategies that are not aligned with the finetuning dataset can severely impair model performance. This is because models finetuned on molecular tasks can easily lose natural language following ability, which can be immediately confirmed by running inference on molecular LLMs such as BioT5, LlaSMol, LLaMo, etc., using their HuggingFace open-sourced models. This could create an evaluation setting that is favorable to EDT-Former, which is the only model whose LLM backbone is not finetuned (except for the embedding layer). To easily address these concerns, it is suggested that the performance for each prompting strategy be reported before averaging.\n - Data contamination\n - The Mol-LLaMa-Instruct used by the authors for EDT-Former's alignment training contains GPT-4o generated molecule captions augmented from Pubchem324K, which can be used as a dataset for molecule captioning tasks. Considering that the source of the Mol-Instructions' molecule captioning dataset used in Table 4 experiments is also PubChem, there is a need to clearly describe whether data contamination exists between the train-test splits of these two datasets. Although the authors claim in D.4 that they performed character-level 13-gram analysis with reference to GPT-3, given that GPT-3 deals with general-purpose text data, not focusing on molecular tasks, this seems insufficient to verify data contamination in molecular tasks. 
In addition, instead of stating the exact figure for the ratio of data with overlapping 13-characters, the authors merely state it's below 5%, but a figure close to 5% could still be perceptible to humans. To clearly resolve these concerns, it is suggested that a contamination analysis based on scaffold split, which is widely chosen in the molecular domain, be provided.\n- Regarding demonstration of Q3\n - (Major) Comparison with fixed length token with enough molecule tokens\n - The authors introduced dynamic molecule tokenization to address the problem of fixed length molecule tokens losing molecular features due to limited length. However, in E.1, they state through dynamic token maximum length ablation that 64 molecule tokens are sufficient. Regarding this, considering the sequence max length budget of current LLMs, using a fixed length token of 64 is not a significant burden. Is there a performance difference compared to using fixed length tokens of 64? To prove Q3, a comparison of performance between using fixed length tokens of sufficient size and EDT-Former is necessary, which I believe is an essential ablation study given the message of the paper.\n - Ablation of dynamic token in inference time\n - Since EDT-Former uses fixed length molecule tokens concatenated with dynamic tokens, there is a need to verify whether it might actually be bypassing dynamic tokens and only using fixed length tokens. This seems easy to check at the inference level - I'm curious about the performance when replacing dynamic tokens and fixed tokens with random or dummy tokens during inference.\n- (Major) Limited justification for entropy predictor selection\n - The entropy estimation relies on a simplistic GPT-2–based next-atom predictor (NAP) trained on SMILES. 
While computationally light, the rationale for this choice is mostly under-investigated, given that the next atom in a sequence does not necessarily align with molecular subgraph structure, which is what the authors originally aimed to capture with EDT-Former.\n- Interpretive analysis remains descriptive\n - While attention visualizations (Fig. 6) qualitatively support the claim that dynamic tokens attend to “structural transitions”, the study lacks quantitative or diagnostic analysis. Although I don’t think it is necessary to show that the hypothesis holds for almost all molecules, at least line 250 should be adjusted in light of current experimental evidence. In addition, analysis on specific failure modes (e.g., mis-segmentation, entropy misalignment, or redundant patches) is necessary.",
"questions": "- Regarding experimental setting of Figure 5. I'm curious about how the experiments in Figure 5 were conducted. Does it correspond to retraining the model excluding each component, or excluding each element only at inference time from the full model inference? There doesn't seem to be an explanation of the experimental setting for Figure 5.\n- On the property prediction benchmark choice. While EDT-Former's zero-shot performance in Table 2 is impressive, I wonder why only BBBP was used among the widely comparable evaluation benchmarks for property prediction such as MoleculeNet that are commonly used among molecular LLMs? Given that the evaluation is for zero-shot performance, using only BBBP and Pampa as property prediction benchmarks seems to both forfeit the advantages of models capable of zero-shot inference and provide a non-comprehensive evaluation. As the authors stated, since EDT-Former is not dependent on LLM finetuning, expanding the evaluation benchmarks would be a viable option as model retraining is not required for benchmark selection. Referring to widely used evaluation benchmarks such as MoleculeNet could easily demonstrate directly convincing results.\n- In cases where molecules contain highly repetitive or symmetric substructures (many molecules could have long carbon chain, resulting in CCCC…), how does entropy-guided segmentation handle indistinguishable regions? Is there a mechanism to prevent redundant substructure tokens?\n- The benchmarks used focus mainly on small molecules. Can the authors discuss or show any evidence regarding scalability to large macrocycles, where token budget and entropy dynamics might differ substantially?\n- In Figure 6, attention patterns are qualitatively aligned to high-entropy regions. Have the authors considered quantifying this alignment (e.g., attention–entropy correlation) to strengthen the link between entropy peaks and attention interpretability?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T22:37:11",
"modification_date": "2025-11-12T13:39:51",
"review_url": "https://openreview.net/forum?id=yzwSzhqLpH&noteId=N3GDvggGsY",
"license": "CC BY 4.0"
},
{
"id": "t4DdNfZ20C",
"forum": "yzwSzhqLpH",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15761/Reviewer_x8D4",
"reviewer_name": "Reviewer_x8D4",
"rating": 4,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This work studies multimodal LLMs for aligning LLMs with molecular graphs. The authors propose EDT-Former, a connector-only approach that generates query tokens for molecules of different lengths. EDT-Former includes (i) Entropy-Guided Patching, which uses next-atom surprisal peaks from a lightweight SMILES predictor to segment molecules into substructure-aware patches, and (ii) a Dynamic Query Transformer that fuses these variable-length “dynamic tokens” with a small set of learnable modality anchors before projecting into the LLM space. Experiments across multiple tasks and benchmarks show the effectiveness of EDT-Former.",
"strengths": "1. This work targets on fixed-token bottleneck in graph-LLM alignment, which is timely and critical;\n\n2. The proposed approach is novel and interesting;\n\n3. There are significant empirical improvements;",
"weaknesses": "1. The benchmarked tasks seem to be limited. For example, can this approach be applied to other tasks in Mol-Instructions?\n\n2. The empirical comparison seems not to be fair, as EDT-Former uses a different training corpus from other baseline approaches. Given the efficiency of the proposed approach, can EDT-Former be applied and ablated with different instruction training data?\n\n3. Lack of comparison and discussion with a closely related work [1]. For example, can the proposed tokenization scheme mitigate the hallucination issue mentioned in [1]?\n\n[1] HIGHT: Hierarchical Graph Tokenization for Graph-Language Alignment, ICML'25.",
"questions": "Please find the details in the section above.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T19:45:56",
"modification_date": "2025-11-12T13:39:53",
"review_url": "https://openreview.net/forum?id=yzwSzhqLpH&noteId=t4DdNfZ20C",
"license": "CC BY 4.0"
},
{
"id": "OPJxVFD0iR",
"forum": "yzwSzhqLpH",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15761/Reviewer_x1GF",
"reviewer_name": "Reviewer_x1GF",
"rating": 6,
"confidence": 5,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper addresses two major limitations of existing graph-Large Language Model (LLM) frameworks for molecular understanding: 1) information loss from using fixed-length token connectors (e.g., Q-Former), which compress complex, variable-sized molecular graphs into a static representation, and 2) the high computational cost and poor generalization resulting from fine-tuning the entire LLM backbone.\nThe paper proposes **EDT-Former**, an \"Entropy-guided Dynamic Token Transformer,\" as a novel connector. The key idea is to generate a *variable* number of tokens that are aligned with a molecule's structural complexity. This is achieved through a two-part mechanism: entropy-guided patching and a dynamic query transformer. The paper demonstrates the best performance on a wide range of benchmarks.",
"strengths": "- The paper is well-written and easy to follow.\n- The paper proposes a novel query Transformer. Most works simply abstract molecules into queries, not considering the stereochemistry and structural context in the molecule. Another line of work exploits rule-based algorithms to extract the meaningful substructures. Different from them, the proposed work automatically extracts substructures by applying entropy-based patching segments.\n- Experimental results demonstrate the effectiveness of EDT-Former with its superior performance compared to molecular LLMs and General LLMs.\n- The paper provides a rigorous analysis, which helps in understanding the contribution of the proposed component.",
"weaknesses": "- The entire entropy-patching mechanism is based on a 1D SMILES sequence. The properties of a SMILES string (and thus its entropy profile) can change based on the canonicalization algorithm used or if non-canonical strings are permitted. The paper does not discuss the robustness of the patching mechanism to different, yet chemically equivalent, SMILES representations of the same molecule.",
"questions": "Please refer to the weaknesses section.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T14:42:50",
"modification_date": "2025-11-12T13:39:53",
"review_url": "https://openreview.net/forum?id=yzwSzhqLpH&noteId=OPJxVFD0iR",
"license": "CC BY 4.0"
},
{
"id": "f06aL9FyIy",
"forum": "yzwSzhqLpH",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15761/Reviewer_XpYi",
"reviewer_name": "Reviewer_XpYi",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper introduces EDT-Former, a novel method for aligning molecular graphs with Large Language Models (LLMs). It addresses key limitations of existing approaches, such as Q-Former-style bridges, which use a fixed number of tokens that often lead to loss of stereochemical and substructural details, especially in larger molecules.\nThe core innovation lies in its two components: an Entropy-Guided Patching strategy that dynamically segments molecules into informative, variable-length tokens based on structural uncertainty, and a Dynamic Query Transformer that integrates these tokens with static modality anchors. EDT-Former achieves this alignment without fine-tuning the LLM backbone, enabling highly efficient training.",
"strengths": "1. The paper introduces an innovative solution to a clear limitation of existing Q-Former-style bridges.\n\n2. The results demonstrate state-of-the-art performance across multiple benchmarks.\n\n3. The framework is trained without fine-tuning the LLM backbone, which is efficient.",
"weaknesses": "1. Evaluation is centered on question-answering and property prediction. The method's effectiveness on more challenging generative tasks, such as molecule generation, remains an open and important question.\n\n2. This paper should discuss the difference with more molecular graph-text pretraining frameworks, such as [1,2]. \n\n3. This paper uses the SMILES string to represent the molecular structure, which might not capture the structure of the molecule well. The graph structure or 3D structure could contain more structural information.\n\n[1] Multi-modal Molecule Structure-text Model for Text-based Retrieval and Editing\n\n[2] Advancing Molecular Graph-Text Pre-training via Fine-grained Alignment",
"questions": "Please see weaknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T21:00:43",
"modification_date": "2025-11-12T13:39:53",
"review_url": "https://openreview.net/forum?id=yzwSzhqLpH&noteId=f06aL9FyIy",
"license": "CC BY 4.0"
}
] |
57YfUhcYXd | https://openreview.net/forum?id=57YfUhcYXd | Eliminating Inductive Bias in Reward Models with Information-Theoretic Guidance | 5.5 | 3.5 | [
6,
6,
4,
6
] | [
4,
3,
3,
4
] | 4 | [
"LLM",
"RLHF",
"Reward Hacking",
"Debias"
] | Reward models (RMs) are crucial in reinforcement learning from human feedback (RLHF) to align large language models (LLMs) with human values. However, RM training data is commonly recognized as low-quality, always containing preference conflicts and inductive biases, such as response length or speaking style, which can easily lead to reward overfitting and hacking. A few recent RM debiasing methods either target merely a single specific type of preference bias or only address simple linear bias relations such as Pearson coefficients. To mitigate more complicated inductive bias of reward modeling, inspired by the information bottleneck, we introduce a novel information-theoretic debiasing method called **D**ebiasing via **I**nformation optimization for **R**M (DIR). More specifically, our method trains RMs by maximizing the mutual information (MI) between preference prediction and input response pairs, while minimizing the MI between RM outputs and biased attributes of preference inputs. With the theoretical justification of information theory, DIR can handle different types of bias with more comprehensive non-linear correlations, enlarging its real-world application scenarios. In experiments, we verify the effectiveness of DIR with three types of inductive biases: response length, sycophancy, and format. Based on the numerical results, we discover that DIR can not only effectively diminish target inductive biases but also improve RLHF performances on various benchmarks with better generalization abilities. | probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.) | https://openreview.net/pdf?id=57YfUhcYXd | 2025-09-19T10:52:36 | 4 | [
{
"id": "EGtAa7csX8",
"forum": "57YfUhcYXd",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15332/Reviewer_cNct",
"reviewer_name": "Reviewer_cNct",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper proposes DIR, an information-theoretic framework for debiasing reward models in RLHF. The method maximizes MI between predictions and true preferences while minimizing MI between internal representations and predefined bias attributes. Using data processing inequality and variational bounds (BA and CLUB), the theoretical objective becomes a tractable loss. Experiments on length, sycophancy, and format biases show improvements in both RM metrics and downstream RLHF performance.",
"strengths": "1. The explicit use of data processing inequality to justify representation-level debiasing, combined with dual variational bounds (BA for information retention, CLUB for bias suppression), provides an elegant and principled solution.\n\n2. Experiments cover three diverse bias types with end-to-end assessment (RM performance + downstream PPO policies). Strong ablations on representation choice (Table 6) and hyperparameter $\\lambda$ (Figure 4) validate design decisions.\n\n3. Zero inference overhead and moderate training cost (~33% increase). Demonstrated versatility across bias types suggests broad applicability.",
"weaknesses": "1. Method requires knowing bias types *a priori* and labeling $b_{\\mathrm{rel}}$ for every pair. No mechanism for unsupervised bias discovery limits real-world applicability.\n2. Experiments isolate single biases. Real datasets likely contain concurrent biases (e.g., lengthy + sycophantic responses). Unclear how to extend DIR—multiple debiasing terms with separate $\\lambda$ values? Potential optimization conflicts?\n3. Sycophancy evaluation uses fixed prefix injection (“*Yes, you are right.*”). Real biases are more subtle and contextually integrated, raising generalization concerns.\n4. Unexplored Representation Alternatives: Exclusively uses final hidden state without comparing alternatives (e.g., mean-pooling). Global representations may better capture stylistic/format biases.",
"questions": "1. How would DIR handle concurrent biases? Would you use $\\mathcal{L}\\_{\\text{reward}} + \\sum\\_i \\lambda\\_i \\mathcal{L}\\_{\\text{debias}}^{i}$? What optimization challenges arise from negative interactions between debiasing signals?\n2. Have you studied the architecture sensitivity of $q_{\\psi}$? Could an overly powerful $q_{\\psi}$ discard legitimate preference-informative correlations along with spurious biases?\n3. How do debiased RMs interact with direct alignment methods like DPO? Could altered reward landscapes complicate implicit differentiation?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-07T11:01:18",
"modification_date": "2025-11-12T13:33:46",
"review_url": "https://openreview.net/forum?id=57YfUhcYXd¬eId=EGtAa7csX8",
"license": "CC BY 4.0"
},
{
"id": "CV9yKDrzb8",
"forum": "57YfUhcYXd",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15332/Reviewer_jfKf",
"reviewer_name": "Reviewer_jfKf",
"rating": 6,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 2,
"summary": "The paper proposes an information-theoretical viewpoint on reward modeling and the Bradley-Terry model. Specifically, the proposed method, DIR, focuses on mitigating inductive biases, such as verbosity bias and stylistic biases, by maximizing the mutual information between preference prediction and input-response pairs. By demonstrating the performance of the reward model itself and as a preference proxy in RLHF training, the paper shows that DIR could be an effective debiasing objective for reward modeling.",
"strengths": "1. The paper presents an intuitive yet theoretically reasonable scope of understanding reward modeling as aligning the preference distribution and preference prediction from the reward model.\n2. The benchmark analysis on the biases in the reward model benchmark, RM-Bench, comes before the actual debiasing evaluation of the proposed method, which strengthens the experimental rigor of the paper.\n3. Alongside the well-known length bias, the paper studies multiple types of biases and demonstrates that DIR can be an effective learning objective across different biases.",
"weaknesses": "The main weakness of the paper is in the clarity of writing. The clarity of mathematical notations and experimental details in the paper can be improved. Other points that could either be clarified or stated as weaknesses are listed in the questions. Overall, the clarity in Sections 2 and 3 should be improved for better clarity. While there are multiple cases where the notational consistency/clarity is lacking, these are a few examples:\n- Section 3.1 starts by saying that $\\mathcal{L}\\_\\text{total}$ consists $\\mathcal{L}\\_\\text{pref}$ and $\\mathcal{L}\\_\\text{debias}$, while Equation (12) uses $\\mathcal{L}\\_\\text{reward}$ instead.\n- $\\mathcal{L}_\\text{debias}$ is not explicitly defined in the paper and appears in Equation (12).\n\nThese inconsistencies and missing definitions prevent a clear understanding of the paper, even though the paper's theoretical soundness should be highlighted as its strength.",
"questions": "- Can DIR be expanded to the implicit reward models like direct alignment algorithms?\n- On the official RM-Bench leaderboard, “Skywork-Reward-Llama-3.1-8B-v0.2” (“BT” in Table 5) has mostly higher numbers compared to the results stated in Table 5. For example, “Hard” score for Skywork-Reward-Llama-3.1-8B-v0.2 is 52.6 and 69.3 for “Chat”, while the reported scores in the paper are 42.76 and 64.69, respectively. Given that the numbers on the leaderboard could change the trend in Table 5 (e.g., the overall score of Skywork-Reward-Llama-3.1-8B-v0.2 on the leaderboard is higher than “Ours-10.0”), this part needs clarification.\n- On the RM-Bench scores, why does the “Ours” model experience a notable drop in the “Easy” accuracy? By debiasing, is the model experiencing a trade-off in easy stylistic differentiation?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T19:27:24",
"modification_date": "2025-11-12T13:33:46",
"review_url": "https://openreview.net/forum?id=57YfUhcYXd¬eId=CV9yKDrzb8",
"license": "CC BY 4.0"
},
{
"id": "jOo5yEOa6U",
"forum": "57YfUhcYXd",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15332/Reviewer_6hPr",
"reviewer_name": "Reviewer_6hPr",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The paper introduces a method called DIR (Debiasing via Information Optimization for Reward Models) aimed at mitigating inductive biases, such as response length, sycophancy, and format, that plague reward models in RLHF settings. It frames training as an information-theoretic problem: the method maximises the mutual information (MI) between model preferences and input response pairs, while simultaneously minimising the MI between the model’s internal representation and specified bias attributes. The debiasing objective is realised via an adversarial variational network applying the CLUB MI-estimator to ensure the learned representation is less correlated with bias attributes.",
"strengths": "1. The paper propose a new method and tackles well-documented biases in reward models\n2. the approach provides a structure that could extend to multiple bias types.\n3. The proposed DIR method shows improvements over several baselines",
"weaknesses": "1. The paper presents itself as introducing a “novel information-theoretic framework,” but its core components are repurposed versions of existing methods. The preference loss is simply the standard Bradley-Terry ranking loss, reinterpreted post hoc as a mutual-information maximization objective. The debiasing term also relies on a conventional adversarial setup using the CLUB estimator [1], a technique already established in prior work. Although the implementation is sound and practically useful, it does not represent a genuinely new theoretical contribution.\n2. The paper’s analysis of the key hyperparameter, $\\lambda$, is self-contradictory. Figure 4 claims that performance peaks at $\\lambda = 1$ and drops sharply at $\\lambda = 10$ , calling the latter an “over-correction.” However, Table 5 shows the opposite: the $\\lambda = 10$ model achieves the highest overall and “Hard” subset scores. These results directly conflict, undermining the credibility of the authors’ claims about the optimal debiasing strength and the rigor of their tuning methodology.\n3. Although the results are generally strong, they are not consistently superior across benchmarks. In the Length Bias test, the proposed method’s correlation (0.468) only slightly outperforms the Skywork baseline (0.498), showing minimal real improvement. In the Sycophancy Bias test, the InfoRM baseline even outperforms the proposed method in the 20%/70% adversarial setting (86.6 vs. 85.1). These mixed outcomes weaken the claim of “most consistent and robust performance” and suggest the improvements may be context-dependent rather than universal.\n4. The method’s dependence on a pre-defined and labeled bias attribute ($b$) is a practical limitation. It requires the user to already know what the bias is (e.g., length, sycophancy) and be able to label it for every data pair. 
This makes it useless against unknown or hard-to-quantify biases, a limitation that more \"indirect\" methods like InfoRM (which the paper critiques) [2] would not have.\n\n[1] Club: A contrastive log-ratio upper bound of mutual information.\n[2] Inform: Mitigating reward hacking in rlhf via information-theoretic reward modeling.",
"questions": "1. The DIR framework's primary practical limitation is that it requires a \"pre-defined bias attribute b\". This means a human must identify and label the specific bias (length, format, etc.) they want to remove. How does this method fare against unknown or unlabeled biases, which are common in real-world data?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T15:03:57",
"modification_date": "2025-11-12T13:33:47",
"review_url": "https://openreview.net/forum?id=57YfUhcYXd¬eId=jOo5yEOa6U",
"license": "CC BY 4.0"
},
{
"id": "S7R2XKm1eb",
"forum": "57YfUhcYXd",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission15332/Reviewer_UoY4",
"reviewer_name": "Reviewer_UoY4",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper targets to the issue of inductive bias of reward models. This paper proposes an information-theoretic debiasing method, called DIR, which maximizes the mutual information between preference prediction and input pairs, while minimizing the mutual information between outputs and predefined bias attributes to debiase these biases. Experimental results demonstrate its effectiveness in debiasing biases of length, sycophancy and format, while achieving competitive alignment performance.",
"strengths": "- The paper is well written and clear to read.\n- Denoising reward model is very important for RLHF to improve alignment performance of LLMs.\n- The experimental results are solid, covering three types of biases.",
"weaknesses": "- In the experimental setup, the authors address the biases of length, sycophancy, and format individually. However, in practice, we would expect a reward model to debias all these biases simultaneously. It seems that the paper lacks an experimental setup that assesses and discusses the ability of DIR to mitigate all these biases simultaneously.\n- There are two commonly used benchmarks for preference alignment, i.e., AlpacaEval [1] and MT-Bench [2], which should be included to evaluate the model ability of open-ended and multi-turn generations.\n- I think that a figure to illustrate your methods will help readers intuitively understand your proposed method.\n- The proposed method DIR relies on predefined inductive biases, such as length, sycophancy and format. However, if the reward model has some potential biases that are not intuitive such as length perceived by human, can the proposed DIR handle such scenario?\n\n[1] Length-controlled alpacaeval: A simple way to debias automatic evaluators. \n[2] Judging llm-as-a-judge with mt-bench and chatbot arena.",
"questions": "- Do you compare performance with DPO directly trained using the same preference data?\n- What is your implementation details of the variational network $\\psi$?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-23T22:51:03",
"modification_date": "2025-11-12T13:33:47",
"review_url": "https://openreview.net/forum?id=57YfUhcYXd¬eId=S7R2XKm1eb",
"license": "CC BY 4.0"
}
] | |
ky5iqwZSXI | https://openreview.net/forum?id=ky5iqwZSXI | Reliable Fine-Grained Evaluation of Natural Language Math Proofs | 5 | 4 | [
4,
6,
6,
4
] | [
3,
4,
4,
5
] | 4 | [
"automated proof evaluation; LLM-as-a-judge; LLM-generated math proofs; rubric-guided grading; prompt optimization; expert-annotated proof dataset; evaluator reliability; reward modeling"
] | Recent advances in large language models (LLMs) for mathematical reasoning have largely focused on tasks with easily verifiable final answers while generating and verifying natural language math proofs remains an open challenge. We identify the absence of a reliable, fine-grained evaluator for LLM-generated math proofs as a critical gap.
To address this, we propose a systematic methodology for developing and validating evaluators that assign fine-grained scores on a 0–7 scale. Our approach first constructs a carefully designed, problem-specific marking scheme, and then uses it as a foundation to systematically study other key design choices, including the backbone model, additional context, instruction sets, and evaluation workflows.
To enable this study, we introduce ProofBench, the first expert-annotated dataset of fine-grained proof ratings, spanning 131 problems from major math competitions and 393 LLM-generated solutions (from o3, Gemini 2.5 Pro, and DeepSeek-R1) with expert gradings. Our evaluation shows that a strong reasoning backbone, a detailed marking scheme, and simple ensembling are crucial for high performance. This leads to our best evaluator, ProofGrader, which achieves an RMSE of 1.093 compared to expert grading, significantly outperforming simpler baselines.
Furthermore, to demonstrate its practical utility, we test ProofGrader as a reward model in a best-of-$n$ selection task. At $n=8$, it achieves an average score of 4.05/7, bridging more than 90\% of the performance gap between a naive binary evaluator (2.59) and the human oracle (4.21), underscoring its potential to improve downstream proof generation. | LLMs lack reliable proof evaluators. We introduce ProofBench and a 0–7 methodology; our ProofGrader (marking schemes + ensembling) hits RMSE 1.093 vs experts and lifts best-of-8 to 4.05/7, closing >90% of the gap to a human oracle. | foundation or frontier models, including LLMs | https://openreview.net/pdf?id=ky5iqwZSXI | 2025-09-20T16:56:24 | 4 | [
{
"id": "TAq6T2iuEB",
"forum": "ky5iqwZSXI",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24615/Reviewer_HNs6",
"reviewer_name": "Reviewer_HNs6",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 3,
"presentation": 2,
"summary": "This paper introduces ProofBench, a dataset of 131 competition math problems with 393 expert-graded LLM solutions, and systematically studies design choices for automated proof evaluators. The authors propose ProofGrader, which achieves RMSE of 1.093 against expert scores and demonstrates practical utility in best-of-n selection tasks.",
"strengths": "S1: The paper tackles an important problem for mathematical reasoning research. Reliable proof evaluation is a critical bottleneck for training and assessing LLMs on mathematical reasoning tasks, and the lack of scalable alternatives to human grading or formal verification makes this work timely and valuable.\n\nS2: The systematic methodology using problem-specific marking schemes as an anchor is well-motivated. The two-stage annotation process (generating marking schemes, then grading with them as guidance) provides a principled way to maintain consistency while allowing flexibility for alternative solution approaches.\n\nS3: The experimental design is comprehensive and well-structured. The ablation studies clearly isolate the impact of different design choices (backbone model, context components, instructions, workflows), providing actionable insights about what matters for evaluator performance.\n\nS4: The best-of-n experiment (Section 6) effectively demonstrates practical utility beyond correlation metrics. Showing that ProofGrader closes 90% of the gap between a naive binary evaluator and the human oracle provides evidence of real-world value for downstream applications like RL training.\n\nS5: The decision to use fine-grained (0-7) rather than binary scoring is validated empirically. Figure 2 clearly shows that binary evaluators fail to distinguish among correct solutions, while fine-grained scoring enables effective ranking.",
"weaknesses": "W1: ProofBench is only shown to show predictive power (via MSE on the benchmark) over whether an evaluator is good at grading outputs of three specific models (o3, Gemini 2.5 Pro, DeepSeek-R1), all of which are precisely the models whose proofs are used as the annotated examples for computing the MSE. There is no evaluation of how well ProofGrader generalizes to solutions from weaker models, different model families, or human-written proofs, which limits our understanding of its robustness. The paper should have tested whether evaluators with low MSE on ProofBench have high agreement with human annotators on grading proofs generated by models that are not one of those three models, in order to demonstrate general utility.\n\nW2: Inter-annotator reliability is insufficiently reported. While the paper mentions that two experts underwent calibration and double-scored 20% of items, the actual agreement metrics (e.g., correlation, exact agreement rate, within-1 agreement) are not provided. Since the significance of the contributions hinges on the correctness of the human annotations, this needs to be emphasized in the main text.\n\nW3: Error analysis is absent. The paper does not systematically examine where ProofGrader fails or succeeds, what types of errors it makes (over-crediting vs. under-crediting), or which problem types are most challenging. Understanding failure modes would be valuable for future work and practical deployment.\n\nW4: Computational costs and efficiency are not discussed. For practical deployment, especially in RL training scenarios where evaluators may need to score thousands of candidate solutions, the cost (in terms of API calls, latency, and financial expense) compared to simpler baselines would be relevant information.",
"questions": "Q1. Since the best-of-n experiments are supposed to establish the predictive generalization power of ProofBench for use as a reward model, shouldn’t the ground truth annotations for the proofs used in the experiment be annotated by different people than those used to annotate the examples in ProofBench? A difficulty in designing a benchmark for evaluators is that the bias in the annotations themselves need to be accounted for, i.e, the paper needs to demonstrate that the possibly imperfect annotator preferences used in ProofBench are sufficient for generalization to general aesthetic preferences in the math community (regarding competition style proofs).\n\nQ2. Similarly to Q1, in the best-of-n experiments, shouldn't the set of model generations being evaluated by the candidate evaluators be generated by models that are different to the three specific models used to generate the outputs in the benchmark? Could the authors provide justification why the current methodology is sufficient to demonstrate the general utility of ProofBench for selecting evaluators meant to evaluate generations from other models? Currently, it seems like ProofBench is useful for selecting evaluators for proofs generated by o3, Gemini, and DeepSeek-R1, but no indication of general ability.\n\nQ3. This approach seems very committal to a specific frozen mark scheme. Since the labor intensive annotation process is predicated on a particular MS, what happens if the mark scheme needs to be changed? Do we need to re-annotate each time? Could the authors provide justification why using a particular frozen mark scheme for the annotation process is sufficiently general to avoid re-annotation?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T18:59:11",
"modification_date": "2025-11-12T18:25:36",
"review_url": "https://openreview.net/forum?id=ky5iqwZSXI¬eId=TAq6T2iuEB",
"license": "CC BY 4.0"
},
{
"id": "8bR6jZs7qY",
"forum": "ky5iqwZSXI",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24615/Reviewer_LGPu",
"reviewer_name": "Reviewer_LGPu",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper addresses the challenge of reliably evaluating natural language mathematical proofs generated by large language models. The authors propose a systematic methodology for developing automated evaluators that assign fine-grained scores on a 0-7 scale, rather than binary correct/incorrect judgments. They introduce PROOFBENCH, an expert-annotated dataset containing 393 LLM-generated solutions to 131 competition math problems, along with problem-specific marking schemes. Through systematic experimentation, they develop PROOFGRADER, an ensemble-based evaluator that achieves strong alignment with expert judgments (RMSE of 1.093) and demonstrates practical utility as a reward model for best-of-n selection tasks.",
"strengths": "1 The work tackles an important gap in mathematical proof evaluation with a systematic approach. While building on existing LLM-as-a-judge paradigms, the application to fine-grained mathematical proof evaluation with problem-specific marking schemes is novel.\n2. The experimental design is thorough and methodical. The expert annotation process is well-designed with appropriate quality controls. The systematic ablation studies provide clear insights about what factors matter for evaluator performance.\n3. The paper is well-organized and clearly written. The motivation is compelling, methodology is systematic, and results are presented comprehensively with good visualizations.\n4. Addresses a real bottleneck in mathematical reasoning research. PROOFBENCH provides a valuable resource for the community, and the insights about evaluator design will inform future work. The demonstration of practical utility in best-of-n selection shows real-world applicability.",
"weaknesses": "1. The dataset contains only 393 solutions across 131 problems from competition mathematics. This is relatively small for drawing broad conclusions about evaluator design, and competition problems may not represent the full spectrum of mathematical reasoning tasks.\n2, The approach relies heavily on LLM-generated marking schemes, which could introduce systematic biases or limitations. The quality of these schemes fundamentally constrains the evaluation quality, but this dependency is not thoroughly analyzed.\n3. All experiments use the same expert annotators and marking scheme generation process. It's unclear how well the findings generalize to different mathematical domains, difficulty levels, or evaluation standards.\n4. The paper doesn't compare against other potential evaluation approaches beyond varying LLM configurations. For instance, how does this approach compare to simpler heuristic methods or other structured evaluation frameworks?",
"questions": "1. How sensitive are the results to the quality of the automatically generated marking schemes? Have you conducted experiments with human-written marking schemes for comparison?\n2. How well do these evaluator design principles transfer to other mathematical domains beyond competition problems (e.g., research-level proofs, educational contexts)?\n3. Can you provide more detailed analysis of when and why the evaluator fails? What types of mathematical reasoning or proof structures are most challenging?\n4. What are the computational costs of your best evaluator compared to simpler alternatives? How does this scale with problem complexity?\n5.What is the inter-annotator agreement between your experts, and how does this compare to the agreement between experts and your automated evaluator?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T23:29:33",
"modification_date": "2025-11-12T18:25:36",
"review_url": "https://openreview.net/forum?id=ky5iqwZSXI¬eId=8bR6jZs7qY",
"license": "CC BY 4.0"
},
{
"id": "H0uHn63MJA",
"forum": "ky5iqwZSXI",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24615/Reviewer_zjS9",
"reviewer_name": "Reviewer_zjS9",
"rating": 6,
"confidence": 4,
"soundness": 4,
"contribution": 4,
"presentation": 2,
"summary": "This paper tackles the critical bottleneck of evaluating natural language math proofs from LLMs with PROOFBENCH, a new expert-annotated dataset of LLM-generated proofs graded on a 0–7 scale using detailed marking schemes. Based on a systematic study of evaluator design, they developed PROOFGRADER, an evaluator that achieves high alignment with human experts. Moreover, they show in a Best-of-N task where PROOFGRADER, as a reward model, closes over 90% of the performance gap between a naive binary evaluator and a human oracle.",
"strengths": "1. The paper tackles the crucial and timely problem of scalable, reliable proof evaluation.\n2. PROOFBENCH represents a substantial and meticulous human annotation effort.\n3. The BoN experiments demonstrate the effectiveness of the methods.",
"weaknesses": "1. The paper relies heavily on aggregate metrics (RMSE, etc.). It would be better to include some case studies and error analyses.\n2. The paper mentions MathArena but doesn't adequately compare PROOFBENCH to MathArena's manual annotation efforts.\n3. The backbone models are mainly proprietary models. The analysis would be more comprehensive if it evaluated some open-source models (e.g., Llama and Qwen series) to look into their capabilities as evaluators.",
"questions": "1. Does the evaluator show a \"self-enhancement bias\"? For instance, does the O3-backbone evaluator systematically over-score proofs generated by O3? A heatmap of bias (Generator vs. Evaluator) would be insightful.\n2. The paper shows that strong models (like O3) are strong evaluators. What about the other dynamics? How well do weak models evaluate strong models, and vice versa? Understanding this is important for understanding the robustness of \"LLM-as-a-judge\" and its potential for \"weak to strong generalization\".\n3. Is there a risk that generator-LLMs could hack the evaluator to get a high score?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T18:32:20",
"modification_date": "2025-11-12T18:25:36",
"review_url": "https://openreview.net/forum?id=ky5iqwZSXI¬eId=H0uHn63MJA",
"license": "CC BY 4.0"
},
{
"id": "KrqxuRAjeH",
"forum": "ky5iqwZSXI",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission24615/Reviewer_EDhE",
"reviewer_name": "Reviewer_EDhE",
"rating": 4,
"confidence": 5,
"soundness": 2,
"contribution": 3,
"presentation": 2,
"summary": "This paper evaluates the feasibility and effectiveness of LLMs as judges for the continuous-scale grading of natural language solutions to mathematical problems. The authors present a pipeline for 1) automatically generating grading rubrics, 2) manually verifying LLM-generated solutions, and 3) investigating various aspects of an LLM-as-a-judge system. This process resulted in the `ProofBench` dataset of 393 expertly graded solutions, used for evaluating the backbone model and prompting style of an LLM judge. The work also presents `ProofGrader`: the best-performing combination of a judgment ensemble using the o3 model, with a marking scheme and potential solutions as additional reference information. As a proof of concept for `ProofGrader`'s utility, the authors show that the system can be used as a solution selector in a best-of-8 setting, achieving a score that approaches the human oracle's performance.",
"strengths": "1. The topic of the paper is clearly well-motivated, relevant, timely, impactful, and provides opportunities for further developments in the field\n2. The paper is overall well-written and conveys the high-level idea well.\n3. The authors have investigated several factors affecting the performance of an LLM-as-a-judge, identifying the backbone model and a well-defined marking scheme as the most significant factors for its performance. This provides readers with clear guidance on the information necessary for achieving accurate automatic grading.\n4. This work provides the currently largest dataset of expert-annotated solutions graded on a continuous scale, covering a wide range of challenging recent problems suitable for further LLM-as-a-judge validation.\n5. The best evaluated settings for the LLM-as-a-judge are shown to also correlate with a better performance as a best-of-n selector. This is a very desirable result, showing that the comparisons performed in the analysis are valid on downstream tasks.\n6. The authors provide the prompts necessary for running the LLM-as-a-judge, aiding reproducibility.\n7. The annotator calibration phase is a valuable step for minimizing error and noise in the final dataset.",
"weaknesses": "For this section, I have labelled each comments as either:\n - *Critical*: These weaknesses have significantly impacted the score, and addressing/not addressing them could positively or negatively affect my final recommendation.\n- *Important*: Addressing these weaknesses would, in my opinion, significantly strengthen the paper's quality and/or clarity.\n- *Minor*: These are nitpicks that, while valuable to address, have not impacted the score of this assessment.\n\n1. Reproducibility\n - (Critical) The authors claim `ProofBench` as one of their core contributions. However, despite promising to open-source the dataset upon publication, I see no reason why this was not done as part of the supplementary material. Not providing a core contribution of the paper for peer review significantly hinders the process. Upon release, the authors should also ensure they include an appropriate license, given the nature of the source materials.\n - (Important) The annotation pipeline, described in A.3, is missing a substantial amount of details. Further elaboration can be found in the **Questions** section.\n\n2. Methodology\n - (Critical) The authors have not reported the reliability of their human grading. They describe having performed double-grading on 20% of the solutions but have not reported any inter-annotator agreement statistics. In particular, they mention that they \"adjudicate all flagged disagreements,\" however, it is not clear what threshold for a disagreement constitutes a flag, how often flags occurred, or how they resolved these inconsistencies. The perceived reliability of the remaining 80% of grades is highly dependent on clarifying this process.\n - (Critical) Generating rubrics is an incredibly challenging task in practice. This paper involves an automated system that is difficult to verify without significant manual intervention. Details about how this process was refined are lacking. 
Furthermore, no discussion is presented on how the annotators verified that these rubrics adhered to a standard for high-quality marking schemes. For example, the rubric in Appendix A.6 gives a total of 3 points for initial observations that seem relatively trivial compared to the rest of the proof. While my interpretation of this rubric is a personal opinion, I believe the authors should clarify how they ensured that the rubrics adhere to a high standard of quality, reflecting authentic evaluation practices.\n - (Important) The authors claim that in Section 6 they investigate the judging framework as a reward model. However, the described setting appears to lack a clear, realistic application. In particular:\n\n * If used within a solver's selection system, as described in the paper, such applications often aim to solve a problem without access to a reference solution. In that case, the best result seems to be `o3`, at around 3 average points, which is considerably lower than the human oracle's 4.21.\n * If used as a reward model for training, as proposed in Sections 1 and 7, the presented setups are all too expensive to be scalable to a full RL (or other) pipeline.\n \n The authors should clarify what a potential use case for this result can be.\n - (Minor) The paper makes no distinction between results on undergraduate- and high-school-level problems, and whether this is a relevant factor.\n\n3. Dataset\n - (Important) The dataset consists of problems from 2022-2025 from popular competitions (IMO, USAMO, Putnam). The earlier years precede the knowledge cutoff dates for the models tested. The authors should discuss whether test-set contamination has impacted the results and, if so, how significant the effect is.\n - (Important) The authors claim to have sourced their problems and solutions from official sources. However, to the best of my knowledge, the USAMO and USATST do not publicly release their competition materials. 
The authors should clarify this point in their rebuttal.\n\n4. Writing and Clarity\n - (Minor) The work never explicitly states which configuration constitutes `ProofGrader`. While it can be inferred from the results, it would be best if the authors defined this in the earlier sections of the paper.\n - (Minor) Most models presented in the paper are not cited according to best practices. Standard practice is to cite the official model cards from the providers, if one exists.\n - (Minor) In Section 5.3, the authors refer to an \"ensemble\" of evaluators. This term is usually reserved for a collection of **different** models or algorithms applied to the same task, rather than for multiple samples from a single algorithm.",
"questions": "1. Can the authors address all the concerns listed in the **Weaknesses** section?\n2. The authors describe the marking scheme generation methodology in Section 3.1 and later in Appendix A.3. However, the description lacks sufficient detail. Can the authors clarify:\n - The model families they tested for rubric generation?\n - What the prompts (with and without examples) entailed?\n - How the annotators interacted with the initially generated rubrics, e.g., what were their instructions, how was consensus achieved, and what was the extent of disagreement prior to adjudication?\n - What was the average rubric quality rating in the final iteration?\n - How was the best configuration selected, was it based solely on the average score?\n3. The authors have measured the bias of the graders with respect to the human grades. However, when breaking down by model, [1] and [2] show a clear positive bias toward a model's own solutions. Can the authors report similar metrics and discuss the implications?\n4. In what realistic setting can the authors' framework from Section 6 be applied (refer to comments in W2.3)?\n5. When evaluating different provers, the authors have constrained themselves to using a score-based selection system. How does this compare to using a tournament-style approach?\n6. The best-of-n evaluation was done on 29 selected problems. How were these problems selected and what was their difficulty distribution?\n7. The authors claim in 3.2.3 that Staged Evaluation is \"particularly effective for improving the performance of weaker backbone models.\" This is a very strong claim, given that this observation is only seen for the `o4-mini` model. Can the authors consider running additional experiments to support this statement or temper the claim to reflect the limited evidence?\n8. For 2/3 models in 5.2.2, the *Strict* setting yields more accurate judgement than the *Norm* one. 
Do the authors have any qualitative explanations for this?\n\n## Current rating\n\nI have given this paper a score of **4: Borderline Reject**. The contribution is valid, important, and potentially impactful to the field. However, the lack of transparency on some aspects, particularly reproducibility, prevents me from assigning a higher score. I would be happy to raise my score if the authors address the majority of my concerns during the rebuttal and discussion period.\n\n### References\n\n[1] Dekoninck et al. The Open Proof Corpus: A Large-Scale Study of LLM-Generated Mathematical Proofs. arXiv preprint arXiv:2506.21621, 2025.\n\n[2] Petrov et al. Proof or Bluff? Evaluating LLMs on 2025 USA Math Olympiad. arXiv preprint arXiv:2503.21934, 2025.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-20T23:04:37",
"modification_date": "2025-11-12T18:25:36",
"review_url": "https://openreview.net/forum?id=ky5iqwZSXI¬eId=KrqxuRAjeH",
"license": "CC BY 4.0"
}
] |
kYkfCs4ZAH | https://openreview.net/forum?id=kYkfCs4ZAH | FlexiCodec: A Dynamic Neural Audio Codec for Low Frame Rates | 5.666667 | 3.833333 | [
6,
8,
2,
4,
6,
8
] | [
5,
3,
4,
3,
4,
4
] | 6 | [
"Audio coding",
"neural audio codecs",
"speech language model"
] | Neural audio codecs are foundational to speech language models. They are expected to have low frame rates and decoupled semantic and acoustic information. A lower frame rate codec can reduce the computational cost of speech language models by shortening the sequence length. Recent studies have developed 12.5Hz low-frame-rate audio codecs, but even lower frame rate codecs remain underexplored. We find that a major challenge for very low frame rate tokens is missing semantic information. This paper introduces **FlexiCodec** to address this limitation. FlexiCodec improves semantic preservation with a **dynamic frame rate** approach and introduces a novel architecture featuring an **ASR feature-assisted dual stream** encoding and Transformer bottlenecks.
With dynamic frame rates, it uses fewer frames in information-sparse regions by adaptively merging semantically similar frames.
A dynamic frame rate also allows FlexiCodec to support inference-time **controllable frame rates** between 3Hz and 12.5Hz.
Experiments on **6.25Hz, 8.3Hz and 12.5Hz** average frame rates confirm that FlexiCodec excels over baseline systems in semantic information preservation and delivers a high audio reconstruction quality. We also validate the effectiveness of FlexiCodec in language model-based TTS. Demos are available at: https://flexicodec.github.io. | generative models | https://openreview.net/pdf?id=kYkfCs4ZAH | 2025-09-17T14:34:23 | 6 | [
{
"id": "tLZkJL4mwe",
"forum": "kYkfCs4ZAH",
"review_number": 6,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8557/Reviewer_PDhk",
"reviewer_name": "Reviewer_PDhk",
"rating": 6,
"confidence": 5,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "This paper presents a neural audio codec / tokenizer for English-only speech. The primary novelty is:\n\n- the use of a specialised module to dynamically alter frame rate.\n- the utilization of a pretrained and frozen ASR model as the 'coarse' token of the codec, removing the need for semantic distillation.",
"strengths": "- Writing and presentation is good.\n- Overall technical novelty is incremental, but the changes introduced are worthwhile and justified.\n- Results seem overall fairly good",
"weaknesses": "- The authors justify the focus on frame-rate by arguing that alignment with the token-rate of text is important. However, this claim of importance feels anecdotal rather than well evidenced (although it is plausible).\n- (minor) The choice of 'RVQ-1' as the name for the coarse/semantic token of the stream is confusing, given that it is not produced by an RVQ (rather FSQ). I understand why the authors chose this name as there is precedent elsewhere, but a better name is needed.\n- Justification of the particular pretrained ASR model used for 'RVQ-1' exists, but direct comparison with previous semantic distillation methods is absent. This makes it hard to judge the impact of this change.\n- The authors state that \"We have not applied FSQ for acoustic quantization because FSQ is a single-layer quantization, and we have not discovered a multi-layer FSQ practice in literature.\". There is a residual FSQ formulation available in a paper you already cite - \"Scaling transformers for low-bitrate high-quality speech coding\" by Parker et al.\n- There are many comparable baselines with public checkpoints available that are not included in the reconstruction evaluation (especially in the <0.7kbps) section.\n- It's good that the authors included subjective metrics (albeit dissapointingly only for downstream TTS, not for reconstruction), but the sample size is so small that the results are very weak. The conclusions drawn from these results need to be softened greatly, given that none of the differences are statistically significant.",
"questions": "- It seems 'RVQ-rest' is trained with 24 levels, and then inferenced with 8? Are all results using the truncated RVQ? Why was it trained with more levels in this case? This needs elaboration or justification.\n- The design for 'RVQ-rest' utilises RVQ, but the downstream TTS models work on the continuous embeddings from this part of the bottleneck. What is the motivation for using RVQ in this section if you're not going to use the tokens?\n- How is the FSQ quantizer trained? Is it using straight-through gradient estimates?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-04T00:40:56",
"modification_date": "2025-11-12T12:06:48",
"review_url": "https://openreview.net/forum?id=kYkfCs4ZAH¬eId=tLZkJL4mwe",
"license": "CC BY 4.0"
},
{
"id": "sNTmPk22Hw",
"forum": "kYkfCs4ZAH",
"review_number": 5,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8557/Reviewer_TQuw",
"reviewer_name": "Reviewer_TQuw",
"rating": 8,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces FlexiCodec, a dynamic neural audio codec targeting particularly low frame rates (<12.5Hz) using a variable frame rate instead of a fixed frame rate, providing improvements particularly in semantic information preservation and acoustic quality. \nIt uses an ASR-feature-assisted dual stream architecture to allocate more frames for complex audio segments and fewer to information-sparse subsequences as in silences or long vowels by adaptively merging adjacent frames. \nExperiments show FlexiCodec significantly outperforms baselines in semantic preservation at 6.25Hz and 8.3Hz, while also supporting variable inference time frame rates (from 3 to 12.5Hz) to control potential trade-offs, and strong performance in downstream TTS tasks.",
"strengths": "The paper presents strong empirical results and practical utility, with appropriate comparisons by e.g. retraining very recent work for lower frame rates. Previous work has primarily focused on lowering frame rates to 12.5Hz but not below. \n- A core contribution is the dynamic frame rate mechanism, which (expectedly) primarily improves semantic preservation rather than acoustic representations\n - Experiments convincingly demonstrate that this dynamic approach significantly improves semantic preservation: compare for example at 6.25Hz, a 26% relative WER reduction compared to a fixed-rate variant, and compared to recent work like DualCodec (Li et al., 2025) retrained for 6.25Hz improvements to 4.15% WER from 31.5% WER\n - This variable frame rate at inference (3-12.5Hz) is also shown to provide significant speedups for downstream TTS with reasonable tradeoffs in performance\n- The novel ASR feature-assisted dual-stream architecture is also a strength - using features from a pre-trained ASR model, optimized for text prediction, for better semantic information than standard SSL features as validated by ablation studies in the appendix",
"weaknesses": "- A limitation is that the decoder and downstream NAR models cannot operate directly on the variable-rate tokens: tokens must first be upsampled back to a 12.5Hz sequence via frame repetition with a (relatively) large 100M Frame Unmerging Transformer, negating some of the efficiency benefits for the synthesis stage. The Frame Unmerging Transformer, accounts for 100M of the models 216M trainable parameters\n- The presented model relies on large pre-trained ASR model (a frozen 230M SenseVoice model) for the semantic features that guide the merging; other models are not compared, so it is not necessarily clear how dependent performance is on the quality and properties of this model or how generalizeable it would be to other languages or domains with weaker models",
"questions": "- The Frame Unmerging Module seems relatively expensive for its task (upsampling repeated frames). Could a more efficient upsampling architecture (like a lightweight convolutional upsampler) potentially achieve similar acoustic quality? \n- Token merging is guided by the pretrained semantic features from the pretrained ASR models. The paper's notes on L430 that acoustic and semantic information density could potentially be misaligned. Could you say more about this potential misalignment and whether for example a merging criterion based on the acoustic stream could mitigate this?\n- The paper notes in the limitations the model is not streaming-capable as it operates on full sequences, but that adaptations for streaming are technically. Frame merging scans from left to right to find \"maximal contiguous segments\", implying a need to look ahead. How would this strategy be adapted for a low-latency streaming implementation without or only a limited look-ahead?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T13:40:00",
"modification_date": "2025-11-12T12:06:49",
"review_url": "https://openreview.net/forum?id=kYkfCs4ZAH¬eId=sNTmPk22Hw",
"license": "CC BY 4.0"
},
{
"id": "SxH50TeiZF",
"forum": "kYkfCs4ZAH",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8557/Reviewer_URnt",
"reviewer_name": "Reviewer_URnt",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper introduces FlexiCodec, a neural audio codec designed to operate at very low and controllable frame rates. The core contribution is a dynamic frame rate mechanism that adaptively merges semantically similar frames, guided by features from a pre-trained ASR model. This is implemented within a dual-stream architecture (semantic and acoustic) that utilizes Transformer-based modules for merging and unmerging frames. The authors demonstrate that this approach improves the preservation of semantic information. The paper also validates FlexiCodec's effectiveness in a downstream TTS application, showing competitive audio quality with substantial inference speedups in the autoregressive stage.",
"strengths": "1. The idea of using pre-trained ASR features to guide the merging of semantically similar frames is intuitive and well-executed. The results in Figure 3a, showing a dramatic improvement in RVQ-1 WER at 6.25Hz, strongly validate this approach's effectiveness in preserving semantic content on in-domain data.\n2. The design allows for a flexible trade-off between semantic quality and sequence length at inference time by simply adjusting the similarity threshold. This is a highly practical feature for applications with varying computational constraints.\n3. The authors conduct a comprehensive set of experiments for their chosen domain, including detailed ablations of the dynamic rate mechanism (Tables 3 & 4), comparisons with numerous existing codecs, and validation on two distinct downstream tasks (TTS and audio understanding).",
"weaknesses": "1. The abstract claims, \"We find that a major challenge for very low frame rate tokens is missing semantic information\". I do not fully agree. Recent work on syllabic / dynamic units, specifically Sylber and SyllableLM (Cho et al., 2024, Baade et al., 2024), showed that at around 6-8 Hz you can still carry the linguistic sequence reasonably well, and what starts to go missing is acoustics/prosody/fine timing, not semantics. Figure 4 of the paper actually supports a syllable-like view: FlexiCodec emits about half the phoneme rate, i.e., around syllabic granularity. The authors should tone down this claim and provide a more nuanced motivation that acknowledges that while fixed downsampling can lose transient phonetic details, the main trade-off at very low rates is often acoustic fidelity vs. semantic representation.\n2. The ASR encoder (Sense Voice-Small) was trained on 300k hours of data , and FlexiCodec itself is trained on Librilight-Large (54k hours of audiobooks). The primary evaluation is on LibriSpeech-test-clean, which is also an audiobook dataset. This creates a risk that the excellent WER results are due to the ASR features being highly specialized for clean, read English speech. The claims of superior semantic preservation would be far more compelling if they were supported by an evaluation on an out-of-domain (OOD) dataset, for instance, a corpus of spontaneous or conversational speech (Emilia). This would test whether the ASR-guided merging generalizes beyond the training domain.\n3. The full FlexiCodec model has 216M trainable parameters, with the Frame Unmerging Module alone containing a 100M parameter Transformer. This is a substantial model. While the paper provides Real-Time Factor (RTF) for the downstream TTS task, it omits the RTF for the codec's own encoding and decoding process. This information is critical for assessing the model's practical usability. 
A model that is fast for downstream tasks but slow to encode/decode may have limited applications.\n4. Table 3 says: removing dynamic frame rate at 6.25 Hz increases RVQ1 WER and probing WER. That’s good, but this ablation is entangled with (a) ASR features, (b) FSQ, (c) transformer smoothing. Right now we cannot tell whether: 1) ASR features alone at 6.25 Hz already close most of the gap; 2) FSQ is the key piece at low rate; 3) or dynamic merging is the actual differentiator. I would suggest providing a factorial ablation: 1) DualCodec-style SSL features, fixed 6.25 Hz; 2) ASR features, fixed 6.25 Hz; 3) ASR features, dynamic 6.25 Hz; 4) FSQ. That way, we can see the incremental lift of each choice.\n5. The paper states: “To our best knowledge, it is the one of first neural audio codecs under 10Hz… and the first work to explore dynamic frame rate on low-frame-rate neural audio codecs.” But there are concurrent <= 10 Hz speech-token systems (TaDiCodec, TASTE), the authors mention them later, and several dynamic-rate codecs, though at higher base rates. This should be toned down.",
"questions": "See Weaknesses",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T13:11:01",
"modification_date": "2025-11-12T12:06:49",
"review_url": "https://openreview.net/forum?id=kYkfCs4ZAH¬eId=SxH50TeiZF",
"license": "CC BY 4.0"
},
{
"id": "MpGyVTnTyF",
"forum": "kYkfCs4ZAH",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8557/Reviewer_VFuy",
"reviewer_name": "Reviewer_VFuy",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper employs a dynamic neural audio codec, which adaptively merges segments of varying lengths according to different speaking rates to achieve efficient information compression.",
"strengths": "The use of ASR features as auxiliary information is well-motivated and enables more effective compression. This design allows the model to merge segments corresponding to the same phoneme, thereby shortening the overall sequence length while retaining essential information.",
"weaknesses": "I am not entirely certain about my understanding of the implementation. Does each token need to maintain an additional length information field? If so, should this length information also be considered part of the compressed representation, since it seems necessary for accurate audio reconstruction? Clarifying this design choice and its impact on bitrate or compression ratio would strengthen the paper.",
"questions": "How does the proposed method handle fast speech scenarios, where phonemes or syllables occur at very high rates? Can the model still achieve low compression rates while maintaining intelligibility and reconstruction quality in such cases? A discussion or experiment addressing this would be valuable.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T02:13:33",
"modification_date": "2025-11-12T12:06:50",
"review_url": "https://openreview.net/forum?id=kYkfCs4ZAH¬eId=MpGyVTnTyF",
"license": "CC BY 4.0"
},
{
"id": "ElsBOy1K0S",
"forum": "kYkfCs4ZAH",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8557/Reviewer_f4v8",
"reviewer_name": "Reviewer_f4v8",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper introduces FlexiCodec, a novel dynamic-rate neural audio codec that successfully addresses the critical problem of semantic information loss at very low frame rates. Its core ideas are innovative and well-executed, specifically the use of a pre-trained ASR model to guide adaptive frame merging and the implementation of controllable frame rates. Supported by extensive and rigorous experiments against strong baselines, the work presents a compelling case for its effectiveness and makes a significant contribution to the field of low-bitrate speech tokenization for language models.",
"strengths": "1. The core contribution of a dynamic and controllable frame rate is highly novel and effectively solves a well-motivated problem. The adaptive allocation of temporal resolution based on phonetic complexity is an elegant and powerful approach to preserving semantics.\n\n2. The architecture is thoughtfully designed, cleverly using a pre-trained ASR model for the dual purpose of providing semantic features and guiding the frame merging process. The inclusion of Transformer modules for refinement is also a crucial detail for ensuring high quality.\n\n3. The evaluation is exceptionally thorough and convincing. The authors compare against fairly retrained baselines, conduct extensive ablation studies that validate key design choices, and demonstrate practical utility in downstream TTS and audio understanding tasks.",
"weaknesses": "1. The model's evaluation is confined to English, and its reliance on a language-specific ASR model raises concerns about its generalizability to multilingual settings without significant additional effort or resources.\n\n2. A more detailed analysis of the codec's own computational overhead and latency during encoding/decoding would be beneficial to fully assess its efficiency.\n\n3. The frame merging process relies on a simple, greedy left-to-right algorithm. While empirically effective, it may not be globally optimal, and a discussion of more sophisticated segmentation strategies could have strengthened the work..",
"questions": "N/A",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T22:54:35",
"modification_date": "2025-11-12T12:06:51",
"review_url": "https://openreview.net/forum?id=kYkfCs4ZAH¬eId=ElsBOy1K0S",
"license": "CC BY 4.0"
},
{
"id": "ls00kcW9Yc",
"forum": "kYkfCs4ZAH",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission8557/Reviewer_U5qj",
"reviewer_name": "Reviewer_U5qj",
"rating": 8,
"confidence": 4,
"soundness": 4,
"contribution": 3,
"presentation": 3,
"summary": "They proposes a new codec with the following innovations:\n1. frame merging based on ASR feature similarity, which leads to very low framerate codes\n2. ASR feature condition, for frame merging, and also for RVQ-1\n\nExtensive experiments show that FlexiCodec achieves good semantic and acoustic information preservation, as well as good performance in downstream NCLM-based TTS task.",
"strengths": "1. Good angle: enhancing semantic info and representation, as well as low framerate compression are very important tasks\n2. Good approach: leverage ASR features is probably the most straightforward approach to enhancing semantic info. Leveraging ASR features similarity for merging has a byproduct which is that we could adjust the compression rate based on the latency requirement\n3. Very extensive and rigorous experiments: the authors not only did ablation studies on most design choices, standard reconstruction-based eval on WER, and acoustic info, but also shown that the alignment between dynamic merging and phonemes, as well as performance on downstream tasks TTS and audio understanding",
"weaknesses": "1. XYcodec also uses ASR features, although in a slightly different way - the ASR module is finetuned during training with transcript - which will likely require more resources. Although in table 5 XYcodec is compared, I'd like to see more discussion on comparing XYcodec and FlexiCodec\n\n2. the use of ASR features put FlexiCodec in a different category than most codec models, because it requires supervised signal. It would enhance the paper if the author can show that English-ASR trained model can also do well on unseen languages.",
"questions": "Why do we use FSQ for RVQ-1, as opposed to VQ?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-26T05:09:18",
"modification_date": "2025-11-12T12:06:51",
"review_url": "https://openreview.net/forum?id=kYkfCs4ZAH¬eId=ls00kcW9Yc",
"license": "CC BY 4.0"
}
] | |
t9cOXsdpKg | https://openreview.net/forum?id=t9cOXsdpKg | What Matters in Deep Learning for Time Series Forecasting? | 3.333333 | 3.333333 | [
4,
2,
4
] | [
4,
3,
3
] | 3 | [
"time series forecasting",
"architecture design",
"deep learning"
] | Deep learning models have grown increasingly popular in time series applications. However, the large quantity of newly proposed architectures, together with often contradictory empirical results, makes it difficult to assess which components contribute significantly to final performance. We aim to make sense of the current design space of deep learning architectures for time series forecasting by discussing the design dimensions and trade-offs that can explain, often unexpected, observed results. We discuss the necessity of grounding model design on principles for forecasting groups of time series and how such principles can be applied to current models. In particular, we assess how concepts such as locality and globality apply to recent forecasting architectures. We show that accounting for these aspects can be more relevant for achieving accurate results than adopting specific sequence modeling layers and that simple, well-designed forecasting architectures can often match the state of the art. We discuss how overlooked implementation details in existing architectures (1) fundamentally change the class of the resulting forecasting method and (2) drastically affect the observed empirical results. Our results call for rethinking current faulty benchmarking practices and for the need to focus on the foundational aspects of the forecasting problem when designing neural network architectures. As a step in this direction, we also propose an auxiliary forecasting model card, i.e., a template with a set of fields to characterize existing and new forecasting architectures based on key design choices. | learning on time series and dynamical systems | https://openreview.net/pdf?id=t9cOXsdpKg | 2025-09-19T22:25:22 | 3 | [
{
"id": "GTcWDCw15M",
"forum": "t9cOXsdpKg",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18911/Reviewer_eXVU",
"reviewer_name": "Reviewer_eXVU",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "The paper examines why deep learning architectures for time series forecasting yield inconsistent and often contradictory results. The authors aim to disentangle the factors influencing model performance and to identify which design elements truly matter in building effective forecasting systems.\n\nThe study employs a computational and experimental methodology, combining systematic benchmarking, empirical analysis, and architectural deconstruction. It introduces a framework for analyzing models along four key design dimensions: 1) Model configuration (local, global, hybrid), 2) Preprocessing and exogenous variables, 3) Temporal processing, and 4) Spatial processing.\n\nThrough controlled experiments on established benchmarks (Electricity, Weather, Traffic, Solar), the authors compare well-known models such as PatchTST, DLinear, TimeMixer, and Crossformer against a streamlined reference architecture designed to isolate the effects of specific design choices. Key findings include: 1) Many observed performance differences stem from overlooked implementation details—not from architectural innovation. 2) Global or hybrid models, when well-designed, can match or outperform complex state-of-the-art systems. 3) Exogenous variable inclusion and consistent preprocessing have a greater effect on performance than model type. 4) Spatial attention mechanisms contribute little to long-horizon forecasting accuracy.",
"strengths": "This paper offers a meta-analytical and diagnostic contribution rather than a new predictive model. Its novelty lies in articulating a unified conceptual framework for analyzing deep time series forecasting architectures and demonstrating that benchmarking inconsistencies, not model innovation, explain many reported performance gains. The introduction of a forecasting model card is a valuable proposal for standardizing model documentation, enhancing reproducibility and interpretability across studies.",
"weaknesses": "- While comprehensive, the study focuses solely on deterministic point forecasting. This leaves out probabilistic and uncertainty-aware approaches, which are central to modern time series applications. The authors acknowledge this but could have elaborated on how their findings generalize to probabilistic settings.\n\n- Although the paper references major forecasting works, it under-engages with recent multimodal and foundation time series models (e.g., TFT, Chronos, pretrained time-series transformers) that might challenge or nuance its conclusions about architecture complexity.\n\n- Most comparisons are run with almost the same look-back window (W) and forecast horizon (H). In fact, they use W=96 for most tables (one table uses W=336, Solar excluded) and H=96 almost always. If we widen the settings (e.g., W=336–720; H=192–336), the relative strengths of architectures can change, so current conclusions may be setting-dependent.\n\n- For spatial models, the authors explicitly shrink W to 96 “to keep costs manageable,” and then conclude spatial attention adds limited value. But some cross-series patterns can only emerge with longer windows/horizons; the constraint itself may handicap spatial operators.",
"questions": "- If we sweep W ∈ {96, 336, 720} and H ∈ {96, 192, 336}, do the two core claims still hold: (i) simple models match SOTA, and (ii) spatial attention helps little? Where do rankings flip as context grows? (This matters because current runs mostly fix W=96 and H=96.)\n\n- Under probabilistic evaluation, do simpler models still lead? If not, the paper’s guidance should be framed as point-forecast-specific.\n\n\n- Model configuration (Local, Global, Hybrid) represents a core design choice that directly affects model behavior and capacity. Why do the authors argue that configuration effects should be factored out rather than treated as part of each model’s intended design?\nWould controlling for configuration risk removing meaningful aspects of a model’s inductive bias and thereby alter its intended behavior?\n\n- Table 1 compares models under Hybrid and Global setups, yet the paper does not clearly explain how each version was implemented.\nWhat exactly was modified in each model to create the “Global” or “Hybrid” configuration (e.g., were per-series normalization parameters or embeddings added/removed)? 
Providing this procedural detail would make the comparison reproducible and easier to interpret.\n\n- Is the proposed way of constructing a Hybrid model (shared parameters plus per-series components) a general framework that can be applied to any architecture, or is it specific to certain models like TimeMixer and Crossformer?\nClarifying this would help readers understand whether Hybrid configuration is a standardized recipe or an ad-hoc adjustment.\n\n- Could similar experiments be performed under a Local configuration (one model per time series) for at least a subset of the deep models?\nIf not, could the authors discuss the practical or computational reasons preventing this?\nSuch results would help establish a complete Local–Global–Hybrid comparison.\n\n- Linear baselines (e.g., Linear, DLinear) can in principle be trained under different configurations.\nIs it feasible to evaluate these models under Hybrid or Global settings as well?\nIncluding these variants could strengthen the paper’s conclusions by showing whether configuration effects are consistent across both deep and linear models.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T17:55:49",
"modification_date": "2025-11-12T15:01:40",
"review_url": "https://openreview.net/forum?id=t9cOXsdpKg¬eId=GTcWDCw15M",
"license": "CC BY 4.0"
},
{
"id": "3JQeH6ijad",
"forum": "t9cOXsdpKg",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18911/Reviewer_HBak",
"reviewer_name": "Reviewer_HBak",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 1,
"presentation": 2,
"summary": "This paper empirically shows that implementation details (local/global configuration, preprocessing, covariates) have larger impact than architecture choice (Transformer vs MLP) in time series forecasting. Simple baselines match SOTA when properly configured, exposing inconsistent benchmarking practices across recent work.\nContribution: No novel methods—purely empirical analysis exposing benchmarking flaws. Proposes a \"model card\" template to standardize future comparisons.",
"strengths": "- Timely and important: Addresses fundamental benchmarking issues affecting the entire time series forecasting community.\n- Rigorous empirical work: Comprehensive ablation studies with controlled comparisons across multiple design dimensions.\n- Actionable template: The forecasting model card could standardize future research and improve reproducibility.",
"weaknesses": "- Given the paper's broad claims about deep learning for time series forecasting, the experimental scope (4 datasets, long-range forecasting only, no probabilistic forecasting) seems insufficient to support such general conclusions.\n- While experienced practitioners may anticipate some findings (e.g., that preprocessing matters), the systematic quantification of these effects is valuable. However, the paper lacks surprising insights that would significantly advance our understanding.\n- The paper is more like a position-track paper, or even a benchmark-track paper, than a main track paper. It lacks theoretical insights, and it only raises issues without providing solutions. (I acknowledge that revealing an important issue is important to the community but the contribution of this paper slightly diverges from what we expect for a main track paper).\n- The usefulness of the proposed 'model card' template is uncertain.",
"questions": "- How confident are you that these findings generalize to short-term forecasting, probabilistic forecasting, and other domains (e.g., irregular time series, multivariate forecasting with true cross-variable dependencies)?\n- Beyond diagnosing problems and proposing the model card template, what concrete steps do you think should the community take?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T17:01:22",
"modification_date": "2025-11-12T15:01:40",
"review_url": "https://openreview.net/forum?id=t9cOXsdpKg¬eId=3JQeH6ijad",
"license": "CC BY 4.0"
},
{
"id": "B59XIJUZAo",
"forum": "t9cOXsdpKg",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission18911/Reviewer_uehw",
"reviewer_name": "Reviewer_uehw",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 2,
"presentation": 3,
"summary": "The authors classifies design choices of time series forecasting models, finding that overlooked details might change the class of forecasting method and have an impact on experiment results. The authors call for future work to use an auxiliary forecasting model card for key design choices. The authors find that, (1) channel hybrid/global impacts model performance; (2) preprocessing would have an impact on time series benchmarkds; (3) no single model outperforms other models, questioning whether temporal model designing are important; (4) show that some spacial model design would give similar results, thus questioning the importance of spacial design.",
"strengths": "1. The paper calls for rethinking the benchmarks of time series forecasting domain, which I recognize is indeed very necessary and very important. \n2. The authors calls for better understanding of architecture's designing space, which might be a method to solve the phenomena that time series forecasting community has been making little progress in the past years.",
"weaknesses": "1. How are you sure that it's the `model card` rather than the `dataset and benchmarks` that have gone wrong? **Imagine that the CV community are using MNIST rather than CIFAR, ImageNet or other datasets, perhaps researchers could also publish hundreds of papers per year proposing all kinds of CNN/Transformer designs persuing $0.1\\%$ improvement on MNIST**. \"Oh, my method classifies MNIST better than existing sota\". **In that case, you could also do experiments and find \"hey, perhaps using vit is similar as convnets, perhaps swinTF is similar to ViT\"**. In this case, it is **rethinking, reusing, retargeting datasets and benchmarks** that would help with the problem, rather than making some model cards. Actually, very recently there has been work implying that some datasets for TSF might have gone saturated, for example (https://www.arxiv.org/abs/2510.02729; this paper is online Octobor this year and does not count to my down-rating your paper, but it contributes to my argument that perhaps it's the dataset and benchmarks that have gone wrong.). What's your opinion on this?\n2. There have been previous calls for rethinking and utilizing better and more robust time series forecasting and benchmarking. For example, NeurIPS 24 time series in the age of large models workshop Invited Talk by Christoph Bergmeir - Fundamental limitations of foundational forecasting models: The need for multimodality and rigorous evaluation (https://cbergmeir.com/talks/neurips2024/), and also some papers (for example, https://arxiv.org/abs/2502.14045). More recently, some researchers also raise concerns like the time series benchmarks have been saturated. (see weakness 1) **Perhaps the ultimate errors appear in the dataset and benchmarks we are using**, **not in the methods.** Of-course I'm not saying that the methods we propose are fine: saturated datasets and benchmarks might be misleading, resulting in not-that-effective methods. 
I'm saying that perhaps the dataset issues should be solved first.",
"questions": "Because a part of this paper seems similar to position paper, I would ask some related questions on behalf of a position paper reviewer, as listed:\n\nActually I agree that the time series forecasting area has gone wrong in the past several years. Perhaps we should stop overfitting those small simple naive datasets. Do you have some suggestions, advices or opinions on these?\n\nI'm looking forward to further discussions, and I'm potentially willing to increase my score.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-15T15:07:51",
"modification_date": "2025-11-12T15:01:41",
"review_url": "https://openreview.net/forum?id=t9cOXsdpKg¬eId=B59XIJUZAo",
"license": "CC BY 4.0"
}
] | |
c1jWNZ1Zqg | https://openreview.net/forum?id=c1jWNZ1Zqg | Variational Inference for Cyclic Learning | 6.666667 | 3.333333 | [
4,
8,
8
] | [
4,
2,
4
] | 3 | [
"Cyclic Learning",
"Self-supervised Learning"
] | Cyclic learning, which involves training with pairs of inverse tasks and utilizes cycle-consistency in the design of loss functions, has emerged as a powerful paradigm for weakly-supervised learning. However, its potential remains under-explored due to the current methods’ narrow focus on domain-specific implementations.
In this work, we develop generalized solutions for both pairwise cycle-consistent tasks and self-cycle-consistent tasks. By formulating cross-domain mappings as conditional probability functions, we reformulate the cycle-consistency objective as an evidence lower bound optimization problem via variational inference. Based on this formulation, we further propose two training strategies for arbitrary cyclic learning tasks: single-step optimization and alternating optimization.
Our framework demonstrates broad applicability across diverse tasks. In unpaired image translation, it not only provides a theoretical justification for CycleGAN but also leads to CycleGN—a competitive GAN-free alternative. For unsupervised tracking, CycleTrack and CycleTrack-EM achieve state-of-the-art performance on multiple benchmarks.
This work establishes the theoretical foundations of cyclic learning and offers a general paradigm for future research. | learning theory | https://openreview.net/pdf?id=c1jWNZ1Zqg | 2025-09-20T00:01:14 | 3 | [
{
"id": "YSmWdo0wia",
"forum": "c1jWNZ1Zqg",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19614/Reviewer_uEwd",
"reviewer_name": "Reviewer_uEwd",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "This paper presents a general framework for training cyclic learning models, built on single-step and alternating optimization strategies. The framework's broad utility is shown in two areas: for unpaired image translation, it not only provides a theoretical basis for CycleGAN but also produces CycleGN and CycleTrack, a competitive model that doesn't require GANs. Overall, this work lays the theoretical groundwork for cyclic learning and offers a universal approach for subsequent research.",
"strengths": "The paper presents a method (CycleGN) that avoids the bottleneck of the previously proposed cycle methods, like unstable adversarial optimization (GAN-style). This enables competitive results without explicitly using discriminators. The paper is well-structured and easy to follow.",
"weaknesses": "1) The performance of the *CycleGN EM-based* approach was found to be *inconsistent across tasks* - showing stable results in some settings but degraded performance in others compared to CycleGAN (single-step with GAN). This indicates that method still lack a unified mechanism to fully remove the D_{KL} surrogate across different problem domains.\n\n \n2) A issue remains where models can achieve nearly perfect while the *intermediate translation remains unrealistic. This is an intrinsic limitation of cyclic training.",
"questions": "**Q1** : The connection between the proposed EM-style framework and the classical EM algorithm remains unclear. \n In the traditional EM formulation, the objective is to maximize the data likelihood. \n Could the authors clarify how optimizing the cycle-consistency loss ($D_{cyc}$) in their setting corresponds to maximizing the likelihood? \n Is it correct to interpret the cycle loss as a *surrogate reconstruction objective* that implicitly optimizes the likelihood function, similar to the role of the reconstruction term in variational inference?\n\n \n **Q2** : Given the absence of explicit $D_{KL}$ control in the EM method, are there any theoretical or empirical evaluations confirming that the generated distributions $\\mathcal{X}$ and $\\mathcal{Y}$ approach the true priors, rather than merely achieving a cyclically consistent but unrealistic local optimum?\n \n\n**Q3** : Given the observed instability of results across different tasks, could the authors identify specific problem types or conditions under which the proposed EM-based framework is expected to outperform standard GAN-based models? \n What are the authors’ insights regarding the comparative advantages of their approach? \n In other words, what would constitute the \\textit{ideal use case} where this framework would be preferable to a conventional GAN setup?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-05T03:19:19",
"modification_date": "2025-11-12T15:11:10",
"review_url": "https://openreview.net/forum?id=c1jWNZ1Zqg¬eId=YSmWdo0wia",
"license": "CC BY 4.0"
},
{
"id": "axbSaylRkP",
"forum": "c1jWNZ1Zqg",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19614/Reviewer_dfXf",
"reviewer_name": "Reviewer_dfXf",
"rating": 8,
"confidence": 2,
"soundness": 4,
"contribution": 3,
"presentation": 4,
"summary": "This paper proposes a unified probabilistic framework to generalize cyclic learning, moving beyond the ad-hoc, task-specific implementations that currently dominate the field. The authors identify that while cycle-consistency is a powerful tool for weakly-supervised learning, it lacks a common theoretical foundation. To address this, they reformulate the cycle-consistency objective as a variational inference (VI) problem. The core of their approach is to model the cross-domain mappings (e.g., $A \\rightarrow B$ and $B \\rightarrow A$) as conditional probability functions and treat the intermediate generated data as latent variables. This allows them to re-cast the training objective as the optimization of an Evidence Lower Bound (ELBO) on the data log-likelihood.\n\nFrom this single theoretical framework, the authors derive two distinct and general optimization strategies. The first is a \"VAE-style\" single-step loss that optimizes the full objective at once, which the authors show provides a new theoretical justification for the success of architectures like CycleGAN. The second is an \"EM-style\" alternating optimization algorithm that iteratively updates the forward and backward mappings, avoiding the need for an explicit KL divergence approximation (like a GAN's discriminator). The framework's effectiveness is demonstrated on two very different tasks: unpaired image translation, where their proposed GAN-free \"CycleGN\" is competitive with CycleGAN, and unsupervised visual object tracking, where their \"CycleTrack\" and \"CycleTrack-EM\" models establish a new state-of-the-art on multiple benchmarks.",
"strengths": "- Novel Theoretical Contribution: Its primary strength is the novel and elegant reformulation of cycle-consistency as a variational inference (VI) problem. This connects a widely-used heuristic to fundamental probabilistic principles (like ELBO maximization, VAEs, and the EM algorithm) for the first time.\n\n- Generalization: The framework is highly general, providing a unified theory for both paired cyclic tasks (like image translation) and self-cyclic tasks (like video tracking), which were previously treated with separate, task-specific methods.",
"weaknesses": "- Analysis of EM-Style Failure Modes: The paper honestly presents failure cases (Fig. 4a) where the EM-based CycleGN achieves good cycle-reconstruction but produces poor-quality intermediate \"fake\" images. This suggests the model has learned an \"incorrect mapping\" that satisfies $g(f(x)) \\approx x$ but where $f(x)$ is not a faithful member of the target domain $Y$. This is a crucial finding and a known risk of EM-style approaches converging to a local optimum. The paper would be improved by a deeper analysis of why this happens in the VI framework. Is it an inherent instability of the alternating optimization? Or does it confirm that the EM approach, by \"lacking explicit constraints on latent variables\" (Section 2.1), is more vulnerable to this specific failure mode than the single-step method, which explicitly constrains the intermediate variable with $D_{sim}$?\n\n- Limited Scope of Image Translation Experiments: The validation of CycleGN is performed only on the Cityscapes (labels $\\leftrightarrow$ photo) dataset. This is a highly structured translation task. The original CycleGAN paper demonstrated its robustness on much more \"unstructured\" and challenging tasks, such as horse $\\rightarrow$ zebra or style transfer (Monet $\\rightarrow$ photo). To compellingly claim CycleGN is a general, competitive alternative to CycleGAN, it should be tested on these more diverse and difficult translation tasks. It is possible the EM-style approach works well for structured tasks but struggles with more unconstrained mappings where a GAN's distributional matching is essential.",
"questions": "Regarding the CycleGN failure case in Figure 4a (where reconstruction is good but intermediate images are poor), could you elaborate on the cause? Is this a local optimum that is inherent to the alternating EM optimization, or does this failure mode confirm that the EM approach is more vulnerable to \"cheating\" precisely because it lacks the explicit $D_{sim}$ (distributional) constraint on the latent variable that the single-step method has?\n\nThe claim that CycleGN is a general, competitive alternative to CycleGAN would be significantly strengthened by testing it on more unstructured translation tasks (e.g., horse $\\leftrightarrow$ zebra, style transfer). Have you performed experiments on such tasks? How does the EM-style approach perform in these more unconstrained settings where the GAN-based $D_{sim}$ term is known to be critical?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T09:27:59",
"modification_date": "2025-11-12T15:11:10",
"review_url": "https://openreview.net/forum?id=c1jWNZ1Zqg¬eId=axbSaylRkP",
"license": "CC BY 4.0"
},
{
"id": "XA25CLIpyd",
"forum": "c1jWNZ1Zqg",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission19614/Reviewer_w19x",
"reviewer_name": "Reviewer_w19x",
"rating": 8,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This paper has a good intuition to use the latent variables bridging the two domains. They summarize the success of CycleGan by modeling the two terms of reconstruction and distribution alignment and propose several multi-variants like “CycleGN” (GAN-free) and CycleTrack / CycleTrack-EM and achieve SOTA on unsupervised visual tracking tasks.",
"strengths": "Sterngeths:\n\n\nThe paper establishes a framework to broad the cyclin-style loss functions in computer vision tasks. The paper has a clear concept by understanding the cycle term as reconstruction and the GAM term as the KL surrogate for making single-step optimization. Their proposed methods help address the failure conditions for the CycleGAN and are useful in the visual tracking tasks.",
"weaknesses": "Weakness:\n\n\n1 Overall the math deductions are good, but there are many typos and rigor issues, e.g.: In eq.3, where does the p_{data} come from?; Extra parenthesis in Eq. (16); what are the abbreviations IoU for in Eq. (17)? Should it be argmax based on the results in the table 3-4?\n\n\n2 Figure 2 and Figure 3 are not clear enough to be understood. \n\n\n3 CycleGN is close but generally behind CycleGAN in the more challenging direction (photo→labels), suggesting that the Dsim/EM recipe doesn’t yet match a well-trained discriminator as a KL surrogate.",
"questions": "Questions:\n\n\n1 What are the abbreviations IoU for in Eq. (17)? Should it be argmax based on the results in the table 3-4?\n\n\n2 Could you explain in more detail when your method will be successful? Will your methods be useful to the questions like domain adaptations in different modalities?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-27T07:18:18",
"modification_date": "2025-11-12T15:11:11",
"review_url": "https://openreview.net/forum?id=c1jWNZ1Zqg¬eId=XA25CLIpyd",
"license": "CC BY 4.0"
}
] | |
t7wIerUT2E | https://openreview.net/forum?id=t7wIerUT2E | Controllable diffusion-based generation for multi-channel biological data | 3.5 | 3.25 | [
8,
0,
2,
4
] | [
3,
4,
3,
3
] | 4 | [
"diffusion model",
"conditional imputation",
"channel attention",
"random-masking guidance",
"imaging mass cytometry"
] | Biological profiling technologies, such as imaging mass cytometry (IMC) and spatial transcriptomics (ST), generate multi-channel data with strong spatial alignment and complex inter-channel relationships. Modeling such data requires generative frameworks that can jointly model spatial structure and channel relationships, while also generalizing across arbitrary combinations of observed and missing channels for practical applications. Existing generative models typically assume low-dimensional inputs (e.g., RGB images) and rely on simple conditioning mechanisms that break spatial correspondence and overlook inter-channel dependencies. This work proposes a unified multi-channel diffusion (MCD) framework for controllable generation of structured biological data with intricate inter-channel relationships. Our model introduces two key innovations: (1) a hierarchical feature injection mechanism that enables multi-resolution conditioning on spatially aligned observed channels, and (2) two complementary channel attention modules to capture inter-channel relationships and recalibrate latent features. To support flexible conditioning and generalization to arbitrary sets of observed channels, we train the model using a random channel masking strategy, enabling it to reconstruct missing channels from any combination of observed channels as the spatial condition. We demonstrate state-of-the-art performance across both spatial and non-spatial biological data generation tasks, including imputation in spatial proteomics and clinical imaging, as well as gene-to-protein prediction in single-cell datasets, and show strong generalizability to unseen conditional configurations. | applications to physical sciences (physics, chemistry, biology, etc.) | https://openreview.net/pdf?id=t7wIerUT2E | 2025-09-19T07:18:46 | 4 | [
{
"id": "KhQmNvvvYH",
"forum": "t7wIerUT2E",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14506/Reviewer_WNmC",
"reviewer_name": "Reviewer_WNmC",
"rating": 8,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The authors introduce a Multi-Channel Diffusion (MCD) framework, designed for controllable synthesis and imputation of multi-channel biological data such as spatial proteomics, single-cell omics, and MRI. This framework integrates a mechanism for hierarchical spatial feature injection with a dual channel-attention module. This enables the resulting models to preserve spatial alignment while capturing complex inter-channel dependencies. A wide range of experiments on publicly available benchmarks suggest that MCD outperforms existing baselines.",
"strengths": "1) This paper combines two different methodological innovations in a quite ingenious way, effectively addressing the spatial and inter-channel complexity of biological data.\n\n2) The resulting models demonstrates versatility across multiple domains, including spatial proteomics, single-cell omics, and MRI modality synthesis, showing strong generalization and scalability.\n\n3) Finally, the presented evaluation is quite comprehensive. I particularly appreciate the ablation studies that assess the individual contributions of model components.",
"weaknesses": "1) All comparisons reported in Tables 1 to 3 lack any assessement of stastistical significance. This makes it difficult to gauge whether differences in performances are actually significant.\n2) There is not biologically-grounded evaluation of the imputed data. For example, are known protein markers expressed in their corresponding cells?",
"questions": "I would ask the authors to address the weaknesses highlighted above. Particularly:\n1) Evaluate differences in predictive performance between their method and the others through appropriate statistical tests\n2) Check whether expected cell-specific markers are expressed in the inputed data. You could also scale up this assessement to the pathway level, and check which biological pathways are enriched in the imputed data and if the enrichment results are biologically meaningful",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T10:19:12",
"modification_date": "2025-11-12T13:21:39",
"review_url": "https://openreview.net/forum?id=t7wIerUT2E¬eId=KhQmNvvvYH",
"license": "CC BY 4.0"
},
{
"id": "hdi5pEbz7k",
"forum": "t7wIerUT2E",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14506/Reviewer_aLRt",
"reviewer_name": "Reviewer_aLRt",
"rating": 0,
"confidence": 4,
"soundness": 1,
"contribution": 1,
"presentation": 1,
"summary": "This work proposes a unified multi-channel diffusion (MCD) framework for controllable generation of structured biological data.",
"strengths": "1. The idea of developing a framework capable of controllably generating multi-channel biological data using diffusion models is interesting.",
"weaknesses": "1. The paper is quite obscure and its objective remains unclear. The title suggests that it focuses on developing a generative framework for multi-channel biological data, but the type of data is not specified. I assumed the authors were referring to images, yet in the experiments they attempt to predict protein expression from paired scRNA-seq data, and later they evaluate their method on MRI images. This inconsistency makes the overall methodology difficult to understand and significantly undermines the coherence of the paper.\n\n2. The paper contains several incorrect claims and assertions. To list a few: “Existing generative models typically assume low-dimensional inputs (e.g., RGB images)” and “In spatial profiling data, each channel designates a specific molecule of interest (e.g., proteins n ≥ 30 and genes n ≥ 100), and each pixel (or cell)…”.\n\n3. The manuscript contains numerous writing inconsistencies, redundant and buzzword-heavy claims, and incorrect or oversimplified descriptions of diffusion theory. Core method elements (random channel masking, SE attention) are incremental and poorly justified as novel contributions.",
"questions": "1. The authors declared the following: \"All experiments were trained on NVIDIA A5000 GPUs with 24 GB of VRAM. The model was trained for 2000k imgs with a batch size of 256, taking approximately 2 hours to complete at 16 × 16 resolution. All results were obtained using a single-GPU setup unless otherwise specified.\" Which kind of biological data have a resolution of 16 × 16?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-31T07:32:36",
"modification_date": "2025-11-12T13:21:39",
"review_url": "https://openreview.net/forum?id=t7wIerUT2E¬eId=hdi5pEbz7k",
"license": "CC BY 4.0"
},
{
"id": "JGl0dLFmd3",
"forum": "t7wIerUT2E",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14506/Reviewer_iF6S",
"reviewer_name": "Reviewer_iF6S",
"rating": 2,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 1,
"summary": "The paper proposes MCD, a conditional diffusion framework for multi-channel biological data. Conditioning is handled via masked images. The proposed architecture is a dual network with hierarchical feature injection from a contextual encoder into the denoiser, plus two channel-attention mechanisms (SE for injection and transformer-style channel self-attention inside unet blocks). Experiments show state of the art level results on CITE-seq protein prediction, IMC channel imputation including a union vs intersection multi-dataset study, hybrid controls versus ControlNet/BrushNet, and BraTS MRI.",
"strengths": "- **Problem relevance.** Training with random channel masking yields one model that accepts arbitrary observed subsets and making it flexible. The union and intersection result supports cross-dataset integration under partial channel overlap.\n\n- **Strong empirical results.** When reported, the method consistently outperforms baselines. Experiments are broad and span single/multi dataset setups and including hybrid controls.\n\n- **Ablations.** Stepwise ablations and ControlNet/BrushNet hybrids help attribute gains to hierarchical injection.",
"weaknesses": "- **Subset-size stress-tests are missing.** One of the core claims is robustness to arbitrary observed subsets, but there is no sweep of performance vs. #observed channels / masking-probability p, nor targeted leave a group out per channel families. Single vs multi channel and union vs intersection is positive but partial.\n\n- **Efficiency evidence.** Table 1 lists SiD(1-step) with near identical accuracy and claims two orders of magnitude speedup, but there are no wallclock analysis for readers to observe the real performance gains. \n\n- **Method clarity.** I find the pieces in the methods section hard to put together. It requires stitching together several sections to reconstruct the exact flow of input, context encoder, SE-gated injections per scale, denoiser block attention, output SE flow. A single forward schematic/pseudocode can improve the clarity significantly. \n\n- **CFG analogy is conceptual.** I could not directly link random masking with CFG. Instead, the authors can provide a deeper the analysis of how the compared baselines actually operate and where they fall short.\n\n- **Reproducibility.** The paper promises code upon publication without anonymous repo or supplementary materials. This limits the verification significantly.",
"questions": "See the questions and actionable items below. I find the core idea is strong, but the draft feels rushed; clearer presentation and a few added analyses would better show the paper's potential.\n\n\n- **MedVAE ablation.** It would be nice to see the effect of the latent space choice on the results. One can convert each channel into grayscale (or stack each channel 3 times to imitate RGB) and run through a stable diffusion VAE pipeline do the latent diffusion.\n\n- **Efficiency plots.** Single step generation has a clear advantage from its strong results and efficiency but it would be nice to see more details on how you distilled together with the performance comparisons with other baselines.\n\n- **Confidence intervals.** Report mean $\\pm95\\%$ CI over multiple seeds for all main tables to quantify variance from training and random masking. \n\n### Mistakes in text.\n\n- **Section 2.2** The part describing classical RGB image imputation and classical RGB colorization problems are not correct. If we have $C_m = 0$ and $C_o = C = 3$ there is nothing to impute. Additionally, for the colorization example $C_m = 3$ and $C_o = 1$ implies $C = C_m + C_o = 4$ which is clearly not RGB. \n\n- **Dataset reference for BMMC.** Table 1 report BMMC (bone marrow mononuclear cells) as BMNC both in the body and in the caption.\n\n- **Different acronym on Table 3.** Throughout the paper the authors refer their method as MCD however Table 3 uses DiffuseMRI",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T01:52:42",
"modification_date": "2025-11-12T13:21:40",
"review_url": "https://openreview.net/forum?id=t7wIerUT2E¬eId=JGl0dLFmd3",
"license": "CC BY 4.0"
},
{
"id": "aoilMLBXfc",
"forum": "t7wIerUT2E",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission14506/Reviewer_t6Ur",
"reviewer_name": "Reviewer_t6Ur",
"rating": 4,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 3,
"summary": "Many biological modalities are inherently multi‑channel and spatially co‑registered (IMC, clinical imaging, spatial transcriptomics). Off‑the‑shelf conditional diffusion often assumes low‑dimensional channels and crude conditioning that breaks spatial correspondence. The paper proposes MCD (multi‑channel diffusion) for controllable generation and imputation given arbitrary subsets of observed channels. Core claims: (1) hierarchical feature injection to keep spatial alignment; (2) two channel‑attention modules to model inter‑channel dependencies; and (3) random channel masking for amortized training across any condition configuration. Experiments cover single‑cell gene‑to‑protein prediction, IMC imputation, and cross‑dataset generalization.",
"strengths": "- The hierarchical feature‑injection pipeline preserves spatial correspondence at every resolution, which is exactly what biological channels need. The injection is simple to deploy and avoids brittle concatenation tricks common in vision diffusion.\n- Pairing SE gating with transformer‑style channel attention gives a plausible division of labor: per‑sample recalibration and higher‑order inter‑channel structure. The additional SE head at the output enforces cross‑channel coupling where it matters.",
"weaknesses": "- Comparators underpowered for spatial tasks. ControlNet is a fair reference, but the paper doesn’t benchmark strong end‑to‑end, jointly‑trained spatial conditioners or ablate cross‑attention conditioning versus the proposed SE‑gated injection. Given that the method already computes $E_{\\ell}(c)$, a direct cross‑attention baseline would be informative.\n- Masking policy and distribution shift. Random masking zeros unobserved channels in the condition (Algorithm 1). The paper does not analyze whether zero‑masking creates a train/test mismatch when “missing” at test time means “physically unmeasured but nonzero distribution,” especially for modalities where absolute intensity carries semantics.\n- Metrics don’t reflect calibration. Spatial evaluation is Pearson r. No per‑channel calibration (e.g., bias/variance decomposition), no uncertainty quality, no region‑level histograms for key biomarkers. For clinical‑adjacent imaging, correlation alone can be misleading.",
"questions": "- (Multi-channel vs multi-source) The setting is similar to the multi-source integration problem, and the masking idea is similar to scVAEIT. It would be interesting to discuss the connection. (Du, J.‑H., Cai, Z., Roeder, K. (2022), “Robust probabilistic modeling for single‑cell multimodal mosaic integration and imputation via scVAEIT.”)\n- (Masking policy) What masking probability $p$ and schedule work best? Any performance cliffs when masks are extremely sparse/dense? Show sensitivity curves and OOD mask combos not seen in training.\n- (Scalability) Provide computational complexity (e.g., wall‑clock, memory, and FLOPs) versus $C$, $D$, and $H\\times W$ for both attention modules. Where does channel self‑attention become the bottleneck?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-18T22:30:38",
"modification_date": "2025-11-12T13:21:41",
"review_url": "https://openreview.net/forum?id=t7wIerUT2E&noteId=aoilMLBXfc",
"license": "CC BY 4.0"
}
] | |
lnTX3GoeTY | https://openreview.net/forum?id=lnTX3GoeTY | Feature segregation by signed weights in artificial vision systems and biological models | 4.5 | 3.5 | [
6,
4,
6,
2
] | [
3,
4,
4,
3
] | 4 | [
"ventral stream",
"circuit mechanisms",
"interpretability",
"deep learning",
"visual system",
"excitation inhibition",
"neuroscience",
"closed-loop optimization",
"ablation"
] | A core principle in both artificial and biological intelligence is the use of signed connections: positive and negative weights in artificial networks, and excitatory and inhibitory synapses in the brain. While both systems develop representations for diverse tasks, it is unclear whether positive and negative signals serve distinct representational roles or whether all representations require a balanced mixture of both. This is a fundamental question for mechanistic interpretability in neuroscience and AI.
Here, we investigate how signed weights shape visual representations in artificial and biological systems involved in object recognition. In ImageNet-trained neural networks, ablation and feature visualization reveal that removing positive inputs disrupts object features, while removing negative inputs preserves foreground representations but affects background textures. This segregation is more pronounced in adversarially robust models, persists with unsupervised learning, and vanishes with non-rectified activations.
To better approximate the excitation versus inhibition segregation observed in biology (Dale’s law), we identified channels that projected predominantly positive or negative weights to the next layer. In early and intermediate layers, positive-projecting channels encode localized, object-like features, while negative-projecting channels encode more dispersed, background-like features.
Motivated by these findings, we performed feature visualization in vivo in neurons in monkey visual cortex, across the ventral stream (V1, V4, and IT). We also fitted linear models mapping the input layer to classification units studied in ANNs that contained features like those preferred by the biological neurons.
We replicated ablation experiments in these model neuron units and found, as with class units, that removing positive inputs altered representations more than removing negative ones.
Notably, some units closely approached Dale's law: the positively projecting units exhibited localized features, while the negatively projecting units showed larger, more dispersed features. Furthermore, we increased in vivo neuron responses by clearing the image background around the preferred feature, likely by reducing inhibitory inputs, providing concrete predictions for circuit neuroscientists to test.
Our results demonstrate that both artificial and biological vision systems segregate features by weight sign: positive weights emphasize objects, negative weights encode context. This emergent organization offers a new perspective on interpretability and the convergence of representational strategies in brains and machines, with important predictions for visual neuroscience. | Neural networks trained on ImageNet segregate the object/foreground features of their output layer to the positive input weights, with similar behavior in visual neurons. | interpretability and explainable AI | https://openreview.net/pdf?id=lnTX3GoeTY | 2025-09-20T05:45:11 | 4 | [
{
"id": "TXYCdLoNHd",
"forum": "lnTX3GoeTY",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission21502/Reviewer_f5Cb",
"reviewer_name": "Reviewer_f5Cb",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "Across artificial and biological neural networks (macaque visual system) for vision, they show that positive weights emphasize objects, while negative weights encode context.",
"strengths": "- Original study asking an important question about the role of negative vs positive synapses in the visual system and ANNs.\n- Careful analyses which seem to mostly support the claims of the abstract.\n- Clear writing for the most part.",
"weaknesses": "- I am not entirely sure (but I may have missed some of the reasoning steps) that the claim of the abstract about the macaque visual system (\"Our results demonstrate that both artificial and biological vision systems segregate features by weight sign: positive weights emphasize objects, negative weights encode context\") is fully supported by the analyses provided. Could you please explain how the analyses support that claim?\n\n- In the abstract, it is said: \"Notably, some units closely approached Dale's law: the positively projecting units exhibited localized features, while the negatively projecting units showed larger, more dispersed features.\" How is this observation related to Dale's law, which states that a single neuron releases the same neurotransmitter at all of its synapses?",
"questions": "- In order to test the emergence of (approximate) Dale's law in the artificial networks, one would need to run a statistical test to see if neurons tend to have more output connection weights of the same sign than expected by chance.\n\n- Figures are a bit small. Consider enlarging in the final version.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T10:39:21",
"modification_date": "2025-11-12T18:03:26",
"review_url": "https://openreview.net/forum?id=lnTX3GoeTY&noteId=TXYCdLoNHd",
"license": "CC BY 4.0"
},
{
"id": "XYHWz5yD1T",
"forum": "lnTX3GoeTY",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission21502/Reviewer_kwmx",
"reviewer_name": "Reviewer_kwmx",
"rating": 4,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 1,
"summary": "This paper is a commendable piece of interdisciplinary research that bridges mechanistic interpretability in deep learning with fundamental principles of neurobiology. The overall contribution is substantial, and the authors' efforts to validate their computational claims with biological data are notable. However, the work suffers from weaknesses in its presentation, methodological rigor, and conceptual clarity across both the AI and Neuroscience domains.",
"strengths": "1) The core contribution is the novel insight that the **signed weights ($+/-$) in ANNs serve distinct, functionally segregated roles**, which moves beyond treating weights merely as mathematical optimization parameters. This work is a crucial step toward understanding the computational significance of weight structure.\n\n* **Interpretability Breakthrough:** The paper provides a clear, interpretable meaning for the sign of weights—positive weights encode **object features**, and negative weights encode **background/contextual information** (i.e., feature segregation). This is a significant advance in explaining *why* modern deep networks achieve robust visual representations.\n* **A Solid Link to Neuroscience:** The work addresses a fundamental question in both fields by investigating whether a balanced mixture of positive and negative signals is required for representations. This alignment of **computational principles (signed weights)** with **neurobiological function (excitatory/inhibitory balance)** makes it particularly relevant for interdisciplinary venues like ICLR.\n\n2) The methodology for cross-validation is robust, demonstrating a deep commitment to scientific rigor and biological plausibility.\n\n* **Biological Plausibility:** The use of **novel, complex V4 neuronal recordings from macaque monkeys** is a major strength. This direct, first-hand biological validation significantly elevates the paper's credibility, ensuring the claimed segregation is not a mere artifact of the ANN architecture but a potentially universal principle of visual processing.\n* **Effective Computational Methodology:** The authors employed an appropriate and sophisticated methodology—**customized activation maximization (feature visualization)**—to probe the specific functional preference of both positive-only and negative-only weighted inputs. 
This technique is well-suited for visualizing the intrinsic features encoded by the network, moving beyond simple classification tasks.\n* **Architecture and Stimuli Relevance:** The choice of **ResNet** architecture is appropriate given its deep usage in vision and common comparison with the visual processing stream, enhancing the generalizability of the findings.\n\n3) The paper's execution required non-trivial effort across multiple domains, reflecting the significant endeavor of the research team. Conducting novel electrophysiology experiments and integrating them with advanced computational analysis is a demanding task that is not commonly accomplished, warranting praise for the authors' hard work.",
"weaknesses": "1) The paper suffers from a general lack of clarity and transparency that severely impedes the reviewer's ability to assess its claims and ensure reproducibility.\n\n* **Insufficient Referencing and Background:** The Introduction lacks essential references to support foundational claims and necessary background knowledge, particularly in the neuroscience domain. This poor referencing undermines the academic rigor required to justify the \"fundamental question\" being investigated and the paper's overall scholarly context. While some references appear in the related work, citing only 2 papers across 6 paragraphs of introduction needs to be extensively revised, considering that the majority of the community is not familiar with the neuroscience background (e.g., Dale's law, the brain's visual stream, brain regions, etc.)\n* **Misleading Presentation of Core Methods:** Despite having detailed procedures in the Appendix, the minimal description in the main text creates unnecessary ambiguity regarding the source of the biological data (new recording vs. open-source) and the specifics of the complex feature visualization. This structure places an undue burden on the reader and reviewers and minimizes the necessary rigor for introducing novel electrophysiology data. **(Recommendation: Integrate a summary of all critical methods into the main body of the paper, even if this repeats material between the methods and appendix.)** Moreover, Figure 1A seems to describe the concept of the method, but how exactly monkey recordings can reconstruct images while splitting excitatory and inhibitory contributions is really hard to grasp. \n\n\n2) Critical technical details are presented with insufficient formal rigor, making the analysis difficult to verify.\n\n* **Ablation Equation Ambiguity (Line 147):** The equation for ablation (e.g., using $\\alpha$) lacks formal clarity. 
It is unclear if $w$ refers to layer-wise weights or the entire model, and whether the operation respects the sign of the weights or only their magnitude. Ambiguity regarding whether $\\alpha$ can exceed 1 suggests a lack of precise definition for the weight's normalization or clipping.\n* **Undefined Notation ($L_{\\infty}$):** The term $L_{\\infty}$ (Line 138) must be explicitly defined (presumably the $\\ell_{\\infty}$ norm) as this basic notation should not be assumed or left to the Appendix for clarity in the main body.\n\n3) The core premise of the cross-disciplinary comparison is built on a simplifying assumption that requires a more robust discussion.\n\n* **Structural Mismatch in Synapse Analogy:** The direct mapping of an ANN's signed weight to a biological synapse is structurally flawed. A standard ANN unit receives both positive and negative weighted inputs, whereas a biological neuron's *outgoing* connection typically adheres to Dale's Principle (single sign). The paper fails to rigorously address this significant **structural mismatch**, treating the *sign of the weight* as an equivalent proxy for the *sign of the synapse*. This omission weakens the fundamental premise that the observed feature segregation is truly analogous to biological circuit function and should be thoroughly discussed.\n\n4) Lack of Justification for Sampling (Line 658)\n\n- The authors' justification for data and class sampling is incomplete. The rationale for subsampling the **11 specific classes** for the analysis (Line 658) is not clearly articulated. Without a clear justification for selecting these particular classes, the generalizability of the reported results regarding feature segregation is questionable and may introduce selection bias.",
"questions": "While the core intellectual efforts and the underlying experimental findings are commendable and demonstrate significant effort, the overall packaging and presentation of the manuscript detracts from its substantial contribution. The density of information, poor flow, and insufficient referencing in the main text make it unduly challenging for the reader to immediately grasp the rigor and context of the work. The authors are **strongly encouraged to significantly refine the narrative clarity and academic referencing** to appropriately honor the complexity and value of their research.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T08:29:21",
"modification_date": "2025-11-12T18:03:27",
"review_url": "https://openreview.net/forum?id=lnTX3GoeTY&noteId=XYHWz5yD1T",
"license": "CC BY 4.0"
},
{
"id": "L4hVgQmH0r",
"forum": "lnTX3GoeTY",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission21502/Reviewer_udxK",
"reviewer_name": "Reviewer_udxK",
"rating": 6,
"confidence": 4,
"soundness": 2,
"contribution": 3,
"presentation": 3,
"summary": "This paper investigates an interesting organizational principle in neural networks: whether positive and negative weights segregate different types of visual information. Through systematic ablations and feature visualizations across multiple ImageNet-trained architectures, the authors demonstrate that positive weights preferentially encode localized, object-like features while negative weights encode more dispersed, texture-like or background features. They show this segregation is enhanced in adversarially robust models, persists with unsupervised pretraining, but critically depends on ReLU activations. The authors extend their investigation to primate ventral stream recordings, fitting linear models from CNN features to neural responses and performing both in silico and in vivo feature visualizations.",
"strengths": "## Strengths\n\n* Originality: The systematic investigation of feature segregation by weight sign is a novel angle in interpretability research. While prior work has explored weight pruning and sparsity, the specific hypothesis that signed weights might organize visual information differently (analogous to biological E/I circuits) is creative and underexplored. The connection to adversarial robustness is particularly original.\n\n\n* Quality: The experimental design is thorough and rigorous. Testing across multiple architectures (AlexNet, VGG16, ResNet50, robust ResNets) with different training regimes (supervised, unsupervised, robust) provides strong evidence that this is a general phenomenon rather than an architectural artifact. The ablation methodology is sound, using cumulative weight removal based on magnitude. The use of two different GANs (DeePSiM and BigGAN) for feature visualization helps ensure results aren't generator-specific. Scaling to 100 ImageNet classes (Fig. 11) and validating with LPIPS demonstrates robustness of findings.\n\n\n* Clarity: The paper is generally well-written with effective visualizations. Figure 1 provides a clear overview of the phenomenon. The comparison between ReLU and Tanh networks (Fig. 2) elegantly isolates the role of rectification. The progression from output layers to intermediate layers (Section 4.4) is logical. Methods are sufficiently detailed for reproduction.\n\n* Significance: This work has potential impact for both AI interpretability and neuroscience. For AI, it offers a new lens for understanding how networks organize information and suggests that analyzing positive/negative pathways separately could aid interpretability. The connection to adversarial robustness is important - if robust models show stronger segregation, this could inform development of more interpretable models. 
For neuroscience, while the biological validation is preliminary, the approach of using ablation-based feature visualization to generate predictions for circuit experiments is valuable and could inspire new experimental designs.\n\nThe finding that ReLU is necessary for segregation connects nicely to recent work on how activation functions shape representational geometry, adding to our theoretical understanding of deep learning.",
"weaknesses": "## Weaknesses\n\nWhile the core ANN experiments are well-executed and reproducible, several issues limit the soundness of the overall claims:\n* **Interpretation ambiguity:** The central interpretation of \"object vs. background\" segregation is not sufficiently justified. The observed effects could equally reflect:\n\n1. Local vs. global spatial structure\n2. High vs. low spatial frequency content\n3. Shape vs. texture information\n4. Figure vs. ground organization\n\nThe YOLOv7 objectness metric provides only weak support, as it reflects one specific computational definition of \"object\" that may not align with what the networks actually learned. I recommend: (1) testing alternative interpretations using texture/shape metrics (e.g., Geirhos et al. 2019 style analyses), (2) frequency domain analysis of the features, and (3) more direct tests of foreground/background using segmentation masks.\n\n* Biological validation concerns. The neuroscience component has significant limitations:\n\n1. R² = 0.27 is quite low for claiming the models \"capture\" neural representations\n2. 160 images is a rather small amount (although I believe experiments could be tricky, I'm afraid this could limit the final conclusions) \n3. The leap from positive/negative ANN weights to excitatory/inhibitory neurons oversimplifies Dale's law, which involves distinct cell types, complex dynamics, and circuit-level interactions not present in ANNs\n4. The penultimate layer may not be optimal for predicting ventral stream responses\n\nI suggest: (1) comparing against larger image sets to assess whether findings hold, if possible, (2) testing multiple layers to find optimal predictions, (3) being more careful about the analogy to E/I balance, and (4) acknowledging these are computational models of neurons, not direct biological measurements.\n\nLastly, I believe that the paper excellently demonstrates **what** happens but provides no insight into **why**. 
What properties of the learning objective, training dynamics, or network architecture cause this segregation to emerge? Adding:\n\n1. Analysis of weight evolution during training\n2. Theoretical framework or toy models\n3. Predictions about when this would/wouldn't occur would significantly strengthen the contribution.\n\n-----\n\nThe paper is generally well-written with clear figures. However, some improvements would help:\n\n* The \"Dale's law inspired analysis\" in Section 4.4 and Appendix A.3 feels somewhat disconnected from the main narrative. Consider integrating it more smoothly or clarifying its relationship to the classification unit findings.\n* Figure 1B-C would benefit from showing more than just the best visualization per condition to convey variability\n* The biological methods (Section A.1) could be condensed, with some details moved to supplementary materials\n* Some key results (like the 100-class validation) are relegated to appendix; consider moving to main text",
"questions": "## Questions for Authors\n\nTo summarize the points raised above: \n\n* Alternative interpretations: Have you tested whether the segregation reflects spatial frequency rather than object/background? Could you show power spectra of features preferred by positive vs. negative weights?\n\n* Training dynamics: When does this segregation emerge during training? Does it correlate with the development of robust features or adversarial robustness?\n\n* Mechanistic predictions: Can you predict which architectures or training procedures would show stronger/weaker segregation based on some principle?\n\n* Causality: The ablation experiments show correlation, but do they demonstrate that positive weights cause object representations? Could there be confounding factors?\n\n### Minor Issues\n\n* Table 2 shows positive/negative weight ratios are very close to 1:1. This is interesting but underexplored. Why this balance?\n* The limitations section is quite brief; consider expanding",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-01T00:43:22",
"modification_date": "2025-11-12T18:03:27",
"review_url": "https://openreview.net/forum?id=lnTX3GoeTY&noteId=L4hVgQmH0r",
"license": "CC BY 4.0"
},
{
"id": "K4dTl5XsFR",
"forum": "lnTX3GoeTY",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission21502/Reviewer_sJXU",
"reviewer_name": "Reviewer_sJXU",
"rating": 2,
"confidence": 3,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "In this paper, the authors study the role of positive and negative weights in artificial neural networks and their analogues with excitatory and inhibitory synapses in biological neural networks. By performing visualization and ablation experiments in a variety of convolutional neural networks, the authors suggest that positive weight connections primarily encode object-relevant features while negative weight connections play a larger role in encoding background content. The authors further train linear mappings to predict biological neural activity in monkeys from learned representations in CNNs. Performing further visualization and ablation experiments with these learned mappings, the authors show the impact of projection weight sign on biological neural firing rate and conclude that both biological and artificial visual systems segregate features based on weight sign.",
"strengths": "- The authors motivate their work based on biological principles. Studying feature segregation in the visual system with image models is an interesting idea and mechanistic understandings that result from such analyses can be important to better understand information encoding in both biological and artificial systems.\n- Analyses provided in this paper extend beyond the study of artificial neural units. The authors sought to corroborate their findings in-vivo.",
"weaknesses": "- With regard to sensitivity of output unit weights to positive and negative connections (results section 4.1), why would we expect anything different than the results presented (i.e., that positive output weights are important for encoding object-relevant information and negative weights for non-object information)? Since these connections directly contribute to an output unit that is trained to have high activation only when a specific object is present (via the classification loss), shouldn't we expect these observed results as our default hypothesis? And if so, why would this not also explain why we saw a similar, but lesser effect, when studying unsupervised models and no segregation in models trained with tanh units (which could contribute evidence for the presence of an object to an output unit by multiplying a negative activation with a negative weight or positive activation with a positive weight)?\n- I am unconvinced by the qualitative analysis of section 4.4. In early and mid layers, there is no evidence that visualized positive (negative) features are used primarily for object (background) information. All visualized features in these early layers could likely be activated with various objects (or background textures). More causal and quantitative evidence is needed to support this claim.\n- Analyses in section 4.5 rely on the assumption that the model predicts biological neural activity well on stimuli outside of the training set. This is claimed to be true (line 411 “images from intact models reliably drove biological neurons to firing rates…indicating out-of-distribution generalization”) but Figure 16, left, would suggest otherwise: outside of the training data, neuron responses appear to be poorly predicted by the model (what are the $r^2$ scores for held out in-distribution predictivity and out-of-distribution predictivity?). 
Driving biological neurons to fire more than they do for natural images could simply be a result of the fact that the presented synthetic images are out of the natural image distribution.",
"questions": "- For analyses in section 4.5, why were units from monkey visual areas V1 and V2 aligned with the penultimate representations in AlexNet when earlier layers of CNNs tend to be better predictors of activity in the early ventral stream?\n- With regard to weakness bullet point 3, what is the variance explained (or pearson correlation) between biological neural activity and model neural activity for extrapolated images?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-24T05:45:06",
"modification_date": "2025-11-12T18:03:28",
"review_url": "https://openreview.net/forum?id=lnTX3GoeTY&noteId=K4dTl5XsFR",
"license": "CC BY 4.0"
}
] |
6lobo2PXdl | https://openreview.net/forum?id=6lobo2PXdl | The Power of Small Initialization in Noisy Low-Tubal-Rank Tensor Recovery | 4.5 | 3.5 | [
4,
6,
2,
6
] | [
3,
4,
4,
3
] | 4 | [
"low-tubal-rank tensor recovery; t-SVD; t-product; over-parameterization;non-convex"
] | We study the problem of recovering a low-tubal-rank tensor $\mathcal{X}\_\star\in \mathbb{R}^{n \times n \times k}$ from noisy linear measurements under the t-product framework. A widely adopted strategy involves factorizing the optimization variable as $\mathcal{U} * \mathcal{U}^\top$, where $\mathcal{U} \in \mathbb{R}^{n \times R \times k}$, followed by applying factorized gradient descent (FGD) to solve the resulting optimization problem. Since the tubal-rank $r$ of the underlying tensor $\mathcal{X}_\star$ is typically unknown, this method often assumes $r < R \le n$, a regime known as over-parameterization. However, when the measurements are corrupted by some dense noise (e.g., sub-Gaussian noise), FGD with the commonly used spectral initialization yields a recovery error that grows linearly with the over-estimated tubal-rank $R$. To address this issue, we show that using a small initialization enables FGD to achieve a nearly minimax optimal recovery error, even when the tubal-rank $R$ is significantly overestimated. Using a four-stage analytic framework, we analyze this phenomenon and establish the sharpest known error bound to date, which is independent of the overestimated tubal-rank $R$. Furthermore, we provide a theoretical guarantee showing that an easy-to-use early stopping strategy can achieve the best known result in practice. All these theoretical findings are validated through a series of simulations and real-data experiments. | For the noisy low-tubal-rank tensor recovery problem, we show that factorized gradient descent with small initialization converges to nearly the minimax optimal error. | optimization | https://openreview.net/pdf?id=6lobo2PXdl | 2025-09-15T23:18:03 | 4 | [
{
"id": "KTtam0RrgL",
"forum": "6lobo2PXdl",
"review_number": 4,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5985/Reviewer_Zuao",
"reviewer_name": "Reviewer_Zuao",
"rating": 4,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "This paper studies the problem of recovering a low-tubal-rank tensor $X_*$ from its noisy linear measurements, $y = M (X_*) + s$. Focusing on the commonly used Burer-Monteiro factorization framework, it assumes that $X_*$ can be decomposed as $U * U^\\top$, and aims to minimize the quadratic loss function to recover $U$. The authors consider the factorized gradient descent (FGD) method and attempt to derive upper bounds for the estimation error. A key challenge here is that the true tubal rank of $X_*$, denoted as $r$, is typically unknown, and error bounds in past work often depend on the over-estimated tubal-rank $R$.\n\nThe main contribution of this paper is to show that, both theoretically and empirically, using a small random initialization for FGD achieves a nearly minimax optimal estimation error, which only relies on the true tubal rank $r$, not the over-estimated tubal-rank $R$. To be specific: (i) Theorem 2 establishes estimation error bounds for early-stopped FGD, under three different regimes of $r$ and $R$; (ii) Theorem 3 shows that the error bound of Theorem 2 is minimax optimal in the case of Gaussian noise; (iii) Finally, Theorem 4 shows that the early stopping time $t$ can be reliably chosen using sample splitting, thus giving a practical algorithm based on the theoretical results in Theorems 2 and 3.",
"strengths": "The key findings of this paper, that the estimation error of FGD only depends on the true tubal-rank $r$, and that small initialization overcomes the dependency on the estimated rank $R$, are definitely interesting. The authors also compare their work with earlier results on the same topic to highlight their contributions, and include a proof sketch section to illustrate why FGD with small initialization works, which I really appreciate.",
"weaknesses": "(i) The entire analysis relies on the T-PSD assumption on $X_*$, i.e., that $X_*$ has the decomposition $X_* = U * U^\\top$. This is a significant simplification. The authors acknowledge this limitation in Remark 5 and briefly discuss extensions to the general asymmetric case in Appendix I.\n(ii) Although the error bounds in Theorem 2 do not depend on $R$, the initialization scale $\\alpha$ and early stopping time $t$ depend on $R$. Further, in case 3 it seems to me that we can take $R$ to be so large that $\\alpha$ and $t$ are close to $0$, which is quite counterintuitive. Could the authors clarify more on this? This is very important since the error bounds in this paper improve on previous results only in the case $R \\geq 3r$.",
"questions": "A few minor comments/suggestions for the authors:\n(i) \"CONDECOMP\" should be \"CANDECOMP\" in line 051.\n(ii) What is $R_n$ in Theorem 2; is it just $R$? Is it possible to state the choices of $\\alpha$ and $t$ only in terms of $R$, since $r$ is unknown?\n(iii) In line 216, perhaps it is better to replace $s$ with some other notation, since $s$ already denotes the noise in $y$.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T11:20:24",
"modification_date": "2025-11-12T11:33:12",
"review_url": "https://openreview.net/forum?id=6lobo2PXdl&noteId=KTtam0RrgL",
"license": "CC BY 4.0"
},
{
"id": "nzfH05c4Y3",
"forum": "6lobo2PXdl",
"review_number": 3,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5985/Reviewer_wQhQ",
"reviewer_name": "Reviewer_wQhQ",
"rating": 6,
"confidence": 4,
"soundness": 3,
"contribution": 3,
"presentation": 3,
"summary": "The paper analyzes the optimization dynamics of factorized gradient descent (FGD) for low tubal rank tensor recovery, which is based on the quite recent t-SVD for tensors. The authors prove that using a small random initialization for FGD together with suitable early stopping of FGD leads to a minimax optimal estimator, under an RIP-type condition. The authors include synthetic experiments that validate the theory. Overall, this is a quality submission. I would be open to *raising* my score if the authors addressed a couple of my concerns in the Weaknesses section below with their rebuttal.",
"strengths": "- The optimization dynamics are interesting and not so intuitive. In particular, the authors find that small random initialization is better than smart spectral initialization for this problem. Further, FGD exhibits non-monotonicity with respect to error with the ground-truth; therefore, early stopping is needed.\n\n- The technical strength of analysis in this paper seems impressive. \n\n- The authors confirm their results also through numerical simulations. Random small initialization and early stopping indeed improve the estimated tensor in experiments.",
"weaknesses": "- It is unclear whether the factored form of the tensor used by the authors imposes symmetry and/or psd constraints. The authors use $\\mathcal{U} \\ast \\mathcal{U}^{\\top}$ as their ansatz for the low tubal rank tensor.\n\n- The relation to low rank matrix recovery (i.e., with matrices and matrix SVD, rather than with tensors and t-SVD) is not well-specified as far as I can tell. The authors should comment on this.\n\n- There are no real data experiments. Real data would be nice, because there are certain RIP assumptions in this work, and one wonders how well they capture real data situations.",
"questions": "Line 60-61: The authors write \"Under the t-SVD framework, since problem (2) is NP-hard, a common approach is to relax the tubal-rank constraint to the tensor nuclear norm.\" Is it actually known that the low tubal rank recovery problem (2) is NP-hard? What is a reference? This wouldn't be in Hillar-Lim's paper, for instance. \n\nLine 60-61: The authors should perhaps also say \"tubal tensor nuclear norm\", and provide a reference for its definition. My understanding is that the authors mean the sum of singular values from the t-SVD, rather than the tensor nuclear norm as related to the CP decomposition. That latter tensor nuclear norm is NP-hard to compute.\n\nLines 70-72: Would using the ansatz $\\mathcal{A} = \\mathcal{U} \\ast \\mathcal{U}^{\\top}$ imply symmetry and/or some sort of positive semi-definiteness in $\\mathcal{A}$? If so, the authors should be explicit about these assumptions on $\\mathcal{A}$. To me the natural low-rank ansatz would be $\\mathcal{A} = \\mathcal{U} \\ast \\mathcal{V}^{\\top}$ where $\\mathcal{U}$ need not equal $\\mathcal{V}$.\n\nLine 131: In the caption of Table 1, start the second sentence as \"The noise vector...\", i.e. please drop the leading \"And\".\n\nLine 202: Explain better what the block diagonal matrix $\\bar{Y}$ is.\n\nLines 240-269: In Theorem 2, item 1, is this bound true for all $t \\geq t_1$? If not, and the error can increase later on, how about formulating item 1 with $\\hat{t}$ as you do with items 2 and 3 in Theorem 2?\n\nSection 4: It would be appreciated if the authors included a real data experiment.\n\nGeneral question: How do the results in this work relate to results for factorized gradient descent with just matrices and matrix SVD, rather than with tensors and t-SVD? Are there parallel results in the matrix case? How do the results here compare?",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-11-02T10:44:52",
"modification_date": "2025-11-12T11:33:13",
"review_url": "https://openreview.net/forum?id=6lobo2PXdl&noteId=nzfH05c4Y3",
"license": "CC BY 4.0"
},
{
"id": "my7BgLdhH2",
"forum": "6lobo2PXdl",
"review_number": 2,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5985/Reviewer_svxt",
"reviewer_name": "Reviewer_svxt",
"rating": 2,
"confidence": 4,
"soundness": 2,
"contribution": 2,
"presentation": 2,
"summary": "This paper investigates low-tubal-rank tensor recovery from noisy linear measurements using the t-product framework and analyzes the behavior of factorized gradient descent (FGD) under over-parameterization. The authors show that small initialization and early stopping can significantly reduce recovery error independent of the over-estimated tubal rank, with both theoretical guarantees and empirical validation.",
"strengths": "The main contribution of this paper lies in providing an error bound for low-tubal-rank tensor recovery when the tubal rank is overestimated in noisy settings, showing that FGD can still achieve reliable recovery.",
"weaknesses": "1. The contribution of this work is incremental. Factorized Gradient Descent (FGD) is not an original contribution of this work (see [1]). Similarly, the use of early stopping to avoid overfitting is standard practice and cannot be regarded as an innovation here.\n[1] Z. Liu, Z. Han, Y. Tang, X.-L. Zhao, and Y. Wang, \"Low-Tubal-Rank Tensor Recovery via Factorized Gradient Descent,\" IEEE Transactions on Signal Processing, vol. 72, pp. 5470-5483, 2024.\n\n2. The relationship with [1] should be discussed in detail rather than mentioned only briefly.\n\n3. Please provide a careful comparison with \"Implicit Regularization for Tubal Tensors via GD\" and \"A Validation Approach to Over-parameterized Matrix and Image Recovery\", particularly focusing on the technical tools employed in the proofs.\n\n4. The main motivation of this work is that traditional algorithms often use a higher rank than the true rank. However, in most practical applications (e.g., image and video recovery), the true rank is unknown. What really matters is which estimated rank leads to better recovery performance, and in many cases such rank estimates are not unique. A thorough discussion and empirical comparison in real-world scenarios are therefore necessary. At present, the paper includes only one example on a single color image and lacks broader discussion of practical implications.\n\n5. Over the past five years, many tensor decomposition methods with rank estimation strategies have been proposed. Since this work is motivated by the problem of overestimated rank, it should include comparisons and discussion of rank-estimation-based approaches, such as the following:\n\n[2] Q. Shi, Y.-M. Cheung, and J. Lou, \"Robust Tensor SVD and Recovery with Rank Estimation,\" IEEE Transactions on Cybernetics, 2021.\n\n[3] J. Zheng, W. Wang, X. Zhang, and X. Jiang, \"A Novel Tensor Factorization-Based Method with Robustness to Inaccurate Rank Estimation,\" arXiv preprint arXiv:2305.11458, 2023.\n\n[4] Q. Zhu, S. Wu, S. Fang, et al., \"Fast Tensor Robust Principal Component Analysis with Estimated Multi-Rank and Riemannian Optimization,\" Applied Intelligence, vol. 55, no. 1, p. 52, 2025.\n\n6. Beyond \"Implicit Regularization for Tubal Tensors via GD\", no other recent tensor decomposition methods (from the last five years) are compared.",
"questions": "See the Weaknesses.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-30T04:51:15",
"modification_date": "2025-11-12T11:33:13",
"review_url": "https://openreview.net/forum?id=6lobo2PXdl&noteId=my7BgLdhH2",
"license": "CC BY 4.0"
},
{
"id": "wZHv4Qzf0y",
"forum": "6lobo2PXdl",
"review_number": 1,
"reviewer_id": "ICLR.cc/2026/Conference/Submission5985/Reviewer_SVNR",
"reviewer_name": "Reviewer_SVNR",
"rating": 6,
"confidence": 3,
"soundness": 3,
"contribution": 3,
"presentation": 2,
"summary": "This paper investigates the problem of recovering a noisy low-tubal-rank tensor model. Specifically, it studies the theoretical guarantees of the gradient descent method with small initialization for solving a low-rank recovery problem formulated via the Burer–Monteiro factorization under the t-product framework. The authors prove that when the dimensions of the optimization variables match the ground-truth rank, the algorithm enjoys a linear convergence rate. They further provide error estimates under different levels of over-parameterization, showing that the final recovery error depends only on the noise scale as the iteration proceeds. Furthermore, the paper proposes a practical stopping criterion and establishes the corresponding recovery error bound under this termination condition. Numerical experiments demonstrate that the small-initialization gradient descent method achieves superior recovery performance compared with those initialized randomly or by spectral methods.",
"strengths": "The paper is well structured and clearly written, particularly on the mathematical side. The main results are rigorously proved and thoughtfully discussed. The authors also provide useful comparisons with existing works, which helps position their contributions within the current literature. The topic—low-tubal-rank tensor recovery—is timely and important, as tensor models have recently attracted significant attention due to their complex structures and broad applications. Moreover, the analysis of over-parameterization is of great interest, and the finding that the recovery error depends only on the noise scale is both elegant and encouraging.",
"weaknesses": "Although this paper studies the tensor case, the introduction and background on tensors are relatively limited, which reduces the readability for a broader audience. To improve accessibility, it would be helpful to include more definitions and explanations, such as the t-product and the corresponding notion of tensor rank, in the appendix, given the space limitations of the main text.\n\nAnother concern is that, as mentioned in Remark 7, the results of this paper can be viewed as a direct extension of the matrix case studied in [Ding et al., 2025]. If one interprets the tensor in this paper as a matrix and the t-product as standard matrix multiplication, it is not immediately clear how the theoretical analysis differs from the matrix setting. The paper would benefit from a more explicit discussion highlighting the distinctive challenges or techniques specific to tensors, which would not only improve readability but also strengthen the contribution of the work.\n\nFinally, Remark 7 might be placed earlier in the text, since both Theorem 2 and its proof sketch are quite similar to those in [Ding et al., 2025], not only the proposed stopping strategy.",
"questions": "- In the matrix case, the model obtained via the Burer–Monteiro (BM) factorization is nonconvex and may contain spurious local minima. For the tensor case studied in this paper, do similar spurious local minima exist? If so, what guarantees that the proposed gradient descent iterates will not converge to such undesirable local minima?\n\n- In the proof sketch of Theorem 2, the authors mention that the iterates undergo four different stages, during which the singular values of the iterates exhibit distinct behaviors. It would be interesting and instructive to design numerical experiments that illustrate these four phases and verify their correspondence with the theoretical analysis.",
"flag_for_ethics_review": [
"No ethics review needed."
],
"code_of_conduct": "Yes",
"review_date": "2025-10-29T17:15:31",
"modification_date": "2025-11-12T11:33:13",
"review_url": "https://openreview.net/forum?id=6lobo2PXdl&noteId=wZHv4Qzf0y",
"license": "CC BY 4.0"
}
] |