3rd Workshop on Test-Time Updates (TTU): Putting Updates to the Test!#
Our third workshop on test-time updates will be held at ICLR 2026!
When and Where. The workshop will be held on Apr. 27 in Rio de Janeiro.
Scope. The scope encompasses test-time updates broadly, including test-time adaptation, test-time training, post-training updates, and model editing. As a workshop at ICLR, we aim to host and cross-pollinate work across different learning settings and domains.
Consider joining us to discover and contribute to the latest on updates after training: the test begins now!
Call for Papers#
Topics We welcome and will highlight work on test-time and post-training updates, including:
Foundations & Objectives: Unsupervised/self-supervised losses at test time; implicit/explicit regularization; stability–plasticity trade-offs; theory of adaptation and generalization under shift.
Parameterizations & Interfaces: Input-space updates (learnable augmentations, prompts), feature-space adapters (BN/affine, LoRA adapters), head-level edits, retrieval-augmented updates, black-box query strategies for closed foundation models.
Shift, Attacks, & Tasks: Coping with domain and style shift, distribution drift, adversarial perturbations, label shift, online continual learning and task switches, model availability attacks.
Adaptation of Foundation Models (FMs): Adapting LLMs/VLMs and domain-specific FMs to specialized/personalized settings via in-context learning, adapters/LoRA, TTU-RL, and model editing and unlearning.
Safety, Reliability, & Alignment: Uncertainty, conformal prediction at test time, fallback/abstention, guardrails and risk monitors, privacy-preserving updates, auditability, and roll-back.
Dynamic Architectures: Recurrent depth models, looped transformers, dynamically allocating compute (early-exit networks, mixture-of-depth), and iterative test-time optimization (deep equilibrium networks, implicit computation).
Metrics, Datasets, & Benchmarks: End-to-end metrics that couple utility (accuracy, calibration) with costs (compute, memory, wall-clock, energy); realistic streams and recurrences; reproducible TTU pipelines.
Cost-Aware & Green TTU: Methods and evaluations under compute/energy budgets, latency/throughput targets, edge constraints, carbon accounting, and cost–quality frontiers; any improvement must justify its operational footprint.
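As a concrete illustration of the first topic above (unsupervised losses at test time), here is a minimal sketch of entropy minimization on unlabeled test predictions, in the spirit of test-time adaptation: only a per-class logit bias is updated by gradient descent on the mean softmax entropy of a batch. All names, shapes, and values below are illustrative, not a reference implementation of any particular method.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the class axis
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mean_entropy(p):
    # average per-sample prediction entropy
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

rng = np.random.default_rng(0)
logits = rng.normal(size=(32, 5))  # hypothetical unlabeled test batch (32 samples, 5 classes)
b = np.zeros(5)                    # the only adapted parameter: a per-class logit bias

before = mean_entropy(softmax(logits + b))
for _ in range(50):
    p = softmax(logits + b)
    H = -(p * np.log(p + 1e-12)).sum(axis=1, keepdims=True)
    # analytic gradient of mean entropy w.r.t. the bias: dH/db_k = -p_k (log p_k + H)
    grad = (-p * (np.log(p + 1e-12) + H)).mean(axis=0)
    b -= 0.5 * grad                # gradient descent sharpens the predictions
after = mean_entropy(softmax(logits + b))
print(before, after)               # entropy decreases after adaptation
```

In practice such objectives are applied to a model's normalization or adapter parameters rather than a raw bias, which is exactly the kind of design choice the Parameterizations & Interfaces topic covers.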
Keywords Adaptation, Continual Learning, Robustness, Personalization, Model Editing, Foundation Models, Reliability, Green AI.
Format We welcome short papers (up to 4 pages of content, not counting references, plus an optional appendix of unlimited length). We also welcome tiny papers (up to 2 pages of content, not counting references, with no appendix). Accepted submissions will be selected for poster, lightning talk (1 slide in 1 minute), or oral presentation at the workshop. The workshop will not have proceedings.
Invited Speakers#
Paper Submission (Done)#
Submissions are handled via OpenReview:
Main track (4 pages): https://openreview.net/group?id=ICLR.cc/2026/Workshop/TTU_Main_Track
Tiny paper track (2 pages): https://openreview.net/group?id=ICLR.cc/2026/Workshop/TTU_Tiny_Track
Please use the ICLR 2026 paper kit to prepare your submission: ICLR/Master-Template
Submission deadline: Feb. 6th 2026 (AoE)
Decisions to authors: Mar. 1st, 2026
Camera ready: TBD
Call for Reviewers (Done)#
We thank all of our volunteer reviewers for completing their assignments! All reviewers will be credited here for their academic service once the paper process is complete.
Organizers#
Contact#
Please reach the workshop organizers at ttu-iclr2026@googlegroups.com.