profile photo

Tianjin Huang

Hi, I'm an assistant professor in the Department of Computer Science at the University of Exeter, and a long-term visiting researcher at Eindhoven University of Technology (TU/e).

I am open to collaborating with remote students and visitors (with or without research experience) in the following areas:  ^_^

  • Reliable Foundation Models: Efficient training, tuning, and compression under imperfect data conditions.
  • Agentic AI: Building robust and generalizable AI agents for real-world tasks.
  • AI for Science & Bio-Medical: Unlocking large models for data-scarce and high-stakes domains.

Biography & Research

My research interests lie in developing reliable and generalizable Artificial Intelligence for real-world applications. While modern AI thrives on massive, pristine datasets, the real world is inherently messy. The ultimate goal of my research is to bridge the gap between idealized lab settings and the chaotic reality of deployment by building models that are robust to data imperfections.

Real-world data is rarely as clean as benchmarks suggest. The central question driving my research is: "How can we design efficient learning algorithms that do not just survive, but thrive, under imperfect conditions?" I focus on three fundamental challenges of real-world data: low quantity, low quality, and skewed distributions, all while maintaining learning efficiency in the era of Large Models.

My research advances these goals along the following four directions, tailored to the era of Large Foundation Models:

Learning from Low-Quality Data

Developing robust frameworks to separate signal from noise when dealing with corrupted or unreliable data. This is crucial for preventing foundation models from memorizing noisy pre-training data or failing during downstream adaptation.

Excelling with Low-Quantity Data

Designing architectures and adaptation strategies that unlock the power of large foundation models in data-scarce, high-stakes domains like medical imaging, achieving high performance even when massive datasets are unavailable.

Addressing Skewed Distributions

Ensuring models generalize fairly and effectively across imbalanced and long-tailed data distributions, mitigating the representational biases that large models frequently absorb from real-world, uncurated data.

Advancing Efficient Learning

Improving fundamental model efficiency, knowledge distillation, and architecture design. My goal is to compress the capabilities of large foundation models into efficient forms, making their training and real-world deployment practical and sustainable.

Call for CSC PhD & Joint-Training (Visiting PhD) Students

Topics: Efficient & Trustworthy Foundation Models, Robust Earth Observation (EO), Agentic AI & Safety, AI for Science, Reliable/Green Training.

  • What we offer: Co-supervision with partners (e.g., UNC, ELLIS Institute Tübingen), access to Exeter HPC (A100 GPUs), Isambard-AI (5,000 H200 GPUs) & national resources, supportive publication mentorship.
  • Funding paths: China Scholarship Council (CSC) full PhD; Exeter–CSC joint program; 6–24 month joint-training/visiting PhD via CSC or home grants.
  • How to apply: Email your CV and transcripts. Use the subject line: “CSC PhD / Joint Training – Your Name”.
  • Contact: [email protected]

We welcome emails from prospective students. Feel free to introduce yourself.

News

  • [Mar. 2026] Research Grant   Received a grant from the NVIDIA Academic Grant Program.
  • [Mar. 2026] Research Grant   Received a grant of 10,000 GPU hours from Isambard-AI.
  • [Feb. 2026] CVPR 2026   Two papers accepted. Main Paper: Confusion-Aware Spectral Regularizer for Long-Tailed Recognition   and Findings Paper: SCOPE: Scene-Contextualized Incremental Few-Shot 3D Segmentation.
  • [Jan. 2026] ICLR 2026   One paper accepted: Dual-Kernel Adapter: Expanding Spatial Horizons for Data-Constrained Medical Image Analysis.
  • [Jan. 2026] IEEE Transactions on Image Processing   One paper accepted: StealthMark: Harmless and Stealthy Ownership Verification for Medical Segmentation via Uncertainty-Guided Backdoors.
  • [Jan. 2026] CPAL 2026   Two papers accepted: Dual-Kernel Adapter & PASS.
  • [Jan. 2026] ICPR 2026   I serve as Area Chair of ICPR 2026.
  • [Jan. 2026] ICASSP 2026   One paper accepted: Audio Deepfake Detection at the First Greeting: “Hi!”.
  • [Nov. 2025] CPAL 2026   I serve as Area Chair of CPAL 2026.
  • [Nov. 2025] AAAI 2026   One paper accepted: TimeCAP: A Channel-Aware Pre-Training Framework for Multivariate Time Series Forecasting.
  • [Oct. 2025] ELLIS Society   Joined as an ELLIS Member – grateful to my endorsers, collaborators, and students.
  • [Sep. 2025] ACM WSDM 2025   One paper accepted: SARC: Sentiment-Augmented Deep Role Clustering for Fake News Detection.
  • [Sep. 2025] NeurIPS 2025   One paper accepted: REOBench.
  • [May. 2025] EUSIPCO 2025   Paper accepted: Benchmarking Audio Deepfake Detection Robustness in Real-World Communication Scenarios.
  • [May. 2025] REOBench   Released as a benchmark for evaluating the robustness of Earth observation foundation models (paper).
  • [May. 2025] MICCAI 2025   One early-accepted paper: LKA.
  • [May. 2025] ICML 2025   One paper accepted: LIFT.
  • [Mar. 2025] Expert Systems with Applications   One paper accepted: Traffic congestion predictor.
  • [Mar. 2025] ICLR 2025 SCOPE Workshop   Two papers accepted: SPAM and StableSPAM.
  • [Jan. 2025] ICLR 2025   Three papers accepted: SPAM, Composable Interventions, and Robust Fairness via Confusional Spectral Regularization.
  • [Dec. 2024] SGAI 2024   Gave an invited talk at the University of Cambridge.
  • [Dec. 2024] AAAI 2025   One paper accepted: Visual prompting upgrades neural network sparsification.
  • [Jul. 2024] BMVC 2024   One paper accepted: Are Sparse Neural Networks Better Hard Sample Learners?
  • [Jun. 2024] NeurIPS 2024 Competition   Co-organizing the Edge-Device LLM Challenge.
  • [Mar. 2024] ICLR 2024 Workshop   Paper accepted: Composing Knowledge and Compression Interventions.
  • [Oct. 2023] Information Fusion   Paper accepted: Robust Spatiotemporal GCN.
  • [Sep. 2023] Complex Networks 2024   Paper accepted: Heterophily-Based GNN for Imbalanced Classification.
  • [Sep. 2023] NeurIPS 2023   Paper accepted: Dynamic sparsity is channel-level sparsity learner (Channel-DST).
  • [Jun. 2023] ECML-PKDD 2023   Paper accepted: Enhancing Adversarial Training via Reweighting Optimization Trajectory.
  • [Apr. 2023] ICML 2023   Paper accepted: Are Large Kernels Better Teachers than Transformers for ConvNets?
  • [Jan. 2023] ICLR 2023   Spotlight paper: Sparsity May Cry.
  • [Nov. 2022] LoG 2022   Best Paper Award: Better GNN by Finding Graph Tickets.
  • [Nov. 2022] AAAI 2023   Paper accepted: Lottery Pools.
  • [Jun. 2022] ECML-PKDD 2022   Paper accepted: Hop-count Based Self-Supervised Anomaly Detection.

Selected Publications (full list)

You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets

Tianjin Huang, Tianlong Chen, Meng Fang, Vlado Menkovski, Jiaxu Zhao, Lu Yin, Yulong Pei, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy, Shiwei Liu

LoG 2022  /  Paper  /  Code

Best Paper Award

Are Large Kernels Better Teachers than Transformers for ConvNets?

Tianjin Huang, Lu Yin, Zhenyu Zhang, Li Shen, Meng Fang, Mykola Pechenizkiy, Zhangyang Wang, Shiwei Liu

ICML 2023  /  Paper  /  Code

RT-GCN: Gaussian-based spatiotemporal graph convolutional network for robust traffic prediction

Yutian Liu, Soora Rasouli, Melvin Wong, Tao Feng, Tianjin Huang*

Information Fusion  /  Paper  /  Code

Impact Factor: 18.6

Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!

Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen, Tianjin Huang, Ajay Jaiswal, Zhangyang Wang

ICLR 2023  /  Paper  /  Code

Spotlight Presentation

Dynamic sparsity is channel-level sparsity learner

Lu Yin, Gen Li, Meng Fang, Li Shen, Tianjin Huang, Zhangyang Wang, Vlado Menkovski, Xiaolong Ma, Mykola Pechenizkiy, Shiwei Liu

NeurIPS 2023  /  Paper  /  Code

Enhancing Adversarial Training via Reweighting Optimization Trajectory

Tianjin Huang, Shiwei Liu, Tianlong Chen, Meng Fang, Li Shen, Vlado Menkovski, Lu Yin, Yulong Pei, Mykola Pechenizkiy

ECML-PKDD 2023  /  Paper  /  Code

Hop-count based self-supervised anomaly detection on attributed networks

Tianjin Huang, Yulong Pei, Vlado Menkovski, Mykola Pechenizkiy

ECML-PKDD 2022  /  Paper  /  Code

Work Experience

Exeter logo

University of Exeter

Assistant Professor, June 2024 – present

Department of Computer Science

TU/e logo

Eindhoven University of Technology

Postdoctoral Fellow, Feb. 2023 – Feb. 2024

Advisor: Professor Mykola Pechenizkiy

Services

  • Invited Conference Reviewer: NeurIPS, ICLR, ICML, ECCV, ICIP, CPAL, ECML-PKDD, UAI, IDA
  • Invited Journal Reviewer: IEEE Transactions on Industrial Informatics, Wireless Communications and Mobile Computing, ACM Transactions on Intelligent Systems and Technology