<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Zhenghao Zhao</title>
<link rel="stylesheet" href="styles.css">
</head>
<body>
<header>
<h1>Zhenghao Zhao</h1>
<!-- <div class="header-contact">
<p><a href="mailto:[email protected]">Email: [email protected]</a></p>
<p><a href="https://www.linkedin.com/in/zhenghao-zhao-90ab26172/">LinkedIn</a></p>
<p><a href="https://github.com/ichbill">GitHub: ichbill</a></p>
<p><a href="https://scholar.google.com/citations?user=4T76IkgAAAAJ&hl=en">Google Scholar</a></p>
<p><a href="https://drive.google.com/file/d/16Iq0u4otExQfLnfyxDfqj9LR7pM0l0im/view?usp=sharing">Resume</a></p>
</div> -->
<!-- <nav>
<ul>
<li><a href="#bio">Bio</a></li>
<li><a href="#news">News</a></li>
<li><a href="#publications">Publications</a></li>
</ul>
</nav> -->
</header>
<section id="bio">
<h2>Bio</h2>
<div class="bio-content">
<img src="assets/ZhenghaoZhao_1.jpeg" alt="My Photo" class="bio-photo">
            <p>I am currently a Ph.D. candidate in Computer Science at the <strong>University of Illinois Chicago (UIC)</strong>, advised by Prof. <a href="https://tomyan555.github.io">Yan Yan</a>.</p>
<!-- <p>My research focuses on advancing <u>Efficient AI</u>, with interests in <u>dataset distillation</u> and <u>large-scale model training</u>. My work explores both algorithmic and systems-level approaches to improving training efficiency. I have published on dataset distillation in <i>CVPR</i>, <i>ICCV</i>, <i>ECCV</i>, and <i>NeurIPS</i>, covering topics on long-tailed dataset distillation, generative dataset distillation, and dataset quantization. </p> -->
            <p>My research focuses on <u>data-centric machine learning</u>, with interests in <u>dataset distillation</u>, <u>synthetic data generation</u>, and <u>data selection for LLMs</u>. My work has been published in top-tier conferences such as <i>CVPR</i>, <i>ICCV</i>, <i>ECCV</i>, and <i>NeurIPS</i>, covering topics such as dataset distillation for image and multimodal datasets, synthetic dataset generation, and algorithms for data selection.</p>
<p> I interned at <i>Argonne National Laboratory</i> in 2023 and 2025. In 2023, I studied the performance of distributed training frameworks such as PyTorch DDP, Horovod, and DeepSpeed. In 2025, I returned to Argonne to work on LLM training on high-performance computing (HPC) platforms, where I conducted a comparative study of DeepSpeed, TorchTitan, and FSDP for scalable optimization.</p>
            <p>Before my Ph.D., I received my M.S. in Computer Science from the Illinois Institute of Technology (IIT) and a B.S. in Computer Science and Engineering from Nanjing University of Posts and Telecommunications (NJUPT).</p>
<p style="text-align: center;"><a href="mailto:[email protected]">Email</a> / <a href="https://drive.google.com/file/d/16Iq0u4otExQfLnfyxDfqj9LR7pM0l0im/view?usp=sharing">CV</a> / <a href="https://scholar.google.com/citations?user=4T76IkgAAAAJ&hl=en">Google Scholar</a> / <a href="https://www.linkedin.com/in/zhenghao-zhao-90ab26172/">LinkedIn</a> / <a href="https://github.com/ichbill">GitHub</a></p>
<!-- <p><span style="color: red; font-weight: bold;">I am actively exploring 2026 Summer Internship opportunities.</span> Feel free to review my <a href="https://drive.google.com/file/d/16Iq0u4otExQfLnfyxDfqj9LR7pM0l0im/view?usp=sharing">resume</a> here.</p> -->
</div>
</section>
<section id="news">
<h2>News</h2>
<div class="news-container">
<div class="news-item">
<span class="date">2026-02</span>
<p>Our paper, "<a href="https://arxiv.org/abs/2512.14126">Consistent Instance Field for Dynamic Scene Understanding</a>" accepted to <strong>CVPR 2026</strong>!</p>
</div>
<div class="news-item">
<span class="date">2025-12</span>
<p>First-author paper, "<a href="https://arxiv.org/abs/2512.14621">Distill Video Datasets into Images</a>" is now available on <a href="https://arxiv.org/abs/2512.14621" style="font-weight: normal;">arXiv</a>.</p>
</div>
<div class="news-item">
<span class="date">2025-09</span>
<p>First-author paper, "<a href="https://www.arxiv.org/abs/2509.15472">Efficient Multimodal Dataset Distillation via Generative Models</a>" accepted to <strong>NeurIPS 2025</strong>!</p>
</div>
<div class="news-item">
<span class="date">2025-06</span>
<p>Our paper, "<a href="https://openaccess.thecvf.com/content/ICCV2025/html/Wang_CaO2_Rectifying_Inconsistencies_in_Diffusion-Based_Dataset_Distillation_ICCV_2025_paper.html">CaO<sub>2</sub>: Rectifying Inconsistencies in Diffusion-Based Dataset Distillation</a>" accepted to <strong>ICCV 2025</strong>!</p>
</div>
<div class="news-item">
<span class="date">2025-06</span>
<p>Serving as a Local Chair at <strong>ICMR 2025</strong>!</p>
</div>
<div class="news-item">
<span class="date">2025-06</span>
<p>Invited to give Lightning Talk at <strong>MMLS 2025</strong>!</p>
</div>
<div class="news-item">
<span class="date">2025-05</span>
                <p>Started my internship at <strong>Argonne National Laboratory</strong>!</p>
</div>
<div class="news-item">
<span class="date">2025-02</span>
<p>First-author paper, "<a href="https://arxiv.org/abs/2408.14506">Distilling Long-tailed Datasets</a>" accepted to <strong>CVPR 2025</strong>!</p>
</div>
<div class="news-item">
<span class="date">2024-10</span>
                <p>Our paper, "<a href="#publications">SSDL: Sensor-to-Skeleton Diffusion Model with Lipschitz Regularization for Human Activity Recognition</a>" accepted to <strong>MMM 2025</strong>!</p>
</div>
<div class="news-item">
<span class="date">2024-07</span>
                <p>First-author paper, "<a href="https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/07772.pdf">Dataset Quantization with Active Learning Based Adaptive Sampling</a>" accepted to <strong>ECCV 2024</strong>!</p>
</div>
<div class="news-item">
<span class="date">2024-06</span>
<p>First-author paper, "<a href="https://link.springer.com/chapter/10.1007/978-3-031-78456-9_23">Audio-Visual Navigation with Anti-Backtracking</a>" accepted to <strong>ICPR 2024</strong>!</p>
</div>
<div class="news-item">
<span class="date">2024-04</span>
                <p>First-author paper, "<a href="https://dl.acm.org/doi/10.1145/3652583.3658092">Monocular Expressive 3D Human Reconstruction of Multiple People</a>" accepted to <strong>ICMR 2024 (Oral)</strong>!</p>
</div>
<div class="news-item">
<span class="date">2024-02</span>
<p>First-author paper, "<a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10635524">Gated Multi-Scale Attention Transformer For Few-Shot Medical Image Segmentation</a>" accepted to <strong>ISBI 2024</strong>!</p>
</div>
<div class="news-item">
<span class="date">2023-12</span>
<p>First-author paper, "<a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10446239">Supplementing Missing Visions via Dialog for Scene Graph Generations</a>" accepted to <strong>ICASSP 2024</strong>!</p>
</div>
<div class="news-item">
<span class="date">2023-12</span>
<p>First-author paper, "<a href="https://www.sciencedirect.com/science/article/abs/pii/S0141813023056726">Machine learning enabled multiplex detection of periodontal pathogens by surface-enhanced Raman spectroscopy</a>" accepted to the journal <strong>International Journal of Biological Macromolecules</strong>!</p>
</div>
</div>
</section>
<section id="experience">
<h2>Work Experience</h2>
<div class="experience">
<div class="exp-logo">
<img src="assets/amazon-logo-squid-ink-smile-orange.png" alt="Amazon" style="width: 100px;">
</div>
<div class="experience-info">
<h4>Amazon · Applied Scientist Intern</h4>
<p>May 2026 - Aug. 2026, Santa Cruz, California, United States</p>
</div>
</div>
<div class="experience">
<div class="exp-logo">
<img src="assets/Argonne_cmyk_black.svg" alt="Argonne National Laboratory">
</div>
<div class="experience-info">
<h4>Argonne National Laboratory · Research Intern</h4>
<p>Research on LLM training on HPC platforms.</p>
<p>May 2025 - Aug. 2025, Lemont, Illinois, United States</p>
</div>
</div>
<div class="experience">
<div class="exp-logo">
<img src="assets/Argonne_cmyk_black.svg" alt="Argonne National Laboratory">
</div>
<div class="experience-info">
<h4>Argonne National Laboratory · Research Intern</h4>
<p>Research on distributed training frameworks.</p>
<p>May 2023 - Aug. 2023, Lemont, Illinois, United States</p>
</div>
</div>
</section>
<section id="publications">
<h2>Publications</h2>
<div class="publications-container">
<div class="publication">
<img src="assets/VideoDD.png" alt="NeurIPS 2025">
<div class="publication-info">
<h4>Distill Video Datasets into Images</h4>
<p><u>Zhenghao Zhao</u>, Haoxuan Wang, Kai Wang, Yuzhang Shang, Yuan Hong, Yan Yan</p>
<p>[<a href="https://arxiv.org/abs/2512.14621">PDF</a>]</p>
</div>
</div>
<div class="publication">
<img src="assets/NeurIPS2025.png" alt="NeurIPS 2025">
<div class="publication-info">
<h4>Efficient Multimodal Dataset Distillation via Generative Models</h4>
<p><u>Zhenghao Zhao</u>, Haoxuan Wang, Junyi Wu, Yuzhang Shang, Gaowen Liu, Yan Yan</p>
<p>Neural Information Processing Systems (NeurIPS), 2025</p>
<p>[<a href="https://www.arxiv.org/abs/2509.15472">PDF</a>] [<a href="https://github.com/ichbill/EDGE">Code</a>]</p>
</div>
</div>
<div class="publication">
<img src="assets/CVPR2025.png" alt="CVPR 2025">
<div class="publication-info">
<h4>Distilling Long-tailed Datasets</h4>
<p><u>Zhenghao Zhao*</u>, Haoxuan Wang*, Yuzhang Shang, Kai Wang, Yan Yan</p>
<p>Computer Vision and Pattern Recognition (CVPR), 2025</p>
<p>[<a href="https://arxiv.org/abs/2408.14506">PDF</a>] [<a href="https://github.com/ichbill/LTDD">Code</a>]</p>
</div>
</div>
<div class="publication">
<img src="assets/ECCV2024.png" alt="ECCV 2024">
<div class="publication-info">
<h4>Dataset Quantization with Active Learning based Adaptive Sampling</h4>
<p><u>Zhenghao Zhao</u>, Yuzhang Shang, Junyi Wu, Yan Yan</p>
<p>European Conference on Computer Vision (ECCV), 2024</p>
<p>[<a href="https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/07772.pdf">PDF</a>]</p>
<!-- <a href="https://github.com/ichbill/DQAS" class="apple-style-button">Code</a> -->
</div>
</div>
<div class="publication">
<img src="assets/ICMR2024.jpg" alt="ICMR 2024">
<div class="publication-info">
<h4>Monocular Expressive 3D Human Reconstruction of Multiple People</h4>
<p><u>Zhenghao Zhao</u>, Hao Tang, Joy Wan, Yan Yan</p>
<p>International Conference on Multimedia Retrieval (ICMR), 2024</p>
<p>[<a href="https://dl.acm.org/doi/10.1145/3652583.3658092">PDF</a>]</p>
</div>
</div>
<div class="publication">
<img src="assets/ICPR2024.png" alt="ICPR 2024">
<div class="publication-info">
<h4>Audio-Visual Navigation with Anti-Backtracking</h4>
<p><u>Zhenghao Zhao</u>, Hao Tang, Yan Yan</p>
<p>International Conference on Pattern Recognition (ICPR), 2024</p>
<p>[<a href="https://link.springer.com/chapter/10.1007/978-3-031-78456-9_23">PDF</a>]</p>
</div>
</div>
<div class="publication">
<img src="assets/ICASSP2024.png" alt="ICASSP 2024">
<div class="publication-info">
<h4>Supplementing Missing Visions via Dialog for Scene Graph Generations</h4>
<p><u>Zhenghao Zhao*</u>, Ye Zhu*, Xiaoguang Zhu, Yuzhang Shang, Yan Yan</p>
<p>International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2024</p>
<p>[<a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10446239">PDF</a>]</p>
</div>
</div>
<div class="publication">
<img src="assets/ISBI2024.jpg" alt="ISBI 2024">
<div class="publication-info">
<h4>Gated Multi-Scale Attention Transformer For Few-Shot Medical Image Segmentation</h4>
<p><u>Zhenghao Zhao*</u>, Hao Ding*, Dawen Cai, Yan Yan</p>
<p>IEEE International Symposium on Biomedical Imaging (ISBI), 2024</p>
<p>[<a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10635524">PDF</a>]</p>
</div>
</div>
<div class="publication">
<img src="assets/IJBM2023.png" alt="IJBM 2023">
<div class="publication-info">
<h4>Machine learning enabled multiplex detection of periodontal pathogens by surface-enhanced Raman spectroscopy</h4>
<p>Rathnayake AC Rathnayake*, <u>Zhenghao Zhao*</u>, Nathan McLaughlin, Wei Li, Yan Yan, Liaohai L Chen, Qian Xie, Christine D Wu, Mathew T Mathew, Rong R Wang</p>
<p>International Journal of Biological Macromolecules, 2024</p>
<p>[<a href="https://www.sciencedirect.com/science/article/abs/pii/S0141813023056726">PDF</a>]</p>
</div>
</div>
</div>
</section>
<footer>
<p>Copyright © 2024 Zhenghao Zhao (赵政豪)</p>
</footer>
</body>
</html>