Junyi Zhu (朱俊屹)
Senior researcher at Samsung Electronics R&D Institute UK (SRUK).

I’m currently working at SRUK with a focus on advanced AI algorithm research. I completed my PhD at KU Leuven in Belgium, under the guidance of Prof. Matthew Blaschko. With great passion and curiosity for AI technologies, I have contributed to many domains, including image generation models, large language models, distributed learning, and privacy-preserving machine learning.
Before my PhD, I earned my Master’s degree from the Karlsruhe Institute of Technology in Germany, specializing in information technologies and vehicle engineering. During my Master’s program, I worked on several research projects applying AI to control and perception tasks in autonomous driving, at the Institute of Measurement and Control Systems and the Research Center for Information Technology in Karlsruhe.
Engaging in AI research brings me immense joy, and I strive to apply this technology to benefit industry and our daily lives. I look forward to the day when AGI becomes a reality.
If you are interested in research collaboration, feel free to reach out!😊
news
| Sep 18, 2025 | Our paper “Latent Zoning Network: A Unified Principle for Generative Modeling, Representation Learning, and Classification” has been accepted at NeurIPS 2025. |
| --- | --- |
| Sep 14, 2025 | I am honored to serve as an Area Chair for CVPR 2026. I look forward to contributing to the community and supporting the review process in this role. |
| Sep 05, 2025 | Our paper “Guided Model Merging for Hybrid Data Learning: Leveraging Centralized Data to Refine Decentralized Models” has been accepted at WACV 2026 (Round 1, acceptance rate 6.3%). A link to the paper will be available soon! |
| Mar 14, 2025 | Our paper “Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better” has been accepted at ICLR 2025. |
selected publications
- ICLR 2025: Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better (* = Co-first authors)
- EMNLP Findings 2024: FastMem: Fast Memorization of Prompt Improves Context Awareness of Large Language Models (* = Co-first authors)
- CVPR 2023: Confidence-aware Personalized Federated Learning via Variational Expectation Maximization (* = Co-first authors)
- ICML 2023: Surrogate Model Extension (SME): A Fast and Accurate Weight Update Attack on Federated Learning
- TMLR 2024: Implicit Neural Representations for Robust Joint Sparse-View CT Reconstruction (* = Co-first authors)