Junyi Zhu (朱俊屹)

Senior Applied Scientist at Microsoft UK.


I focus on building AI that actually understands people, whether that’s making their workday easier or their personal life more creative.

Currently, I drive post-training efforts for Microsoft 365 Copilot, where I focus on building models that possess deep contextual awareness of workplace users, their roles, and their organizations.

Previously, I drove innovation at Samsung Research, prototyping advanced AI features for the next generation of smartphones, spanning everything from core productivity to digital entertainment.

I completed my PhD at KU Leuven in Belgium under the supervision of Prof. Matthew Blaschko. During my PhD, I explored and contributed to several areas of AI, including image generation models, large language models, distributed learning, and privacy-preserving machine learning.

Before my PhD, I earned my Master’s degree from the Karlsruhe Institute of Technology in Germany, specializing in autonomous driving. During my Master’s program, I worked on several research projects applying AI to control and perception tasks in autonomous driving, at the Institute of Measurement and Control Systems and the Research Center for Information Technology in Karlsruhe.

If you are interested in research collaboration, feel free to reach out! 😊

news

Sep 18, 2025 Our paper Latent Zoning Network: A Unified Principle for Generative Modeling, Representation Learning, and Classification has been accepted at NeurIPS 2025.
Sep 14, 2025 I am honored to serve as an Area Chair for CVPR 2026. I look forward to contributing to the community and supporting the review process in this role.
Sep 05, 2025 Our paper Guided Model Merging for Hybrid Data Learning: Leveraging Centralized Data to Refine Decentralized Models has been accepted at WACV 2026 (Round 1, acceptance rate 6.3%).
Mar 14, 2025 Our paper Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better has been accepted at ICLR 2025.

selected publications

  1. ICLR
    Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better
    Enshu Liu*, Junyi Zhu*, Zinan Lin, and 7 more authors
    2025
    * = Co-first authors
  2. EMNLP Findings
    FastMem: Fast Memorization of Prompt Improves Context Awareness of Large Language Models
    Junyi Zhu*, Shuochen Liu*, Yu Yu, and 6 more authors
    2024
    * = Co-first authors
  3. CVPR
    Confidence-aware Personalized Federated Learning via Variational Expectation Maximization
    Junyi Zhu*, Xingchen Ma*, and Matthew B. Blaschko
    2023
    * = Co-first authors
  4. ICML
    Surrogate Model Extension (SME): A Fast and Accurate Weight Update Attack on Federated Learning
    Junyi Zhu, Ruicong Yao, and Matthew B. Blaschko
    2023
  5. TMLR
    Implicit Neural Representations for Robust Joint Sparse-View CT Reconstruction
    Jiayang Shi*, Junyi Zhu*, Daniel M. Pelt, and 2 more authors
    2024
    * = Co-first authors