Fanxu Meng (孟繁续)

I am a fourth-year Ph.D. student at the Institute for Artificial Intelligence, Peking University, advised by Prof. Muhan Zhang. My research focuses on parameter-efficient fine-tuning of large language models (LLMs) and efficient inference for long-context LLMs. I have served as a reviewer for leading conferences and journals, including NeurIPS, ICML, ICLR, TPAMI, COLM, AAAI, and IJCAI. Prior to joining Peking University, I received my Master’s degree from Harbin Institute of Technology, Shenzhen, where I was advised by Prof. Guangming Lu. I also spent over two years at Tencent YouTu as an intern and later as a full-time researcher, collaborating with Xing Sun, Hao Cheng, Ke Li, and Di Yin.

Selected Publications


[NeurIPS 2025 spotlight (Top 3.19%)] TransMLA: Multi-Head Latent Attention Is All You Need

Proves that MLA is more expressive than GQA, and converts GQA-based models such as LLaMA and Qwen into DeepSeek-style MLA models. (Paper, Code, Twitter)

Fanxu Meng*, Pingzhi Tang*, Xiaojuan Tang, Zengwei Yao, Xing Sun, Muhan Zhang.


[EMNLP 2025 Oral] HD-PiSSA: High-Rank Distributed Orthogonal Adaptation

High-rank updates under data-parallel fine-tuning. (Paper, Code)

Yiding Wang*, Fanxu Meng*, Xuefeng Zhang, Fan Jiang, Pingzhi Tang, Muhan Zhang.


[ICML 2025] CLOVER: Cross-Layer Orthogonal Vectors Pruning and Fine-Tuning

An Absorb-Decompose operation for pruning and fine-tuning. (Paper, Code)

Fanxu Meng, Pingzhi Tang, Fan Jiang, Muhan Zhang.


[NeurIPS 2024 spotlight (Top 2.08%)] PiSSA: Principal Singular Values and Singular Vectors Adaptation

A faster and better initialization method for LoRA. (Paper, Code, 1, 2, 3, 4, Twitter, Talk)

Fanxu Meng, Zhaohui Wang, Muhan Zhang.


[arXiv Preprint] RMNet: Equivalently Removing Residual Connection from Networks

A plug-in trick for efficient pruning. (Paper, Code, Talk)

Fanxu Meng*, Hao Cheng*, Jiaxin Zhuang, Ke Li, Xing Sun.


[NeurIPS 2020] Pruning Filter in Filter

Stripe-wise pruning. (Paper, Code, Talk)

Fanxu Meng*, Hao Cheng*, Ke Li, Huixiang Luo, Xiaowei Guo, Guangming Lu, Xing Sun.


[CVPR 2020] Filter Grafting for Deep Neural Networks

Reactivates invalid filters during training. (Paper, Code)

Fanxu Meng*, Hao Cheng*, Ke Li, Zhixin Xu, Rongrong Ji, Xing Sun, Guangming Lu.

Get in touch