About Me
(Note: The information here may not always be up-to-date.)
I am a Senior Research Scientist at Google DeepMind, where I am a core contributor to Gemini post-training and the Gemini Thinking model.
I earned my B.Sc. degree from the Dept. of Earth and Space Science, Peking University, in 2019. During my undergraduate years, I worked as a research assistant at the Key Laboratory of Computational Linguistics at Peking University, advised by Prof. Xu Sun. I also interned at Microsoft Research Asia (2017-2018) and DiDi AI Labs (2018).
My current research centers on post-training of large language models (LLMs), with a focus on improving their general reasoning abilities in math, science, coding, and planning. Prior to the emergence of LLMs, I studied deep learning for natural language processing and computer vision. I have also contributed to machine learning research on optimization algorithms and model compression.
Selected Publications [Full List]
- Liangchen Luo*, Yinxiao Liu*, Rosanne Liu, Samrat Phatale, Harsh Lara, Yunxuan Li, Lei Shu, Yun Zhu, Lei Meng, Jiao Sun, Abhinav Rastogi. Improve Mathematical Reasoning in Language Models by Automated Process Supervision. arXiv preprint. [arXiv] [bib]
- Liangchen Luo*, Yuanhao Xiong*, Yan Liu, Xu Sun. Adaptive Gradient Methods with Dynamic Bound of Learning Rate. In Proc. of ICLR 2019, New Orleans, Louisiana. [arXiv] [bib] [code] [open review] [poster] [slides]
Awards & Honors
- Academic Excellence Award, 2018, Peking University
- Liao Kaiyuan Scholarship, 2018, Peking University
- Study Excellence Award, 2015, Peking University
- May Fourth Scholarship, 2015, Peking University
- First Prize, National Olympiad in Informatics in Provinces (×3), 2011-2013, China
Service
- Program Committee Member: AAAI'20, ACL'19, COLM'24, EMNLP'19, ICLR'21/24/25, ICML'23/24/25, NeurIPS'23/24
Teaching
- TA for 04831420: Data Structures and Algorithms, Peking University, Spring 2016