Shange Tang


I am a fourth-year Ph.D. student in the Department of Operations Research and Financial Engineering at Princeton University, under the supervision of Professor Jianqing Fan and Professor Chi Jin. Before coming to Princeton, I received my bachelor's degree from the School of Mathematical Sciences at Peking University in 2021.
I am interested in the theory and applications of statistics and machine learning.
E-mail: shangetang [@] princeton [DOT] edu
Google Scholar

Research

My recent research interests include:

  • Automated theorem proving with LLMs

  • Out-of-distribution (OOD) generalization

  • Factor models

Recent Publications

  1. Shange Tang, Yuanhao Wang, Chi Jin, "Is Elo Rating Reliable? A Study Under Model Misspecification," arXiv preprint arXiv:2502.10985 (2025). [arXiv]

  2. Yong Lin*, Shange Tang*, Bohan Lyu, Jiayun Wu, Hongzhou Lin, Kaiyu Yang, Jia Li, Mengzhou Xia, Danqi Chen, Sanjeev Arora, Chi Jin, "Goedel-Prover: A Frontier Model for Open-Source Automated Theorem Proving," arXiv preprint arXiv:2502.07640 (2025). [arXiv]

  3. Kaixuan Huang, Jiacheng Guo, Zihao Li, Xiang Ji, Jiawei Ge, Wenzhe Li, Yingqing Guo, Tianle Cai, Hui Yuan, Runzhe Wang, Yue Wu, Ming Yin, Shange Tang, Yangsibo Huang, Chi Jin, Xinyun Chen, Chiyuan Zhang, Mengdi Wang, "MATH-Perturb: Benchmarking LLMs' Math Reasoning Abilities against Hard Perturbations," arXiv preprint arXiv:2502.06453 (2025). [arXiv]

  4. Shange Tang*, Jiayun Wu*, Jianqing Fan, Chi Jin, "Benign Overfitting in Out-of-Distribution Generalization of Linear Models," arXiv preprint arXiv:2412.14474 (2024). [arXiv]

  5. Shange Tang, Soham Jana, Jianqing Fan, "Factor Adjusted Spectral Clustering for Mixture Models," arXiv preprint arXiv:2408.12564 (2024). [arXiv]

  6. Jiawei Ge*, Shange Tang*, Jianqing Fan, Cong Ma, Chi Jin, "Maximum Likelihood Estimation is All You Need for Well-Specified Covariate Shift," International Conference on Learning Representations (ICLR) 2024. [arXiv]

  7. Jiawei Ge*, Shange Tang*, Jianqing Fan, Chi Jin, "On the Provable Advantage of Unsupervised Pretraining," International Conference on Learning Representations (ICLR) 2024 (spotlight). [arXiv]

* denotes equal contribution.


A brief CV.