My work primarily focuses on the statistical and computational aspects of machine learning, signal processing, and data science. Currently, I am particularly interested in high-dimensional statistics and random matrix theory, as well as their interactions with (deep or not so deep) neural networks.
See here for my headshot and here for a short bio in Chinese.
[Jan 2025] Our work on the breakdown of Gaussian universality for large-dimensional (convex) generalized linear classifiers has been accepted at ICLR'2025, check out our preprint here!
[Jan 2025] I will give a talk on “Examples and Counterexamples of Gaussian Universality in Large-dimensional Machine Learning” at the RMTA 2024 Conference, Changchun, Jilin, China. See slides here.
[Dec 2024] I will be serving as an Area Chair for IJCNN 2025.
[Nov 2024] I will be serving as an Area Chair for ICML 2025.
[Oct 2024] Our work on communication-efficient and robust federated domain adaptation using the random features technique has been accepted at IEEE Trans. KDE, check out our preprint here and code here!
[Aug 2024] I will be serving as an Area Chair for ICLR 2025.
[Jun 2024] One paper at EUSIPCO'2024 on a novel and efficient RMT-improved Direction of Arrival estimation method. We show that standard ESPRIT is biased for large arrays, but can be effectively debiased using RMT. The paper is a Best Student Paper Candidate at EUSIPCO'2024! Check the extended version here.
[May 2024] One paper at ICML'2024 on RMT for Deep Equilibrium Models (DEQs, a typical implicit NN model) that provides very explicit connections between implicit and explicit NNs. Given a DEQ, check our preprint here for the recipe to design an “equivalent” simple explicit NN!
[Jan 2023] I’m very grateful to be supported by the National Natural Science Foundation of China (youth program) for my research on the fundamental limits of pruning deep neural network models via random matrix methods.
[Aug 2022] One paper at NeurIPS'2022 on the eigenspectral structure of the Neural Tangent Kernel (NTK) of fully-connected deep neural networks for Gaussian mixture input data, with a compelling application to “lossless” sparsification and quantization of DNN models! This extends our previous paper at ICLR'2022. See more details here.
[Jun 2022] Our book “Random Matrix Methods for Machine Learning” is published by Cambridge University Press, see more details here!