I am now a tenure-track assistant professor at
Huazhong University of Science & Technology (HUST), School of Electronic Information and Communications (EIC), where I was awarded the Wuhan Youth Talent and East Lake Youth Talent Fellowships in 2021. Before that, I was a Postdoctoral Researcher at the University of California, Berkeley, Department of Statistics and ICSI, in 2020, hosted by Prof. Michael Mahoney. I received my Ph.D. from CentraleSupélec, University Paris-Saclay, in 2019, where I worked under the supervision of Prof. Romain Couillet and Prof. Yacine Chitour. I received my B.Sc. degree in Optical and Electronic Information from Huazhong University of Science & Technology, China, in 2014, and my M.Sc. degree in Signal and Image Processing (ATSI) from University Paris-Saclay, France, in 2016. My research interests are broadly in (statistical) machine learning, signal processing, random matrix theory, and high-dimensional statistics.
Here is my CV in
English and in Chinese.
My Erdős number is 4, via Romain Couillet, Zhidong Bai, and Gutti Jogesh Babu.
Happy to announce the 2022 Joint Workshop on “Mathematics for Data Science” between HUST and University Paris-Saclay, taking place on Sep 22-23, 2022, on Zoom! Time: 9:00-12:00 Paris time (15:00-18:00 Beijing time). See the detailed schedule here, and the playback here (for Sep 22) and here (for Sep 23)!

One paper at NeurIPS'2022 on the eigenspectrum of the Neural Tangent Kernel (NTK) of fully-connected deep neural networks for Gaussian mixture input data, with a compelling application to “lossless” sparsification and quantization of DNN models! This extends our previous paper at ICLR'2022. See more details here.

Invited talk on “Random Matrix Methods for Machine Learning: Lossless Compression of Large Neural Networks” at the China Conference on Scientific Machine Learning (CSML 2022); see slides here.

Upcoming book “Random Matrix Methods for Machine Learning” with Cambridge University Press; see more details here!

One paper at ICLR'2022 on a highly efficient random feature compression technique with theoretical guarantees; see more details here.

One invited paper in the JSTAT Special Issue on Machine Learning 2021; see more details here.

I’m very grateful to be supported by the CCF-Hikvision Open Fund for my research on the theory of neural network model compression (together with Dr. Kai Wan); see more details here.

One oral paper (top 1% of submissions) at NeurIPS'2021 on the Hessian eigenspectra of generalized linear models; check our preprint here!

One paper on “Sparse sketches with small inversion bias” accepted at COLT'2021; check our preprint here!

Invited talk on “A Data-dependent Theory of Overparameterization: Phase Transition, Double Descent, and Beyond” at the Workshop on the Theory of Over-parameterized Machine Learning (TOPML) 2021. See slides, two-page abstract, and paper.

One paper on “Kernel regression in high dimensions: refined analysis beyond double descent” at AISTATS'2021! Check our preprint here.

A spotlight paper at ICLR'2021 on computationally efficient (sparse and quantized) spectral clustering! Check here.
For more information, please refer to my detailed CV.
E-mail: zhenyu_liao at hust.edu.cn.