My Erdős number is 4, via Romain Couillet, Zhidong Bai, and Gutti Jogesh Babu.
News
[July 2024] I will be in Vienna, Austria, during ICML 2024.
[July 2024] Invited talk on “A Random Matrix Approach to Explicit and Implicit Deep Neural Networks” at the Institut de Mathématiques de Toulouse (IMT), Toulouse, France, July 2024. See slides.
[July 2024] Short course at IRIT (UT3 campus, 118 Route de Narbonne, Toulouse, France) from July 1 to 4, 2024. See slides here: Part 1, Part 2, Part 3, and Part 4.
[June 2024] One paper at EUSIPCO'2024 on a novel and efficient RMT-improved Direction-of-Arrival estimation method. We show that the standard ESPRIT (Estimation of Signal Parameters via Rotational Invariance Technique) approach is biased in the case of large arrays, but can be effectively debiased using RMT. Check our preprint here!
[May 2024] Two-hour short course at SDS, Fudan University; see the slides here.
[May 2024] One paper at ICML'2024 on RMT for Deep Equilibrium Models (DEQs, a representative class of implicit NN models) that establishes very explicit connections between implicit and explicit NNs. Given a DEQ, check our preprint here for a recipe to design an “equivalent” simple explicit NN!
[Apr 2024] Our paper on an achievable analytic solution to the Information Bottleneck for Gaussian mixtures will be presented at ISIT'2024; check here for details!
[Mar 2024] Our paper on efficient federated domain adaptation will be presented at ICASSP'2024; check here for details!
[Jan 2023] I am very grateful to be supported by the National Natural Science Foundation of China (Youth Program) for my research on the Fundamental Limit of Pruning Deep Neural Network Models via Random Matrix Methods.
[Aug 2022] One paper at NeurIPS'2022 on the eigenspectral structure of the Neural Tangent Kernel (NTK) of fully-connected deep neural networks with Gaussian mixture input data, with a compelling application to “lossless” sparsification and quantization of DNN models! This extends our previous paper at ICLR'2022. See more details here.
[Jun 2022] Our book “Random Matrix Methods for Machine Learning” is published by Cambridge University Press; see more details here!
[Jan 2022] One paper at ICLR'2022 on a highly efficient random feature compression technique with theoretical guarantees; see more details here.
For more information, please refer to my detailed CV.