Projects

NSFC-62206101: Fundamental Limits of Pruning Deep Neural Network Models via Random Matrix Methods

I lead this project (2023.01-2025.12), which focuses on the fundamental theoretical limits of pruning and quantization of deep neural networks. Its objective is to develop, using mathematical tools from random matrix theory, high-dimensional statistics, and optimization theory, a quantitative theory that characterizes the “performance and complexity tradeoff” in modern deep neural networks. The project has led to the following scientific publications:

  1. Z. Feng, Y. Wang, J. Li, F. Yang, J. Lou, T. Mi, R. C. Qiu, Z. Liao, “Robust and Communication-Efficient Federated Domain Adaptation via Random Features”, IEEE Transactions on Knowledge and Data Engineering, 2024.

  2. J. Wei, X. Lee, Z. Liao, T. Palpanas, B. Peng, “Subspace Collision: An Efficient and Accurate Framework for High-dimensional Approximate Nearest Neighbor Search”, SIGMOD International Conference on Management of Data (SIGMOD 2025), 2025. preprint

  3. W. Yang, Z. Wang, X. Mai, Z. Ling, R. C. Qiu, Z. Liao, “Inconsistency of ESPRIT DoA Estimation for Large Arrays and a Correction via RMT” (Best Student Paper Candidate), IEEE 32nd European Signal Processing Conference (EUSIPCO 2024), 2024.

  4. Z. Ling, L. Li, Z. Feng, Y. Zhang, F. Zhou, R. C. Qiu, Z. Liao, “Deep Equilibrium Models are Almost Equivalent to Not-so-deep Explicit Models for High-dimensional Gaussian Mixtures”, The Forty-first International Conference on Machine Learning (ICML 2024), 2024. preprint

  5. Y. Song, K. Wan, Z. Liao, H. Xu, G. Caire, S. Shamai, “An Achievable and Analytic Solution to Information Bottleneck for Gaussian Mixtures”, 2024 IEEE International Symposium on Information Theory (ISIT 2024), 2024.

  6. Y. Wang, Z. Feng, Z. Liao, “FedRF-Adapt: Robust and Communication-Efficient Federated Domain Adaptation via Random Features”, Workshop on Timely and Private Machine Learning over Networks, 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSPW 2024), 2024.

  7. J. Wang, S. Zhang, J. Cai, Z. Liao, C. Arenz, R. Betzholz, “Robustness of random-control quantum-state tomography”, Physical Review A, 108(2), 022408, 2023.

  8. Y. Chitour, Z. Liao, R. Couillet, “A geometric approach of gradient descent algorithms in linear neural networks”, Mathematical Control and Related Fields, 13(3), 918–945, 2023.

  9. L. Gu, Y. Du, Y. Zhang, D. Xie, S. Pu, R. C. Qiu, Z. Liao, ““Lossless” Compression of Deep Neural Networks: A High-dimensional Neural Tangent Kernel Approach”, The 36th Conference on Neural Information Processing Systems (NeurIPS 2022), 2022.


CCF-Hikvision Open Fund 20210008: Random Matrix Theory and Information Bottleneck for Neural Network Compression

This project is co-led by Prof. Kai Wan and myself (as PI), and investigates efficient compression schemes for large-scale neural network models with strong theoretical guarantees. The project has led to the following scientific publications:

  1. H. Tiomoko, Z. Liao, R. Couillet, “Random matrices in service of ML footprint: ternary random features with no performance loss”, The Tenth International Conference on Learning Representations (ICLR 2022), 2022. preprint

  2. L. Gu, Y. Du, Y. Zhang, D. Xie, S. Pu, R. C. Qiu, Z. Liao, ““Lossless” Compression of Deep Neural Networks: A High-dimensional Neural Tangent Kernel Approach”, The 36th Conference on Neural Information Processing Systems (NeurIPS 2022), 2022.

  3. Y. Song, K. Wan, Z. Liao, G. Caire, “An Achievable and Analytic Solution to Information Bottleneck for Gaussian Mixtures”, 2023.

More details on the project (in Chinese) are available here.


NSFC-12141107: Mathematical theory and methods for Reconfigurable Intelligent Surface (RIS) assisted wireless communication

This project (2022.01-2025.12) is led by Prof. R. C. Qiu and investigates the information-theoretic limits of RIS-assisted wireless communication systems, dynamical systems, and the theory of large-dimensional random matrices.

See the project homepage here.