Publications

(2023). Convergence of First-Order Algorithms for Meta-Learning with Moreau Envelopes. arXiv:2301.06806.

(2022). Distributed Newton-Type Methods with Communication Compression and Bernoulli Aggregation. arXiv:2206.03588.

(2022). A Damped Newton Method Achieves Global O(1/k^2) and Local Quadratic Convergence Rate. NeurIPS 2022.

(2021). ZeroSARAH: Efficient Nonconvex Finite-Sum Optimization with Zero Full Gradient Computation. arXiv:2103.01447.

(2020). Lower Bounds and Optimal Algorithms for Personalized Federated Learning. NeurIPS 2020.

(2020). Adaptive Learning of the Optimal Mini-Batch Size of SGD. OPT ML Workshop, NeurIPS 2020.
