
Jingzhao Zhang

optimization, learning theory, artificial intelligence

CV

Short Bio

Assistant Professor @ Tsinghua, IIIS

Jointly Affiliated as PI @ Shanghai Qizhi Institute

Short Bio

Jingzhao Zhang is an assistant professor at Tsinghua, IIIS. He graduated in 2022 from the MIT EECS PhD program under the supervision of Prof. Ali Jadbabaie and Prof. Suvrit Sra. His research has focused on providing theoretical analyses of practical large-scale algorithms. He now aims to propose theories that are simple and can predict experimental observations. Jingzhao Zhang is also interested in machine learning applications, specifically those involving dynamical system formulations. He received the Ernst A. Guillemin SM Thesis Award and the George M. Sprowls PhD Thesis Award.

What's new

Fall 2023 optimization class materials are now available online.

Please check out our ICLR 2024 workshop on Bridging the Gap between Theory and Practice for Learning.

Uploaded the research project on the two-phase scaling law paper to the research section (Aug 2023).

If you want to join as an intern, please prepare a 15-minute presentation on a recent DL/ML/AI paper and then send me an email.

If you are interested in joining as a PhD student, please refer to my post here.

Research interests

I am interested in theoretical explanations of practical optimization algorithms.

I am working on developing faster training algorithms.

I enjoy applying optimization algorithms to real-world problems.

Our group

PhD students:

Jingwei Li

Lesi Chen

Bei Luo

Xinran Gu

Hongyi Zhou

Undergraduate students:

Huaqing Zhang

Jiazheng Li

Hong Lu

Alumni:

Peiyuan Zhang (PhD at Yale)

Yusong Zhu (PhD at UT Austin)

Kaiyue Wen (PhD at Stanford)

Research Projects

For a complete list, please refer to my Google Scholar page.


2024: Statistical learning in LLMs.

A presentation on several recent works.


2023: Two phases of scaling laws for kNN classifiers.

A short presentation on the arXiv manuscript.


2022: On the nonsmoothness of neural network training.

A tale of three recent works: why is neural network training non-smooth from an optimization perspective, and how should we analyze the process?
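
One concrete way to formalize this non-smoothness (a hedged illustration; the three works may use a different condition) is to relax the standard L-smoothness assumption to

\[ \|\nabla^2 f(x)\| \le L_0 + L_1 \|\nabla f(x)\|, \]

which allows the local curvature to grow with the gradient norm, a behavior observed empirically in neural network training.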


2021: Theoretical understanding of adaptive gradient methods.

My PhD defense presentation.
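
For readers new to the term, here is a minimal sketch of one canonical adaptive gradient update (an Adam-style step); the function name and hyperparameters are illustrative and not tied to the thesis:

import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # Adaptive gradient methods rescale the step per coordinate using running
    # moment estimates of the gradients, rather than a single global step size.
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction (t starts at 1)
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v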


2019: An ODE perspective for Nesterov's accelerated gradient method.

My master's thesis (RQE at MIT) presentation.
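
As background, a sketch of the standard continuous-time view (the well-known ODE limit of Nesterov's accelerated gradient method; the exact scaling used in the thesis may differ):

\[ \ddot{X}(t) + \frac{3}{t}\,\dot{X}(t) + \nabla f(X(t)) = 0, \qquad X(0) = x_0, \quad \dot{X}(0) = 0, \]

whose solution satisfies f(X(t)) - f^* = O(1/t^2), mirroring the discrete O(1/k^2) rate of the method.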

Teaching

Fall 2023 Introduction to Optimization

References:

Bertsimas, Dimitris, and John N. Tsitsiklis. Introduction to linear optimization.

Boyd, Stephen P., and Lieven Vandenberghe. Convex optimization.

Bubeck, Sébastien. Convex optimization: Algorithms and complexity.

Grading: 40% HW + 30% Midterm + 30% Final

Weekly schedule:

1. Linear Programming and Polyhedra. lecture, scribe

2. Simplex and Duality. lecture, scribe

3. Linear Duality and Ellipsoid. lecture, scribe

4. Ellipsoid and Convexity. lecture, scribe

5. Convex Optimization, MaxCut. lecture, scribe

6. SDP Relaxation; Lagrangian Duality. lecture, scribe

7. Lagrangian Duality and KKT. lecture, scribe

8. Midterm

9. Newton's method. lecture, scribe

10. Self-concordance and Convergence of Newton. lecture, scribe

11. Interior Point Method. lecture, scribe

12. Gradient Method and Oracle Complexity (a small numerical sketch follows the schedule). lecture, scribe

13. Gradient Methods with Stochasticity, Nonconvexity and Mirror Maps. lecture, scribe

14. Mirror Descent and Online Learning. lecture, scribe

15. Final
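
As a companion to lecture 12, here is a minimal, self-contained sketch (not part of the official course materials) of the gradient method on a smooth convex quadratic, with the gradient-oracle calls counted explicitly:

import numpy as np

# f(x) = 0.5 * x^T A x - b^T x is smooth and convex when A is positive semidefinite.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M @ M.T                       # positive semidefinite Hessian
b = rng.standard_normal(20)

L = np.linalg.eigvalsh(A).max()   # smoothness constant (largest eigenvalue)
x = np.zeros(20)
oracle_calls = 0
for _ in range(500):
    grad = A @ x - b              # one call to the first-order oracle
    oracle_calls += 1
    x = x - grad / L              # step size 1/L gives the O(1/k) rate

def f(z):
    return 0.5 * z @ A @ z - b @ z

x_star = np.linalg.solve(A, b)
print(oracle_calls, f(x) - f(x_star))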

Related information

Email


Google Scholar

//scholar.google.com/citations?user=8NudxYsAAAAJ&hl=en