I am currently in the final year of my Ph.D. in Applied Mathematics, where my research lies at the intersection of theoretical machine learning, federated learning, online learning, and non-convex optimization. My work focuses on building rigorous mathematical foundations for modern learning algorithms, with particular emphasis on optimization landscapes, generalization guarantees, and distributed learning settings. I am deeply interested in advancing both the theoretical understanding and the practical applicability of learning systems.

My academic journey began with an M.Sc. in Mathematics, where I specialized in Real Analysis, Functional Analysis, Operator Theory, Probability, and Statistical Learning Theory. This strong mathematical background has shaped my approach to research: grounded in rigorous proofs while remaining mindful of computational challenges. Alongside theory, I explore applied machine learning, primarily in Python, and enjoy bridging abstract concepts with practical implementations.

As I complete my doctoral work, I am actively seeking postdoctoral opportunities where I can deepen my research in theoretical machine learning and optimization while collaborating with interdisciplinary teams. I am especially keen to contribute to projects that combine mathematical rigor with impactful real-world applications, and to grow as an independent researcher in academia or industry research labs.

For more information about my work, please see my resume.

News

  • Apr 2025 Paper accepted at ICASSP 2025 — "Online Learning with Non-convex Losses: New Condition to Achieve Small Dynamic Regret".
  • 2025 New preprint submitted — "Generalization of FedAvg Under Constrained Polyak-Łojasiewicz Type Conditions".
  • 2025 New preprint submitted — "Localized Growth Conditions for Decentralized FedAvg: Convergence to Global Optimal Points".
  • 2025 New preprint submitted — "A PL-type Framework for Dynamic Regret in Non-Convex, Non-Smooth Online Composite Optimization".