Research

Overview

My research bridges computational mathematics, machine learning, and scientific computing to develop data-driven numerical algorithms that overcome the limitations of traditional methods. By embedding learning mechanisms within classical numerical frameworks, these algorithms adapt automatically to problem structure, scale, and uncertainty, achieving stable and scalable performance on large, heterogeneous, and ill-conditioned systems in physics, engineering, and data science.

Stochastic Optimization · Scientific Machine Learning · Numerical Linear Algebra · High-Performance Computing

Research Directions

Stochastic Optimization

I develop scalable stochastic estimators and iterative methods for large-scale optimization and inference, focusing on bias-controlled, low-variance approximations to linear solves, traces, and log-determinants that remain stable even on ill-conditioned systems.

Stochastic Iterative Methods · Preconditioned SGD · Gaussian Processes
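The trace estimators this direction builds on can be illustrated with Hutchinson's stochastic trace estimator, the basic member of this family. The sketch below is illustrative only (the function name, probe count, and test matrix are my own choices, not taken from any specific project); it shows why only matrix-vector products are needed, which is what lets the same idea extend to implicit operators such as solution operators of linear systems.

```python
import numpy as np

def hutchinson_trace(matvec, n, num_samples=100, seed=None):
    """Unbiased stochastic trace estimate: E[z^T A z] = tr(A)
    whenever E[z z^T] = I. Rademacher probes keep the variance low."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_samples):
        z = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
        total += z @ matvec(z)               # one matrix-vector product per sample
    return total / num_samples

# Example: a random symmetric positive definite test matrix.
rng = np.random.default_rng(0)
n = 200
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)
est = hutchinson_trace(lambda v: A @ v, n, num_samples=500, seed=1)
print(f"estimate: {est:.1f}, exact: {np.trace(A):.1f}")
```

Because the estimator touches A only through `matvec`, A never needs to be formed explicitly; variance-reduced and bias-controlled variants refine this same template.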

Data-Driven Hybrid Solvers

I develop data-driven hybrid solvers that combine PDE structure with learning-based components to handle regimes where classical multigrid, ILU, and sparse approximate inverses break down. These physics-constrained learned solvers retain the stability and interpretability of numerical methods while adapting to global coupling, anisotropy, and multiscale heterogeneity in realistic models.

PDE Solvers · Operator Learning · Green's Functions
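The hybrid pattern — a classical outer iteration wrapped around a pluggable, possibly learned, approximate inverse — can be sketched as a preconditioned conjugate gradient loop. This is an illustrative sketch, not code from a specific project: the Jacobi preconditioner below is a trivial stand-in for where a learned operator (for example, a Green's-function surrogate) would plug in, while the Krylov loop supplies the classical stability and residual control.

```python
import numpy as np

def pcg(A_mv, b, M_mv, tol=1e-8, maxiter=1000):
    """Preconditioned conjugate gradient with a pluggable
    preconditioner M ~ A^{-1}. In a hybrid solver, M_mv is the
    learned component; the outer Krylov loop keeps the classical
    convergence guarantees and an explicit residual check."""
    x = np.zeros_like(b)
    r = b - A_mv(x)
    z = M_mv(r)
    p = z.copy()
    rz = r @ z
    b_norm = np.linalg.norm(b)
    for k in range(maxiter):
        Ap = A_mv(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * b_norm:
            return x, k + 1
        z = M_mv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter

# Model problem: 1D Poisson (tridiagonal), a classic PDE test case.
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
d_inv = 1.0 / np.diag(A)  # Jacobi stand-in for a learned approximate inverse
x, iters = pcg(lambda v: A @ v, b, lambda r: d_inv * r)
print(f"converged in {iters} iterations")
```

Swapping `M_mv` for a better approximate inverse changes only the iteration count, not the correctness of the solve — which is what makes the solver safe to combine with a learned component.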

High-Performance Numerical Algorithms

I design preconditioners and eigensolvers for large sparse problems on distributed-memory and GPU-accelerated systems. These methods preserve the stability of classical solvers while exposing multilevel parallelism, yielding robust, predictable performance for large-scale PDE and simulation workloads.

Distributed-Memory Systems · GPU Acceleration · Mixed-Precision
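A standard mixed-precision pattern in this setting is iterative refinement: run the expensive factorization-level work in float32, then recover float64 accuracy with cheap high-precision residual corrections. A minimal NumPy sketch, with the explicit inverse standing in for a reusable low-precision factorization (function and variable names are illustrative):

```python
import numpy as np

def refine_solve(A, b, iters=5):
    """Mixed-precision iterative refinement for Ax = b.
    The costly O(n^3) step runs once in float32; each correction
    needs only a float64 residual and a cheap low-precision apply."""
    A32_inv = np.linalg.inv(A.astype(np.float32))  # stand-in for a float32 LU factorization
    x = (A32_inv @ b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                              # residual computed in float64
        x += (A32_inv @ r.astype(np.float32)).astype(np.float64)
    return x

# Example: a well-conditioned dense system.
rng = np.random.default_rng(0)
n = 300
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)
x = refine_solve(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

On GPUs the payoff comes from running the dominant factorization in the faster low-precision units while the inexpensive residual loop restores full accuracy, provided the system is well enough conditioned for the refinement to contract.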