My story

How I got here

I've always been drawn to problems where the answer isn't obvious and the data is messy. That instinct has taken me from pure mathematics at Stanford to the trading floor at Morgan Stanley to leading an AI team at Lenovo. Each chapter built on the last, and each one taught me something I couldn't have learned any other way.

Choosing mathematics

I chose math because I wanted the deepest toolkit I could get. While most of my peers were optimizing for a specific career path — CS, pre-med, consulting — I bet on versatility. Stochastic calculus, convex optimization, machine learning, distributed systems. I wanted to be the person who could look at any quantitative problem and know how to break it down from first principles.

That bet paid off faster than I expected. Stochastic calculus became the language I used to model volatility surfaces at Morgan Stanley. Convex optimization became the backbone of my portfolio construction code. The machine learning coursework gave me the foundation to build production AI systems at Lenovo. Every class that felt abstract at the time turned out to be exactly what I needed next.

Where theory met the real world

Morgan Stanley was where everything clicked. For the first time I was working with data that had real stakes behind it: TAQ feeds with millions of rows per day, timestamps down to the microsecond, every entry a decision someone made with actual capital. The pace and the precision were on a different level from anything I'd seen in academia.

I worked on equity stock selection, ranking thousands of stocks by predicted alpha using cross-sectionally scored signals ranked by information coefficient and information ratio (IC/IR). My changes improved simulated returns by 15% over the existing production model. But the microstructure work was where I really found my footing: I built pipelines for queue imbalance, adverse-selection metrics, and fill probability, then developed a GRU-based volatility forecaster that fused order-book features with HMM-tagged regime indicators. The vol model cut out-of-sample MAE by 8% on stressed weeks.
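
For concreteness, here is a minimal sketch of two of those building blocks in pandas: top-of-book queue imbalance and cross-sectional scoring of a raw alpha signal. The column names and the dates-by-tickers layout are illustrative assumptions, not the production schema.

```python
import pandas as pd

def queue_imbalance(book: pd.DataFrame) -> pd.Series:
    """Top-of-book queue imbalance in [-1, 1]; positive when bid depth dominates.

    Assumes illustrative columns 'bid_size' and 'ask_size' at the best levels.
    """
    return (book["bid_size"] - book["ask_size"]) / (book["bid_size"] + book["ask_size"])

def cross_sectional_score(signal: pd.DataFrame) -> pd.DataFrame:
    """Z-score a raw signal across the universe on each date.

    Assumes rows indexed by date and columns by ticker, so each stock is
    scored relative to its peers that day rather than by absolute value.
    """
    return signal.sub(signal.mean(axis=1), axis=0).div(signal.std(axis=1), axis=0)
```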

That summer crystallized something for me: the hard part of quantitative work is never finding patterns. Any model can find patterns. The hard part is distinguishing the patterns that persist from the ones that vanish the moment you trade on them.

Building AI systems that ship

Morgan Stanley gave me strong quantitative foundations. The next step was clear: I wanted to build full systems — not just models, but the infrastructure, orchestration, and deployment that turn a model into something people rely on every day. That's what I do now at Lenovo, where I lead a team of 5 engineers building iChain.

iChain is Lenovo's enterprise human-AI platform, with 200+ production agents across 10+ business units. A supply chain manager uses an agent I architected to decide whether to reroute a shipment. A demand planner uses another to adjust forecasts in real time. These aren't demos. They run in production, and they have to be right, fast, and cost-efficient.

I designed the orchestration engine on LangGraph — routing, parallel tool execution, batching, concurrency controls. The kind of infrastructure that separates a prototype from a product. The results: 17% lower error rate, 32% reduction in inference cost, 41% improvement in p95 latency. I also trained a domain-adapted 7B model via QLoRA as our automated quality scorer — it outperformed on our internal eval by 12% F1, which reinforced my conviction that domain-specific data matters more than raw model scale.
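
To give a flavor of that pattern, here is a stripped-down LangGraph sketch: a router node classifies each request and hands it to one of two hypothetical agents. The node names, state fields, and routing rule are illustrative; the production graph layers parallel tool execution, batching, and concurrency controls on top of this.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

class AgentState(TypedDict):
    query: str
    route: str
    result: str

def classify(state: AgentState) -> AgentState:
    # Hypothetical routing rule: pick a downstream agent from the query text.
    route = "supply_chain" if "shipment" in state["query"].lower() else "demand_planning"
    return {**state, "route": route}

def supply_chain_agent(state: AgentState) -> AgentState:
    # Placeholder for the real agent's tool calls and decision logic.
    return {**state, "result": f"reroute decision for: {state['query']}"}

def demand_planning_agent(state: AgentState) -> AgentState:
    # Placeholder for the real agent's forecast-adjustment logic.
    return {**state, "result": f"forecast adjustment for: {state['query']}"}

graph = StateGraph(AgentState)
graph.add_node("classify", classify)
graph.add_node("supply_chain", supply_chain_agent)
graph.add_node("demand_planning", demand_planning_agent)
graph.set_entry_point("classify")
graph.add_conditional_edges(
    "classify",
    lambda state: state["route"],
    {"supply_chain": "supply_chain", "demand_planning": "demand_planning"},
)
graph.add_edge("supply_chain", END)
graph.add_edge("demand_planning", END)

app = graph.compile()
# Example: app.invoke({"query": "Should we reroute this shipment?", "route": "", "result": ""})
```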

Tech: LangGraph, QLoRA, vLLM, multi-agent systems