I am a 2nd-year PhD student in Computing and Mathematical Sciences at Caltech, advised by professors Yisong Yue and Adam Wierman. My research is in statistical machine learning, motivated by applications in sustainability.
Previously, I studied Computer Science at Stanford where I was a member of the Sustainability and AI Lab and president of Code the Change. I also spent a year studying public policy as a Schwarzman Scholar at Tsinghua in Beijing.
I derive the bias-variance decomposition of mean squared error for both estimators and predictors, and I show how the two decompositions are related for linear models.
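For reference, the estimator version of the decomposition: for an estimator \(\hat\theta\) of a parameter \(\theta\),

\[
\mathbb{E}\big[(\hat\theta - \theta)^2\big]
= \underbrace{\big(\mathbb{E}[\hat\theta] - \theta\big)^2}_{\text{bias}^2}
\;+\;
\underbrace{\mathbb{E}\big[(\hat\theta - \mathbb{E}[\hat\theta])^2\big]}_{\text{variance}},
\]

which follows by adding and subtracting \(\mathbb{E}[\hat\theta]\) inside the square and noting the cross term vanishes.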
I’ve almost never been able to write correct Python import statements on the first go. Behavior is inconsistent between Python 2.7 and Python 3.6 (the two versions that I test here), and there is no single method for guaranteeing that imports will always work. This post is my dive into how to resolve common importing problems. Unless otherwise stated, all examples here work with both Python 2.7 and 3.6.
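As a taste of the kind of fix involved, here is a minimal sketch of one common failure: `import pkg` fails because the directory *containing* the package is not on `sys.path`. The `pkg`/`utils.py`/`greet` layout below is hypothetical and is created in a temp directory purely for demonstration; in a real project those files already exist.

```python
import os
import sys
import tempfile

# Hypothetical layout, built on the fly for the demo:
#   <tmp>/pkg/__init__.py
#   <tmp>/pkg/utils.py   (defines greet())
tmp = tempfile.mkdtemp()
pkg_dir = os.path.join(tmp, "pkg")
os.makedirs(pkg_dir)
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("")
with open(os.path.join(pkg_dir, "utils.py"), "w") as f:
    f.write("def greet():\n    return 'hello'\n")

# The fix: put the directory that *contains* the package on sys.path,
# then use an absolute import.
sys.path.insert(0, tmp)
from pkg.utils import greet

print(greet())  # -> hello
```

The same pattern (absolute import plus an explicit `sys.path` entry) behaves identically on 2.7 and 3.6, which is part of why it is a reliable fallback.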
In July 2020, GitHub announced that in mid-2021, “personal access or OAuth tokens will be required for all authenticated Git operations.” In December 2020, GitHub set that date to August 13, 2021 (i.e., today). Effectively, GitHub users can no longer access their existing GitHub repos with their username and password using the git command line. Instead, users must use either SSH or a personal access token. This post describes how to set up personal access tokens and use them with Git.
I prove key properties of Schur complements and use them to derive the matrix inversion lemma.
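For reference, given a block matrix \(M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}\) with \(A\) invertible, the Schur complement of \(A\) in \(M\) is \(M/A = D - C A^{-1} B\), and the matrix inversion lemma (Woodbury identity) states that, whenever the relevant inverses exist,

\[
(A + UCV)^{-1} = A^{-1} - A^{-1} U \big(C^{-1} + V A^{-1} U\big)^{-1} V A^{-1}.
\]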
While trying to learn about the linear quadratic regulator (LQR) controller, I came across UC Berkeley’s course on deep reinforcement learning. Sadly, their lecture slides on model-based planning (Lec. 10 in the 2020 offering of CS285) are riddled with typos, equations cut off at the slide edges, and dense notation. This post presents my own derivations of the LQR controller for discrete-time finite-horizon time-varying systems.
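As a quick summary of the standard result (using my own notation, not the slides’): for dynamics \(x_{t+1} = A_t x_t + B_t u_t\) and cost \(\sum_t x_t^\top Q_t x_t + u_t^\top R_t u_t\), the finite-horizon LQR solution is the backward Riccati recursion

\[
P_T = Q_T, \qquad
K_t = -\big(R_t + B_t^\top P_{t+1} B_t\big)^{-1} B_t^\top P_{t+1} A_t,
\]
\[
P_t = Q_t + A_t^\top P_{t+1} A_t
- A_t^\top P_{t+1} B_t \big(R_t + B_t^\top P_{t+1} B_t\big)^{-1} B_t^\top P_{t+1} A_t,
\]

with optimal control \(u_t = K_t x_t\).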
Given an undirected graph \(G = (V, E)\), a common task is to identify clusters among the nodes. The signs of the entries in the second eigenvector of the normalized graph Laplacian provide a convenient way to partition the graph into two clusters; this “spectral clustering” method has strong theoretical foundations. In this post, I highlight several theoretical works that generalize the technique to \(k\)-way clustering.
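The two-cluster case can be sketched in a few lines of NumPy (a toy illustration with a graph I made up, not code from the works discussed): build the symmetric normalized Laplacian \(L = I - D^{-1/2} A D^{-1/2}\), take the eigenvector for the second-smallest eigenvalue, and split nodes by sign.

```python
import numpy as np

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by one edge (2,3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}.
d = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(n) - D_inv_sqrt.dot(A).dot(D_inv_sqrt)

# eigh returns eigenvalues in ascending order; the sign pattern of the
# second eigenvector bisects the graph into two clusters.
eigvals, eigvecs = np.linalg.eigh(L)
labels = (eigvecs[:, 1] > 0).astype(int)
print(labels)
```

On this graph the sign split recovers the two triangles; note the eigenvector’s overall sign is arbitrary, so only the partition (not which side is labeled 1) is determined.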
Upgrading to MathJax 3 was far from smooth.