Sitemap
A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.
Pages
About Me
Posts
Gaussian Processes (GP) for Time Series Forecasting
Published:
Time-series forecasting is a critical application of Gaussian Processes (GPs), as they offer a flexible and probabilistic framework for predicting future values in sequential data. GPs not only provide point predictions but also quantify uncertainty, making them particularly useful in scenarios where confidence in predictions is important.
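As a minimal sketch of this uncertainty-aware forecasting (using scikit-learn on synthetic data, not code from the post itself):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy series: a noisy sine observed at irregular times (synthetic, for illustration only)
rng = np.random.default_rng(0)
t_train = np.sort(rng.uniform(0, 10, 30)).reshape(-1, 1)
y_train = np.sin(t_train).ravel() + 0.1 * rng.normal(size=30)

# RBF kernel captures smooth temporal correlation; WhiteKernel accounts for observation noise
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(0.1), normalize_y=True)
gp.fit(t_train, y_train)

# Forecast beyond the observed range: the GP returns a mean and a pointwise standard deviation
t_future = np.linspace(10, 12, 20).reshape(-1, 1)
y_mean, y_std = gp.predict(t_future, return_std=True)
print(y_mean[:3], y_std[:3])  # y_std grows away from the data, reflecting increasing uncertainty
```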
Connections among autoregressive (AR) processes, Cochrane-Orcutt correction, Ornstein-Uhlenbeck (OU) processes, and Gaussian Processes (GP)
Published:
In this post, we’ll explore four important concepts in time series modeling and stochastic processes: autoregressive (AR) processes, the Cochrane-Orcutt correction, Ornstein-Uhlenbeck (OU) processes, and Gaussian processes (GPs). After explaining each concept, we examine their connections and differences, and close with some literature on applications of these models to driving behavior (car-following) modeling.
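As a preview of one such connection (standard results, sketched here rather than quoted from the post): a stationary OU process is the continuous-time analogue of an AR(1) process, and its covariance function is exactly the exponential kernel of a GP,

$$
x_t = \phi x_{t-1} + \varepsilon_t, \qquad
dX_t = -\theta X_t\,dt + \sigma\,dW_t, \qquad
k(t, t') = \frac{\sigma^2}{2\theta}\, e^{-\theta |t - t'|},
$$

and sampling the OU process at a fixed spacing \(\Delta\) recovers an AR(1) process with \(\phi = e^{-\theta\Delta}\).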
Modeling Autocorrelation: FFT vs Gaussian Processes
Published:
Autocorrelation is a key property of time series data, describing the dependency of a variable on its past values. Both the fast Fourier transform (FFT) and Gaussian processes (GPs) can model autocorrelation, but they operate in fundamentally different domains: the FFT in the frequency domain and GPs in the time domain. Despite their differences, the two methods are mathematically connected through the spectral representation theorem. This blog explores the core concepts, their mathematical underpinnings, and practical differences.
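A minimal sketch of the frequency-domain route (assuming NumPy; not code from the post): by the Wiener-Khinchin theorem, the autocovariance of a stationary series is the inverse Fourier transform of its power spectrum.

```python
import numpy as np

# Smooth some white noise so the series has nontrivial autocorrelation (synthetic data)
rng = np.random.default_rng(42)
x = np.convolve(rng.normal(size=1024), np.ones(5) / 5, mode="same")

# Power spectrum via FFT, then inverse FFT gives the (circular) autocovariance
x_centered = x - x.mean()
spectrum = np.abs(np.fft.fft(x_centered)) ** 2
autocov = np.fft.ifft(spectrum).real / len(x)

# Normalize so the lag-0 value is 1
autocorr = autocov / autocov[0]
print(autocorr[:5])
```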
Proof: unbiasedness of ordinary least squares (OLS)
Published:
Consider the linear regression model: \(\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon},\) where:
- \(\mathbf{y}\) is an \(n \times 1\) vector of observations.
- \(\mathbf{X}\) is an \(n \times p\) design matrix (full column rank).
- \(\boldsymbol{\beta}\) is a \(p \times 1\) vector of unknown parameters.
- \(\boldsymbol{\varepsilon}\) is an \(n \times 1\) vector of errors with \(\mathbb{E}[\boldsymbol{\varepsilon}|\mathbf{X}] = \mathbf{0}\).
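The unbiasedness argument itself is short; sketched under the assumptions above,

$$
\hat{\boldsymbol{\beta}}
= (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}
= \boldsymbol{\beta} + (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\boldsymbol{\varepsilon},
\qquad
\mathbb{E}[\hat{\boldsymbol{\beta}} \mid \mathbf{X}]
= \boldsymbol{\beta} + (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\,\mathbb{E}[\boldsymbol{\varepsilon} \mid \mathbf{X}]
= \boldsymbol{\beta}.
$$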
From Ordinary Least Squares (OLS) to Generalized Least Squares (GLS)
Published:
Ordinary Least Squares (OLS) is one of the most widely used methods for linear regression. It provides unbiased estimates of the model parameters under the assumption that the error terms are independent and identically distributed (i.i.d.) with constant variance. However, real-world data often violate these assumptions. When the errors exhibit heteroskedasticity (non-constant variance) or correlation, OLS estimates remain **unbiased** (see this post) but lose their efficiency, leading to incorrect standard errors and confidence intervals.
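For concreteness (a standard result, sketched here rather than quoted from the post): when \(\operatorname{Var}(\boldsymbol{\varepsilon} \mid \mathbf{X}) = \sigma^2\boldsymbol{\Omega}\) with \(\boldsymbol{\Omega}\) known and positive definite, GLS reweights the data by \(\boldsymbol{\Omega}^{-1}\),

$$
\hat{\boldsymbol{\beta}}_{\text{GLS}}
= (\mathbf{X}^\top\boldsymbol{\Omega}^{-1}\mathbf{X})^{-1}\mathbf{X}^\top\boldsymbol{\Omega}^{-1}\mathbf{y},
$$

which reduces to the OLS estimator when \(\boldsymbol{\Omega} = \mathbf{I}\).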
Random Effects and Hierarchical Models in Driving Behaviors Modeling
Published:
In many driving behavior studies, we model how a following vehicle responds to the movement of a lead vehicle. For example, the Intelligent Driver Model (IDM) uses a set of parameters \(\boldsymbol{\theta} = (v_0, T, a_{\text{max}}, b, s_0)\) to describe a driver’s response in terms of desired speed, time headway, maximum acceleration, comfortable deceleration, and minimal spacing. A critical challenge, however, is that not all drivers behave the same way. Some maintain larger headways, others brake more aggressively, and still others prefer smoother accelerations.
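For reference, these parameters enter the standard IDM acceleration equation (sketched here; notation may differ slightly from the post),

$$
\dot{v} = a_{\text{max}}\left[1 - \left(\frac{v}{v_0}\right)^{\delta} - \left(\frac{s^*(v, \Delta v)}{s}\right)^{2}\right],
\qquad
s^*(v, \Delta v) = s_0 + v T + \frac{v\,\Delta v}{2\sqrt{a_{\text{max}}\, b}},
$$

where \(s\) is the gap to the lead vehicle, \(\Delta v\) the approaching rate, and \(\delta\) (typically 4) the acceleration exponent.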
Heterogeneity and Hierarchical Models: Understanding Pooled, Unpooled, and Hierarchical Approaches
Published:
Hierarchical models are powerful tools in statistical modeling and machine learning, enabling us to represent data with complex dependency structures. These models are particularly useful in contexts where data is naturally grouped or exhibits multi-level variability. A critical aspect of hierarchical models lies in their hyperparameters, which control the relationships between different levels of the model.
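Schematically, for observations \(y_{ij}\) in groups \(j = 1, \dots, J\) (a sketch of the three approaches, not notation taken from the post):

$$
\text{pooled: } \theta_j \equiv \theta, \qquad
\text{unpooled: } \theta_j \text{ estimated independently}, \qquad
\text{hierarchical: } \theta_j \sim \mathcal{N}(\mu, \tau^2),
$$

with the hyperparameters \(\mu\) and \(\tau\) learned from all groups jointly, so each \(\theta_j\) is shrunk toward the shared mean.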
Understanding the Log-Sum-Exp Trick and Its Application in Hidden Markov Models (HMMs)
Published:
The log-sum-exp trick is a critical technique in numerical computations involving logarithms and exponentials. It is widely used in machine learning, especially in algorithms like the forward-backward procedure in Hidden Markov Models (HMMs). In this post, we will cover:
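As a quick preview of the core idea (a minimal sketch; the post derives and applies it in detail):

```python
import numpy as np

def log_sum_exp(x):
    """Numerically stable log(sum(exp(x))): shift by the maximum before exponentiating."""
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

# Naive evaluation overflows for large log-probabilities; the shifted version does not
x = np.array([1000.0, 1001.0, 1002.0])
print(log_sum_exp(x))             # ~1002.41
print(np.log(np.sum(np.exp(x))))  # inf (overflow)
```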
Matrix Derivative of Frobenius norm involving Hadamard Product
Published:
Problem: Solve $\frac{\partial\left\|\boldsymbol{A}\circ(\boldsymbol{Y}-\boldsymbol{W}^\top\boldsymbol{X})\right\|_{F}^{2}}{\partial\boldsymbol{W}}$ and $\frac{\partial\left\|\boldsymbol{A}\circ(\boldsymbol{Y}-\boldsymbol{W}^\top\boldsymbol{X})\right\|_{F}^{2}}{\partial\boldsymbol{X}}$, where $\circ$ denotes the Hadamard product, and all variables are matrices.
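One route to the result (sketched here; the post's own derivation may proceed differently) is to expand the norm elementwise, \(\|\boldsymbol{M}\|_{F}^{2} = \sum_{ij} M_{ij}^{2}\), and differentiate term by term, which gives

$$
\frac{\partial\left\|\boldsymbol{A}\circ(\boldsymbol{Y}-\boldsymbol{W}^\top\boldsymbol{X})\right\|_{F}^{2}}{\partial\boldsymbol{W}}
= -2\,\boldsymbol{X}\bigl(\boldsymbol{A}\circ\boldsymbol{A}\circ(\boldsymbol{Y}-\boldsymbol{W}^\top\boldsymbol{X})\bigr)^\top,
\qquad
\frac{\partial\left\|\boldsymbol{A}\circ(\boldsymbol{Y}-\boldsymbol{W}^\top\boldsymbol{X})\right\|_{F}^{2}}{\partial\boldsymbol{X}}
= -2\,\boldsymbol{W}\bigl(\boldsymbol{A}\circ\boldsymbol{A}\circ(\boldsymbol{Y}-\boldsymbol{W}^\top\boldsymbol{X})\bigr).
$$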
Portfolio
Portfolio item number 1
Short description of portfolio item number 1
Portfolio item number 2
Short description of portfolio item number 2