### Hidden Markov Model of Filtering, Smoothing – Probabilistic Interpretation

#### by allenlu2007

## The previous post derived the Kalman filter from a (linear) state space model:

## Here we instead start from the Hidden Markov Model (HMM), a more general probabilistic model:

Sometimes it is called Bayes filter or recursive Bayesian filter.

The Kalman filter is the special case of a linear state space model with Gaussian distributions.

Advantages:

1. More general

2. Better nonlinear or high-dimensional filters: particle filter, unscented KF, EnKF

The following is referenced from xxx

The joint PDF above is a direct consequence of the Bayesian Network (BN): the joint PDF equals the product of all the conditional PDFs!
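As a concrete numeric sketch of this factorization, consider a 2-state HMM; the transition and emission tables below are made-up values for illustration only:

```python
import numpy as np

# Hypothetical 2-state HMM; all numbers are illustrative assumptions.
pi = np.array([0.6, 0.4])            # initial distribution P(x_1)
A = np.array([[0.7, 0.3],            # transition P(x_t | x_{t-1})
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],            # emission P(y_t | x_t)
              [0.3, 0.7]])

states = [0, 1, 1]                   # a particular state sequence x_{1:3}
obs = [0, 1, 1]                      # a particular observation sequence y_{1:3}

# Joint PDF = product of all conditional PDFs implied by the BN structure:
# P(x_{1:3}, y_{1:3}) = P(x_1) P(y_1|x_1) * prod_t P(x_t|x_{t-1}) P(y_t|x_t)
p = pi[states[0]] * B[states[0], obs[0]]
for t in range(1, len(states)):
    p *= A[states[t - 1], states[t]] * B[states[t], obs[t]]
print(p)  # -> 0.063504
```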

The key to solving the estimation problem is to divide it into filtering and smoothing!

The first equation follows directly from the Markov property (since t1:n is given, what remains is only a proportionality constant!)

The same conclusion, from another article:

(1) state is X; output is e

(1a) combines prediction and update into one equation

(2) adds smoothing (the backward pass). Smoothing takes only one step, unlike filtering, which has prediction + update.

(3) summation vs. integration

How do we turn the equations above into the familiar Kalman filter and smoother?

## Filtering

Filtering consists of two steps: prediction and update (or innovation).

Prediction:

(y is state, t is observed data)

or ( x is state, z is observed data)

or ( x is state, e is observed data)

Update (innovation):

or more precisely

where alpha > 1
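The prediction + update recursion above can be sketched as a discrete Bayes filter (the HMM forward recursion). The transition and emission tables below are hypothetical, chosen only to make the two steps concrete:

```python
import numpy as np

# Discrete Bayes filter on a hypothetical 2-state HMM.
A = np.array([[0.7, 0.3],            # P(x_t | x_{t-1})
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],            # P(y_t | x_t)
              [0.3, 0.7]])

belief = np.array([0.5, 0.5])        # prior P(x_0)
for y in [0, 0, 1]:                  # observed data
    predicted = A.T @ belief         # prediction: sum_x P(x'|x) bel(x)
    unnorm = B[:, y] * predicted     # update: multiply by likelihood P(y|x')
    belief = unnorm / unnorm.sum()   # normalization (the 1/alpha factor)
print(belief)
```

After two observations of symbol 0 and one of symbol 1, the belief has shifted toward state 1, which emits symbol 1 more often.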

### Case 1: Linear model: Kalman filter

After prediction …

After update:

A key point: Kt and Sigma_t are independent of the observation data, so they can be computed offline. The problem: if Sigma_x, Sigma_z, and the initial Sigma_t are inaccurate, the computed Kt will also be inaccurate. Training beforehand with the EM algorithm may be needed.

Or use the EnKF to compute K!
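A minimal 1-D sketch of the prediction and update steps; the model coefficients and noise variances are illustrative assumptions:

```python
import numpy as np

# 1-D Kalman filter for x_t = a x_{t-1} + w, z_t = h x_t + v;
# a, h, q, r are made-up values for illustration.
a, h = 1.0, 1.0                      # transition / observation coefficients
q, r = 0.01, 0.25                    # process / measurement noise variances

mu, sigma = 0.0, 1.0                 # initial state mean and variance
for z in [0.9, 1.1, 1.0]:            # measurements
    # Prediction
    mu_pred = a * mu
    sigma_pred = a * sigma * a + q
    # Update (innovation): note K depends only on the variances, not on z
    K = sigma_pred * h / (h * sigma_pred * h + r)
    mu = mu_pred + K * (z - h * mu_pred)
    sigma = (1 - K * h) * sigma_pred
print(mu, sigma)
```

The variance shrinks with each update regardless of the measurement values, which is exactly why K can be precomputed offline.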

### Case 2: Nonlinear model: Extended Kalman filter (1st order approximation)

If the model is nonlinear (and therefore non-Gaussian) but a well-defined function of reasonable dimension, we can use the extended Kalman filter. It is essentially a linear approximation of the nonlinear function.

From there on it proceeds exactly as in the KF.
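A minimal EKF sketch under an assumed nonlinear measurement model h(x) = x², linearized via its Jacobian H = 2x at the predicted mean; all numbers are made up for illustration:

```python
import numpy as np

# EKF with identity dynamics and nonlinear measurement z = x^2 + v.
q, r = 0.01, 0.1                     # process / measurement noise variances

mu, sigma = 1.5, 0.5                 # prior mean and variance
for z in [4.2, 3.9]:                 # measurements of x^2 around x ~ 2
    # Prediction (identity dynamics for simplicity)
    mu_pred, sigma_pred = mu, sigma + q
    # Linearize h(x) = x^2 at mu_pred: Jacobian H = 2 * mu_pred
    H = 2.0 * mu_pred
    K = sigma_pred * H / (H * sigma_pred * H + r)
    mu = mu_pred + K * (z - mu_pred**2)   # innovation uses the exact h(mu_pred)
    sigma = (1 - K * H) * sigma_pred
print(mu)
```

Note the innovation uses the exact h(mu_pred); only the covariance propagation uses the Jacobian.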

### Case 3: Nonlinear model: Unscented Kalman filter (2nd order approximation)

The figure below compares how Taylor expansion and statistical linearisation map an input Gaussian PDF to the output:

Statistical linearisation yields an output PDF whose 1st and 2nd moments (mean and variance) are both better approximations than those of Taylor-expansion linearisation. This is the basic idea of the unscented KF.

D is the dimension of the Gaussian: 1-D needs 3 sigma points; 2-D needs 5 points, etc.

Advantages of the sigma-point KF:

* No need to compute derivatives

* Based on deterministic sampling (not Monte Carlo)

* A Gaussian approximation filter that uses the exact nonlinear model (not a Taylor-expansion approximation).

Drawback: slower convergence.
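The sigma-point idea can be sketched with the unscented transform for a 1-D Gaussian (D = 1, hence 2D + 1 = 3 points). Here kappa is an assumed tuning parameter and sin is just an example nonlinearity:

```python
import numpy as np

# Unscented transform: deterministic sigma points, no derivatives needed.
def sigma_points(mu, sigma2, kappa=2.0):
    d = 1                             # Gaussian dimension
    spread = np.sqrt((d + kappa) * sigma2)
    pts = np.array([mu, mu + spread, mu - spread])     # 2D + 1 = 3 points
    w = np.array([kappa / (d + kappa),
                  0.5 / (d + kappa),
                  0.5 / (d + kappa)])                  # weights sum to 1
    return pts, w

f = np.sin                            # the exact nonlinear model
pts, w = sigma_points(mu=0.0, sigma2=0.1)
y = f(pts)                            # propagate each point exactly
mean = np.sum(w * y)                  # weighted output mean
var = np.sum(w * (y - mean) ** 2)     # weighted output variance
print(mean, var)
```

The output mean and variance come from pushing the points through the exact nonlinearity, which is why no Jacobian appears anywhere.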

### Case 4: particle filter (or sequential Monte Carlo filter)

For the nonlinear, non-Gaussian, and high-dimensional case, where there is no well-defined nonlinear function.
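A bootstrap particle filter sketch showing the predict / weight / resample loop; the dynamics and noise levels are made-up assumptions:

```python
import numpy as np

# Bootstrap particle filter (sequential Monte Carlo) on a toy 1-D model.
rng = np.random.default_rng(0)
N = 1000
particles = rng.normal(0.0, 1.0, N)  # samples from the prior

for z in [0.8, 1.0, 1.2]:            # observations = state + noise
    # Prediction: propagate each particle through the (here trivial) dynamics
    particles = particles + rng.normal(0.0, 0.1, N)
    # Update: weight each particle by the likelihood N(z; x, 0.5^2)
    w = np.exp(-0.5 * ((z - particles) / 0.5) ** 2)
    w /= w.sum()
    # Resample (multinomial) to avoid weight degeneracy
    particles = rng.choice(particles, size=N, p=w)

estimate = particles.mean()
print(estimate)
```

No Gaussian assumption is needed anywhere: the particle cloud itself represents the posterior.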

### Case 5: Ensemble Kalman filter (EnKF)

A special case of the particle filter where everything is Gaussian. The only problem is the high dimensionality, which makes the covariance matrix difficult to compute.

The EnKF is a Monte Carlo approximation of the Kalman filter, which avoids evolving the covariance matrix of the state vector x. The gain is K = CH'(HCH' + R)^-1.

Recall that in the Kalman filter, Kt and Sigma_t are independent of the observed data (they depend only on Q and R)! In the EnKF this is not the case, which is arguably better: the gain takes the observed data into account and avoids inaccurate prior assumptions about the state covariance.
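A minimal EnKF sketch of the gain K = CH'(HCH' + R)^-1, with C taken as the sample covariance of the ensemble; the state dimension, noise levels, and the perturbed-observation update used here are illustrative assumptions:

```python
import numpy as np

# EnKF single analysis step: C comes from the ensemble, so (unlike the KF)
# the gain depends on the data that shaped the ensemble.
rng = np.random.default_rng(1)
N = 500
H = np.array([[1.0, 0.0]])           # observe only the first state component
R = np.array([[0.25]])               # measurement noise covariance

ensemble = rng.normal(0.0, 1.0, (2, N))   # prior ensemble, 2-D state
z = 1.0                                   # a single measurement

C = np.cov(ensemble)                      # sample state covariance (2x2)
K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)
# Update each member with a perturbed observation
z_pert = z + rng.normal(0.0, np.sqrt(R[0, 0]), N)
ensemble = ensemble + K @ (z_pert - H @ ensemble)
print(ensemble.mean(axis=1))
```

The full covariance is never propagated through time; only the ensemble members are, and C is re-estimated from them whenever the gain is needed.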

Q: what's the link between the HMM and a conventional Markov chain?

(a) A conventional Markov chain (below) does not show the time steps, only the states and the transition probabilities.

Incidentally, the state transitions and probabilities are time-invariant.

(b) In a conventional Markov chain, the state is directly observable.

(c) The HMM diagram is at the top: each state circle includes all possible states, and the diagram shows the time-step transitions.

Answer: Daphne Koller's explanation is clear (PGM on Coursera). A conventional Markov chain defines the state transition model and transition probabilities, whereas a hidden Markov model represents the temporally unrolled graph. The Markov property and time invariance are the default assumptions of a typical HMM. And in an HMM, of course, the state is unobservable.