What is Estimation?

• $\theta$ can be a fixed value, a random variable (e.g., a digital communication system transmitting 0 or 1), or a quantity described by a continuous probability distribution.
• An estimate of the parameter $\theta$ is denoted $\hat{\theta}$. $\hat{\theta}$ is generally a function of the data g; since g is a random vector, $\hat{\theta}$ can itself be viewed as a random variable.
• An estimate $\hat{\theta}$, compared with the true value $\theta$, contains two kinds of error:
• Random error (precision: estimation variance, noise, jitter, ...)
• Systematic error (accuracy: estimation bias, calibration error, wrong model, ...)

$\hat{\theta}$ is a random variable conditioned on $\theta$; its conditional mean is $E[\hat{\theta}\mid\theta]$.

The deviation of this conditional mean from the true value is called the bias conditioned on $\theta$:

$$b(\theta) = E[\hat{\theta}\mid\theta] - \theta$$

$\hat{\theta}$ is an unbiased estimate if $b(\theta)=0$ for all $\theta$. In most cases we want to find an unbiased estimator. This seems intuitive, but are there exceptions? (See the sketch after the MSE definition below.)

and the ensemble mean-square error, which separates into a variance (random error) term and a squared-bias (systematic error) term:

$$\mathrm{MSE}(\theta) = E\big[(\hat{\theta}-\theta)^2 \mid \theta\big] = \mathrm{Var}(\hat{\theta}\mid\theta) + b(\theta)^2$$

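To make the decomposition concrete, and to give one answer to the question about exceptions above, here is a minimal Monte Carlo sketch, assuming i.i.d. Gaussian data and NumPy; the setup and names are illustrative only, not from any reference. It compares the unbiased sample variance with the ML variance estimate, which is biased but typically has a smaller MSE:

```python
import numpy as np

rng = np.random.default_rng(0)
true_var = 4.0            # the parameter theta is the variance sigma^2
n, trials = 10, 100_000

# Each row of g is one realization of an n-sample data vector
g = rng.normal(0.0, np.sqrt(true_var), size=(trials, n))

var_unbiased = g.var(axis=1, ddof=1)   # divides by n-1: b(theta) = 0
var_ml       = g.var(axis=1, ddof=0)   # ML estimate, divides by n: biased low

for name, est in [("unbiased (n-1)", var_unbiased), ("ML (n)", var_ml)]:
    bias = est.mean() - true_var              # b(theta) = E[theta_hat | theta] - theta
    mse  = np.mean((est - true_var) ** 2)     # Var + b(theta)^2
    print(f"{name:>15}: bias = {bias:+.3f}  MSE = {mse:.3f}")
```

This is the standard kind of exception: accepting a small bias can reduce the overall mean-square error.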
Definition of the Score

The score is the first derivative of the log-likelihood function with respect to $\theta$; equivalently, it is the (fractional) sensitivity function of the likelihood:

$$s(g;\theta) = \frac{\partial}{\partial\theta}\ln \mathrm{pr}(g\mid\theta) = \frac{1}{\mathrm{pr}(g\mid\theta)}\,\frac{\partial\,\mathrm{pr}(g\mid\theta)}{\partial\theta}$$

The ML estimator $\hat{\theta}_{ML}$ is the value of $\theta$ at which the score vanishes, $s(g;\hat{\theta}_{ML}) = 0$, i.e., the maximizer of the likelihood $\mathrm{pr}(g\mid\theta)$.

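As a minimal sketch of ML estimation in practice (assuming i.i.d. Poisson data and SciPy; the data model and names are ours, not from the text), the estimate can be obtained either by solving $s(g;\theta)=0$ in closed form or by numerically maximizing the log-likelihood:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

rng = np.random.default_rng(1)
true_rate = 3.5
g = rng.poisson(true_rate, size=50)      # observed data vector g

def log_likelihood(theta):
    """Log of pr(g | theta) for i.i.d. Poisson counts."""
    return poisson.logpmf(g, theta).sum()

# Score: s(g; theta) = d/dtheta log pr(g|theta) = sum(g)/theta - n.
# Setting it to zero gives the closed-form ML estimate: the sample mean.
theta_ml_closed = g.mean()

# The same estimate via numerical maximization of the log-likelihood
res = minimize_scalar(lambda t: -log_likelihood(t), bounds=(1e-6, 20.0), method="bounded")
theta_ml_numeric = res.x

print(f"closed-form ML estimate: {theta_ml_closed:.4f}")
print(f"numerical ML estimate:   {theta_ml_numeric:.4f}")
```

Both routes give the sample mean here; for models without a closed-form root of the score, numerical maximization is the usual fallback.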
Why ML?

An ML estimate is:

• Efficient if an efficient estimate exists
• Asymptotically efficient (as you get more or better data)
• Asymptotically unbiased
• Asymptotically consistent
• Usually easy to compute
• A way of rigorously enforcing agreement with the data
• A way of doing estimation with no prior information

Why NOT ML?

ML estimation is:
– A way of rigorously enforcing agreement with the data
– A way of doing estimation with no prior information

Data are noisy. Rigorous agreement with noisy data will give noisy estimates
even though that is the best you can do without bias!
You always have some prior information and you should use it, even
though it might introduce bias.
One way to use a prior $\mathrm{pr}(\theta)$ is with the weighted likelihood:

$$\hat{\theta}_{WL} = \arg\max_{\theta}\ \mathrm{pr}(g\mid\theta)\,\mathrm{pr}(\theta) = \arg\max_{\theta}\ \mathrm{pr}(\theta\mid g)$$

This estimate is also called the maximum a posteriori or MAP estimate.

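As a minimal sketch of the ML-versus-MAP trade-off (a conjugate Gaussian model chosen purely for illustration; the parameter values and names are arbitrary assumptions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)

# Model: g_i ~ N(theta, sigma^2) i.i.d., prior: theta ~ N(mu0, tau^2)
sigma, mu0, tau = 1.0, 0.0, 0.5
true_theta = 1.2
g = rng.normal(true_theta, sigma, size=5)
n = len(g)

# ML: maximize pr(g|theta) alone -> sample mean
theta_ml = g.mean()

# MAP: maximize pr(g|theta) * pr(theta). For this conjugate Gaussian model the
# maximizer is a precision-weighted average of the sample mean and the prior mean.
w_data, w_prior = n / sigma**2, 1 / tau**2
theta_map = (w_data * theta_ml + w_prior * mu0) / (w_data + w_prior)

print(f"ML : {theta_ml:.4f}")    # tracks the noisy data exactly
print(f"MAP: {theta_map:.4f}")   # pulled toward mu0: biased, but less noisy
```

The MAP estimate is pulled toward the prior mean, which illustrates the point above: the prior introduces bias but suppresses the noise-driven variability of the pure ML estimate.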
References