
Month: June, 2014

Factor Graph and Examples

by allenlu2007

 

PGMs basically fall into three classes: directed graphs, undirected graphs, and factor graphs.

Factor graph –> bipartite graph: variable nodes on one side, factor nodes on the other.

Message-passing algorithms: the sum-product algorithm (for marginals) and the max-product algorithm (for MAP estimates).
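
A minimal sketch of both algorithms (toy numbers of my own) on a two-variable chain x1 - f12 - x2 with a unary factor f1:

import numpy as np

f1  = np.array([0.7, 0.3])                      # unary factor f1(x1)
f12 = np.array([[0.9, 0.1],                     # pairwise factor f12(x1, x2)
                [0.2, 0.8]])

# Sum-product: marginal of x2 is sum_{x1} f1(x1) f12(x1, x2).
marg_x2 = f1 @ f12
marg_x2 /= marg_x2.sum()                        # -> [0.69, 0.31]

# Max-product: replace the sum with a max to get max-marginals.
max_marg_x2 = (f1[:, None] * f12).max(axis=0)   # -> [0.63, 0.24]
print(marg_x2, max_marg_x2.argmax())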

 

Applications:

LDPC codes, turbo codes, Sudoku solvers, the Viterbi algorithm, the Kalman filter, HMMs, and even some FFT factorizations

 

LDPC and Sudoku have a similar structure.

Sudoku example:

 


 

 

PGM example – Sudoku

by allenlu2007

 

PGM examples: both Sudoku and LDPC decoding map onto a bipartite (factor) graph.
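
A small sketch of that shared structure (my own illustration): the 81 Sudoku cells are variable nodes and the 27 "all-different" constraints are factor nodes, in direct analogy to the bits and parity checks of an LDPC code.

cells = [(r, c) for r in range(9) for c in range(9)]          # 81 variable nodes
factors  = [[(r, c) for c in range(9)] for r in range(9)]     # 9 row constraints
factors += [[(r, c) for r in range(9)] for c in range(9)]     # 9 column constraints
factors += [[(3*br + i, 3*bc + j) for i in range(3) for j in range(3)]
            for br in range(3) for bc in range(3)]            # 9 box constraints
print(len(cells), "variable nodes,", len(factors), "factor nodes")
# Each cell sits in exactly 3 factors, just as an LDPC bit sits in several checks.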

 

HMM example of Viterbi and Forward Backward

by allenlu2007

 

 

HMM topology

 

Viterbi computes the maximum of the joint PDF, i.e. max-product belief propagation:

x̂_MAP = arg max_x p(x)

Forward-backward computes the marginal PDFs p(x_t | y), i.e. sum-product belief propagation.
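
Both are easy to see on a toy two-state HMM; here is a minimal sketch (transition, emission, and initial probabilities are made up):

import numpy as np

pi = np.array([0.6, 0.4])                  # initial distribution
A  = np.array([[0.7, 0.3], [0.4, 0.6]])    # A[i, j] = p(x_t = j | x_{t-1} = i)
B  = np.array([[0.9, 0.1], [0.2, 0.8]])    # B[i, k] = p(y = k | x = i)
y  = [0, 0, 1]                             # observed sequence

# Viterbi (max-product): most likely state path.
delta, back = pi * B[:, y[0]], []
for t in range(1, len(y)):
    scores = delta[:, None] * A            # scores[i, j]: best way to move i -> j
    back.append(scores.argmax(axis=0))
    delta = scores.max(axis=0) * B[:, y[t]]
path = [int(delta.argmax())]
for bp in reversed(back):                  # backtrack
    path.insert(0, int(bp[path[0]]))

# Forward-backward (sum-product): marginals p(x_t | y).
alpha = [pi * B[:, y[0]]]
for t in range(1, len(y)):
    alpha.append((alpha[-1] @ A) * B[:, y[t]])
beta = [np.ones(2)]
for t in range(len(y) - 1, 0, -1):
    beta.insert(0, A @ (B[:, y[t]] * beta[0]))
marginals = [a * b / (a * b).sum() for a, b in zip(alpha, beta)]
print("Viterbi path:", path)
print("marginals:", marginals)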

 

The Kalman filter is different from Viterbi: the Kalman filter (and the RTS smoother) is a forward-backward, i.e. sum-product, algorithm on a linear-Gaussian state-space model.
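
A minimal 1-D sketch of the forward (filtering) pass (model and numbers are my own; the RTS smoother would add the backward pass):

import numpy as np

# Random-walk state x_t = x_{t-1} + w, w ~ N(0, q); observation y_t = x_t + v, v ~ N(0, r).
q, r = 0.01, 0.25
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(0, np.sqrt(q), 50))     # true state
y = x + rng.normal(0, np.sqrt(r), 50)            # noisy measurements

m, p = 0.0, 1.0                                  # prior mean and variance
for yt in y:
    p = p + q                                    # predict
    k = p / (p + r)                              # Kalman gain
    m, p = m + k * (yt - m), (1 - k) * p         # update
print("final estimate:", m, "true state:", x[-1])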

 

Max-product algorithm

Max-sum algorithm (the same computation in the log domain, which is numerically more stable)

Markov Random Field and Huygens’ Principle

by allenlu2007

 

Quantum mechanics: the probability wave.

Can it be explained by a Markov random field?

http://www.ece.uvic.ca/~bctill/papers/numacoust/Enders_2001.pdf

 

Ordinary Linear Regression vs. Bayesian Linear Learning

by allenlu2007

 

Discriminative vs. generative learning

The distribution p(y|x) is the natural distribution for classifying a given example x into a class y, which is why algorithms that model this directly are called discriminative algorithms. Generative algorithms model p(x,y), which can be transformed into p(y|x) by applying Bayes rule and then used for classification. However, the distribution p(x,y) can also be used for other purposes. For example, you could use p(x,y) to generate likely (x,y) pairs.
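
A tiny worked example of that Bayes-rule step (the joint table is made up):

import numpy as np

p_xy = np.array([[0.30, 0.10],     # rows: x in {0, 1}
                 [0.15, 0.45]])    # cols: y in {0, 1}; entries are p(x, y)
p_y_given_x = p_xy / p_xy.sum(axis=1, keepdims=True)   # Bayes rule: p(y|x) = p(x,y)/p(x)
print(p_y_given_x)

# The same joint can also generate likely (x, y) pairs:
rng = np.random.default_rng(0)
idx = rng.choice(4, size=5, p=p_xy.ravel())
print([divmod(int(i), 2) for i in idx])          # sampled (x, y) pairs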

From the description above you might be thinking that generative models are more generally useful and therefore better, but it’s not as simple as that. This paper is a very popular reference on the subject of discriminative vs. generative classifiers, but it’s pretty heavy going. The overall gist is that discriminative models generally outperform generative models in classification tasks.

 

The following is from a Bishop video lecture (MLSS 2009?):

Regression is discriminative learning (?)

Pro: simple; can sometimes ignore irrelevant information

Con: loses some information

 

Bayesian Learning (generative ?)

Find the likelihood function

Find the conjugate prior

Find the posterior distribution
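
A sketch of these three steps for linear regression with a Gaussian likelihood of known noise precision beta and a conjugate Gaussian prior of precision alpha (data and hyperparameters are my own; the update formulas are the standard conjugate-Gaussian ones, as in Bishop's PRML):

import numpy as np

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(20), rng.uniform(-1, 1, 20)])  # design matrix [1, x]
w_true = np.array([0.5, -1.0])
beta = 25.0                                                 # known noise precision
y = X @ w_true + rng.normal(0, 1 / np.sqrt(beta), 20)       # likelihood: y ~ N(Xw, 1/beta)

alpha = 2.0                                                 # prior: w ~ N(0, I/alpha)
SN = np.linalg.inv(alpha * np.eye(2) + beta * X.T @ X)      # posterior covariance
mN = beta * SN @ X.T @ y                                    # posterior mean
print("posterior mean:", mN, "true weights:", w_true)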

 

Pro: knowing the full posterior provides more information

Con: more complicated computation

 

With the posterior, one can do more, e.g.:

apply a custom loss function

add a rejection option

correct for class imbalance (by adjusting the prior)

and so on

 

 

Statistics Applications in Mixed Signal Circuit Design

by allenlu2007

 

Circuit designers usually care about statistics only when talking about yield,
for example the distribution of a bandgap voltage, or the sizing of a current-steering DAC.
This falls under the "design for yield" goal: the design itself is deterministic, and statistics is merely a tool to optimize yield.
 
This article takes a different perspective. We treat some quantity (a voltage, a timing, etc.) as something to be estimated in a statistical sense. Either the quantity itself is uncertain (corrupted by noise, e.g. the DC level of a modulated signal), or the measuring component introduces bias (deterministic) or noise (e.g. the comparator in an ADC).

The key point is that estimation can make errors; the question is how to control those errors with statistical methods.
 
 
Problems to estimate
 
Case 1: quantization

(a) a voltage (or current) to quantize: ADC

(b) a delay or phase to quantize: digital PLL
 
 
Case 2: function
(a) DC offset cancellation
 
(b) timing recovery or detection
 
Case 3: 
 
 
 
Examples:
 
1. ADC: the most intuitive way to implement an ADC is to use 2^N − 1 comparators to quantize the input voltage, as in a flash ADC. If a comparator makes an error, the flash ADC is quite robust because it uses a thermometer encoder. Thermometer coding is robust against decision errors, both systematic (bias) and random. However, this robustness comes at the price of heavy redundancy.
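
A small sketch of why (my own illustration): thermometer decoding is just counting ones, so a single wrong comparator costs about one LSB, whereas a single flipped bit in a plain binary code can be catastrophic:

def decode_thermometer(bits):
    return sum(bits)               # decoding = counting the comparators that fired

clean = [1, 1, 1, 1, 0, 0, 0]      # ideal 7-comparator code for level 4
err   = [1, 1, 1, 0, 0, 0, 0]      # one comparator failed to fire
print(decode_thermometer(clean), decode_thermometer(err))   # 4 vs 3: one LSB off
# In plain binary, flipping the MSB of 100 (4) gives 000 (0): an error of 4 LSBs.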
 
The only problems with a flash ADC are area and power, which grow exponentially with bit resolution.
 
The other extreme is the SAR ADC, with only one comparator and a DAC.
The conversion then becomes a noisy search problem!
 
There are two popular search algorithms: linear search and binary search. Linear search is similar to a flash ADC, trading time steps for voltage steps; it is not practical at high resolution.

Binary search is the most efficient search algorithm. However, an error made during the search is not recoverable, and the earlier the wrong decision, the larger the resulting error. The goal is to minimize the damage when a comparison does go wrong.
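
A sketch of that failure mode (a hypothetical, idealized 8-bit SAR model of my own): inject a single wrong comparator decision and watch the earliest error do the most damage:

def sar_binary(vin, nbits=8, err_at=None):
    code, vdac, step = 0, 0.5, 0.25
    for b in range(nbits):
        bit = 1 if vin > vdac else 0
        if b == err_at:
            bit ^= 1                       # one wrong comparator decision
        code = (code << 1) | bit
        vdac += step if bit else -step     # binary search: halve the range
        step /= 2
    return code

print(sar_binary(0.61))                    # 156, the correct code
print(sar_binary(0.61, err_at=0))          # 127: a wrong MSB is unrecoverable
print(sar_binary(0.61, err_at=6))          # 158: a late error costs only 2 LSBs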
 
Conventionally, there are two ways to handle this: a reduced (sub-binary) radix and redundant bits.
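
A sketch of the reduced-radix idea (my own illustration; real designs differ in detail): with a radix below 2 the remaining step sizes overlap, so a wrong decision made near the comparator threshold, which is exactly where noise causes errors, can be corrected by later steps:

def sar_subbinary(vin, nsteps=12, radix=1.8, err_at=None):
    est = 0.5
    s = 0.5 * (radix - 1) / radix          # first step; all steps sum to 0.5
    for i in range(nsteps):
        bit = 1 if vin > est else 0
        if i == err_at:
            bit ^= 1                       # inject one wrong decision
        est += s if bit else -s
        s /= radix                         # shrink by the radix, not by 2
    return est

print(sar_subbinary(0.52))                 # ~0.52
print(sar_subbinary(0.52, err_at=0))       # still ~0.52: later steps absorb it
# With radix=2 the same injected error leaves a permanent offset near 0.02.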