The section on Kalman filtering is so condensed that the derivations in the following sections do not make sense, at least to me.
@NitinDhiman
9 years ago
Hi, I am not clear on what each element of the Information Matrix conveys. How is it related to the entropy? Is it correct to say that the higher the value of element (i,j) of the Information Matrix, the more certain I am about the link between node i and node j? Does it also mean that it provides more information?
@Effesianable
9 years ago
Nitin Dhiman The entropy is E[-log p(X)], thus a scalar. I guess the word "information" is used in multiple contexts and should not be regarded as a strict mathematical definition here. For the multivariate normal distribution the entropy depends linearly on log det(Σ), the log-determinant of the covariance. The inverse covariance matrix is also called the precision matrix, so yes, in some (indirect) way it describes how strongly the random variables are correlated. For a diagonal covariance matrix the situation is quite clear: each diagonal entry of the precision matrix is simply the reciprocal of the corresponding variance. Computing the Fisher information for the normal distribution might give you further insight into the choice of the name "information matrix" ;-)
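For reference, the differential entropy of an n-dimensional Gaussian N(μ, Σ) is the standard formula

\[
H(X) = \tfrac{1}{2}\ln\!\big((2\pi e)^n \det\Sigma\big) = \tfrac{n}{2}\ln(2\pi e) + \tfrac{1}{2}\ln\det\Sigma,
\]

which makes the dependence on the log-determinant of the covariance explicit: shrinking the covariance (equivalently, growing the information matrix Ω = Σ⁻¹) lowers the entropy.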
@mostafamohsen250
4 years ago
Why does the EIF not have a Kalman gain?
@Ahmed-xc5be
1 year ago
Probably because it is not a Kalman filter.
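More precisely, in information form the correction step is purely additive, so no gain matrix is needed. As a sketch, using the common Probabilistic Robotics notation (Ω information matrix, ξ information vector, C_t measurement matrix, Q_t measurement noise; these symbols are an assumption and may differ from the slides), the linear information filter correction is

\[
\Omega_t = \bar{\Omega}_t + C_t^\top Q_t^{-1} C_t, \qquad
\xi_t = \bar{\xi}_t + C_t^\top Q_t^{-1} z_t.
\]

The weighting between prediction and measurement, which the Kalman gain K performs explicitly in moment form, happens implicitly when recovering the moments via Σ_t = Ω_t⁻¹ and μ_t = Ω_t⁻¹ ξ_t.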
@AttilaLengyel94
6 years ago
15:58 Shouldn't it be just t instead of t-1?
@AnkitVashisht
5 years ago
yes
@Effesianable
9 years ago
The scale factor on the first slide is wrong when talking about a multivariate normal distribution. It should be (2pi)^(-n/2)
@CyrillStachniss
9 years ago
Effesianable No, the equation is correct (see det(..))
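Assuming the slide writes the normalizer with the 2π inside the determinant (a common convention), both forms agree for an n-dimensional covariance Σ:

\[
\det(2\pi\Sigma)^{-1/2} = \big((2\pi)^n \det\Sigma\big)^{-1/2} = (2\pi)^{-n/2}\,\det(\Sigma)^{-1/2}.
\]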
Comments: 10