Solved: "We say that a random variable X belongs to a Gaussian mixture model" - Solution


Questions:

  1. Let $X_1, X_2, \ldots, X_n$ be $n > 0$ independent identically distributed random variables with cdf $F_X(x)$ and pdf $f_X(x) = F_X'(x)$. Derive an expression for the cdf and pdf of $Y_1 = \max(X_1, X_2, \ldots, X_n)$ and $Y_2 = \min(X_1, X_2, \ldots, X_n)$ in terms of $F_X(x)$. [10 points]
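A sketch of the standard reasoning (not the full graded solution): independence lets the joint event $\{\max_i X_i \le y\}$ factor into $n$ identical marginal events, and the min is handled through the complement.

```latex
F_{Y_1}(y) = P(X_1 \le y, \ldots, X_n \le y) = [F_X(y)]^n,
\qquad f_{Y_1}(y) = n\,[F_X(y)]^{n-1} f_X(y),
```
```latex
F_{Y_2}(y) = 1 - P(X_1 > y, \ldots, X_n > y) = 1 - [1 - F_X(y)]^n,
\qquad f_{Y_2}(y) = n\,[1 - F_X(y)]^{n-1} f_X(y).
```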

  2. We say that a random variable $X$ belongs to a Gaussian mixture model (GMM) if $X \sim \sum_{i=1}^{K} p_i\, \mathcal{N}(\mu_i, \sigma_i^2)$, where $p_i$ is the 'mixing probability' for each of the $K$ constituent Gaussians, with $\sum_{i=1}^{K} p_i = 1$ and $\forall i,\ 0 \le p_i \le 1$.

To draw a sample from a GMM, we do the following: (1) One of the $K$ Gaussians is randomly chosen as per the PMF $\{p_1, p_2, \ldots, p_K\}$ (thus, a Gaussian with a higher mixing probability has a higher chance of being picked). (2) Let the index of the chosen Gaussian be (say) $m$. Then, you draw the value from $\mathcal{N}(\mu_m, \sigma_m^2)$. If $X$ belongs to a GMM as defined here, obtain expressions for $E(X)$, $\mathrm{Var}(X)$ and the MGF of $X$.
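The two-step sampling procedure above can be sketched in code. The assignment's deliverables here are analytical, so this is only an illustrative check; NumPy is used even though the programming questions below require MATLAB, and the function name `sample_gmm` and the example parameters are mine, not part of the assignment.

```python
import numpy as np

def sample_gmm(p, mu, sigma, n, seed=None):
    """Draw n samples from a Gaussian mixture.

    Step (1): pick a component index m ~ Categorical(p).
    Step (2): draw the value from N(mu[m], sigma[m]^2).
    """
    rng = np.random.default_rng(seed)
    m = rng.choice(len(p), size=n, p=p)                  # component labels
    return rng.normal(np.asarray(mu)[m], np.asarray(sigma)[m])

# Sanity check against E(X) = sum_i p_i mu_i (here 0.3*0 + 0.7*10 = 7.0)
# and Var(X) = sum_i p_i (sigma_i^2 + mu_i^2) - E(X)^2 (here 24.1).
p, mu, sigma = [0.3, 0.7], [0.0, 10.0], [1.0, 2.0]
x = sample_gmm(p, mu, sigma, 200_000, seed=0)
```

The point of the two-step construction is that the mixture density is a convex combination of the component densities, which is exactly what the analytical parts of the question exploit.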

Now consider a random variable of the form $X_i \sim \mathcal{N}(\mu_i, \sigma_i^2)$ for each $i \in \{1, 2, \ldots, K\}$. Define another random variable $Z = \sum_{i=1}^{K} p_i X_i$. Derive an expression for $E(Z)$, $\mathrm{Var}(Z)$ and the PDF, MGF of $Z$. [2+2+2+2+2+2+3 = 15 points]
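One way to keep the two constructions apart (a hedged sketch using the standard normal MGF $e^{\mu t + \sigma^2 t^2/2}$, and assuming the $X_i$ are independent as the problem intends): the GMM mixes *densities*, while $Z$ combines the *variables themselves*, so its MGF is a product rather than a convex combination.

```latex
\phi_X(t) = \sum_{i=1}^{K} p_i \exp\!\Big(\mu_i t + \tfrac{1}{2}\sigma_i^2 t^2\Big)
\quad \text{(mixture: convex combination of MGFs),}
```
```latex
\phi_Z(t) = \prod_{i=1}^{K} \exp\!\Big(p_i \mu_i t + \tfrac{1}{2} p_i^2 \sigma_i^2 t^2\Big)
\quad \text{(weighted independent sum: product of MGFs).}
```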

  3. Using Markov's inequality, prove the following one-sided version of Chebyshev's inequality for random variable $X$ with mean $\mu$ and variance $\sigma^2$: $P(X \ge \mu + \tau) \le \dfrac{\sigma^2}{\sigma^2 + \tau^2}$ if $\tau > 0$, and $P(X \ge \mu + \tau) \ge 1 - \dfrac{\sigma^2}{\sigma^2 + \tau^2}$ if $\tau < 0$. [15 points]
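A common route (a sketch only; the shift parameter $s$ is an auxiliary variable I introduce, to be optimized at the end): for $\tau > 0$ and any $s \ge 0$, apply Markov's inequality to the nonnegative variable $(X - \mu + s)^2$.

```latex
P(X \ge \mu + \tau) \le P\big((X - \mu + s)^2 \ge (\tau + s)^2\big)
\le \frac{E[(X - \mu + s)^2]}{(\tau + s)^2} = \frac{\sigma^2 + s^2}{(\tau + s)^2}.
```

Minimizing the right-hand side over $s$ gives $s = \sigma^2/\tau$, which yields the stated bound $\sigma^2/(\sigma^2 + \tau^2)$; the $\tau < 0$ case follows by applying the same argument to $-X$.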

  4. Given stuff you've learned in class, prove the following bounds: $P(X \ge x) \le e^{-tx}\,\phi_X(t)$ for $t > 0$, and $P(X \le x) \le e^{-tx}\,\phi_X(t)$ for $t < 0$. Here $\phi_X(t)$ represents the MGF of random variable $X$ for parameter $t$.

Now consider that $X$ denotes the sum of $n$ independent Bernoulli random variables $X_1, X_2, \ldots, X_n$ where $E(X_i) = p_i$. Let $\mu = \sum_{i=1}^{n} p_i$. Then show that $P(X > (1+\delta)\mu) \le \dfrac{e^{\mu(e^t - 1)}}{e^{(1+\delta)t\mu}}$ for any $t \ge 0,\ \delta > 0$. You may use the inequality $1 + x \le e^x$. Further show how to tighten this bound by choosing an optimal value of $t$. [15 points]
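A sketch of where the pieces go (the standard Chernoff argument, not the graded write-up): the MGF of the independent Bernoulli sum factorizes, $1 + x \le e^x$ bounds each factor, and the first bound of the question with $x = (1+\delta)\mu$ gives the claim; differentiating the exponent then yields the optimal $t$.

```latex
\phi_X(t) = \prod_{i=1}^{n} \big(1 - p_i + p_i e^t\big)
\le \prod_{i=1}^{n} e^{p_i (e^t - 1)} = e^{\mu (e^t - 1)},
\qquad
\frac{d}{dt}\Big[\mu(e^t - 1) - (1+\delta)\,t\mu\Big] = 0
\;\Rightarrow\; t = \ln(1+\delta).
```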

  5. Consider $N$ independent random variables $X_1, X_2, \ldots, X_N$, such that each variable $X_i$ takes on the values $1, 2, 3, 4, 5$ with probability $0.05, 0.4, 0.15, 0.3, 0.1$ respectively. For different values of $N \in \{5, 10, 20, 50, 100, 200, 500, 1000, 5000, 10000\}$, do as follows:

    1. Plot the (empirically determined) distribution of the average of these random variables ($X_{avg}(N) = \sum_{i=1}^{N} X_i / N$) in the form of a histogram with 50 bins.

    2. Empirically determine the CDF of $X_{avg}(N)$ using the ecdf command of MATLAB (this is called the empirical CDF). On a separate figure, plot the empirical CDF. On this, overlay the CDF of a Gaussian having the same mean and variance as $X_{avg}(N)$. To get the CDF of the Gaussian, use the normcdf function of MATLAB.

    3. Let $E^{(N)}$ denote the empirical CDF and $\Phi^{(N)}$ denote the Gaussian CDF. Compute the maximum absolute difference (MAD) between $E^{(N)}(x)$ and $\Phi^{(N)}(x)$ numerically, at all values $x$ returned by ecdf. For this, read the documentation of ecdf carefully. Plot a graph of MAD as a function of $N$. [4+4+3+4 = 15 points]
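The MAD computation in parts 2 and 3 can be prototyped outside MATLAB. A minimal NumPy sketch under stated substitutions: `ecdf`/`normcdf` are replaced by a sorted-sample CDF and an erf-based normal CDF, and the trial count and seed are my choices, not the assignment's.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
values = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
probs = np.array([0.05, 0.4, 0.15, 0.3, 0.1])
mean = (values * probs).sum()                # E[X_i] = 3.0
var = ((values - mean) ** 2 * probs).sum()   # Var(X_i) = 1.3

def mad_vs_gaussian(N, trials=2000):
    # Empirical CDF of X_avg(N) over many trials, compared against a
    # Gaussian with the same mean and variance (variance var/N).
    xavg = rng.choice(values, size=(trials, N), p=probs).mean(axis=1)
    xs = np.sort(xavg)
    ecdf = np.arange(1, trials + 1) / trials         # ECDF at sorted points
    s = sqrt(var / N)
    gauss = np.array([0.5 * (1 + erf((x - mean) / (s * sqrt(2)))) for x in xs])
    return np.abs(ecdf - gauss).max()

mads = {N: mad_vs_gaussian(N) for N in (5, 50, 500)}
```

By the CLT, the MAD should shrink as $N$ grows, which is exactly the trend part 3 asks you to plot.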

  6. Read in the images T1.jpg and T2.jpg from the homework folder using the MATLAB function imread and cast them as a double array. These are magnetic resonance images of a portion of the human brain, acquired with different settings of the MRI machine. They both represent the same anatomical structures and are perfectly aligned (i.e. any pixel at location $(x, y)$ in both images represents the exact same physical entity). Consider random variables $I_1, I_2$ which denote the pixel intensities from the two images respectively. Write a piece of MATLAB code to shift the second image along the X direction by $t_x$ pixels where $t_x$ is an integer ranging from $-10$ to $+10$. While doing so, assign a value of 0 to unoccupied pixels. For each shift, compute the following measures of dependence between the first image and the shifted version of the second image:

the correlation coefficient $\rho$,

a measure of dependence called quadratic mutual information (QMI) defined as $\sum_{i_1} \sum_{i_2} \big(p_{I_1 I_2}(i_1, i_2) - p_{I_1}(i_1)\, p_{I_2}(i_2)\big)^2$, where $p_{I_1 I_2}(i_1, i_2)$ represents the normalized joint histogram (i.e., joint pmf) of $I_1$ and $I_2$ ('normalized' means that the entries sum up to one).

For computing the joint histogram, use a bin-width of 10 in both $I_1$ and $I_2$. For computing the marginal histogram, you need to integrate the joint histogram along one of the two directions respectively. You should write your own joint histogram routine in MATLAB - do not use any inbuilt functions for it. Plot a graph of the values of $\rho$ versus $t_x$, and another graph of the values of QMI versus $t_x$.

Repeat exactly the same steps when the second image is a negative of the first image, i.e. $I_2 = 255 - I_1$.

Comment on all the plots. In particular, what do you observe regarding the relationship between the dependence measures and the alignment between the two images? Your report should contain all four plots labelled properly, and the comments on them as mentioned before. [30 points]
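A joint-histogram/QMI routine is easy to sanity-check on synthetic data before touching the MRI images. A hedged NumPy sketch (the assignment requires your own MATLAB routine; the helper name `qmi` and the 64x64 test arrays below are mine): dependent image pairs should score higher QMI than independent ones.

```python
import numpy as np

def qmi(img1, img2, bin_width=10, levels=256):
    """Quadratic mutual information from a normalized joint histogram.

    Intensities 0..levels-1 are binned with the given bin width; the
    marginals are obtained by summing the joint pmf along each axis.
    """
    b1 = np.asarray(img1, dtype=int).ravel() // bin_width
    b2 = np.asarray(img2, dtype=int).ravel() // bin_width
    nbins = (levels + bin_width - 1) // bin_width
    joint = np.zeros((nbins, nbins))
    for u, v in zip(b1, b2):                 # hand-rolled joint histogram
        joint[u, v] += 1
    joint /= joint.sum()                     # normalize: entries sum to one
    p1 = joint.sum(axis=1)                   # marginal pmf of I1
    p2 = joint.sum(axis=0)                   # marginal pmf of I2
    return ((joint - np.outer(p1, p2)) ** 2).sum()

rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(64, 64))
dependent = qmi(a, 255 - a)                          # negative image: fully dependent
independent = qmi(a, rng.integers(0, 256, size=(64, 64)))
```

This mirrors the expected qualitative finding in the plots: dependence measures peak when the images are aligned (or exactly anti-correlated, for the negative image) and fall off as the shift $t_x$ destroys the pixel correspondence.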
