Assignment 1 Solution


This assignment is to be submitted either through Canvas or on paper to the TA by January 30, 10:00 AM (start of class).

 

1. Consider $N$ data points, independent and uniformly distributed in a $p$-dimensional unit ball centered at the origin (i.e., $\|x_i\| \le 1$ for every $i = 1, \dots, N$). The median distance from the origin to the closest data point is given by the expression:

$$d(p, N) = \left( 1 - \left( \tfrac{1}{2} \right)^{1/N} \right)^{1/p}.$$

Prove this expression (8 points). Compute the median distance for the given values of $N$ and $p$ (2 points).

 

Hint: The volume of a ball of radius $r$ in $p$ dimensions is $V_p(r) = \frac{\pi^{p/2}}{\Gamma(p/2 + 1)}\, r^p$, where $\Gamma$ is the Gamma function. A point being the closest to the origin means that no other point among the $N$ data points has a smaller distance to the origin than it does. What is the probability of that happening under a uniform distribution in a unit ball?
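The closed-form expression can be sanity-checked numerically. The sketch below is an illustration, not part of the required proof: since only distances to the origin matter, it suffices to sample radii with the CDF implied by uniformity in the ball, $P(R \le r) = r^p$, i.e. $r = U^{1/p}$. The values $N = 20$, $p = 3$, and the trial count are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def closest_distance(N, p):
    """Distance from the origin to the nearest of N uniform points in the unit p-ball."""
    # For a uniform point in the p-ball, P(radius <= r) = r**p, so r = U**(1/p).
    radii = rng.random(N) ** (1.0 / p)
    return radii.min()

N, p, trials = 20, 3, 4000  # arbitrary illustration values
empirical = np.median([closest_distance(N, p) for _ in range(trials)])
formula = (1.0 - 0.5 ** (1.0 / N)) ** (1.0 / p)
print(f"empirical median: {empirical:.3f}, formula: {formula:.3f}")
```

The empirical median of the closest distance should agree with the formula to within Monte Carlo error.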

 

2. Compute the gradient and Hessian of the function

 

 

Find at least 3 stationary points of this function (3 points). Compute the Hessian at each stationary point you found (5 points). Show that one of these stationary points is the only local maximum of this function (2 points).
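Since the function itself is not reproduced in this extract, the following sketch uses a hypothetical example, $f(x, y) = x^4 - 2x^2 + y^2$ (NOT the assignment's function), which happens to have three stationary points. It shows how an analytic answer can be cross-checked numerically: verify that the gradient vanishes at each candidate point, then classify the point by the signs of the Hessian's eigenvalues.

```python
import numpy as np

def grad(f, x, h=1e-5):
    """Central-difference gradient of f at x."""
    g = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def hessian(f, x, h=1e-4):
    """Central-difference Hessian of f at x."""
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        ei = np.zeros(n); ei[i] = h
        for j in range(n):
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H

# Hypothetical example (not the assignment's function): f(x, y) = x^4 - 2x^2 + y^2.
# Its stationary points are (0, 0) and (+/-1, 0).
f = lambda v: v[0] ** 4 - 2 * v[0] ** 2 + v[1] ** 2

for point in [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([-1.0, 0.0])]:
    eigs = np.linalg.eigvalsh(hessian(f, point))
    kind = ("saddle" if eigs.min() < 0 < eigs.max()
            else "local min" if eigs.min() > 0 else "local max")
    print(point, np.round(grad(f, point), 6), kind)
```

For this example, $(0, 0)$ comes out as a saddle (Hessian eigenvalues $-4$ and $2$) and $(\pm 1, 0)$ as local minima; the same eigenvalue test applies to the assignment's function.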

 

 

3. Show that the function has only one stationary point (2 points), and that this point is neither a minimum nor a maximum but a saddle point (2 points).
4. If $A$ and $B$ are positive definite matrices, prove that the matrix is also positive definite (6 points).
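The specific matrix expression is not reproduced in this extract, so the sketch below uses $A + B$ as a hypothetical stand-in (for that case the algebra is one line: $x^\top (A + B) x = x^\top A x + x^\top B x > 0$ for $x \ne 0$). Whatever the actual expression, a candidate answer can be sanity-checked numerically via the eigenvalue criterion: a symmetric matrix is positive definite iff all of its eigenvalues are positive.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_spd(n):
    """A random symmetric positive definite n x n matrix (M M^T plus a ridge)."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

A, B = random_spd(4), random_spd(4)
C = A + B  # hypothetical candidate; substitute the assignment's expression here

# A symmetric matrix is positive definite iff all eigenvalues are > 0.
print(np.linalg.eigvalsh(C).min() > 0)
```

Checking a few random instances does not replace the proof, but it quickly catches a wrong conjecture about which combinations of $A$ and $B$ stay positive definite.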

 

 

 

5. Derive the forward (computing $\hat{y}$) (4 points) and backpropagation (computing the gradients) (6 points) functions for a 1-hidden-layer neural network with $k$ hidden nodes and a sigmoidal transformation function (applied at each hidden node and at the final layer generating $\hat{y}$):

$$\sigma(z) = \frac{1}{1 + e^{-z}},$$

and a 2-class cross-entropy loss function at the final layer (hint: you are allowed to combine the sigmoid and cross-entropy layers into a single one; it may be easier). … is of dimensionality …, …, ….
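A minimal sketch of the derivation, assuming a $p$-dimensional input, $k$ hidden units, and a single sigmoid output with binary cross-entropy (the sizes and the weight names `W1`, `b1`, `W2`, `b2` are placeholders, since the exact dimensionalities are not reproduced above). It uses the simplification the hint suggests: combining the output sigmoid with the cross-entropy loss gives the logit gradient $\hat{y} - y$.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    """Forward pass: input -> k sigmoidal hidden units -> sigmoidal output yhat."""
    z1 = W1 @ x + b1        # hidden pre-activations, shape (k,)
    h = sigmoid(z1)         # hidden activations
    z2 = W2 @ h + b2        # scalar logit of the output unit
    return h, sigmoid(z2)   # yhat = estimated P(y = 1 | x)

def loss(yhat, y):
    """2-class cross-entropy."""
    return -(y * np.log(yhat) + (1 - y) * np.log(1 - yhat))

def backward(x, y, W1, b1, W2, b2):
    """Gradients of the loss with respect to all parameters."""
    h, yhat = forward(x, W1, b1, W2, b2)
    dz2 = yhat - y                     # combined sigmoid + cross-entropy gradient
    dW2, db2 = dz2 * h, dz2
    dz1 = (dz2 * W2) * h * (1 - h)     # chain rule through the hidden sigmoids
    dW1, db1 = np.outer(dz1, x), dz1
    return dW1, db1, dW2, db2

# placeholder sizes: p-dimensional input, k hidden units
p, k = 5, 4
x, y = rng.standard_normal(p), 1.0
W1, b1 = rng.standard_normal((k, p)), np.zeros(k)
W2, b2 = rng.standard_normal(k), 0.0

dW1, db1, dW2, db2 = backward(x, y, W1, b1, W2, b2)

# finite-difference check of one entry of dW1
eps = 1e-6
W1p = W1.copy(); W1p[0, 0] += eps
num = (loss(forward(x, W1p, b1, W2, b2)[1], y)
       - loss(forward(x, W1, b1, W2, b2)[1], y)) / eps
print(abs(num - dW1[0, 0]) < 1e-4)
```

The finite-difference check at the end is a useful habit for this kind of derivation: any error in the chain-rule algebra shows up as a mismatch between the analytic and numerical gradients.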