Homework 02 Solution


The matrix A (with coefficients aij ) is available from the class webpage (see below), and was constructed as follows. We take

aij = rij^(−2) max{cos θij, 0},

where rij denotes the distance between lamp j and the midpoint of patch i, and θij denotes the angle between the upward normal of patch i and the vector from the midpoint of patch i to lamp j, as shown in the figure. This model takes into account “self-shading” (i.e., the fact that a patch is illuminated only by lamps in the halfspace it faces) but not shading of one patch caused by another. Of course we could use a more complex illumination model, including shading and even reflections. This just changes the matrix relating the lamp powers to the patch illumination levels.
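The model above can be sketched in code. This is a hedged illustration, not part of the assignment: the function name and the geometry inputs (lamp positions, patch midpoints, patch normals) are hypothetical, chosen only to show how each coefficient aij would be computed.

```python
import numpy as np

# Hypothetical sketch: build A from lamp positions and patch geometry.
# lamps[j] is the 3-D position of lamp j; mids[i] and normals[i] are the
# midpoint and unit upward normal of patch i.
def illumination_matrix(lamps, mids, normals):
    m, n = len(mids), len(lamps)
    A = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            d = lamps[j] - mids[i]          # vector from patch midpoint to lamp j
            r = np.linalg.norm(d)           # distance r_ij
            cos_theta = normals[i] @ d / r  # cosine of angle theta_ij
            # a_ij = r_ij^(-2) max{cos theta_ij, 0}: lamps behind the patch
            # (negative cosine) contribute nothing ("self-shading")
            A[i, j] = r**-2 * max(cos_theta, 0.0)
    return A
```

For instance, a lamp directly above a patch at distance 2 gives cos θ = 1 and a coefficient of 2^(−2) = 0.25, while a lamp below the patch contributes 0.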

The problem is to determine lamp powers that make the illumination levels Ii close to a given desired level Ides. In other words, we want to choose the n-vector x such that

Σ_{j=1}^{n} aij xj ≈ Ides,    i = 1, . . . , m,

but we also have to observe the power limits 0 ≤ xj ≤ 1. This is an example of a constrained optimization problem. The objective is to achieve an illumination level that is as uniform as possible; the constraint is that the components of x must satisfy 0 ≤ xj ≤ 1. Finding the exact solution of this minimization problem requires specialized numerical techniques for constrained optimization. However, we can solve it approximately using least-squares.

In this problem we consider two approximate methods that are based on least-squares, and compare them for the data generated using [A,Ides] = ch8ex8, with the MATLAB file ch8ex8.m. The elements of A are the coefficients aij. In this example we have m = 11, n = 7, so A is 11 × 7, and Ides = 2.

  1. Saturate the least-squares solution. The first method is simply to ignore the bounds on the lamp powers. We solve the least-squares problem

minimize Σ_{i=1}^{m} ( Σ_{j=1}^{n} aij xj − Ides )²

3) 8.9 De-noising using least-squares. The figure shows a signal of length 1000, corrupted with noise. We are asked to estimate the original signal. This is called signal reconstruction, or de-noising, or smoothing. In this problem we apply a smoothing method based on least-squares.

[Figure: the corrupted signal xcor plotted against the sample index i, for i = 0 to 1000.]

We will represent the corrupted signal as a vector xcor of size 1000. (The values can be obtained as xcor = ch8ex9 using the file ch8ex9.m.) The estimated signal (i.e., the variable in the problem) will be represented as a vector xˆ of size 1000.

The idea of the method is as follows. We assume that the noise in the signal is the small and rapidly varying component. To reconstruct the signal, we decompose xcor into two parts

xcor = xˆ + v

where v is small and rapidly varying, and xˆ is close to xcor (ˆx ≈ xcor) and slowly varying (ˆxi+1 ≈ xˆi). We can achieve such a decomposition by choosing xˆ as the solution of the least-squares problem

minimize ‖x − xcor‖² + µ Σ_{i=1}^{999} (xi+1 − xi)²,    (8.10)

where µ is a positive constant. The first term ‖x − xcor‖² measures how much x deviates from xcor. The second term, Σ_{i=1}^{999} (xi+1 − xi)², penalizes rapid changes of the signal between two samples. By minimizing a weighted sum of both terms, we obtain an estimate xˆ that is close to xcor (i.e., has a small value of ‖xˆ − xcor‖²) and varies slowly (i.e., has a small value of Σ_{i=1}^{999} (xˆi+1 − xˆi)²). The parameter µ is used to adjust the relative weight of both terms.

Problem (8.10) is a least-squares problem, because it can be expressed as

minimize ‖Ax − b‖²

where

A = [  I   ]        b = [ xcor ]
    [ √µ D ],           [  0   ],

and D is a 999 × 1000 matrix defined as

D = [ −1   1   0   0  · · ·   0   0   0   0 ]
    [  0  −1   1   0  · · ·   0   0   0   0 ]
    [  0   0  −1   1  · · ·   0   0   0   0 ]
    [  .   .   .   .  · · ·   .   .   .   . ]
    [  0   0   0   0  · · ·  −1   1   0   0 ]
    [  0   0   0   0  · · ·   0  −1   1   0 ]
    [  0   0   0   0  · · ·   0   0  −1   1 ].


The matrix A is quite large (1999 × 1000), but also very sparse, so we will solve the least-squares problem using the Cholesky factorization method. You should verify that the normal equations are given by

(I + µDᵀD)x = xcor.    (8.11)

MATLAB and Octave provide special routines for solving sparse linear equations, and they are used as follows. There are two types of matrices: full (or dense) and sparse. If you define a matrix, it is considered full by default, unless you specify that it is sparse. You can convert a full matrix to sparse format using the command A = sparse(A), and a sparse matrix to full format using the command A = full(A).

When you type x = A\b where A is n × n, MATLAB chooses different algorithms depending on the type of A. If A is full it uses the standard method for general matrices (LU or Cholesky factorization, depending on whether A is symmetric positive definite or not). If A is sparse, it uses an LU or Cholesky factorization algorithm that takes advantage of sparsity. In our application, the matrix I + µDᵀD is sparse (in fact tridiagonal), so if we make sure to define it as a sparse matrix, the normal equations will be solved much more quickly than if we ignore the sparsity.

The command to create a sparse zero matrix of dimension m × n is A = sparse(m,n). The command A = speye(n) creates a sparse n × n identity matrix. If you add or multiply sparse matrices, the result is automatically considered sparse.

This means you can solve the normal equations (8.11) by the following MATLAB code (assuming µ and xcor are defined):

D = sparse(999,1000);

D(:,1:999) = -speye(999);

D(:,2:1000) = D(:,2:1000) + speye(999);

xhat = (speye(1000) + mu*D'*D) \ xcor;

Solve the least-squares problem (8.10) with the vector xcor defined in ch8ex9.m, for three values of µ: µ = 1, µ = 100, and µ = 10000. Plot the three reconstructed signals xˆ. Discuss the effect of µ on the quality of the estimate xˆ.
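The MATLAB snippet above can also be sketched in Python with SciPy sparse matrices; this is a hedged translation, not part of the assignment, and it assumes xcor is supplied by the caller (the course data file is not reproduced here).

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def denoise(xcor, mu):
    """Solve the normal equations (I + mu * D'D) xhat = xcor of problem (8.10)."""
    n = len(xcor)
    # D is the (n-1) x n forward-difference matrix: (D x)_i = x_{i+1} - x_i
    D = sp.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
    K = sp.eye(n) + mu * (D.T @ D)   # sparse tridiagonal coefficient matrix
    return spla.spsolve(K.tocsc(), xcor)
```

Usage follows the exercise: compute `denoise(xcor, mu)` for µ = 1, 100, 10000 and plot the results; larger µ weights the smoothness term more heavily, giving a smoother (but more biased) estimate.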

8.10 Suppose you are given m (m ≥ 2) straight lines

Li = {pi + ti qi | ti ∈ R},    i = 1, . . . , m

in Rn. Each line is defined by two n-vectors pi, qi. The vector pi is a point on the line; the vector qi specifies the direction. We assume that the vectors qi are normalized (‖qi‖ = 1) and that at least two of them are linearly independent. (In other words, the vectors qi are not all scalar multiples of the same vector, so the lines are not all parallel.) We denote by

di(y) = min_{u ∈ Li} ‖y − u‖ = min_{ti} ‖y − pi − ti qi‖

the distance of a point y to the line Li.

Express the following problem as a linear least-squares problem. Find the point y ∈ Rn that minimizes the sum of its squared distances to the m lines, i.e., find the solution of the optimization problem

minimize Σ_{i=1}^{m} di(y)²

with variable y. Express the least-squares problem in the standard form

minimize ‖Ax − b‖²

with a left-invertible (zero nullspace) matrix A.

(a) Clearly state what the variables x in the least-squares problem are and how A and b are defined.


(b) Explain why A has a zero nullspace.
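One possible formulation can be sketched numerically. This is a hedged sketch, not the official solution: minimizing over ti gives di(y) = ‖(I − qi qiᵀ)(y − pi)‖, since I − qi qiᵀ projects onto the complement of span{qi}, so stacking these projectors yields a standard least-squares problem in y.

```python
import numpy as np

# Hypothetical sketch: stack P_i = I - q_i q_i^T to get
# A = [P_1; ...; P_m], b = [P_1 p_1; ...; P_m p_m], variable y.
def nearest_point_to_lines(ps, qs):
    n = len(ps[0])
    blocks, rhs = [], []
    for p, q in zip(ps, qs):
        P = np.eye(n) - np.outer(q, q)  # projector orthogonal to the line direction
        blocks.append(P)
        rhs.append(P @ p)
    A = np.vstack(blocks)
    b = np.concatenate(rhs)
    y, *_ = np.linalg.lstsq(A, b, rcond=None)
    return y
```

As a sanity check, for two lines in R² crossing at (1, 1) (a horizontal line through (0, 1) and a vertical line through (1, 0)), the minimizer is the intersection point itself.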

4) 8.11 In this problem we use least-squares to fit a circle to given points (ui, vi) in a plane, as shown in the figure.

We use (uc, vc) to denote the center of the circle and R for its radius. A point (u, v) is on the circle if (u − uc)2 + (v − vc)2 = R2. We can therefore formulate the fitting problem as

minimize Σ_{i=1}^{m} ((ui − uc)² + (vi − vc)² − R²)²

with variables uc, vc, R.

Show that this can be written as a linear least-squares problem if we make a change of variables and use as variables uc, vc, and w = uc² + vc² − R².

(a) Define A, b, and x in the equivalent linear least-squares formulation.

(b) Show that the optimal solution uc, vc, w of the least-squares problem satisfies uc² + vc² − w ≥ 0. (This is necessary to compute R = √(uc² + vc² − w) from the result uc, vc, w.)

Test your formulation on the problem data in the file ch8ex11.m on the course website. The commands

[u,v] = ch8ex11

plot(u, v, 'o');

axis square

will create a plot of the m = 50 points (ui, vi) in the figure. The following code plots the 50 points and the computed circle.

t = linspace(0, 2*pi, 1000);

plot(u, v, 'o', R * cos(t) + uc, R * sin(t) + vc, '-');
axis square

(assuming your MATLAB variables are called uc, vc, and R).
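The change of variables can be sketched as follows. This is a hedged illustration, not the assignment's solution code: with w = uc² + vc² − R², the condition (ui − uc)² + (vi − vc)² = R² rearranges to 2uc·ui + 2vc·vi − w = ui² + vi², which is linear in (uc, vc, w).

```python
import numpy as np

# Hypothetical sketch: circle fit as a linear least-squares problem
# with variables x = (uc, vc, w), w = uc^2 + vc^2 - R^2.
def fit_circle(u, v):
    A = np.column_stack([2 * u, 2 * v, -np.ones_like(u)])  # rows [2u_i, 2v_i, -1]
    b = u**2 + v**2
    uc, vc, w = np.linalg.lstsq(A, b, rcond=None)[0]
    R = np.sqrt(uc**2 + vc**2 - w)   # well-defined by part (b)
    return uc, vc, R
```

On points that lie exactly on a circle, the residual is zero and the center and radius are recovered exactly.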

The solution of a least-squares problem

8.12 Let A be a left-invertible m × n matrix.

(a) Show that the (m + n) × (m + n) matrix

[ I    A ]
[ Aᵀ   0 ]

is nonsingular.
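The key step: if the block matrix maps (x, y) to zero, then x = −Ay and Aᵀx = 0, so AᵀAy = 0; left-invertibility makes AᵀA nonsingular, forcing y = 0 and then x = 0. A small numerical illustration (not a proof, with a hypothetical helper name) checks full rank for a random left-invertible A:

```python
import numpy as np

# Hedged numerical illustration: assemble [[I, A], [A^T, 0]] and check
# its rank. A random m x n Gaussian matrix with m >= n is left-invertible
# with probability one.
def block_matrix(A):
    m, n = A.shape
    top = np.hstack([np.eye(m), A])
    bottom = np.hstack([A.T, np.zeros((n, n))])
    return np.vstack([top, bottom])
```

For a 5 × 3 matrix A, the assembled matrix is 8 × 8 and should have rank 8, i.e., be nonsingular.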