Thursday, August 28, 2008

Activity 15: White Balancing

Correcting images captured under different light conditions so that white objects appear white is a technique called white balancing. To study this, we capture a white object under different lighting conditions and compare the whiteness of the object with the whiteness in its image.


(Images of the object captured with the tungsten, automatic, cloudy, daylight, and fluorescent white balance settings.)

There is variation in the whiteness of the image across the different lighting conditions. Two methods can be used to perform white balancing, namely, the Reference White algorithm and the Gray World algorithm. In the Reference White algorithm, we first determine the RGB values of a known white patch and use them as a reference: each channel of the unbalanced image is divided by the corresponding reference value. The Gray World algorithm, on the other hand, assumes that the image averages out to gray: the mean of each R, G, and B channel of the unbalanced image is computed, and the red channel is then divided by the mean red value. The same is done for the green and blue channels.
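The two algorithms can be sketched in Python (the activity itself was done in Scilab, so the function names and the list-of-pixels image format here are my own simplifications):

```python
# White balancing sketches, assuming an image given as a list of [R, G, B]
# pixels with values in 0..255. These are illustrations, not the activity code.

def gray_world(pixels):
    n = len(pixels)
    # Per-channel means of the unbalanced image.
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    # Divide each channel by its mean so the image averages to gray,
    # then rescale to mid-gray and clip to the display range.
    return [[min(255, int(p[c] / means[c] * 128)) for c in range(3)]
            for p in pixels]

def reference_white(pixels, white):
    # Reference White: divide each channel by the RGB of a known white patch.
    return [[min(255, int(p[c] / white[c] * 255)) for c in range(3)]
            for p in pixels]
```

A uniformly colored image, for example, comes out as uniform mid-gray under the Gray World algorithm, since every pixel equals the channel mean.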


Rating: I'll give myself 9.0 points, since I was able to do the activity with a good degree of accuracy. Thanks to Mark and Jeric for the sample images.

Monday, August 25, 2008

Activity 14: Stereometry

In this activity, we wish to recover depth information and reconstruct the 3D shape of an object via stereometry. This method requires multiple images of the object (at least two), taken either by varying the camera position while the object stays fixed, or by moving the object while the camera stays fixed. It should be noted that the image plane and the object plane remain the same.

Here are the images of a Rubik's cube which I used as a sample. I exhausted the points between the grid lines of the cube's upper face, because it is the only part of the image with an expected variation in depth.

The values used were f = 14 mm and b = 10 mm. In my case, the resulting surface is just the upper face, inverted due to the arrangement of points in the matrix used and the data plotting order. The deeper part is the upper portion of the face.
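The depth relation used in the code below, z = b·f / (x1 − x2), can be sketched in Python (the actual processing was done in Scilab; the helper name is my own):

```python
# Depth from disparity: b is the camera translation (baseline), f the focal
# length, and x1 - x2 the disparity of the same point between the two images.
# Units of z follow the units of b and f (mm here, as in the activity).
def depth(x1, x2, f=14.0, b=10.0):
    return [b * f / (p1 - p2) for p1, p2 in zip(x1, x2)]
```

A larger disparity corresponds to a point nearer the camera, which is why the grid corners with disparities around 40 pixels come out shallower than those around 38.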

Rating: I'm convinced that I was able to retrieve the depth information based on the results, so I'll give myself 9.50 points. Manual point location perhaps gave rise to errors, which resulted in the non-uniformity of points in the reconstruction.

The Code:
x1=[183 214 248 283 320 352 385; 182 214 248 283 320 352 386; 181 213 248 283 321 354 387; 180 212 247 283 321 355 389; 179 211 247 283 322 356 391; 178 210 247 283 323 358 393; 176 209 247 283 323 358 395];
x2=[144 174 209 243 280 313 346; 143 173 209 243 280 313 347; 141 172 209 244 280 314 349; 140 171 208 244 281 315 350; 137 169 208 244 281 316 352; 135 167 207 244 282 317 353; 133 165 206 243 281 318 353];
f=14; // focal length in mm
b=10; // baseline in mm
z=(b*f)./(x1-x2); // depth from disparity
n = 7; // a regular grid with n x n interpolation points will be used
x = linspace(0,1,n); y = x;
C = splin2d(x, y, z, "periodic");
m = 10; // discretisation parameter of the evaluation grid
xx = linspace(0,1,m); yy = xx;
[XX,YY] = ndgrid(xx,yy);
zz = interp2d(XX,YY, x, y, C); // interpolated depth surface
xbasc()
plot3d(xx, yy, zz);

Monday, August 11, 2008

Activity 13: Photometric Stereo

Using the photometric stereo technique, we reconstruct the shape of a 3D object by capturing images of the object with the light source at different positions.

We have to compute the matrix of reflectance g using the equation

g = (VᵀV)⁻¹ Vᵀ I

where V is the matrix of the light source positions and I is the image matrix.
The unit surface normal at each pixel is the normalized reflectance, n = g/|g|. Plotting the elevations given by the equations

∂f/∂x = −nₓ/n_z,  ∂f/∂y = −n_y/n_z,  z(x,y) = ∫ ∂f/∂x dx + ∫ ∂f/∂y dy,

the result is a hemisphere.
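The normal-equation step can be sketched in Python for a single pixel (the activity used Scilab on whole 128x128 images; the helper names and the toy Gaussian-elimination solver here are my own):

```python
# One-pixel photometric stereo sketch. Solves (V^T V) g = V^T I for the
# reflectance vector g of a single pixel, then normalizes it to get the
# surface normal. V is a list of light-source direction vectors, I the
# observed intensities of that pixel under each source.

def solve3(A, b):
    # Gaussian elimination for a 3x3 system (no pivoting; fine for a sketch).
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for i in range(3):
        p = M[i][i]
        M[i] = [v / p for v in M[i]]
        for j in range(3):
            if j != i:
                M[j] = [vj - M[j][i] * vi for vj, vi in zip(M[j], M[i])]
    return [M[i][3] for i in range(3)]

def normal_from_intensities(V, I):
    # Build the normal equations V^T V g = V^T I explicitly.
    VtV = [[sum(V[k][i] * V[k][j] for k in range(len(V))) for j in range(3)]
           for i in range(3)]
    VtI = [sum(V[k][i] * I[k] for k in range(len(V))) for i in range(3)]
    g = solve3(VtV, VtI)
    mag = sum(gi * gi for gi in g) ** 0.5
    return [gi / mag for gi in g]  # unit surface normal
```

With a light source straight overhead producing all the brightness, the recovered normal points straight up, as expected.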


Rating: I'll give myself 10 points, since the reconstructed shape is similar to the original object.


The Code:
loadmatfile('Photos.mat'); // loads the images I1..I4, each 128x128

V(1,:) = [0.085832 0.17365 0.98106];
V(2,:) = [0.085832 -0.17365 0.98106];
V(3,:) = [0.17365 0 0.98481];
V(4,:) = [0.16318 -0.34202 0.92542];

I(1,:) = I1(:)';
I(2,:) = I2(:)';
I(3,:) = I3(:)';
I(4,:) = I4(:)';

g = inv(V'*V)*V'*I; // reflectance vectors, one column per pixel
ag = sqrt(g(1,:).*g(1,:) + g(2,:).*g(2,:) + g(3,:).*g(3,:)) + 1e-6;

for i = 1:3
    n(i,:) = g(i,:)./ag; // unit surface normals
end

// get the partial derivatives of the surface
dfx = -n(1,:)./(n(3,:)+1e-6);
dfy = -n(2,:)./(n(3,:)+1e-6);

// get estimates of the line integrals (cumulative sums along x and y)
int1 = cumsum(matrix(dfx,128,128),2);
int2 = cumsum(matrix(dfy,128,128),1);
z = int1 + int2;
plot3d(1:128, 1:128, z);