Wednesday, July 30, 2008

Activity 11: Camera Calibration

The objective of this activity is to map the coordinates of an object to the coordinates of the camera, and finally to the image, via a series of transformations.


Figure 1 Mapping of the object to the camera and to the image

Solving the relations from the figure above, we end up with the equations below. The subscript "i" refers to the image and "o" refers to the object. The final equation is only in 2D since the image lies in a single plane.


The equations above can be written in matrix form.


We append rows to the Q matrix and the d vector as we increase the number of points used to determine the mapping.

Qa = d


Solving for the transformation matrix a,



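The least-squares solution above can be sketched in NumPy. The camera matrix P and the object points below are made-up values, used only to check that solving Qa = d recovers the matrix:

```python
import numpy as np

# Hypothetical 3x4 camera matrix with a34 fixed to 1, as in the derivation above
P = np.array([[2.0, 0.1, 0.3, 50.0],
              [0.2, 1.8, 0.1, 60.0],
              [1e-3, 2e-3, 1e-3, 1.0]])

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 5.0, (6, 3))            # six made-up 3D object points

# Project the points: homogeneous [u, v, w] = P [x, y, z, 1]^T
h = np.c_[X, np.ones(len(X))] @ P.T
yi = h[:, 0] / h[:, 2]
zi = h[:, 1] / h[:, 2]

# Build Q and d: two rows per point, 11 unknowns (a34 is fixed to 1)
Q, d = [], []
for (x, y, z), u, v in zip(X, yi, zi):
    Q.append([x, y, z, 1, 0, 0, 0, 0, -u*x, -u*y, -u*z])
    Q.append([0, 0, 0, 0, x, y, z, 1, -v*x, -v*y, -v*z])
    d += [u, v]

# Least-squares solution of Qa = d, then reshape into the camera matrix
a, *_ = np.linalg.lstsq(np.array(Q), np.array(d), rcond=None)
A = np.append(a, 1.0).reshape(3, 4)
```

With exact (noise-free) projections, the recovered A matches P to machine precision.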
The image used is a picture of a checkerboard.


Figure 2 Image of a checkerboard

The new pixel locations are the expected output of the program given an object location. I tested other points and the results are consistent, with small errors. Errors may arise from imperfections in the real camera transformation (like the radial distortion mentioned in the lecture).

Collaborators:

Julie for the image and Jeric for the code

Rating: I'll give myself a 7.0 since it took me a long time before I figured out how to transform the object to the camera and then to the image. I had to consult my collaborators for help.


The Code:

x=[0 0 0 0 1 1 2 0 1 0 3 1 0 3 0 2 0 0 0 1 ];
y=[0 1 0 3 0 0 0 1 0 5 0 0 2 0 2 0 5 3 0 0 ];
z=[0 12 3 2 1 2 5 8 1 3 8 9 6 10 7 7 9 10 9];
yi=[127 139 126 164 115 115 102 139 113 191 90 114 152 88 152 101 193 166 125 112];
zi=[200 187 169 161 172 189 175 123 73 200 161 73 57 112 40 91 97 58 138 56];
obj=[];
im=[];
for i=0:length(x)-1
obj(2*i+1, :)=[x(i+1) y(i+1) z(i+1) 1 0 0 0 0 -yi(i+1).*x(i+1) -yi(i+1).*y(i+1) -yi(i+1).*z(i+1)];
obj(2*i+2, :)=[0 0 0 0 x(i+1) y(i+1) z(i+1) 1 -zi(i+1).*x(i+1) -zi(i+1).*y(i+1) -zi(i+1).*z(i+1)];
im(2*i+1)=yi(i+1);
im(2*i+2)=zi(i+1);
end
a=inv(obj'*obj)*obj'*im;
a(12)=1.0;
a=matrix(a, 4, 3)';
testx=1;
testy=1;
testz=1;
ty=(a(1,1)*testx+a(1,2)*testy+a(1,3)*testz+a(1,4))/(a(3,1)*testx+a(3,2)*testy+a(3,3)*testz+a(3,4));
tz=(a(2,1)*testx+a(2,2)*testy+a(2,3)*testz+a(2,4))/(a(3,1)*testx+a(3,2)*testy+a(3,3)*testz+a(3,4));
ty
tz

Tuesday, July 22, 2008

A10: Preprocessing Handwritten Text

First, we sample from the larger image so that the details of the text are in focus. We can then remove unwanted information by blocking its frequencies. The figure below is the original sampled image.

Figure 1 Sample cropped image

The filter used is designed based from the information obtained from the fft of the image above.

Figure 2 FFT of the sample image

The maxima along the vertical axis should be blocked to remove the horizontal lines in the original image, but the center should be excluded. The figure below is the filter designed for this particular sample.

Figure 3 Designed filter

After filtering the image by multiplying its FFT with the filter (equivalent to convolution in real space), the inverse FFT is the new image to be binarized. The threshold was predetermined using GIMP.
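The whole pipeline (block the line peaks in the spectrum, keep the DC term, inverse transform, threshold) can be sketched in NumPy on a synthetic ruled image. The 64x64 size, the 8-row line spacing, the mask window, and the threshold of 125 are all made-up values for illustration:

```python
import numpy as np

# Synthetic "scanned page": dark horizontal ruled lines on a light background
img = np.full((64, 64), 200.0)
img[::8, :] = 50.0                       # a dark line every 8 rows

F = np.fft.fftshift(np.fft.fft2(img))

# Horizontal lines concentrate their energy on the vertical axis of the
# spectrum; zero that column but keep the DC term and its neighbours
mask = np.ones_like(F)
c = 32                                   # DC sits at (32, 32) after fftshift
mask[:, c] = 0.0
mask[c - 2:c + 3, c] = 1.0

clean = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
binary = clean < 125                     # threshold picked by inspection, as with GIMP
```

On this toy image the filter removes the ruled lines entirely, leaving a uniform background.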

Figure 4 Inverse FFT'd image

The image above is binarized, and unwanted geometries are removed using erosion and dilation. The resulting binary (thresholded), reconstructed, and labeled images are shown below.
a. binary image
b. reconstructed image
c. labeled image


Rating: Based on the reconstructions I've done, no significant enhancement is observed, but the results deviate only slightly from the original image. I am confident that I did well in the activity except for the readability, so I'll give myself 10.0 points.

Code:
im=imread('C:\Documents and Settings\gpedemonte\Desktop\text.bmp');
imf=imread('C:\Documents and Settings\gpedemonte\Desktop\filter4.bmp');

ft1=fft2(im);
ft2=fftshift(imf);
imft=ft1.*ft2;

im3=real(fft2(imft)); //inverse transform via a second forward FFT (result comes out flipped)
scf(1);
imshow(im3,[]);
im4=im2bw(im3, 115/255);
scf(2);
imshow(im4,[]);
m=[1 1]; //structuring element for the erosion
im5=erode(im4,m);
scf(3);
imshow(im5,[]);

im6=bwlabel(im5);
scf(4);
imshow(im6,[]);

Wednesday, July 16, 2008

A9: Binary Operations

Given an image of circles (see Figure 1), we have to measure the area of a single circle by sampling different regions of the image. The images are converted to binary using a threshold I determined manually in ImageJ.

Figure 1 Image where the samples are derived

Some circles are not perfect, and others are cut during sampling; hence, erroneous area values occur upon computation. To resolve this, we dilate and erode the circles to approximately restore their original shape. I used an opening operator (erosion followed by dilation): the image is eroded first to remove very small blobs and then dilated to undo the shrinking caused by the erosion.

The area is estimated by determining the area of a single circle. The method is statistical since there are numerous samples. For each sampled image, each blob is labeled, and the number of pixels in that blob is the area of that circle. This is done for the rest of the samples, and a histogram of the areas is plotted.
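The labeling and pixel-counting step can be sketched with SciPy's ndimage (standing in for Scilab's bwlabel); the squares below are stand-ins for the sampled circles:

```python
import numpy as np
from scipy import ndimage

# Synthetic binary sample: three filled squares standing in for the circles
img = np.zeros((40, 40), dtype=bool)
img[2:8, 2:8] = True          # 6x6 blob, 36 px
img[12:20, 25:33] = True      # 8x8 blob, 64 px
img[30:36, 10:16] = True      # 6x6 blob, 36 px

labels, n = ndimage.label(img)                       # one label per blob
areas = ndimage.sum(img, labels, range(1, n + 1))    # pixel count per blob
mean_area, std_area = areas.mean(), areas.std()
```

The histogram in the activity is just this `areas` array collected over all the samples.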

Figure 2 Histogram of computed areas

The average area obtained from the samples is 540 pixels with a standard deviation of 117 pixels. The histogram peaks at 540, which means the mean and the mode coincide, enough to say that the result is valid. Another way to verify the result is to determine the same area by cropping a single, almost perfect circle. Finding the average area and standard deviation from several such samples gives a better approximation. In my case, the resulting average area is 535.0 pixels with a standard deviation of 3 pixels.


Rating: The activity is quite easy (except for the automation), so I'll give myself 10.0 points.

Code: just change the corresponding threshold for each image (see the commented list below)

im=imread('G:\AP 186\a9\c1.jpg');
//converting the image with their corresponding threshold
im1=im2bw(im,0.81);
subplot(121);
imshow(im1, []);
//im2=im2bw(im,0.85);
//im3=im2bw(im,0.78 );
//im4=im2bw(im, 0.81);
//im5=im2bw(im, 0.82);
//im6=im2bw(im, 0.83);
//im7=im2bw(im, 0.83);
//im8=im2bw(im, 0.82);
//im9=im2bw(im, 0.78);
//im10=im2bw(im, 0.82);
//im11=im2bw(im, 0.80);
//im12=im2bw(im, 0.82);
//im13=im2bw(im, 0.76);
//im14=im2bw(im, 0.77);
//im15=im2bw(im, 0.81);
imn1=dilate(erode(im1));
subplot(122);
imshow(imn1, []);
L1=bwlabel(imn1);
area=[max(L1)];
//using pixel counting
for i=1:1:max(L1);
[x,y]=find(L1==i);
area(i)=length(y);
end;
area

Monday, July 14, 2008

A8: Morphological Operations



Answers to questions:

1. XOR corresponds to the union of A and B minus their intersection (the symmetric difference)
2. NOT(A) AND B corresponds to the intersection of the complement of A with B
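Both identities can be checked directly with NumPy boolean arrays:

```python
import numpy as np

# Truth-table inputs covering all four combinations
A = np.array([True, True, False, False])
B = np.array([True, False, True, False])

xor = A ^ B            # true where exactly one of A, B is set
not_a_and_b = ~A & B   # complement of A, intersected with B
```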

Dilation and Erosion





















The figures above are the images to be eroded and dilated with a 4 by 4, a 4 by 2, and a 2 by 4 array of ones, and a cross 5 pixels long and 1 pixel thick.

The results for erosion are the following images corresponding to their original image.

Eroded image of a square

Eroded image of a triangle

Eroded image of a circle

Eroded image of a cross


The eroded images not only followed the shape of the structuring patterns but were also reduced in size. In my predictions, I only accounted for the decrease in size.


The results for dilation are the following images corresponding to their original image.

Dilated image of a square

Dilated image of a triangle

Dilated image of a circle

Dilated image of a cross


The dilated images, as predicted, increased in size, but again the way the image follows the shape of the pattern was not accounted for in my predictions.
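The size changes under erosion and dilation can be checked with SciPy's morphology routines; the 3 by 3 structuring element below is an assumed example, not one of the four used above:

```python
import numpy as np
from scipy import ndimage

# A 6x6 square in a 12x12 field
square = np.zeros((12, 12), dtype=bool)
square[3:9, 3:9] = True

se = np.ones((3, 3), dtype=bool)    # assumed 3x3 structuring element
eroded = ndimage.binary_erosion(square, se)
dilated = ndimage.binary_dilation(square, se)
```

Erosion strips a one-pixel border (6x6 to 4x4) and dilation adds one (6x6 to 8x8), matching the shrinking and growing seen in the figures.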

Rating: I'll give myself 8 points. I did this activity alone and I missed some of the predicted outputs.


code:

Erosion and dilation (note: just change dilate to erode and vice versa):

im=imread('C:\Documents and Settings\Instru\Desktop\a8_cross.bmp');
a1=[1 1 1 1; 1 1 1 1; 1 1 1 1; 1 1 1 1];
a2=[1 1 1 1; 1 1 1 1];
a3=[1 1; 1 1; 1 1; 1 1];
a4=[0 0 1 0 0; 0 0 1 0 0; 1 1 1 1 1; 0 0 1 0 0; 0 0 1 0 0];
im1=dilate(im,a1);
im2=dilate(im,a2);
im3=dilate(im,a3);
im4=dilate(im,a4);
subplot(221);
imshow(im1,[]);
subplot(222);
imshow(im2,[]);
subplot(223);
imshow(im3,[]);
subplot(224);
imshow(im4,[]);

Sunday, July 13, 2008

A7: Enhancement in the Frequency Domain

When the frequency of a sinusoid image is varied, the effect is a change in the peak positions of the FFT. Higher frequencies correspond to FFT peaks shifted farther from the origin (0,0), which is expected.
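This relation between sinusoid frequency and peak position can be checked numerically in NumPy (a 128x128 sinusoid along x is assumed):

```python
import numpy as np

def peak_offset(f):
    """Offset of the FFT peak from the shifted origin for sin(2*pi*f*x)."""
    x = np.arange(128) / 128.0
    img = np.tile(np.sin(2 * np.pi * f * x), (128, 1))   # 2D sinusoid along x
    F = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    ky, kx = np.unravel_index(np.argmax(F), F.shape)
    return abs(int(kx) - 64)                             # distance from DC
```

Doubling the spatial frequency doubles the distance of the peak from the origin.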









Rotation of the original image causes the FFT to rotate by the same angle.








The patterns produced by multiplying or adding the sinusoid images have FFTs showing the peaks of the product and of the sum, respectively.










Fingerprint Enhancement

Designing the right filter really depends on the fingerprint image. I used Jeric's code, but the results did not seem to fit my sample image. Changing the cut-off frequency, however, produced a slight enhancement of the fingerprint.











There seems to be no big enhancement in the image. I think I still did not use the right filter and the exact cut-off frequency.

Line removal

To remove the vertical lines, first we have to identify which frequencies correspond to the lines, as shown in the FFT of the image. As expected, these frequencies appear away from the center of the FFT, as in the first part. After identifying the frequencies to block, we can apply the right filter and reconstruct the image.











The lines are successfully removed, but the quality of the image was affected, perhaps because other frequencies were removed along with them.

Rating: Based on the results of the supposed enhancements, I'm not satisfied with what I got, so I'll give myself 8.0 points. The key here is to find the right filter, which I think I failed to do.

Code: courtesy of Jeric =)

Fingerprint (same as my code):

clear all;
im=imread('C:\Documents and Settings\gpedemonte\Desktop\AP 186\a7\lunar.gif');
im=im2gray(im);
im=im-mean(im);
ff=fft2(im);
h=scf(1);
imshow(abs(fftshift(ff)), []); xset('colormap', hotcolormap(64)); //show fft of image

//make an exponential high-pass filter
filter=mkfftfilter(im, 'exp', 65);
filter=fftshift(filter);

//perform enhancement
enhancedft=filter.*ff;
enhanced=real(fft2(enhancedft));
h=scf(2);
imshow(enhanced, []);
h=scf(3);
imshow(abs(fftshift(fft2(enhanced))), []); xset('colormap', hotcolormap(64));

Line removal: by Jeric only

clear all;
stacksize(4e7);
im=imread('C:\Documents and Settings\gpedemonte\Desktop\AP 186\a7\lunar.gif');
im=im2gray(im);
im=im; //-mean(im);
ff=fft2(im);
h=scf(1);
ff=fftshift(ff);
imshow(abs(ff), []); xset('colormap', hotcolormap(64));

[x,y]=size(im);
enhancedft=ff;
for i=(x+1)/2-2:(x+1)/2+2
enhancedft(i,1:(y+2)*11/24)=0;
enhancedft(i,(y+2)*13/24:y)=0; //immediate process the ft.
end
enhanced=abs(fft2(enhancedft));

h=scf(2);
imshow(abs(enhancedft), []); xset('colormap', hotcolormap(64));
h=scf(3);
imshow(enhanced ,[]);

Monday, July 7, 2008

A6: Fourier transform model of image formation

This activity aims to familiarize us with Fourier Transform (FT) of images and other different image processing techniques related to the concept of FT.

A. Familiarization with discrete FFT









The images above are circles of small (center), medium (right), and large (left) radii. These correspond to different aperture sizes when used for the FFT.










The images on the left are the transforms of the small-circle image above: leftmost (shifted FT), middle (FT), and rightmost (FT applied twice). The expected image from the analytic solution is an Airy disc, a series of dark and bright concentric rings. In this case, however, the continuity of the rings in the transformed image depends on the radius of the original circle. Shifting moves the positions of the maxima and minima: the unshifted image has intensity decreasing from the corners, while the shifted image places the maximum at the center of the image. The last image (FT applied twice) is supposed to be inverted, but because of the symmetry of the original image this is not observed.
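The inversion under a double forward FFT can be checked numerically: applying `fft2` twice sends pixel (y, x) to (-y mod N, -x mod N), up to a factor of the image size, so an asymmetric image comes out flipped:

```python
import numpy as np

img = np.zeros((8, 8))
img[1, 2] = 1.0          # an asymmetric single-pixel image

# Two forward FFTs flip the image about the origin, scaled by N*M
twice = np.real(np.fft.fft2(np.fft.fft2(img))) / img.size
```

A symmetric input (like the centered circle) maps onto itself under this flip, which is why no inversion is visible in that case.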










Another sample image, shown on the leftmost part, is used. The center-left image is its FT, and the center-right image is the shifted FT. The same shifting behavior can be observed, and this time the inversion of the final image when FT'd twice is observable (rightmost image) because the input image is asymmetric.

B. Simulation of an image device...










The leftmost image is the original image to be convolved with images of circles of different radii (the images to the right of the original are the results for increasing radius).
In the Fourier model, the circle acts like the aperture of a lens: at large enough radii it passes most frequencies and the result is the inverted image a lens would form, which is the case for the rightmost image. Below this radius (at least for this case), the image becomes blurry, and as the radius decreases further the result approaches an Airy-disc-like convolved image. The small aperture acts like a slit that 'diffracts' the incoming signal, and the resulting interference produces the maxima and minima of the disc.

C. Template matching using correlation









The leftmost image in the figures above is correlated with the next two images, and the corresponding results are the two rightmost images. Where the input images are very different, the result is dark (low correlation); the yellow regions mark positions where the images are very similar (high correlation). This is essential for template matching: the resulting correlation image tells how similar the input images are.
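Template matching by conjugate multiplication can be sketched in NumPy; the scene and the 2x2 patch below are made-up values, and the correlation peak lands exactly where the template best matches:

```python
import numpy as np

# Hypothetical scene containing one copy of a small template
scene = np.zeros((32, 32))
template = np.zeros((32, 32))
patch = np.array([[1.0, 2.0], [3.0, 4.0]])
scene[10:12, 20:22] = patch      # pattern placed at (10, 20)
template[0:2, 0:2] = patch       # same pattern at the origin

# Correlation: multiply one spectrum by the conjugate of the other
corr = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(template))))
peak = np.unravel_index(np.argmax(corr), corr.shape)
```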

D. Edge detection using convolution integral










The images are the 'VIP' image convolved with different edge-detection patterns.
From the left, the first image is the result of convolution with a horizontal edge pattern, where the horizontal edges are detected. The second is the convolution with a vertical edge pattern; as expected, the vertical edges are detected, shown by the distortions in those regions. The third is a convolution with a diagonally aligned pattern; the expected result should show distortions everywhere except along the diagonal, though this is hard to see in the image. Finally, the last image is convolved with a spot pattern; as expected, only the center stayed undistorted compared to the surrounding edges. In this way, convolution as a tool for edge detection is verified.
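The horizontal- and vertical-edge kernels above can be tried in SciPy on a toy image with a single vertical edge (`ndimage.convolve` standing in for `imcorrcoef`):

```python
import numpy as np
from scipy import ndimage

# Toy image with a single vertical edge down the middle
img = np.zeros((8, 8))
img[:, 4:] = 1.0

horiz = np.array([[-1, -1, -1], [2, 2, 2], [-1, -1, -1]], float)  # horizontal edges
vert = np.array([[-1, 2, -1], [-1, 2, -1], [-1, 2, -1]], float)   # vertical edges

rh = ndimage.convolve(img, horiz)   # no horizontal edges: no response
rv = ndimage.convolve(img, vert)    # responds only near the vertical edge
```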


Rating:
I had fun transforming the images and learned more about the concept of the Fourier Transform and related techniques. The results I got from the Scilab simulations are good enough to verify the physical concepts regarding the Fourier Transform. Expected results were met and agree with the analytic results, so I'll give myself a perfect 10.0 for this activity.


The codes: courtesy again of Dr. Soriano

FFT
im=imread('C:\Documents and Settings\AP186user24\Desktop\a_a6.bmp');
Igray = im2gray(im);

//unshifted
FIgray = fft2(Igray);
//imshow(abs(FIgray),[]);xset("colormap",hotcolormap(256))

//for the shifted output
FIshifted=fftshift(FIgray);
imshow(abs(FIshifted),[]);xset("colormap",hotcolormap(256))

//fft applied twice
//imshow(abs(fft2(FIgray)),[]);xset("colormap",hotcolormap(256))

CONVOLUTION
r=imread('C:\Documents and Settings\AP186user24\Desktop\circle_a6.bmp');
a=imread('C:\Documents and Settings\AP186user24\Desktop\vip.bmp');
rgray = im2gray(r);
agray = im2gray(a);
Fr = fftshift(rgray);
//aperture is already in the Fourier Plane and need not be FFT'ed
Fa = fft2(agray);
FRA = Fr.*(Fa);
IRA = fft2(FRA); //inverse FFT
FImage = abs(IRA);
imshow(FImage, [ ]);xset("colormap",hotcolormap(256))

CORRELATION
r=imread('C:\Documents and Settings\AP186user24\Desktop\corr1.bmp');
a=imread('C:\Documents and Settings\AP186user24\Desktop\corr4.bmp');
rgray = im2gray(r);
agray = im2gray(a);
Fr=fft2(rgray);
Fa=fft2(agray);
FI=Fa.*conj(Fr);
Fnew=fft2(FI);
Fimage=abs(Fnew);
imshow(Fimage, [ ]);xset("colormap",hotcolormap(256))

EDGE DETECTION
a=imread('C:\Documents and Settings\AP186user24\Desktop\vip.bmp');
agray = im2gray(a);
pattern1=[-1 -1 -1; 2 2 2; -1 -1 -1];
pattern2=[-1 2 -1; -1 2 -1; -1 2 -1];
pattern3=[-1 -1 2; -1 2 -1; 2 -1 -1];
pattern4=[-1 -1 -1; -1 8 -1; -1 -1 -1];
FImage=imcorrcoef(agray, pattern4);
imshow(FImage, [ ]);xset("colormap",hotcolormap(256))

end.