
Digital Image Processing, 3rd Edition — Instructor's Manual. Rafael C. Gonzalez.

Rafael C. Gonzalez, University of Tennessee. Richard E. Woods, MedData Interactive.

Copying, printing, posting, or any form of printed or electronic distribution of any part of this manual constitutes a violation of copyright law. As a security measure, this manual was encrypted during download with the serial number of your book, and with your personal information. Any printed or electronic copies of this file will bear that encryption, which will tie the copy to you. Please help us defeat piracy of intellectual property, one of the principal reasons for the increase in the cost of books.



It is worthwhile to emphasize at this point that spatial enhancement and restoration are the same thing when it comes to noise reduction by spatial filtering. A good way to keep it brief and conclude coverage of restoration is to jump at this point to inverse filtering, which follows directly from the model in Section 5.

At a minimum, we recommend a brief discussion on image reconstruction by covering Sections 5. Coverage of Chapter 6 also can be brief at the senior level by focusing on enough material to give the student a foundation on the physics of color Section 6. We typically conclude a senior course by covering some of the basic aspects of image compression (Chapter 8). Interest in this topic has increased significantly as a result of the heavy use of images and graphics over the Internet, and students usually are easily motivated by the topic.

The amount of material covered depends on the time left in the semester. In a graduate course we add the following material to the material suggested in the previous section.

Sections 3. We cover Chapter 4 in its entirety, with appropriate sections assigned as independent reading, depending on the level of the class. To Chapter 5 we add Sections 5. A nice introduction to wavelets (Chapter 7) can be achieved by a combination of classroom discussions and independent reading.

The minimum number of sections in that chapter are 7. Sections 8. If additional time is available, a natural topic to cover next is morphological image processing (Chapter 9).

The material in this chapter begins a transition from methods whose inputs and outputs are images to methods in which the inputs are images, but the outputs are attributes about those images, in the sense defined in Section 1.

We recommend coverage of Sections 9. In this case, a good deal of Chapters 2 and 3 is review, with the exception of Section 3. Depending on what is covered in the undergraduate course, many of the sections in Chapter 4 will be review as well. For Chapter 5 we recommend the same level of coverage as outlined in the previous section.

In Chapter 6 we add full-color image processing Sections 6. Chapters 7 and 8 are covered as outlined in the previous section. As noted in the previous section, Chapter 9 begins a transition from methods whose inputs and outputs are images to methods in which the inputs are images, but the outputs are attributes about those images. As a minimum, we recommend coverage of binary morphology: Sections 9. Mention should be made about possible extensions to gray-scale images, but coverage of this material may not be possible, depending on the schedule.

In Chapter 10, we recommend Sections . In Chapter 11 we typically cover Sections . The key in organizing the syllabus is the background the students bring to the class. For example, in an electrical and computer engineering curriculum, graduate students have a strong background in frequency domain processing, so Chapter 4 can be covered much more quickly than would be the case in which the students are from, say, a computer science program. The important aspect of a full-year course is exposure to the material in all chapters, even when some topics in each chapter are not covered.

Because computer projects are in addition to course work and homework assignments, we try to keep the formal project reporting as brief as possible. In order to facilitate grading, we try to achieve uniformity in the way project reports are prepared. A useful report format is as follows. Page 1: Cover page. Page 2: One to two pages (max) of technical discussion.

Page 3 or 4: Discussion of results. One to two pages max. Image results printed typically on a laser or inkjet printer. All images must contain a number and title referred to in the discussion of results.

Program listings, focused on any original code prepared by the student. For brevity, functions and routines provided to the student are referred to by name, but the code is not included. The entire report must be on a standard sheet size e. In particular, the review material on probability, matrices, vectors, and linear systems was prepared using the same notation as in the book, and is focused on areas that are directly relevant to discussions in the text.

This allows the instructor to assign the material as independent reading, and spend no more than one total lecture period reviewing those subjects.

Another major feature is the set of solutions to problems marked with a star in the book. These solutions are quite detailed, and were prepared with the idea of using them as teaching support. The on-line availability of projects and digital images frees the instructor from having to prepare experiments, data, and handouts for students. The fact that most of the images in the book are available for downloading further enhances the value of the web site as a teaching resource.

From the discussion in Section 2. Assuming equal spacing between elements, this gives elements and spaces on a line 1. If the size on the fovea of the imaged dot is less than the size of a single resolution element, we assume that the dot will be invisible to the eye.

In other words, the eye will not detect a dot if its diameter, d, is such that 0. Problem 2. Because interest lies only on the boundary shape and not on other spectral characteristics of the specimens, a single illumination source in the far ultraviolet wavelength of. A far-ultraviolet camera sensor would be needed to image the specimens.


So the target size is mm on the side. The strongest camera response determines the color. If all three responses are approximately equal, the object is white. A faster system would utilize three different cameras, each equipped with an individual filter. The analysis then would be based on polling the response of each camera.

This system would be a little more expensive, but it would be faster and more reliable. Otherwise further analysis would be required to isolate the region of uniform color, which is all that is of interest in solving this problem. If the intensity is quantized using m bits, then we have the situation shown in Fig.

In other words, 32, or fewer, intensity levels will produce visible false contouring. One way to subdivide this range is to let all levels between 0 and 63 be coded as 63, all levels between 64 and 127 be coded as 127, and so on.

The image resulting from this type of subdivision is shown in Fig. Of course, there are other ways to subdivide the range [0, 255] into four bands. These figures show why image data compression (Chapter 8) is so important. The algorithm then simply looks for the appropriate match every time a diagonal segment is encountered in the boundary. So, the definition could be restated as follows: All other pixels of R_U^c are called hole pixels. Figure P2. The shortest 8-path is shown in Fig.
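The band subdivision just described is easy to reproduce numerically. The sketch below is my own illustration (the function name and the use of NumPy are not from the manual): each pixel is coded with the top value of its band, and a smooth ramp quantized this way keeps only four distinct values, which is exactly what makes the false contours visible.

```python
import numpy as np

def quantize_to_bands(image, n_bands=4, levels=256):
    """Map each pixel to the top value of its band.

    With n_bands=4 and levels=256 this codes 0-63 as 63, 64-127 as 127,
    128-191 as 191, and 192-255 as 255.
    """
    band_size = levels // n_bands
    band = np.asarray(image).astype(int) // band_size   # which band each pixel is in
    return (band + 1) * band_size - 1                   # top value of that band

# A smooth horizontal ramp makes the false contours obvious: only four
# distinct output values survive the subdivision.
ramp = np.tile(np.arange(256, dtype=np.uint8), (8, 1))
coarse = quantize_to_bands(ramp)
```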

The length of the shortest m-path (shown dashed) is 5. Both of these shortest paths are unique in this case. It is easily verified that another 4-path of the same length exists between p and q. One possibility for the shortest 8-path (it is not unique) is shown in Fig. The length of a shortest m-path (shown dashed) is 6. This path is not unique.

Recall that this distance is independent of any paths that may exist between the points. Recall that the D_8 distance (unlike the Euclidean distance) counts diagonal segments the same as horizontal and vertical segments, and, as in the case of the D_4 distance, is independent of whether or not a path exists between p and q.
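As a quick illustration (plain Python; the function names are my own), the D_4 (city-block) and D_8 (chessboard) distances depend only on the two coordinate pairs, not on any path between the points:

```python
def d4_distance(p, q):
    """City-block distance: |x1 - x2| + |y1 - y2|."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8_distance(p, q):
    """Chessboard distance: max(|x1 - x2|, |y1 - y2|); a diagonal step
    counts the same as a horizontal or vertical one."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))
```

For example, between (0, 0) and (3, 2) the D_4 distance is 5 while the D_8 distance is only 3, since diagonal moves cover both coordinates at once.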

Note that the size of the neighborhood i. The operator H computes the sum of pixel values in a given neighborhood. A simple example will suffice to show that Eq.

In this case H is the median operator. To prove the validity of Eq. In our work, the range of intensity values for 8-bit images is [0, 255]. The difference of two such images can span [-255, 255], a range that cannot be covered by 8 bits; but it is given in the problem statement that the result of subtraction has to be represented in 8 bits also, and, consistent with the range of values used for 8-bit images throughout the book, we assume that values of the 8-bit difference images are in the range [0, 255].

What this means is that any subtraction of 2 pixels that yields a negative quantity will be clipped at 0.

Because image subtraction is an array operation (see Section 2.). We have already stated that negative results are clipped at 0. That is, repeatedly subtracting 0 from any value results in that value. The locations in b(x, y) that are not 0 will eventually decrease the corresponding values in d_K(x, y) until they are 0. The maximum number of subtractions in which this takes place in the context of the present problem is 255, which corresponds to the condition at a location in which a(x, y) is 255 and b(x, y) is 1.

Thus, we conclude from the preceding discussion that repeatedly subtracting an image from another will result in a difference image whose components are 0 in the locations in b(x, y) that are not zero and equal to the original values of a(x, y) at the locations in b(x, y) that are 0.
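This convergence is easy to check numerically. In the sketch below (the array values are chosen arbitrarily for illustration), repeated subtraction with clipping at 0 drives every location where b(x, y) is nonzero to 0 and leaves a(x, y) untouched where b(x, y) is 0:

```python
import numpy as np

a = np.array([[200, 37], [255, 5]], dtype=np.int16)
b = np.array([[3, 0], [1, 0]], dtype=np.int16)

d = a.copy()
for _ in range(255):              # 255 is the worst case: a(x, y) = 255, b(x, y) = 1
    d = np.clip(d - b, 0, 255)    # any negative subtraction result clips to 0

# d is now 0 wherever b is nonzero, and equal to a wherever b is 0.
```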

This result will be achieved in, at most, 255 subtractions. Reversing the operation will result in a value of 0 in that same location. The resulting image, d(x, y), can be used in two fundamental ways for change detection. One way is to use pixel-by-pixel analysis. Note that the absolute value needs to be used to avoid errors canceling out. This is a much cruder test, so we will concentrate on the first approach. There are three fundamental factors that need tight control for difference-based inspection to work: (1) proper registration of the images, (2) controlled illumination, and (3) noise levels low enough that difference values are not affected appreciably. The first condition basically addresses the requirement that comparisons be made between corresponding pixels.

Two images can be identical, but if they are displaced with respect to each other, comparing the differences between them makes no sense. One approach used often in conjunction with illumination control is intensity scaling based on actual conditions.

Finally, the noise content of a difference image needs to be low enough so that it does not materially affect comparisons between the golden and input images. Good signal strength goes a long way toward reducing the effects of noise.

Obviously there are a number of variations of the basic theme just described. For example, additional intelligence in the form of tests that are more sophisticated than pixel-by-pixel threshold comparisons can be implemented. A technique used often in this regard is to subdivide the golden image into different regions and perform different (usually more than one) tests in each of the regions, based on expected region content. Intensity interpolation is implemented using any of the methods in Section 2.
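A minimal sketch of the pixel-by-pixel threshold comparison described above (the function and variable names are my own, and the threshold value is arbitrary):

```python
import numpy as np

def changed_pixels(golden, inspected, threshold):
    """Flag pixels whose absolute difference from the golden image
    exceeds the threshold; the absolute value keeps positive and
    negative deviations from canceling out."""
    diff = np.abs(inspected.astype(np.int32) - golden.astype(np.int32))
    return diff > threshold

golden = np.full((4, 4), 100, dtype=np.uint8)
inspected = golden.copy()
inspected[1, 2] = 160                          # a simulated defect
mask = changed_pixels(golden, inspected, threshold=20)
```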

Then, by substituting this array into the last line of the previous equation we have the 1-D transform along the columns of T(x, v). In other words, when a kernel is separable, we can compute the 1-D transform along the rows of the image. Then we compute the 1-D transform along the columns of this intermediate result to obtain the final 2-D transform, T(u, v). We obtain the same result by computing the 1-D transform along the columns of f(x, y) followed by the 1-D transform along the rows of the intermediate result.
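Separability can be verified numerically. In this sketch, the matrices r[x, u] and c[y, v] are arbitrary stand-ins for any separable kernel r(x, u)c(y, v); the brute-force double sum agrees with two successive 1-D passes taken in either order:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
f = rng.random((N, N))      # image f(x, y)
r = rng.random((N, N))      # r[x, u]: 1-D kernel along x
c = rng.random((N, N))      # c[y, v]: 1-D kernel along y

# Direct 2-D transform: T(u, v) = sum_x sum_y f(x, y) r(x, u) c(y, v)
T_direct = np.einsum('xy,xu,yv->uv', f, r, c)

# Two 1-D passes: first along each row (over y), then along each column (over x)
intermediate = f @ c                # T(x, v) = sum_y f(x, y) c(y, v)
T_separable = r.T @ intermediate    # T(u, v) = sum_x r(x, u) T(x, v)
```

Because matrix multiplication is associative, transforming columns first and rows second gives the same result.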

From Eq. From Fig. Based on the information in Fig. This value of z is reasonable, but any of the other given lens sizes would be as well; the camera would just have to be positioned farther away. It is given that the defects are circular, with the smallest defect having a diameter of 0. So, all that needs to be done is to determine if the image of a circle of diameter 0. This can be determined by using the same model as in Fig. In other words, a circular defect of diameter 0. If, in order for a CCD receptor to be activated, its area has to be excited in its entirety, then it can be seen from Fig.

Chapter 3 Problem Solutions. Problem 3. First subtract the minimum value of f (denoted f_min) from f to yield a function whose minimum value is 0: Problem 3.
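The construction that this subtraction begins is the usual full-scale contrast stretch. A sketch, assuming the standard formulation (the scale constant K and the function name are my additions):

```python
import numpy as np

def stretch_to_full_scale(f, K=255):
    """Shift f so its minimum is 0, then scale so its maximum is K."""
    f = np.asarray(f, dtype=np.float64)
    g = f - f.min()                # minimum value is now 0
    return K * g / g.max()         # maximum value is now K

f = np.array([[50, 100], [150, 200]])
g = stretch_to_full_scale(f)
```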

The question in the problem statement is to find the smallest value of E that will make the threshold behave as in the equation above. In this truth table, the values of the 8th bit are 0 for byte values 0 to 127, and 1 for byte values 128 to 255, thus giving the transformation mentioned in the problem statement.

Note that the given transformed values of either 0 or 255 simply indicate a binary image for the 8th bit plane. Any other two values would have been equally valid, though less conventional.

Continuing with the truth table concept, the transformation required to produce an image of the 7th bit plane outputs a 0 for byte values in the range [0, 63], a 1 for byte values in the range [64, 127], a 0 for byte values in the range [128, 191], and a 1 for byte values in the range [192, 255].

Similarly, the transformation for the 6th bit plane alternates between eight ranges of byte values, the transformation for the 5th bit plane alternates between 16 ranges, and so on.

Finally, the output of the transformation for the lowest-order bit plane alternates between 0 and 255, depending on whether the byte values are even or odd.
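The bit-plane transformations described above amount to testing a single bit of each byte. A sketch (the helper name is my own) that displays plane k as a binary image of 0s and 255s:

```python
import numpy as np

def bit_plane(image, k):
    """Extract bit plane k (k = 7 is the most significant) from an
    8-bit image, displayed as a binary image of 0s and 255s."""
    plane = (np.asarray(image) >> k) & 1
    return plane * 255

# The 8th bit (k = 7) is 0 for byte values 0-127 and 1 for 128-255;
# the 7th bit (k = 6) alternates over four ranges, and so on.
vals = np.array([0, 63, 64, 127, 128, 191, 192, 255], dtype=np.uint8)
```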

Because the number of pixels would not change, this would cause the height of some of the remaining histogram peaks to increase in general. Typically, less variability in intensity level values will reduce contrast. Because the number of pixels would remain constant, the height of some of the histogram peaks would increase.

The general shape of the histogram would now be taller and narrower, with no histogram components being located past. The histogram equalization method has no provisions for this type of artificial intensity redistribution process.

We have assumed negligible round-off errors. First, this equation assumes only positive values for r. Recognition of this fact is important. Once recognized, the student can approach this difficulty in several ways. One good answer is to make some assumption, such as the standard deviation being small enough so that the area of the curve under p_r(r) for negative values of r is negligible. Another is to scale up the values until the area under the negative part of the curve is negligible.

This is the cumulative distribution function of the Gaussian density, which is either integrated numerically or its values are looked up in a table. A third, less important point that the student should address is the high-end values of r.

One possibility here is to make the same assumption as above regarding the standard deviation. Another is to divide by a large enough value so that the area under the positive part of the PDF past that point is negligible (this scaling reduces the standard deviation). Another approach the student can take is to work with histograms, in which case the transformation function would be in the form of a summation.

The issue of negative and high positive values must still be addressed, and the possible answers suggested above regarding these issues still apply. The student needs to indicate that the histogram is obtained by sampling the continuous function, so some mention should be made regarding the number of samples (bits) used.

The most likely answer is 8 bits, in which case the student needs to address the scaling of the function so that the range is [0, 255]. Consider the probability density function in Fig.

Because p_r(r) is a probability density function, we know from the discussion in Section 3. However, we see from Fig. This implies a one-to-one mapping both ways, meaning that both forward and inverse transformations will be single-valued. Suppose that the neighborhood is moved one pixel to the right (we are assuming rectangular neighborhoods).

This deletes the leftmost column and introduces a new column on the right. The same concept applies to other modes of neighborhood motion: Thus, the only time that the histogram of the images formed by the operations shown in the problem statement can be determined in terms of the original histograms is when one (or both) of the images is (are) constant.

In (d) we have the additional requirement that none of the pixels of g(x, y) can be 0. Assume for convenience that the histograms are not normalized, so that, for example, h_f(r_k) is the number of pixels in f(x, y) having intensity level r_k.

Assume also that all the pixels in g(x, y) have constant value c. The pixels of both images are assumed to be positive. Finally, let u_k denote the intensity levels of the pixels of the images formed by any of the arithmetic operations given in the problem statement. Under the preceding set of conditions, the histograms are determined as follows: In other words, the values (heights) of the components of h_sum are the same as the components of h_f, but their locations on the intensity axis are shifted right by an amount c.

Note that while the spacing between components of the resulting histograms in (a) and (b) was not affected, the spacing between components of h_prod(u_k) will be spread out by an amount c.
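The shift and spread are easy to see on a toy example (the values are arbitrary; the dictionary-based histogram is only for illustration):

```python
import numpy as np

f = np.array([10, 10, 20, 30, 30, 30])   # a tiny "image"
c = 5                                    # the constant image g(x, y) = c

def hist(a):
    """Unnormalized histogram as {intensity: count}."""
    values, counts = np.unique(a, return_counts=True)
    return {int(v): int(n) for v, n in zip(values, counts)}

h_f = hist(f)
h_sum = hist(f + c)    # same heights, locations shifted right by c
h_prod = hist(f * c)   # same heights, component spacing spread out by c
```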

The preceding solutions are applicable if image f(x, y) is constant also. Their location would be affected as described in (a) through (d). When the images are blurred, the boundary points will give rise to a larger number of different values for the image on the right, so the histograms of the two blurred images will be different. Figure P3. The values are summarized in Table P3. It is easily verified that the sum of the numbers on the left column of the table is N^2.

A histogram is easily constructed from the entries in this table. A similar tedious procedure yields the results in Table P3. Table P3. Initially, it takes 8 additions to produce the response of the mask. However, when the mask moves one pixel location to the right, it picks up only one new column. This is the basic box-filter or moving-average equation. To this we add one subtraction and one addition to get R_new. Thus, a total of 4 arithmetic operations are needed to update the response after one move.

This is a recursive procedure for moving from left to right along one row of the image. When we get to the end of a row, we move down one pixel (the nature of the computation is the same) and continue the scan in the opposite direction.
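The recursion can be sketched as follows (the function name and the use of precomputed column sums are my own framing of the moving-average update; the fast result is checked against brute force):

```python
import numpy as np

def box_filter_row(image, row, n=3):
    """Responses of an n x n box mask along one image row, each response
    updated from the previous one with one subtraction and one addition
    of column sums."""
    pad = n // 2
    w = image.shape[1]
    # Column sums over the n rows the mask covers at this row position
    col = [image[row - pad:row + pad + 1, x].sum() for x in range(w)]
    out = []
    r = sum(col[0:n])                         # full sum only for the first position
    out.append(r)
    for x in range(1, w - n + 1):
        r = r - col[x - 1] + col[x + n - 1]   # drop old column, pick up new one
        out.append(r)
    return np.array(out)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(5, 8))
fast = box_filter_row(img, row=2)
# Brute force for comparison: recompute the full sum at every position
brute = np.array([img[1:4, x:x + 3].sum() for x in range(8 - 2)])
```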

Because the coefficients of the mask sum to zero, this means that the sum of the products of the coefficients with the same pixel also sums to zero. Carrying out this argument for every pixel in the image leads to the conclusion that the sum of the elements of the convolution array is also zero.

This does not affect the conclusions reached in (a), so correlating an image with a mask whose coefficients sum to zero will produce a correlation image whose elements also sum to zero. Let f(x, y) and h(x, y) denote the image and the filter function, respectively.
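The zero-sum property is easy to confirm numerically. The sketch below (the helper name is mine) uses "full" correlation by zero padding, so that every image pixel meets every mask coefficient exactly once across all output positions, making the output sum equal to sum(image) × sum(mask) = 0:

```python
import numpy as np

def correlate2d_full(image, mask):
    """'Full' correlation by zero padding: every image pixel meets every
    mask coefficient exactly once across the output positions."""
    mh, mw = mask.shape
    padded = np.pad(image, ((mh - 1, mh - 1), (mw - 1, mw - 1)))
    oh, ow = image.shape[0] + mh - 1, image.shape[1] + mw - 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = (padded[i:i + mh, j:j + mw] * mask).sum()
    return out

laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)    # coefficients sum to zero

rng = np.random.default_rng(2)
img = rng.random((6, 6))
resp = correlate2d_full(img, laplacian)
```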

Then, the process of running h(x, y) over f(x, y) can be expressed as the following convolution: If h(x, y) is now applied to this image, the resulting image will be as shown in Fig. Note that the sum of the nonzero pixels in both Figs. Since the sum remains constant, the values of the nonzero elements will become smaller and smaller, as the number of applications of the filter increases.

In the limit, the values would get infinitely small, but, because the average value remains constant, this would require an image of infinite spatial proportions.

It is at this junction that border conditions become important. Although it is not required in the problem statement, it is instructive to discuss in class the effect of successive applications of h(x, y) to an image of finite proportions.

The net effect is that, because the values cannot diffuse outward past the boundary of the image, the denominator in the successive applications of averaging eventually overpowers the pixel values, driving the image to zero in the limit. A simple example of this is given in Fig.

We see that, as long as the values of the blurred 1 can diffuse out, the sum, S, of the resulting pixels is 1. Here, we used the commonly made assumption that pixel values immediately past the boundary are 0. The mask operation does not go beyond the boundary, however. In this example, we see that the sum of the pixel values begins to decrease with successive applications of the mask. Thus, even in the extreme case when all cluster points are encompassed by the filter mask, there are not enough points in the cluster for any of them to be equal to the value of the median (remember, we are assuming that all cluster points are lighter or darker than the background points).

This conclusion obviously applies to the less extreme case when the number of cluster points encompassed by the mask is less than the maximum size of the cluster. Thus, two or more different clusters cannot be in close enough proximity for the filter mask to encompass points from more than one cluster at any mask position.
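The cluster-removal behavior can be demonstrated with a brute-force median filter (zero padding and the function name are my choices): a bright cluster with fewer points than half the mask area can never reach the median, so it is wiped out entirely.

```python
import numpy as np

def median_filter(image, n=3):
    """Brute-force n x n median filter (zero padding at the borders)."""
    pad = n // 2
    padded = np.pad(image, pad)
    out = np.empty_like(image)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + n, j:j + n])
    return out

# A 3-point bright cluster on a dark background: any 3x3 window holds at
# most 3 of its 9 cells bright, so the median is always a background value.
img = np.zeros((7, 7), dtype=np.uint8)
img[3, 3] = img[3, 4] = img[2, 3] = 255
filtered = median_filter(img)
```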

It then follows that no two points from different clusters can be closer than the diagonal dimension of the mask minus one cell (which can be occupied by a point from one of the clusters). Since this is known to be the largest gap, the next odd mask size up is guaranteed to encompass some of the pixels in the segment. This average value is a gray-scale value, not binary, like the rest of the segment pixels.

Denote the smallest average value by A_min, and the binary values of pixels in the thin segment by B. Clearly, A_min is less than B. Then, setting the binarizing threshold slightly smaller than A_min will create one binary pixel of value B in the center of the mask. The phenomenon in question is related to the horizontal separation between bars, so we can simplify the problem by considering a single scan line through the bars in the image.

The key to answering this question lies in the fact that the distance (in pixels) between the onset of one bar and the onset of the next one (say, to its right) is 25 pixels. Consider the scan line shown in Fig. The response of the mask is the average of the pixels that it encompasses. In fact, the number of pixels belonging to the vertical bars and contained within the mask does not change, regardless of where the mask is located (as long as it is contained within the bars, and not near the edges of the set of bars).

The fact that the number of bar pixels under the mask does not change is due to the peculiar separation between bars and the width of the lines in relation to the pixel width of the mask. This constant response is the reason why no white gaps are seen in the image shown in the problem statement.

The averaging mask has n^2 points, of which we are assuming that q^2 points are from the object and the rest from the background. Note that this assumption implies separation between objects that, at a minimum, is equal to the area of the mask all around each object. The problem becomes intractable unless this assumption is made. This condition was not given in the problem statement on purpose in order to force the student to arrive at that conclusion.

If the instructor wishes to simplify the problem, this should then be mentioned when the problem is assigned. A further simplification is to tell the students that the intensity level of the background is 0. Let B represent the intensity level of background pixels, let a_i denote the intensity levels of points inside the mask, and o_i the levels of the objects. In addition, let S_a denote the set of points in the averaging mask, S_o the set of points in the object, and S_b the set of points in the mask that are not object points.

Let the maximum expected average value of object points be denoted by Q_max. If this was a fact specified by the instructor, or the student made this assumption from the beginning, then this answer follows almost by inspection. We want to show that the right sides of the first two equations are equal. All other elements are 0. This mask will perform differentiation in only one direction, and will ignore intensity transitions in the orthogonal direction.

An image processed with such a mask will exhibit sharpening in only one direction. A Laplacian mask with a -4 in the center and 1s in the vertical and horizontal directions will obviously produce an image with sharpening in both directions and in general will appear sharper than with the previous mask.

In other words, the number of coefficients (and thus the size of the mask) is a direct result of the definition of the second derivative. In fact, as explained in part (b), just the opposite occurs.

To see why this is so, consider an image consisting of two vertical bands, a black band on the left and a white band on the right, with the transition between the bands occurring through the center of the image. That is, the image has a sharp vertical edge through its center.

As the center of the mask moves more than two pixels on either side of the edge, the entire mask will encompass a constant area and its response would be zero, as it should. However, suppose that the mask is much larger. As its center moves through, say, the black (0) area, one half of the mask will be totally contained in that area.

However, depending on its size, part of the mask will be contained in the white area. The sum of products will therefore be different from 0. This means that there will be a response in an area where the response should have been 0 because the mask is centered on a constant area. The progressively increasing blurring as a result of mask size is evident in these results. Convolving f(x, y) with the mask in Fig. Then, because these operations are linear, we can use superposition, and we see from the preceding equation that using two masks of the form in Fig.

Convolving this mask with f(x, y) produces g(x, y), the unsharp result. The right side of this equation is recognized (within the just-mentioned proportionality factors) to be of the same form as the definition of unsharp masking given in Eqs. Thus, it has been demonstrated that subtracting the Laplacian from an image is proportional to unsharp masking. The fact that images stay in the linear range implies that images will not be saturated at the high end or be driven in the low end to such an extent that the camera will not be able to respond, thus losing image information irretrievably.
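The sharpening effect of subtracting the Laplacian can be seen on a step edge, where it produces the characteristic under- and overshoot on the two sides of the transition. A sketch (computed on interior pixels only, to sidestep border handling):

```python
import numpy as np

laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def sharpen_by_laplacian(image):
    """g(x, y) = f(x, y) - laplacian_response(x, y), interior pixels only."""
    h, w = image.shape
    out = image.astype(float).copy()
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            resp = (image[i - 1:i + 2, j - 1:j + 2] * laplacian).sum()
            out[i, j] = image[i, j] - resp
    return out

# A vertical step edge: dark band on the left, bright band on the right.
step = np.array([[10.0] * 3 + [200.0] * 3] * 5)
sharp = sharpen_by_laplacian(step)
```

In flat regions the Laplacian response is zero and the image is unchanged; at the edge, the dark side undershoots and the bright side overshoots, which is what makes the edge look crisper.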

The only way to establish a benchmark value for illumination is when the variable daylight illumination is not present. Let f_0(x, y) denote an image taken under artificial illumination only, with no moving objects e. This becomes the standard by which all other images will be normalized. There are numerous ways to solve this problem, but the student must show awareness that areas in the image likely to change due to moving objects should be excluded from the illumination-correction approach.

One way is to select various representative subareas of f_0(x, y) not likely to be obscured by moving objects and compute their average intensities. We then select the minimum and maximum of all the individual average values, denoted by f_min and f_max. The objective then is to process any input image, f(x, y), so that its minimum and maximum will be equal to f_min and f_max, respectively. Another implicit assumption is that moving objects comprise a relatively small area in the field of view of the camera, otherwise these objects would overpower the scene and the values obtained from f_0(x, y) would not make sense.
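A sketch of the normalization step itself (the function name and the sample benchmark values are my assumptions): linearly rescale each input frame so that its minimum and maximum land on the benchmark values f_min and f_max.

```python
import numpy as np

def normalize_illumination(f, f_min, f_max):
    """Linearly rescale image f so its minimum and maximum map to the
    benchmark values measured from the reference image f0(x, y)."""
    f = np.asarray(f, dtype=np.float64)
    g = (f - f.min()) / (f.max() - f.min())      # to [0, 1]
    return f_min + g * (f_max - f_min)           # to [f_min, f_max]

# Hypothetical benchmark values measured from the artificial-light image
f_min, f_max = 40.0, 220.0
frame = np.array([[10, 60], [120, 250]])         # a frame under variable daylight
corrected = normalize_illumination(frame, f_min, f_max)
```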

If the student selects another automated approach (e.g., ...), that is acceptable as well. We support this conclusion with an example. Consider a one-pixel-thick straight black line running vertically through a white image. As the size of the neighborhood increases, we would have to be farther and farther from the line before the center point ceases to be called a boundary point.


That is, the thickness of the boundary detected increases as the size of the neighborhood increases. If the intensity is smaller than the intensity of all its neighbors, then increase it; if it is larger than the intensity of all its neighbors, then decrease it; else, do nothing. In rule 1, all positive differences mean that the intensity of the noise pulse z_5 is less than that of all its 4-neighbors. The converse is true when all the differences are negative. A mixture of positive and negative differences calls for no action because the center pixel is not a clear spike.
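The two rules and the "do nothing" case can be sketched as a crisp (non-fuzzy) correction on the 4-neighbor differences d_i = z_i − z_5. The function name and the fixed correction step are assumptions for illustration:

```python
def despike(z5, neighbors, step):
    # d_i = z_i - z5 for the four 4-neighbors.
    d = [zi - z5 for zi in neighbors]
    if all(di > 0 for di in d):      # z5 below all neighbors: raise it
        return z5 + step
    if all(di < 0 for di in d):      # z5 above all neighbors: lower it
        return z5 - step
    return z5                        # mixed signs: not a clear spike

assert despike(10, [50, 60, 55, 52], step=5) == 15    # dark spike raised
assert despike(200, [20, 10, 15, 12], step=5) == 195  # bright spike lowered
assert despike(10, [5, 50, 6, 60], step=5) == 10      # mixed: unchanged
```

The fuzzy version discussed next replaces the hard all-positive/all-negative tests with graded memberships and a center-of-gravity output.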

In this case the correction should be zero (keep in mind that zero is a fuzzy set too). Membership function ZR is also a triangle. It is centered on 0 and overlaps the other two slightly. This diagram is similar to Fig. This rule is nothing more than computing 1 minus the minimum value of the outputs of step 2, and using the result to clip the ZR membership function. It is important to understand that the output of the fuzzy system is the center of gravity of the result of aggregation, step 4 in Fig.

This would produce the complete ZR membership function in the implication step step 3 in Fig. The other two results would be zero, so the result of aggregation would be the ZR function.

This is as it should be, because the differences are all positive, indicating that the value of z_5 is less than the value of its 4-neighbors. [Figure: the steps of the fuzzy system, with inputs d2, d4, d6, d8 and output v: (1) fuzzify the inputs; (2) apply the fuzzy logical operations; (3) apply the implication method; (4) apply the aggregation method (max); (5) defuzzify (center of gravity).] It is a phase term that accounts for the shift in the function. The magnitude of the Fourier transform is the same in both cases, as expected. The last step follows from Eq. Problem 4. The continuous Fourier transform of the given sine wave looks as in Fig.

In terms of Fig.


For some values of sampling, the two sines combine to form a single sine wave, and a plot of the samples would appear as in Fig. Other values would result in functions whose samples can describe any shape obtainable by sampling the sum of two sines. But we know from the translation property in Table 4.

This proves that multiplication in the frequency domain is equal to convolution in the spatial domain. The proof that multiplication in the spatial domain is equal to convolution in the frequency domain is done in a similar way. Because, by the convolution theorem, the Fourier transform of the spatial convolution of two functions is the product of their transforms, it follows that the Fourier transform of a tent function is a sinc function squared.
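The theorem can be verified numerically with a brute-force DFT and circular convolution; the sequences and size below are arbitrary illustrations:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def circ_conv(f, h):
    # Circular (periodic) convolution, matching the DFT's implicit periodicity.
    N = len(f)
    return [sum(f[m] * h[(n - m) % N] for m in range(N)) for n in range(N)]

f = [1, 2, 3, 4, 0, 0, 0, 0]
h = [1, -1, 0, 0, 0, 0, 0, 0]            # first-difference kernel
lhs = circ_conv(f, h)                    # spatial-domain convolution
FH = [a * b for a, b in zip(dft(f), dft(h))]
rhs = [v.real for v in idft(FH)]         # product of transforms, inverted
assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
```

Here lhs works out to [1, 1, 1, 1, -4, 0, 0, 0], the circular first difference of f.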

Substituting Eq. Substituting Eq. We do this by direct substitution into Eq. Note that this holds for positive and negative values of k. We prove the validity of Eq. The other half of the discrete convolution theorem is proved in a similar manner.

To avoid aliasing we have to sample at a rate that exceeds twice this frequency. So, each square has to correspond to slightly more than one pixel in the imaging system. This is not the case in zooming, which introduces additional samples. Although no new detail is introduced by zooming, it certainly does not reduce the sampling rate, so zooming cannot result in aliasing. The linearity of the inverse transforms is proved in exactly the same way.
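A quick numerical illustration of the sampling claim (the frequencies are chosen for illustration): a sine above the Nyquist limit produces samples identical to those of a lower-frequency alias.

```python
import math

# f0 = 0.9 cycles per sample interval, sampled at fs = 1 sample per unit.
# The Nyquist limit is fs/2 = 0.5, so f0 aliases to f0 - fs = -0.1.
f0, fs = 0.9, 1.0
alias = f0 - fs
for n in range(20):
    t = n / fs
    # The samples of the two sines are indistinguishable.
    assert abs(math.sin(2 * math.pi * f0 * t) - math.sin(2 * math.pi * alias * t)) < 1e-9
```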

There are various ways of proving this. The vector is centered at the origin and its direction depends on the value of the argument. This means that the vector makes an integer number of revolutions about the origin in equal increments. This produces a zero sum for the real part of the exponentials. Similar comments apply to the imaginary part. Proofs of the other properties are given in Chapter 4.
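The revolutions argument can be confirmed directly: summing the N unit vectors exp(j2πkn/N) gives zero unless k is a multiple of N, in which case every term is 1 and the sum is N.

```python
import cmath

def exp_sum(k, N):
    # Sum of N equally spaced unit vectors completing k revolutions.
    return sum(cmath.exp(2j * cmath.pi * k * n / N) for n in range(N))

N = 8
assert abs(exp_sum(0, N) - N) < 1e-9                   # k = 0: all terms are 1
assert all(abs(exp_sum(k, N)) < 1e-9 for k in range(1, N))  # otherwise zero
```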

Recall that when we refer to a function as imaginary, its real part is zero. We use the term complex to denote a function whose real and imaginary parts are not zero. We prove only the forward part of the Fourier transform pairs. Similar techniques are used to prove the inverse part. Because f(x, y) is imaginary, we can express it as jg(x, y), where g(x, y) is a real function. Then the proof is as follows: And conversely. From Example 4. If f(x, y) is real and odd, then F(u, v) is imaginary and odd, and conversely.

Because f(x, y) is real, we know that the real part of F(u, v) is even and its imaginary part is odd. If we can show that F is purely imaginary, then we will have completed the proof. If f(x, y) is imaginary and even, then F(u, v) is imaginary and even, and conversely. We know that when f(x, y) is imaginary, the real part of F(u, v) is odd and its imaginary part is even. If we can show that the real part is 0, then we will have proved this property.

Because f(x, y) is imaginary, we can express it as jg(x, y), where g is a real function. If f(x, y) is imaginary and odd, then F(u, v) is real and odd, and conversely. If f(x, y) is imaginary, we know that the real part of F(u, v) is odd and its imaginary part is even.

If f(x, y) is complex and even, then F(u, v) is complex and even, and conversely. Here, we have to prove that both the real and imaginary parts of F(u, v) are even. Recall that if f(x, y) is an even function, both its real and imaginary parts are even.

The second term is the DFT of a purely imaginary even function, which we know is imaginary and even. Thus, we see that the transform of a complex, even function has an even real part and an even imaginary part, and is thus a complex, even function. This concludes the proof. The proof parallels the proof in (h). The second term is the DFT of a purely imaginary odd function, which we know is real and odd. Thus, the sum of the two is a complex, odd function, as we wanted to prove.

Imagine the image on the left being duplicated infinitely many times to cover the xy-plane. The result would be a checkerboard, with each square in the checkerboard being the image and its black extensions. Now imagine doing the same thing to the image on the right. The results would be identical. Thus, either form of padding accomplishes the same separation between images, as desired.

These can be strong horizontal and vertical edges. These sharp transitions in the spatial domain introduce high-frequency components along the vertical and horizontal axes of the spectrum. This is as expected; padding an image with zeros decreases its average value. The last step follows from the fact that k_1 x and k_2 y are integers, which makes the two rightmost exponentials equal to 1.

The other part of the convolution theorem is done in a similar manner. Consider next the second derivative. We can generate a filter for use with the DFT simply by sampling this function. In summary, we have the following Fourier transform pair relating the Laplacian in the spatial and frequency domains: ∇²f(x, y) ⇔ −4π²(u² + v²)F(u, v). Thus, we see that the amplitude of the filter decreases as a function of distance from the origin of the centered filter, which is the characteristic of a lowpass filter.
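Sampling the Laplacian transfer function on a centered DFT grid can be sketched as follows; the grid size and the centered form H(u, v) = −4π²[(u − M/2)² + (v − N/2)²] are assumptions consistent with the standard pair:

```python
import math

def laplacian_filter(M, N):
    # Centered Laplacian transfer function sampled on an M x N DFT grid.
    return [[-4 * math.pi ** 2 * ((u - M / 2) ** 2 + (v - N / 2) ** 2)
             for v in range(N)] for u in range(M)]

H = laplacian_filter(8, 8)
assert H[4][4] == 0.0                  # dc term (grid center) is zero
assert abs(H[0][0]) > abs(H[4][5])     # magnitude grows away from the center
```

The growing magnitude away from the center is the highpass (derivative) behavior of the Laplacian; multiplying F(u, v) by this H and inverting gives the Laplacian of the image.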

A similar argument is easily carried out when considering both variables simultaneously. From property 3 in Table 4. The negative limiting value is due to the order in which the derivatives are taken. The important point here is that the dc term is eliminated and higher frequencies are passed, which is the characteristic of a highpass filter.

As in Problem 4. For values away from the center, H(u, v) decreases as in Problem 4. The important point is that the dc term is eliminated and the higher frequencies are passed, which is the characteristic of a highpass filter. The Fourier transform is a linear process, while the squares and square roots involved in computing the gradient are nonlinear operations. The Fourier transform could be used to compute the derivatives as differences, as in Problem 4.

The explanation will be clearer if we start with one variable. This result is for continuous functions. To use them with discrete variables we simply sample the function into its desired dimensions.

The inverse Fourier transform of 1 gives an impulse at the origin of the highpass spatial filters. However, the dark center area is averaged out by the lowpass filter. The reason the final result looks so bright is that the discontinuity (edge) on the boundaries of the ring is much higher than anywhere else in the image, thus dominating the display of the result. The order does not matter.


We know that this term is equal to the average value of the image. So, there is a value of K after which the result of repeated lowpass filtering will simply produce a constant image. Note that the answer applies even as K approaches infinity.

In this case the filter will approach an impulse at the origin, and this would still give us F(0, 0) as the result of filtering. We want all values of the filter to be zero for all values of the distance from the origin that are greater than 0 (i.e., everywhere except at the origin). However, the filter is a Gaussian function, so its value is always greater than 0 for all finite values of D(u, v).

But we are dealing with digital numbers, which will be designated as zero whenever the value of the filter is less than one-half the smallest positive number representable in the computer being used. As given in the problem statement, the value of this number is c_min. So, we want the values of K for which the filter function drops below 0.5 c_min. Because the exponential decreases as a function of increasing distance from the origin, we choose the smallest possible value of D²(u, v), which is 1.
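The bound on K can be computed explicitly. A sketch, assuming the repeated Gaussian lowpass takes the form exp(−K·D²(u, v)/(2σ²)) after K passes, evaluated at D² = 1, and using an illustrative c_min:

```python
import math

def min_passes(sigma2, c_min):
    # Smallest integer K with exp(-K/(2*sigma2)) < 0.5*c_min at D^2(u, v) = 1,
    # i.e. K > -2*sigma2*ln(0.5*c_min).
    return math.ceil(-2.0 * sigma2 * math.log(0.5 * c_min))

K = min_passes(sigma2=100.0, c_min=1e-300)   # c_min chosen for illustration
assert math.exp(-K / (2.0 * 100.0)) < 0.5e-300      # K passes: below threshold
assert math.exp(-(K - 1) / (2.0 * 100.0)) >= 0.5e-300  # K-1 passes: not yet
```

Past this K the filter is numerically zero everywhere except the origin, which is exactly the notch-pass behavior described next.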

This result guarantees that the lowpass filter will act as a notch pass filter, leaving only the value of the transform at the origin. The image will not change past this value of K. The solution of this problem parallels the solution of Problem 4. Here, however, the filter will approach a notch filter that will take out F(0, 0) and thus will produce an image with zero average value (this implies negative pixels). So, there is a value of K after which the result of repeated highpass filtering will simply produce a constant image.

We want all values of the filter to be 1 for all values of the distance from the origin that are greater than 0 (i.e., everywhere except at the origin). This is the same requirement as in Problem 4. Although high-frequency emphasis helps some, the improvement is usually not dramatic (see Fig. ). Thus, if an image is histogram equalized first, the gain in contrast improvement will essentially be lost in the filtering process.

Therefore, the procedure in general is to filter first and histogram-equalize the image after that. The preceding equation is easily modified to accomplish this. Next, we assume that the equations hold for n. From this result, it is evident that the contribution of illumination is an impulse at the origin of the frequency plane. A notch filter that attenuates only this component will take care of the problem.

Extension of this development to multiple impulses (stars) is implemented by considering one star at a time. The form of the filter will be the same. At the end of the procedure, all individual images are combined by addition, followed by intensity scaling so that the relative brightness between the stars is preserved.

(1) Perform a median filtering operation. (2) Follow (1) by high-frequency emphasis. (3) Histogram-equalize this result. (4) Compute the average gray level, K_0.

Perform the transformations shown in Fig. Figure P5. Problem 5. Draw a profile of an ideal edge with a few points valued 0 and a few points valued 1. The geometric mean will give only values of 0 and 1, whereas the arithmetic mean will give intermediate values (blur). Because the center of the mask can be outside the original black area when this happens, the figure will be thickened.
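The edge-profile argument can be sketched in 1-D (window size and values are illustrative):

```python
import math

def arith_mean(w):
    return sum(w) / len(w)

def geo_mean(w):
    # n-th root of the product of the window values.
    return math.prod(w) ** (1.0 / len(w))

edge = [0, 0, 0, 1, 1, 1]        # profile of an ideal edge
w = edge[1:4]                    # 3-point window straddling the edge: [0, 0, 1]
assert geo_mean(w) == 0.0        # geometric mean: any 0 in the window gives 0
assert 0 < arith_mean(w) < 1     # arithmetic mean: intermediate value (blur)
```

Sliding the window along the profile, the geometric mean output contains only 0s and 1s (a sharp, thickened edge), while the arithmetic mean produces a ramp.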

For the noise spike to be visible, its value must be considerably larger than the value of its neighbors. Also keep in mind that the power in the numerator is 1 plus the power in the denominator. It is most visible when surrounded by light values. The center pixel (the pepper noise) will have little influence in the sums.

If the area spanned by the filter is approximately constant, the ratio will approach the value of the pixels in the neighborhood, thus reducing the effect of the low-value pixel. The center pixel will now be the largest. However, the exponent is now negative, so the small numbers will dominate the result.

That constant is the value of the pixels in the neighborhood. So the ratio is just that value. For salt noise the image will become very light. The opposite is true for pepper noise—the image will become dark.

The terms of the sum in the denominator are 1 divided by the individual pixel values in the neighborhood. Thus, low pixel values will tend to produce low filter responses, and vice versa.
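A quick check of this behavior with the harmonic mean filter (the window values are illustrative, and strictly positive to keep the reciprocals defined):

```python
def harmonic_mean(w):
    # n divided by the sum of reciprocals; assumes all pixel values > 0.
    return len(w) / sum(1.0 / v for v in w)

# Salt spike (255) in a mid-gray neighborhood: the response stays near 10.
salt = [10, 10, 10, 255, 10, 10, 10, 10, 10]
assert 10 <= harmonic_mean(salt) < 12

# Pepper pixel (1) among bright values: the low value dominates the sum of
# reciprocals, pulling the response well below the background.
pepper = [200, 200, 200, 1, 200, 200, 200, 200, 200]
assert harmonic_mean(pepper) < 10
```

This is the standard result that the harmonic mean filter removes salt noise but fails for pepper noise.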

If, for example, the filter is centered on a large spike surrounded by zeros, the response will be a low output, thus reducing the effect of the spike. The Fourier transform of the 1 gives an impulse at the origin, and the exponentials shift the origin of the impulse, as discussed in Section 4.

Then, the components of motion are as follows. They can be found, for example, in the Handbook of Mathematical Functions by Abramowitz, or other similar references. Any of the techniques discussed in this chapter for handling uniform blur along one dimension can then be applied to the problem.

The image is then converted back to rectangular coordinates after restoration. The mathematical solution is simple. Any of the methods in Sections 5. Set all pixels in the image, except the cross hairs, to that intensity value. Denote the Fourier transform of this image by G(u, v). Because the characteristics of the cross hairs are given with a high degree of accuracy, we can construct an image of the background of the same size, using the background intensity levels determined previously.

We then construct a model of the cross hairs in the correct location (determined from the given image) using the dimensions provided and the intensity level of the cross hairs. Denote by F(u, v) the Fourier transform of this new image. In the likely event of vanishing values in F(u, v), we can construct a radially limited filter using the method discussed in connection with Fig.

Because we know F(u, v) and G(u, v), and have an estimate of H(u, v), we can refine our estimate of the blurring function by substituting G and H in Eq. The resulting filter in either case can then be used to deblur the image of the heart, if desired.

But, we know from the statement of Problem 4. Therefore, we have reduced the problem to computing the Fourier transform of a Gaussian function. From the basic form of the Gaussian Fourier transform pair given in entry 13 of Table 4.

Keep in mind that the preceding derivations are based on assuming continuous variables. A discrete filter is obtained by sampling the continuous function. Its purpose is to gain familiarity with the various terms of the Wiener filter. This is as far as we can reasonably carry this problem.

It is worthwhile pointing out to students that a frequency-domain filter for the Laplacian operator is discussed in Section 4. However, substituting that solution for P(u, v) here would only increase the number of terms in the filter and would not help in simplifying the expression. Furthermore, we can use superposition and obtain the response of the system first to F(u, v) and then to N(u, v), because we know that the image and noise are uncorrelated.

The sum of the two individual responses then gives the complete response. The principal steps are as follows: Select coins as close as possible in size and content to the lost coins. Select a background that approximates the texture and brightness of the photos of the lost coins.

Set up the museum photographic camera in a geometry as close as possible to the one that gave the images of the lost coins (this includes paying attention to illumination).

Obtain a few test photos. To simplify experimentation, obtain a TV camera capable of giving images that resemble the test photos. This can be done by connecting the camera to an image processing system and generating digital images, which will be used in the experiment. Obtain sets of images of each coin with different lens settings. The resulting images should approximate the aspect angle, size (in relation to the area occupied by the background), and blur of the photos of the lost coins.

The lens setting for each image in (3) is a model of the blurring process for the corresponding image of a lost coin. Digitize the impulse. Its Fourier transform is the transfer function of the blurring process. Digitize each blurred photo of a lost coin, and obtain its Fourier transform. At this point, we have H(u, v) and G(u, v) for each coin. Obtain an approximation to F(u, v) by using a Wiener filter. Equation 5. In general, several experimental passes of these basic steps, with various settings and parameters, are required to obtain acceptable results in a problem such as this.
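The Wiener step can be sketched term by term in the frequency domain. Everything below is illustrative: 1-D spectra, a hand-picked H, and a scalar K standing in for the noise-to-signal power ratio.

```python
def wiener(H, G, K):
    # F_hat(u) = conj(H) * G / (|H|^2 + K), applied elementwise.
    return [complex(h).conjugate() * g / (abs(h) ** 2 + K) for h, g in zip(H, G)]

H = [1.0, 0.5, 0.1, 0.01]           # assumed degradation transfer function
F = [4.0, 2.0, 1.0, 0.5]            # "true" spectrum (illustration only)
G = [h * f for h, f in zip(H, F)]   # blurred, noiseless observation
est = wiener(H, G, K=1e-6)
# With negligible noise and small K, the estimate recovers F closely,
# even where H is small and plain inverse filtering would be unstable.
assert all(abs(e - f) < 1e-2 for e, f in zip(est, F))
```

Increasing K trades restoration sharpness for noise suppression, which is the parameter the experimental passes above would tune.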

The intensity at that point is double the intensity of all other points. From the definition of the Radon transform in Eq. We do this by substituting the convolution expression into Eq. This completes the proof.

Chapter 6 Problem Solutions

Problem 6. These are the trichromatic coefficients. We are interested in tristimulus values X, Y, and Z, which are related to the trichromatic coefficients by Eqs. Note, however, that all the tristimulus coefficients are divided by the same constant, so their percentages relative to the trichromatic coefficients are the same as those of the coefficients.

Problem 6. Values in between are easily seen to follow from these simple relations. The key to solving this problem is to realize that any color on the border of the triangle is made up of proportions from the two vertices defining the line segment that contains the point. The line segment connecting points c_3 and c is shown extended (dashed segment) until it intersects the line segment connecting c_1 and c_2.

The point of intersection is denoted c_0. Because we have the values of c_1 and c_2, if we knew c_0 we could compute the percentages of c_1 and c_2 contained in c_0 by using the method described in Problem 6. Let the ratio of the content of c_1 and c_2 in c_0 be denoted by R_12. If we now add color c_3 to c_0, we know from Problem 6. For any position of a point along this line we could determine the percentage of c_3 and c_0, again by using the method described in Problem 6.

What is important to keep in mind is that the ratio R_12 will remain the same for any point along the segment connecting c_3 and c_0. The color of the points along this line is different for each position, but the ratio of c_1 to c_2 will remain constant. So, if we can obtain c_0, we can then determine the ratio R_12 and the percentage of c_3 in color c.

The point c_0 is not difficult to obtain. The intersection of these two lines gives the coordinates of c_0. The lines can be determined uniquely because we know the coordinates of the two point pairs needed to determine the line coefficients. Solving for the intersection in terms of these coordinates is straightforward, but tedious. Our interest here is in the fundamental method, not the mechanics of manipulating simple equations, so we do not give the details.

At this juncture we have the percentage of c_3 and the ratio between c_1 and c_2. Let the percentages of these three colors composing c be denoted by p_1, p_2, and p_3, respectively. Finally, note that this problem could have been solved the same way by intersecting one of the other two sides of the triangle.
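The construction just described (intersect the line through c_3 and c with side c_1 c_2, then convert the two distance ratios into percentages) can be sketched as follows. The helper names are assumptions, and colors are chromaticity points (x, y):

```python
def intersect(p1, p2, p3, p4):
    # Intersection of line p1-p2 with line p3-p4 (assumes non-parallel lines).
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def percentages(c1, c2, c3, c):
    c0 = intersect(c3, c, c1, c2)            # where line c3-c meets side c1-c2
    p3 = 100.0 * dist(c, c0) / dist(c3, c0)  # share of c3 in c
    rest = 100.0 - p3
    p1 = rest * dist(c0, c2) / dist(c1, c2)  # split the remainder by R_12
    return p1, rest - p1, p3

# The centroid of the triangle should contain equal parts of all three colors.
p1, p2, p3 = percentages((0.0, 0.0), (1.0, 0.0), (0.5, 1.0), (0.5, 1.0 / 3.0))
assert all(abs(p - 100.0 / 3.0) < 1e-9 for p in (p1, p2, p3))
```

As noted above, if the line through c_3 and c is parallel to (or coincident with) the chosen side, one would intersect one of the other two sides instead.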

Going to another side would be necessary, for example, if the line we used in the preceding discussion had an infinite slope. A simple test to determine whether the color of c is equal to any of the vertices should be the first step in the procedure; in that case no additional calculations would be required.

With a specific filter in place, only the objects whose color corresponds to that wavelength will produce a significant response on the monochrome camera. A motorized filter wheel can be used to control filter position from a computer. If one of the colors is white, then the response of the three filters will be approximately equal and high.

If one of the colors is black, the response of the three filters will be approximately equal and low. We can create Table P6.


Thus, we get the monochrome displays shown in Fig. For a color to be gray, all RGB components have to be equal, so there are as many shades of gray as there are intensity levels per component. The others decrease in saturation from the corners toward the black or white point.