Image enhancement improves the visual effect of an image through image processing methods such as sharpening, contour enhancement, and contrast adjustment. The goal is to improve the clarity of the image or highlight features of interest, suppress "useless" information, and convert the image into a form more convenient for human viewing or computer analysis.
Image enhancement methods can be divided into two categories: spatial domain methods and frequency domain methods.
The spatial domain can simply be understood as the set of pixels that make up the image. Spatial domain methods operate on the image itself, applying linear or nonlinear operations directly to the gray values of its pixels.
Frequency domain methods treat the image as a two-dimensional signal: the image is transformed (typically with a two-dimensional Fourier transform), the desired frequency components are amplified or attenuated, and the result is transformed back.
Spatial domain methods fall into two categories: point operations and neighborhood (template) operations. Point operations act on single pixels and include grayscale transformation, histogram modification, and pseudo-color enhancement; neighborhood operations compute each output pixel from the pixels around it, as in smoothing and sharpening filters.
Common frequency domain techniques include low-pass filtering, high-pass filtering, and homomorphic filtering.
As shown in the figure, the commonly used image enhancement methods are:
1.1 Linear grayscale transformation
In general, a linear grayscale transformation does not change the coordinates of a pixel; only its gray value is changed.
Operator 1: scale_image – Scale the gray values of an image.
void ScaleImage(const HObject& Image, HObject* ImageScaled,
const HTuple& Mult, const HTuple& Add)
Principle: scale_image applies the following transformation to the input image:
g' := g * Mult + Add
To map a gray value range [GMin, GMax] onto the full output range, the parameters can be chosen as:
Mult := 255.0 / (GMax - GMin)
Add := -Mult * GMin
Effect: stretches the contrast of the image, making dark regions darker and bright regions brighter.
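The linear mapping g' = g * Mult + Add can be sketched in plain Python. This is a hypothetical illustration of the formula, not HALCON's scale_image implementation; the clipping to [0, 255] assumes a byte image:

```python
# Hypothetical sketch: map the gray range [g_min, g_max] onto [0, 255]
# via g' = g * mult + add, clipping the result to valid byte gray values.
def scale_gray(pixels, g_min, g_max):
    mult = 255.0 / (g_max - g_min)
    add = -mult * g_min
    return [min(255, max(0, round(g * mult + add))) for g in pixels]

row = [50, 100, 150, 200]        # gray values confined to [50, 200]
print(scale_gray(row, 50, 200))  # stretched to cover the full 0..255 range
```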
Operator 2: emphasize – Enhance the contrast of an image.
void Emphasize(const HObject& Image, HObject* ImageEmphasize,
const HTuple& MaskWidth, const HTuple& MaskHeight,
const HTuple& Factor)
Principle: the operator first smooths the input image with a low-pass (mean) filter of size MaskWidth × MaskHeight. The output is then computed from the local mean gray value (mean) and the original gray value (orig) as follows:
res := round((orig - mean) * Factor) + orig
The larger the mask size (parameters MaskWidth and MaskHeight), the stronger the enhancement and the stronger the resulting image contrast. The parameter Factor controls the contrast intensity.
Effect: enhances the high-frequency regions of the image (edges and corners), making the image look sharper.
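A minimal 1D sketch of this mean-based enhancement, res = round((orig - mean) * factor) + orig, is shown below. It is a hypothetical illustration, not HALCON's emphasize: border handling is simplified to index clamping, and results are not clipped to the image's value range:

```python
# Hypothetical sketch of emphasize: res = round((orig - mean) * factor) + orig,
# where `mean` is a local mean over a sliding window of `mask` pixels.
def emphasize_1d(pixels, mask, factor):
    half = mask // 2
    out = []
    for i, orig in enumerate(pixels):
        # clamp indices at the border (simplified border handling)
        window = [pixels[min(len(pixels) - 1, max(0, j))]
                  for j in range(i - half, i + half + 1)]
        mean = sum(window) / len(window)
        out.append(round((orig - mean) * factor) + orig)
    return out

print(emphasize_1d([10, 10, 100, 10, 10], 3, 1.0))  # the edge is pushed apart
```

Note how the gray-level jump around the bright pixel is exaggerated on both sides, which is exactly what makes edges appear crisper.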
1.2 Nonlinear grayscale transformation
Simple linear grayscale conversion can solve the problem of the visual image as a whole to a certain extent, but the improvement in image details is relatively limited. A set of nonlinear transformation techniques can solve this problem. Nonlinear conversion does not cover the entire gray range of the image, but expands the grayscale range with selection, and the grayscale of other ranges can be compressed. Commonly used nonlinear transformations include numbers and exponential transformations.
Operator 1: log_image – Calculate the logarithm of an image.
void LogImage(const HObject& Image, HObject* LogImage,
const HTuple& Base)
The logarithmic transformation expands the range of lower gray values while compressing the range of higher gray values. It is a useful nonlinear mapping when a narrow band of low-value pixels in the input image should be spread out and the wide range of high gray values compressed.
The new gray value is obtained by taking the logarithm (to the given Base) of the original gray value.
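The expansion of dark values can be illustrated with a common scaled form of the log mapping, g' = c * log_base(1 + g), with c chosen so that 255 maps to 255. This is a hypothetical sketch of the general technique, not log_image's exact formula:

```python
import math

# Hypothetical sketch of a logarithmic gray-value mapping:
# g' = c * log_base(1 + g), scaled so the maximum input 255 maps to 255.
# Low gray values are expanded, high gray values are compressed.
def log_transform(pixels, base=10.0):
    c = 255.0 / math.log(256.0, base)
    return [round(c * math.log(1.0 + g, base)) for g in pixels]

print(log_transform([0, 10, 100, 255]))  # dark inputs spread over a wide range
```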
Operator 2: pow_image – Raise the gray values of an image to a power.
void PowImage(const HObject& Image, HObject* PowImage,
const HTuple& Exponent)
Unlike the logarithmic transformation, the exponential (power-law) transformation can selectively enhance the contrast of low gray regions or of high gray regions, depending on the value of the gamma exponent. The principle of the transformation is g' := g^Exponent: an exponent below 1 expands low gray values, while an exponent above 1 expands high gray values.
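The effect of the exponent can be sketched with the normalized power-law (gamma) mapping g' = 255 * (g / 255)^gamma. This is a hypothetical illustration of the principle, not pow_image's exact implementation:

```python
# Hypothetical sketch of the power-law (gamma) mapping on normalized gray
# values: g' = 255 * (g / 255) ** gamma.
# gamma < 1 expands dark regions; gamma > 1 expands bright regions.
def pow_transform(pixels, gamma):
    return [round(255.0 * (g / 255.0) ** gamma) for g in pixels]

mid = [64, 128, 192]
print(pow_transform(mid, 0.5))  # dark values pulled up toward white
print(pow_transform(mid, 2.0))  # dark values pushed down toward black
```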
1.3 Histogram equalization
Histogram equalization starts from the gray histogram of the image: a statistic over the gray values 0 to 255, in which each bin counts the number of pixels with that gray value. The histogram is then equalized so that the gray values of the pixels are distributed evenly, which improves the overall contrast and makes the image clearer.
Operator 1: equ_histo_image – Histogram equalization of an image.
void EquHistoImage(const HObject& Image, HObject* ImageEquHisto)
Principle: starting from the histogram of the input image, the following simple gray value transformation f(g) is applied:
f(g) := 255 * Σ_{x=0..g} h(x)
where h(x) is the relative frequency of gray value x. For a uint2 image, the only difference is that the value 255 is replaced by a different maximum value: if the number of significant bits is stored with the input image, the maximum is computed from it; otherwise the value of the system parameter 'int2_bits' is used (if it is set, i.e., different from -1); if neither value is set, the number of significant bits is assumed to be 16.
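The cumulative mapping f(g) = 255 * Σ_{x≤g} h(x) can be sketched in pure Python. This is a hypothetical illustration of the formula, not equ_histo_image itself:

```python
# Hypothetical sketch of histogram equalization:
# f(g) = 255 * sum_{x<=g} h(x), where h(x) is the relative frequency
# of gray value x, implemented as a lookup table.
def equalize(pixels, max_val=255):
    n = len(pixels)
    hist = [0] * (max_val + 1)
    for g in pixels:
        hist[g] += 1
    cum, lut = 0, [0] * (max_val + 1)
    for x in range(max_val + 1):
        cum += hist[x]
        lut[x] = round(max_val * cum / n)  # cumulative relative frequency
    return [lut[g] for g in pixels]

print(equalize([52, 52, 60, 60, 60, 180]))  # gray values spread over 0..255
```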
Original image:
Image after histogram equalization:
Image sharpening compensates the contours of an image, enhancing its edges and gray-level transitions to make it crisper. The aim is to emphasize edges, outlines, or specific linear targets in the image.
Operator 1: shock_filter – Apply a shock filter to an image.
void ShockFilter(const HObject& Image, HObject* SharpenedImage,
const HTuple& Theta, const HTuple& Iterations,
const HTuple& Mode, const HTuple& Sigma)
Code example:
read_image (Image, 'datacode/ecc200/ecc200_cpu_015')
fill_interlace (Image, ImageFilled, 'odd')
shock_filter (ImageFilled, SharpenedImage, 0.5, 10, 'laplace', 1.5)
3.1 Mean filtering
The principle of mean filtering is to replace each pixel's gray value with the average of the gray values in its neighborhood. The filter region acts like a small "window" sliding over the image: the gray values inside the window are averaged, and the average is assigned to the pixel at the window's center.
Operator 1: mean_image – Smooth an image by averaging.
void MeanImage(const HObject& Image, HObject* ImageMean,
const HTuple& MaskWidth, const HTuple& MaskHeight)
The operator performs linear smoothing of the gray values of the input image(s). The filter matrix consists of equal values and has size MaskHeight × MaskWidth; the result of the convolution is divided by MaskHeight × MaskWidth. For border treatment, the gray values are mirrored at the image edges.
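A 1D sketch of mean filtering with mirrored borders, echoing the border handling described above. This is a hypothetical illustration, not mean_image's implementation:

```python
# Hypothetical 1D sketch of mean filtering: each output value is the
# average of a sliding window; indices are mirrored at the borders.
def mean_filter(pixels, mask):
    half = mask // 2
    n = len(pixels)
    out = []
    for i in range(n):
        total = 0
        for j in range(i - half, i + half + 1):
            if j < 0:                # mirror at the left border
                j = -j
            elif j >= n:             # mirror at the right border
                j = 2 * n - 2 - j
            total += pixels[j]
        out.append(round(total / mask))
    return out

print(mean_filter([0, 0, 90, 0, 0], 3))  # the spike is spread over its neighbors
```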
3.2 Median filtering
The principle of median filtering is similar to that of mean filtering. The difference is that, for each pixel, the selected neighborhood (the mask, which may be square or circular) is taken, the gray values inside it are sorted, and the median of the sorted values is assigned to the pixel.
Operator 1: median_image – Compute a median filter with various masks.
void MedianImage(const HObject& Image, HObject* ImageMedian,
const HTuple& MaskType, const HTuple& Radius,
const HTuple& Margin)
MaskType specifies the shape of the neighborhood, Radius its size, and Margin the border treatment. Since the filter window cannot move beyond the image border, border pixels must be padded:
'continued': the border pixels are continued (repeated) outward.
'cyclic': the gray values continue cyclically across the border.
Effect: well suited for removing isolated (impulse) noise while preserving most edges.
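A 1D sketch of the median filter showing exactly this behavior: an isolated outlier is removed while the step edge survives. This is a hypothetical illustration with 'continued'-style border handling (border pixels repeated), not median_image itself:

```python
# Hypothetical 1D sketch of median filtering: sort the window around each
# pixel and take the middle value. Border indices are clamped, which
# corresponds to repeating ('continuing') the border pixels.
def median_filter(pixels, radius):
    n = len(pixels)
    out = []
    for i in range(n):
        window = sorted(pixels[min(n - 1, max(0, j))]
                        for j in range(i - radius, i + radius + 1))
        out.append(window[len(window) // 2])
    return out

print(median_filter([10, 10, 250, 10, 80, 80], 1))  # spike removed, edge kept
```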
3.3 Gaussian filter
Gaussian filtering smooths an image with a discrete two-dimensional Gaussian function and is well suited for removing Gaussian noise.
Operator 1: gauss_filter – Smooth an image using the discrete Gaussian function.
void GaussFilter(const HObject& Image, HObject* ImageGauss,
const HTuple& Size)
The operator uses a separable discrete approximation of the Gaussian function, so the two-dimensional smoothing can be performed as two one-dimensional passes; the parameter Size determines the size of the filter mask.
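Separability means a 2D Gaussian blur can be computed as a horizontal 1D pass followed by a vertical 1D pass, which is what makes the filter fast. The sketch below is a hypothetical pure-Python illustration (border indices are clamped), not gauss_filter's implementation:

```python
import math

# Hypothetical sketch of a separable Gaussian blur:
# one normalized 1D kernel, applied first along rows, then along columns.
def gauss_kernel(size, sigma):
    half = size // 2
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma)) for i in range(-half, half + 1)]
    s = sum(k)
    return [v / s for v in k]  # normalize so the total gray mass is preserved

def convolve_rows(img, k):
    half = len(k) // 2
    return [[sum(row[min(len(row) - 1, max(0, x + j - half))] * k[j]
                 for j in range(len(k)))
             for x in range(len(row))] for row in img]

def gauss_blur(img, size=3, sigma=1.0):
    k = gauss_kernel(size, sigma)
    blurred = convolve_rows(img, k)                 # horizontal pass
    transposed = [list(c) for c in zip(*blurred)]
    blurred = convolve_rows(transposed, k)          # vertical pass
    return [list(c) for c in zip(*blurred)]

img = [[0.0] * 5 for _ in range(5)]
img[2][2] = 100.0                                   # a single bright pixel
out = gauss_blur(img)
print(round(out[2][2], 2))  # the center keeps the largest share of the mass
```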
Frequency describes how the gray values of an image vary. Low-frequency components correspond to slowly changing gray values and represent the overall outline of the image. High-frequency components correspond to rapidly changing gray values and represent image noise as well as edges, textures, and other fine details.
When is the Fourier transform used for frequency domain analysis?

Images with regular texture characteristics, which can be understood as stripe patterns, such as canvas, wood panels, paper, and similar materials.

Features with low contrast or a low signal-to-noise ratio must be extracted.

The image is very large, or an oversized filter mask is required; converting to the frequency domain is then faster, because spatial domain filtering is a convolution (a sliding weighted sum) while the frequency domain computation is a direct pixel-wise multiplication.
In Halcon, using the frequency domain for detection involves two critical steps: ① creating an appropriate filter; ② converting between the spatial domain and the frequency domain.
4.1 High-pass filtering
Details such as edges and thin lines correspond to the high-frequency components of the image spectrum. High-pass filtering lets these high-frequency components pass while attenuating low frequencies, making edges and fine details clear and thereby sharpening the image. High-pass filtering can be implemented either in the spatial domain or in the frequency domain. In the spatial domain it is a convolution, analogous to spatial low-pass neighborhood filtering, but with a different impulse-response matrix h.
Operator 1: gen_gauss_filter – Generate a Gaussian filter in the frequency domain.
void GenGaussFilter(HObject* ImageGauss, const HTuple& Sigma1,
const HTuple& Sigma2, const HTuple& Phi,
const HTuple& Norm, const HTuple& Mode,
const HTuple& Width, const HTuple& Height)
Note that this filter lives in the frequency domain; the Gaussian filter that smooths an image directly (gauss_filter) works in the spatial domain. Such low-pass filtering is used to remove noise.
Principle: gen_gauss_filter first generates a Gaussian kernel from the input parameters and then Fourier-transforms it, yielding a frequency-domain filter with the specified mode and normalization. If the image will be transformed with rft_generic, the filter's Mode parameter must be set to 'rft'; the Norm parameter selects how the filter is normalized.
Operator 2: rft_generic – Compute the real-valued fast Fourier transform of an image.
void RftGeneric(const HObject& Image, HObject* ImageFFT,
const HTuple& Direction, const HTuple& Norm,
const HTuple& ResultType, const HTuple& Width)
Principle: the filter returned by gen_gauss_filter is itself an image. Although it is a frequency-domain filter, it plays essentially the same role that a spatial convolution kernel plays in the spatial domain. If you apply it in the frequency domain (where convolution reduces to a pixel-wise multiplication) and then transform the result back to the spatial domain, you will find that the original image has changed substantially.
Operator 3: convol_fft – Convolve an image with a filter in the frequency domain.
void ConvolFft(const HObject& ImageFFT, const HObject& ImageFilter,
HObject* ImageConvol)
Principle: the complex pixels of ImageFFT are multiplied by the corresponding pixels of the filter.
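The reason a pixel-wise product in the frequency domain is equivalent to a convolution is the convolution theorem: the DFT of a circular convolution equals the element-wise product of the individual DFTs. A small hypothetical 1D demonstration with a naive DFT (illustration only, unrelated to HALCON's optimized implementation):

```python
import cmath

# Hypothetical 1D demo of the convolution theorem:
# DFT(a ⊛ b) == DFT(a) * DFT(b) element-wise, for circular convolution ⊛.
def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def circular_convolve(a, b):
    n = len(a)
    return [sum(a[j] * b[(i - j) % n] for j in range(n)) for i in range(n)]

a = [1.0, 2.0, 3.0, 4.0]
b = [0.5, 0.25, 0.0, 0.25]  # a small smoothing "filter"
lhs = dft(circular_convolve(a, b))
rhs = [fa * fb for fa, fb in zip(dft(a), dft(b))]
print(all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs)))  # True
```

This is why filtering large images in the frequency domain is fast: the convolution becomes one multiplication per pixel, regardless of the filter size.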
Code example:
dev_close_window ()
dev_update_off ()
Path := 'lcd/mura_defects_blur_'
read_image (Image, Path + '01')
get_image_size (Image, Width, Height)
dev_open_window_fit_size (0, 0, Width, Height, 640, 480, WindowHandle)
set_display_font (WindowHandle, 14, 'mono', 'true', 'false')
dev_set_draw ('margin')
dev_set_line_width (3)
dev_set_color ('red')
ScaleFactor := 0.4
calculate_lines_gauss_parameters (17, [25,3], Sigma, Low, High)
for f := 1 to 3 by 1
read_image (Image, Path + f$'.2i')
decompose3 (Image, R, G, B)
* correct side illumination
rft_generic (B, ImageFFT, 'to_freq', 'none', 'complex', Width)
gen_gauss_filter (ImageGauss, 100, 100, 0, 'n', 'rft', Width, Height)
convol_fft (ImageFFT, ImageGauss, ImageConvol)
rft_generic (ImageConvol, ImageFFT1, 'from_freq', 'none', 'byte', Width)
sub_image (B, ImageFFT1, ImageSub, 2, 100)
* perform the actual inspection
zoom_image_factor (ImageSub, ImageZoomed, ScaleFactor, ScaleFactor, 'constant')
* avoid border effects when using lines_gauss()
get_domain (ImageZoomed, Domain)
erosion_rectangle1 (Domain, RegionErosion, 7, 7)
reduce_domain (ImageZoomed, RegionErosion, ImageReduced)
lines_gauss (ImageReduced, Lines, Sigma, Low, High, 'dark', 'true', 'gaussian', 'true')
hom_mat2d_identity (HomMat2DIdentity)
hom_mat2d_scale_local (HomMat2DIdentity, 1 / ScaleFactor, 1 / ScaleFactor, HomMat2DScale)
affine_trans_contour_xld (Lines, Defects, HomMat2DScale)
*
dev_display (Image)
dev_display (Defects)
if (f < 3)
disp_continue_message (WindowHandle, 'black', 'true')
stop ()
endif
endfor