Let YT Be a Continuous-Time Moving Average System
Moving Average Filters
Steven W. Smith, in Digital Signal Processing: A Practical Guide for Engineers and Scientists, 2003
Relatives of the Moving Average Filter
In a perfect world, filter designers would only have to deal with time domain or frequency domain encoded information, but never a mixture of the two in the same signal. Unfortunately, there are some applications where both domains are simultaneously important. For instance, television signals fall into this nasty category. Video information is encoded in the time domain, that is, the shape of the waveform corresponds to the patterns of brightness in the image. However, during transmission the video signal is treated according to its frequency composition, such as its total bandwidth, how the carrier waves for sound and color are added, elimination & restoration of the DC component, etc. As another example, electromagnetic interference is best understood in the frequency domain, even if
the signal's information is encoded in the time domain. For instance, the temperature monitor in a scientific experiment might be contaminated with 60 hertz from the power lines, 30 kHz from a switching power supply, or 1320 kHz from a local AM radio station. Relatives of the moving average filter have better frequency domain performance, and can be useful in these mixed domain applications.
Multiple-pass moving average filters involve passing the input signal through a moving average filter two or more times. Figure 15-3a shows the overall filter kernel resulting from one, two and four passes. Two passes are equivalent to using a triangular filter kernel (a rectangular filter kernel convolved with itself). After four or more passes, the equivalent filter kernel looks like a Gaussian (recall the Central Limit Theorem). As shown in (b), multiple passes produce an "s" shaped step response, as compared to the straight line of the single pass. The frequency responses in (c) and (d) are given by Eq. 15-2 multiplied by itself for each pass. That is, each time domain convolution results in a multiplication of the frequency spectra.
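To make the last point concrete, the following MATLAB sketch (the kernel length and the number of passes are illustrative choices, not values from the book) builds the equivalent kernel of a multiple-pass moving average by repeated convolution and checks that its magnitude response equals the single-pass response raised to the number of passes.

M      = 9;                        % points in the moving average (assumed)
passes = 4;                        % number of passes (assumed)
h1     = ones(1, M) / M;           % single-pass (rectangular) kernel

h = h1;
for p = 2:passes
    h = conv(h, h1);               % each pass convolves the kernel with itself again
end                                % after four passes h looks nearly Gaussian

Nfft = 1024;
H1   = abs(fft(h1, Nfft));         % single-pass magnitude response
H4   = abs(fft(h,  Nfft));         % four-pass magnitude response
maxErr = max(abs(H4 - H1.^passes)) % ~0: convolution in time = multiplication in frequency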
Figure 15-4 shows the frequency response of two other relatives of the moving average filter. When a pure Gaussian is used as a filter kernel, the frequency response is also a Gaussian, as discussed in Chapter 11. The Gaussian is important because it is the impulse response of many natural and manmade systems. For example, a brief pulse of light entering a long fiber optic transmission line will exit as a Gaussian pulse, due to the different paths taken by the photons within the fiber. The Gaussian filter kernel is also used extensively in image processing because it has unique properties that allow fast two-dimensional convolutions (see Chapter 24). The second frequency response in Fig. 15-4 corresponds to using a Blackman window as a filter kernel. (The term window has no meaning here; it is simply part of the accepted name of this curve). The exact shape of the Blackman window is given in Chapter 16 (Eq. 16-2, Fig. 16-2); however, it looks much like a Gaussian.
How are these relatives of the moving average filter better than the moving average filter itself? Three ways: First, and most important, these filters have better stopband attenuation than the moving average filter. Second, the filter kernels taper to a smaller amplitude near the ends. Recall that each point in the output signal is a weighted sum of a group of samples from the input. If the filter kernel tapers, samples in the input signal that are farther away are given less weight than those close by. Third, the step responses are smooth curves, rather than the abrupt straight line of the moving average. These last two are usually of limited benefit, although you might find applications where they are genuine advantages.
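The first claim is easy to check numerically. The sketch below compares the magnitude responses of equal-length moving average, Gaussian, and Blackman filter kernels; the kernel length, the Gaussian width, and the unit-sum normalization are assumptions made for illustration.

N  = 51;
n  = 0:N-1;

hMA    = ones(1, N) / N;                                   % moving average kernel
hGauss = exp(-0.5 * ((n - (N-1)/2) / (0.15*(N-1))).^2);    % Gaussian kernel (assumed width)
hGauss = hGauss / sum(hGauss);
hBlack = 0.42 - 0.5*cos(2*pi*n/(N-1)) + 0.08*cos(4*pi*n/(N-1));  % Blackman window as a kernel
hBlack = hBlack / sum(hBlack);

Nfft = 4096;
f    = (0:Nfft-1) / Nfft;                                  % frequency as a fraction of the sampling rate
HdB  = @(h) 20*log10(abs(fft(h, Nfft)) + eps);             % magnitude response in dB
plot(f, HdB(hMA), f, HdB(hGauss), f, HdB(hBlack));
xlim([0 0.5]); ylim([-120 5]);
legend('moving average', 'Gaussian', 'Blackman');          % the tapered kernels have far lower sidelobes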
The moving average filter and its relatives are all about the same at reducing random noise while maintaining a sharp step response. The ambiguity lies in how the risetime of the step response is measured. If the risetime is measured from 0% to 100% of the step, the moving average filter is the best you can do, as previously shown. In comparison, measuring the risetime from 10% to 90% makes the Blackman window better than the moving average filter. The point is, this is just theoretical squabbling; consider these filters equal in this parameter.
The biggest difference in these filters is execution speed. Using a recursive algorithm (described next), the moving average filter will run like lightning in your computer. In fact, it is the fastest digital filter available. Multiple passes of the moving average will be correspondingly slower, but still very quick. In comparison, the Gaussian and Blackman filters are excruciatingly slow, because they must use convolution. Think a factor of ten times the number of points in the filter kernel (based on multiplication being about 10 times slower than addition). For example, expect a 100 point Gaussian to be 1000 times slower than a moving average using recursion.
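The recursive algorithm itself is presented in the next section; as a preview, here is a minimal sketch of one common form of it, a running sum updated with a single addition and subtraction per output point, checked against a direct FIR implementation (the kernel length and test signal are arbitrary).

M = 11;
x = randn(1, 2000) + sin(2*pi*(1:2000)/200);   % noisy test signal (assumed)

y   = zeros(size(x));
acc = sum(x(1:M));                              % running sum of the first M samples
y(M) = acc / M;
for n = M+1:length(x)
    acc  = acc + x(n) - x(n-M);                 % one add and one subtract per output point
    y(n) = acc / M;
end

yRef   = filter(ones(1, M)/M, 1, x);            % direct FIR moving average for comparison
maxErr = max(abs(y(M:end) - yRef(M:end)))       % agreement up to round-off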
URL: https://www.sciencedirect.com/science/article/pii/B9780750674447500522
Finite position set phase-locked loop operating under nonideal grid voltages
Alfeu J. Sguarezi Filho, ... Rogério V. Jacomini, in Model Predictive Control for Doubly-Fed Induction Generators and Three-Phase Power Converters, 2022
10.3.2 Representation of the MAF
The MAF is a filtering technique based on finite impulse response (FIR) filters, and under certain conditions it can act as an ideal low-pass filter. The PLL based on the MAF was developed in [126]. Although this filter is easy to implement, it can have a high computational cost.
The output of the MAF in the continuous-time domain can be expressed in terms of the input as follows:

(10.13) $\bar{x}(t) = \dfrac{1}{T_w}\int_{t-T_w}^{t} x(\tau)\,d\tau$

The term $T_w$ expresses the length of the window, and the MAF transfer function obtained from Eq. (10.13) can be described as follows:

(10.14) $G_{MAF}(s) = \dfrac{\bar{x}(s)}{x(s)} = \dfrac{1 - e^{-T_w s}}{T_w s}$
The MAF described by Eq. (10.14) needs a time equal to its window length to reach steady state. Hence, the dynamic response of the MAF becomes slower as the window length increases.
The magnitude and phase of Eq. (10.14) can be obtained by substituting $s = j\omega$ and carrying out some manipulations. Hence, it can be represented as follows:

(10.15) $G_{MAF}(j\omega) = \dfrac{\sin(\omega T_w/2)}{\omega T_w/2}\, e^{-j\omega T_w/2}$

It can be seen in (10.15) that the MAF has null gain at frequencies that are integer multiples of $1/T_w$ (in hertz) and unity gain at zero frequency. Hence, the MAF rejects the frequency components at integer multiples of $1/T_w$ in hertz and only passes the DC component.
The MAF employed in practice requires a discrete-time representation. In this book it is assumed that the window of the filter spans $N$ samples ($N$ an integer) of the input signal, in which $T_w = NT$ and $T$ is the sampling time. In this way, the discrete equation of the MAF can be represented as

(10.16) $\bar{x}(k) = \dfrac{1}{N}\sum_{i=0}^{N-1} x(k-i)$

Eq. (10.16) in the z domain can be expressed as

(10.17) $G_{MAF}(z) = \dfrac{1}{N}\,\dfrac{1 - z^{-N}}{1 - z^{-1}}$
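As a quick numerical check, the sketch below evaluates the z-domain form (10.17) given above and confirms unity gain at DC and (numerically) zero gain at integer multiples of $1/T_w$; the sampling and window frequencies are assumed values.

fs = 10e3;                 % sampling frequency [Hz] (assumed)
f1 = 50;                   % window frequency 1/Tw [Hz] (assumed)
N  = fs / f1;              % samples per window (integer)

f  = 0:1:1000;                                  % evaluation frequencies [Hz]
z  = exp(1j * 2*pi*f/fs);
Gz = (1/N) * (1 - z.^(-N)) ./ (1 - z.^(-1));    % Eq. (10.17)
Gz(f == 0) = 1;                                 % limiting value at DC

abs(Gz(f == 0))                                 % unity gain at DC
abs(Gz(ismember(f, f1*(1:10))))                 % ~0 at 50, 100, ..., 500 Hz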
URL: https://www.sciencedirect.com/science/article/pii/B9780323909648000191
Basic Linear Filtering with Application to Image Enhancement
Alan C. Bovik, Scott T. Acton, in The Essential Guide to Image Processing, 2009
10.3.1 Moving Average Filter
The moving average filter can be described in several equivalent ways. First, using the notion of windowing introduced in Chapter 4, the moving average can be defined as an algebraic operation performed on local image neighborhoods according to a geometric rule defined by the window. Given an image f to be filtered and a window B that collects gray level pixels according to a geometric rule (defined by the window shape), the moving average-filtered image g is given by

(10.22) $g(\mathbf{n}) = \mathrm{AVE}[B\, f(\mathbf{n})]$
where the operation AVE computes the sample average of its arguments. Thus, the local average is computed over each local neighborhood of the image, producing a powerful smoothing effect. The windows are usually selected to be symmetric, as with those used for binary morphological image filtering (Chapter 4).
Since the average is a linear operation, it is also true that
10.23
Because the noise process q is assumed to be zero-mean in the sense of (10.20), the last term in (10.23) will tend to zero as the filter window is increased. Thus, the moving average filter has the desirable effect of reducing zero-mean image noise toward zero. However, the filter also affects the original image information. It is desirable that the local average closely approximate the noise-free image value at each n, but this will not be the case everywhere in the image if the filter window is too large. The moving average filter, which is lowpass, will blur the image, especially as the window span is increased. Balancing this tradeoff is often a difficult task.
The moving average filter operation (10.22) is actually a linear convolution. In fact, the impulse response of the filter is defined as having value 1/R over the span covered by the window when centered at the spatial origin (0, 0), and zero elsewhere, where R is the number of elements in the window.
For example, if the window is the (2P + 1) × (2P + 1) SQUARE window, which is the most common configuration (it is defined in Chapter 4), then the average filter impulse response is given by

(10.24) $h(m, n) = \dfrac{1}{(2P+1)^2}$ for $-P \le m, n \le P$, and $h(m, n) = 0$ otherwise.

The frequency response of the moving average filter (10.24) is:

(10.25) $H(U, V) = \dfrac{\sin[(2P+1)\pi U]}{(2P+1)\sin(\pi U)} \cdot \dfrac{\sin[(2P+1)\pi V]}{(2P+1)\sin(\pi V)}$
The half-peak bandwidth is often used for image processing filters. The half-peak (or 3 dB) cutoff frequencies occur on the locus of points (U, V) where $|H(U, V)|^2$ falls to 1/2. For the filter (10.25), this locus intersects the U-axis and V-axis at cutoff frequencies (in cycles/pixel) that shrink as the window size 2P + 1 grows.
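Assuming the (2P + 1) × (2P + 1) SQUARE form of (10.25) given above, the half-peak cutoff along the U-axis can be located numerically from the one-dimensional factor of the response; the following sketch is purely illustrative.

P  = 2;                               % 5 x 5 window (assumed)
L  = 2*P + 1;
U  = linspace(1e-4, 0.5, 5000);       % cycles/pixel along the U-axis
H  = sin(L*pi*U) ./ (L * sin(pi*U));  % response along V = 0

idx  = find(H.^2 <= 0.5, 1);          % first point where |H|^2 drops to 1/2
Ucut = U(idx)                         % half-peak cutoff in cycles/pixel (~0.09 for a 5 x 5 window)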
As depicted in Fig. 10.2, the magnitude response of the filter (10.25) exhibits considerable sidelobes. In fact, the number of sidelobes in the range 0 ≤ U ≤ 0.5 cycles/pixel is P. As P is increased, the filter bandwidth naturally decreases (more high-frequency attenuation or smoothing), but the overall sidelobe energy does not. The sidelobes are in fact a significant drawback, since there is considerable noise leakage at high noise frequencies. These residual noise frequencies remain to degrade the image. Nevertheless, the moving average filter has been commonly used because of its general effectiveness in the sense of (10.21) and because of its simplicity (ease of programming).
The moving average filter can be implemented either as a direct 2D convolution in the space domain, or using DFTs to compute the linear convolution (see Chapter 5).
Since application of the moving average filter balances a tradeoff between noise smoothing and image smoothing, the filter span is usually taken to be an intermediate value. For images of the most common sizes, typical SQUARE average filter sizes range from a few pixels on a side up to about 15 × 15. The upper end provides significant (and probably excessive) smoothing, since 225 image samples are being averaged to produce each new image value. Of course, if an image suffers from severe noise, then a larger window might be used. A large window might also be acceptable if it is known that the original image is very smooth everywhere.
Figure 10.3 depicts the application of the moving average filter to an image that has had zero-mean white Gaussian noise added to it. In the current context, the distribution (Gaussian) of the noise is not relevant, although its meaning can be found in Chapter 7. The original image is included for comparison. The image was filtered with SQUARE-shaped moving average filters of two different window sizes, producing images with significantly different appearances from each other as well as from the noisy image. With the smaller filter, the noise is inadequately smoothed, yet the image has been blurred noticeably. The result of the larger moving average filter is much smoother, although the noise influence is still visible, with some higher noise frequency components managing to leak through the filter, resulting in a mottled appearance.
URL: https://www.sciencedirect.com/science/article/pii/B978012374457900010X
Phase-locked loops and their design
Wenzhao Liu, Frede Blaabjerg, in Control of Power Electronic Converters and Systems, 2021
10.3.2 Moving average filter–based PLLs
Fig. 10.7 shows a conventional SRF-PLL with a moving average filter (MAF), which is referred to as the MAF-PLL [15]. The MAF is a linear-phase filter that can be described as

(10.11) $G_{MAF}(s) = \dfrac{1 - e^{-T_\omega s}}{T_\omega s}$
Notice that including the MAF inside the SRF-PLL control loop can significantly improve its filtering capability, but it slows down the dynamic response considerably [15], because the in-loop MAF introduces a phase delay. This is particularly the case when the window length of the MAF is equal to the nominal period of the input signals. The window length is recommended to be equal to the fundamental period of the grid voltage (T_ω = T) when the grid harmonic content is unclear and a DC offset may be present in the PLL input [15,16], where T_ω is the window length of the MAF. Other choices for the window length, such as T_ω = T/2 and T_ω = T/6, are suitable for applications where only odd-order harmonics are expected in the input of the PLL [17]. The MAF will pass the DC component and completely block frequency components at integer multiples of 1/T_ω.
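The effect of the window-length choice can be seen in a short sketch. A 50 Hz grid is assumed, and the listed frequencies simply stand for whatever disturbance components reach the MAF input; the MAF gain is evaluated at multiples of 50 Hz for T_ω = T, T/2, and T/6.

f1 = 50;  T = 1/f1;                            % assumed 50 Hz fundamental
gain = @(f, Tw) abs(sin(pi*f*Tw) ./ (pi*f*Tw)); % |G_MAF(j*2*pi*f)| for window length Tw

f = f1 * (1:6);                                % 50 to 300 Hz components (illustrative)
[gain(f, T); gain(f, T/2); gain(f, T/6)]       % rows: Tw = T, T/2, T/6
% Tw = T   blocks every multiple of 50 Hz (first row is all zeros);
% Tw = T/2 blocks multiples of 100 Hz; Tw = T/6 blocks only multiples of 300 Hz.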
Furthermore, in order to improve the dynamic response of the MAF-PLL while maintaining good harmonic-filtering performance, several methods have been proposed in the literature. In Ref. [18], a proportional integral derivative (PID) controller is used instead of the conventional PI controller as the LF of the MAF-PLL, which provides an additional degree of freedom. Therefore, it enables the designer to effectively compensate for the phase delay caused by the MAF by arranging a pole-zero cancellation in the design [9].

In addition, a special lead compensator is added before the PI controller in the MAF-PLL [19], where the transfer function of the compensator is the inverse of the MAF's transfer function; therefore, it is able to reduce the phase delay in the MAF-PLL control loop. It should be emphasized here that the MAF-PLL with a window length equal to the input fundamental period can remove all of the harmonics up to the aliasing frequency, in addition to the fundamental-frequency disturbance components.
URL: https://www.sciencedirect.com/science/article/pii/B9780128194324000135
Analysis of continuous and discrete time signals
Alvar M. Kabe, Brian H. Sako, in Structural Dynamics Fundamentals and Advanced Applications, 2020
5.4.3.1 Linear difference equations
We begin our discussion with the following simple example that develops a recursive implementation of the FIR moving average filter.
Example 5.4-3 Consider the moving average filter,
(5.4-89)
From the previous discussion, we note that Eq. (5.4-89) is an FIR filter with impulse response given by
(5.4-90)
Furthermore, it defines a causal LTI system. Consider the following difference equation,
(5.4-91)
with the additional requirement that causality holds. In other words, if the input is zero for negative time indices, then the output is also zero for negative time indices. For a causal input, it is easy to show (see Problem 5.12),
(5.4-92)
Hence, (5.4-91) is equivalent to the FIR filter in (5.4-89) and, therefore, defines a causal LTI system with a finite impulse response, despite having a recursive definition. Also, note that the filter defined recursively by (5.4-91) has a linear phase response since the moving average is a linear phase FIR filter. This example illustrates that not all recursive filters have infinite impulse responses and nonlinear phase responses.
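A minimal sketch of this point, assuming the usual recursive form $y(n) = y(n-1) + [x(n) - x(n-M)]/M$ of an M-point moving average (the book's exact notation for (5.4-91) is not reproduced in this excerpt): driving the recursion with a unit impulse shows that the impulse response is finite even though the filter is defined recursively.

M = 5;
x = [1, zeros(1, 19)];                 % unit impulse input
y = zeros(1, 20);
for n = 1:20
    yPrev = 0;  xOld = 0;
    if n > 1,  yPrev = y(n-1);  end    % y(n-1), zero initial condition
    if n > M,  xOld  = x(n-M);  end    % x(n-M), zero for a causal input
    y(n) = yPrev + (x(n) - xOld)/M;    % the recursive update
end
y                                      % equals 1/M for the first M samples, exactly zero afterwards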
It is worthwhile to examine (5.4-91) if we remove the requirement of causality. To evaluate the output at time zero, we need to know the output one step earlier. Suppose that we have specified that value as an initial condition. For the causal input, we then have, for nonnegative time indices,
(5.4-93)
For negative time indices, we can solve (5.4-91) backward and obtain
(5.4-94)
which results in
(5.4-95)
If the initial condition is nonzero, then (5.4-93) and (5.4-95) imply that a zero input produces a nonzero output. This shows that the difference equation is not linear, since linear systems produce zero output for zero input.
Let us further investigate the reason for the nonlinearity by examining the response of the general difference Eq. (5.4-88) to a causal input signal. First note that a zero input produces the homogeneous solution, which satisfies
(5.4-96)
Consider a solution in the form of a constant raised to the power n; substitution into (5.4-96) then shows that this constant must be a root of the characteristic equation,
(5.4-97)
Suppose that the roots of (5.4-97) are distinct. Then the general homogeneous solution to Eq. (5.4-96) is given by
(5.4-98)
where the coefficients are constants to be determined. If the characteristic polynomial does not have distinct roots, then additional independent solutions must be determined from roots with multiplicities greater than one. If a characteristic root has multiplicity greater than one, the additional linearly independent solutions may be obtained by differentiating the exponential solution with respect to that root, which produces the solutions (Henrici, 1964),
(5.4-99)
A particular solution to Eq. (5.4-88) requires that we specify the initial values of the output. For a causal input, let us also consider a particular solution that is initially at rest, so that the initial output values are all zero. The particular solution can then be determined recursively from Eq. (5.4-88). Observe that for a zero input we obtain the zero output, i.e., imposing causality avoids the nonlinearity issue we noted earlier. To show that zero initial conditions are both necessary and sufficient, we need to consider the general solution of Eq. (5.4-88), which can be expressed as
(5.4-100)
Since the particular solution is causal, it satisfies the initial conditions,
(5.4-101)
Therefore, if we want to satisfy a specific set of initial conditions, Eqs. (5.4-101) and (5.4-98) lead to the following system of linear equations (for distinct roots),
(5.4-102)
If the characteristic roots are distinct, the coefficient matrix in (5.4-102) is nonsingular and the coefficients of the homogeneous solution are obtained by inverting it. This also holds in the case of roots having multiplicities by using the additional solutions in (5.4-99). Therefore, if any of the initial conditions are nonzero, then the coefficient vector is also nonzero and, therefore, the homogeneous solution is nontrivial. Clearly then, the homogeneous solution vanishes if and only if all of the initial conditions are zero. Furthermore, since the input is causal, we conclude that specifying zero initial conditions is necessary and sufficient for producing a causal solution with a causal input. In this case, observe that a zero input yields a zero output. On the other hand, if we impose nonzero initial conditions, then for a zero input the output is nonzero. This is the nonlinearity that we noted earlier. To summarize, in order for the difference Eq. (5.4-88) to represent a linear system, we must impose zero initial conditions, which is equivalent to requiring that the system be causal.
Since the input and output discrete-time signals are causal, their z-transforms are given by
(5.4-103)
where the ROC for each is given by the exterior of a circle in the z-plane. The difference Eq. (5.4-88) can now be represented in terms of the z-transforms,
(5.4-104)
where $H(z)$ is the filter's transfer function. Since $H(z)$ represents a causal system, it has a ROC of the form $\{z : |z| > R\}$ for some radius $R$. If we further require that the filter be stable, then from our earlier remarks in Section 5.4.1, $R < 1$. This implies that the poles of $H(z)$ must lie within the interior of the unit circle.
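As a quick illustration of this stability condition (the coefficients below are arbitrary, not taken from the text), the pole locations of a causal filter can be checked directly:

a = [1, -1.2, 0.45];              % example denominator: 1 - 1.2 z^-1 + 0.45 z^-2
p = roots(a);                     % poles of H(z)
isStable = all(abs(p) < 1)        % true: |p| = sqrt(0.45), about 0.67, inside the unit circle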
URL: https://www.sciencedirect.com/science/article/pii/B9780128216156000058
Evaluation of mechanical variables by registration and analysis of electromyographic activity
Rita Q. Fuentes-Aguilar, Alejandro Garcia-Gonzalez, in Biosignal Processing and Classification Using Computational Learning and Intelligence, 2022
19.3.1.2 Signal smoothing
The EMG signal is smoothed using a moving average filter. A time window is defined to average a fixed amount of data. If this technique is used on a rectified signal, as is done in this case, the method is called the average rectified value (ARV). The information obtained from this step is an estimate of the amplitude behavior. Alternatives are the use of the RMS value, a low-pass filter at 6 Hz (2nd-order or higher), or the use of the Hilbert algorithm.
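A minimal sketch of the ARV step is given below; the sampling rate, window length, and surrogate EMG signal are assumptions, not values from the chapter.

fs  = 1000;                                              % sampling rate [Hz] (assumed)
emg = randn(1, 5*fs) .* (1 + sin(2*pi*(1:5*fs)/fs));     % surrogate EMG with a varying envelope
win = round(0.100 * fs);                                 % 100 ms averaging window (assumed)
arv = filter(ones(1, win)/win, 1, abs(emg));             % rectify, then moving average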
Root mean square value The root mean square (RMS) value of the EMG signal was the first characteristic to consider. The RMS value of a vector X can be calculated as follows:

(19.1) $X_{RMS} = \sqrt{\dfrac{1}{N}\sum_{i=1}^{N} x_i^{2}}$

where N is the number of elements in the vector and $x_i$ is the $i$th element of said vector. The same equation can be expressed more compactly in the following manner:
(19.2) $X_{RMS} = \sqrt{\dfrac{X^{T} X}{N}}$
The second characteristic calculated from each pedaling stage was the maximum windowed RMS value, computed over non-overlapping windows. The EMG signals were root mean squared with a 25 ms (or 25 element) window, as proposed in [19].
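A sketch of the windowed RMS feature follows; only the 25 ms (25 sample) non-overlapping window comes from the text, while the sampling rate and surrogate signal are assumptions.

fs   = 1000;                                  % 1 kHz sampling, so 25 ms = 25 samples (assumed)
emg  = randn(1, 4000);                        % surrogate EMG segment
win  = 25;
nWin = floor(length(emg) / win);
seg  = reshape(emg(1:nWin*win), win, nWin);   % one column per 25 ms window
rmsW = sqrt(mean(seg.^2, 1));                 % RMS of each window, per Eq. (19.1)
maxRMS = max(rmsW)                            % maximum windowed RMS value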
URL: https://www.sciencedirect.com/science/article/pii/B9780128201251000324
Digital Filters
James D. Broesch, in Digital Signal Processing, 2009
Instant Summary
In this chapter we have worked with both Finite Impulse Response (FIR) filters and Infinite Impulse Response (IIR) filters.
The FIR filters are essentially sophisticated versions of the simple moving average filter. An FIR is designed by specifying the transfer function H(ω). The function H(ω) is then converted to a sequence using the IDFT. This sequence, h(n), then becomes the coefficients of the filter. The FIR is then realized by convolving the input with h(n).
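A minimal sketch of this recipe is shown below; the cutoff, the filter length, and the added Hamming window are illustrative choices rather than the chapter's design.

N  = 64;                                 % number of frequency samples (assumed)
fc = 0.2;                                % cutoff as a fraction of the sampling rate (assumed)
k  = 0:N-1;
f  = k / N;                              % DFT bin frequencies
H  = double(min(f, 1-f) <= fc);          % ideal low-pass H(w), symmetric so h(n) is real

h = real(ifft(H));                       % impulse response from the IDFT
h = circshift(h, [0, N/2]);              % center the response (linear phase)
h = h .* (0.54 - 0.46*cos(2*pi*(0:N-1)/(N-1)));  % Hamming window to reduce ripple (a practical extra, not in the text)

x = randn(1, 500);                       % test input (assumed)
y = conv(x, h, 'same');                  % realize the FIR by convolving the input with h(n)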
The FIR filter has a number of significant advantages. It is unconditionally stable, easily designed, and easily implemented. It is possible to design an FIR filter with a linear phase delay. The one major disadvantage of the FIR is that it can require a large number of computations to implement.
IIR filters are more complex and much more difficult to understand intuitively than FIR filters. We worked through a design of an IIR filter using the approach of placing poles and zeroes appropriately around the z-plane. From the pole/zero graph we then generated the z-transform in factored form and expanded it into a standard polynomial form. From there, we put the z-transform in the standard form of the definition and could find the coefficients of the IIR by simple inspection. The high potential performance of the IIR was noted, but we also pointed out the risks of using the IIR.
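The chapter's worked IIR design is not reproduced here, but the pole/zero placement idea can be sketched with a simple notch filter (all placements below are illustrative):

f0 = 0.1;  r = 0.95;                     % notch frequency (fraction of fs) and pole radius (assumed)
zz = exp( 1j*2*pi*f0*[1 -1]);            % zero pair on the unit circle (kills f0)
pp = r * exp(1j*2*pi*f0*[1 -1]);         % pole pair just inside the unit circle (sharpens the notch)

b = real(poly(zz));                      % numerator coefficients from the zeros
a = real(poly(pp));                      % denominator coefficients from the poles

x = randn(1, 1000);
y = filter(b, a, x);                     % apply the IIR filter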
URL: https://www.sciencedirect.com/science/article/pii/B9780750689762000067
Linear System Analysis
John Semmlow, in Signals and Systems for Bioengineers (Second Edition), 2012
8.3 Two-Dimensional Filtering—Images
FIR filters can be extended to two dimensions and applied to images. A wide range of useful image filters can be designed using MATLAB's Image Processing Toolbox, but again a taste of image filtering can be had using standard MATLAB.
Two-dimensional filters have two-dimensional impulse responses. Usually these filters consist of square matrices with an odd number of rows and columns. Implementation of a two-dimensional filter can be achieved using two-dimensional convolution. The equation for two-dimensional convolution is a straightforward extension of the one-dimensional discrete convolution equation (Equation 7.3):
(8.18) $y(m, n) = \displaystyle\sum_{k_1=1}^{K_1}\sum_{k_2=1}^{K_2} x(k_1, k_2)\, b(m - k_1 + 1,\, n - k_2 + 1)$
where K 1 and K 2 are the dimensions in pixels of the image, b(k 1,k 2) are the filter coefficients, x(k 1,k 2) is the original image, and y(m,n) is the filtered image. In two-dimensional convolution, the impulse function, now a matrix, moves across the image as illustrated in Figure 8.20. The filtered image is obtained from the center pixel (black dot in Figure 8.20) where the value of this pixel is the two-dimensional summation of the product of the impulse response matrix (a 3×3 matrix in Figure 8.20) and the original image. As the impulse response matrix slides horizontally and vertically across the original image, the new filtered image is constructed.
Two-dimensional convolution is implemented in MATLAB using:

y=conv2(x,b,'options'); % Two-dimensional convolution

where y is the output image, x is the input image, b is the impulse response matrix, and the options are the same as for one-dimensional convolution, including the option 'same'.
The next example applies a two-dimensional moving average filter to a noise-filled image of red blood cells.
It would be nice to know the frequency characteristics of the moving average filter used in Example 8.8. To determine the frequency characteristics of a two-dimensional filter we take the Fourier transform of the impulse response, but now we use a two-dimensional version of the Fourier transform. The two-dimensional discrete Fourier transform is defined as:
Example 8.8
Load the image of blood cells found as variable cells in file blood_cell_noisy.mat. This image contains noise, black and white speckles known appropriately as salt and pepper noise. Filter the noise with a 5×5 moving average filter. Display the original and filtered images side by side.
Solution: Load the image and display using pcolor with the proper shading option as in previous examples. The matrix that generates a 5×5 moving average is:
(8.19) $b = \dfrac{1}{25}\begin{bmatrix} 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \end{bmatrix}$

After generating this matrix, apply it to the image using conv2 with the 'same' option and display.
% Ex 8_8 Application of moving average filter to blood cell image with
% "salt and pepper" noise.
%
load blood_cell_noisy;       % Load blood cell image
subplot(1,2,1);
pcolor(cells);               % Display image
shading interp;              % Same as previous examples
colormap(bone);
axis('square');              % Adjust image shape
%
b=ones(5,5)/25;              % Define moving average impulse response matrix
y=conv2(cells,b,'same');     % Filter image using convolution
subplot(1,2,2);
pcolor(y);                   % Display image as above
shading interp;
colormap(bone);
axis('square');
Results: The original and filtered images are shown in Figure 8.21. The noise in the original image is substantially reduced by the moving average filter.
(8.20) $B(p, q) = \displaystyle\sum_{m=0}^{M-1}\sum_{n=0}^{N-1} b(m, n)\, e^{-j 2\pi\left(\frac{p m}{M} + \frac{q n}{N}\right)}$
where the impulse response function, b(m,n), is an M×N matrix. Of course the function can be any matrix, so this equation can also be used to obtain the discrete Fourier transform of an image. The two-dimensional Fourier transform is implemented in MATLAB using:
X=fft2(x,nrows,ncols); % Two-dimensional Fourier transform

where x is the original matrix, X is the output, and nrows and ncols are optional arguments that are used to pad out the number of rows and columns in the input matrix. The output matrix is now in spatial frequency, cycles/distance. When displaying the two-dimensional Fourier transform, it is common to shift the zero frequency position to the center of the display and show the spectrum on either side. This improves visualization of the spectrum and is accomplished with routine fftshift:

Xshift=fftshift(X); % Shift zero to center and reflect
where X is the original Fourier transform and Xshift is the shifted version. The spectrum can then be plotted using any of MATLAB's three-dimensional plotting routines such as mesh or surf. The next example provides an example of the use of these routines to determine the spatial frequency characteristics of the moving average filter.
Example 8.9
Determine and plot the two-dimensional spectrum of the 5×5 moving average filter used in Example 8.8.
Solution: Generate the impulse response as in the last example, take the absolute value of the two-dimensional Fourier transform, shift, and then display using mesh.
%EX 8.9 Spectrum of a 5×5 moving average filter
% Construct moving average filter
b=ones(5,5)/25;         % Filter impulse response matrix
B=abs(fft2(b,124,124)); % Determine 2D magnitude Fourier transform
B=fftshift(B);          % Shift before plotting
mesh(B);                % Plot using mesh
Results: The plot generated by this program is shown in Figure 8.22. The spectrum is one of a two-dimensional low-pass filter that reduces higher spatial frequencies equally in both dimensions. The salt-and-pepper noise found in the image of blood cells has high spatial frequencies since it involves rapid changes in intensity values within 1 or 2 pixels. Since this low-pass filter significantly attenuates these frequencies, it is effective at reducing this type of noise. It is not possible to determine the spatial frequency scale of the horizontal axes without knowing image size, the equivalent of signal period in 2D data. Assuming the blood cell image is printed as 5 cm on a side, then the fundamental spatial frequency is f1 = 1/D = 1/5 = 0.2 cycles/cm and the two axes should be multiplied by this scale factor.
Other more complicated filters can be designed, but most require support from software packages such as the MATLAB Image Processing Toolbox. An exception is the filter that does spatial differentiation. Use of the two-point central difference algorithm with a skip factor of 1 leads to an impulse response of b=[1 0 −1]. A two-dimensional equivalent might look like:
(8.21) $b = \begin{bmatrix} 1 & 0 & -1 \\ 2 & 0 & -2 \\ 1 & 0 & -1 \end{bmatrix}$
The corner values are half the main horizontal value, because they are on the edges of the filter array and should logically be less. This filter is known as the Sobel operator and provides an estimate of the spatial derivative in the horizontal direction. The filter matrix can be transposed to operate in the vertical direction. The next example demonstrates the application of the Sobel operator.
Example 8.10
Load the MR image of the brain in variable brain of file Brain.mat. Apply the Sobel filter to this image to generate a new image that enhances changes that go from dark to light (going left to right). Flip the Sobel operator left to right, then apply it again to the original image to generate another image that enhances change going in the opposite direction. Finally generate a fourth image that is white if either of the Sobel filtered images is above a threshold, and black otherwise. Adjust the threshold to highlight the vertical boundaries in the original image.
Solution: Load the image, generate the Sobel impulse response and apply it as is, and flipped, to the original image using conv2. Use a double for-loop to check the value of each pixel in the two images and set the pixel in the new image to 1 if either pixel is above the threshold and to 0 otherwise. Adjust the threshold empirically to highlight the vertical boundaries.
% Example 8.10 Apply the Sobel operator to the image of the brain
%
load Brain;                            % Load image
%
b=[1 0 -1; 2 0 -2; 1 0 -1];            % Define Sobel filter
brain1=conv2(brain,b,'same');          % Apply Sobel filter
brain2=conv2(brain,fliplr(b),'same');  % Apply Sobel filter flipped
........display brain, brain1, and brain2 images as in previous examples........
%
[M,N]=size(brain);                     % Find size of image
thresh=.4;                             % Threshold. Determined empirically.
brain3=zeros(M,N);
for m=1:M
   for n=1:N
      if brain1(m,n) > thresh || brain2(m,n) > thresh  % Test each pixel in each image
         brain3(m,n)=1;                % Set new image pixel to white
      else
         brain3(m,n)=0;                % else set to black
      end
   end
end
........display image brain3 and title........
Results: The images produced in this example are shown in Figure 8.23. The Sobel filtered images are largely gray since, except for the edges of the brain, the image intensities do not change abruptly. However, slightly lighter areas can be seen where more abrupt changes occur. If these two images are thresholded at the appropriate level, an image delineating the vertical boundaries is seen. This is a rough example of edge detection, which is often an important step in separating out regions of interest in biomedical images. Identifying regions or tissue of interest in a biomedical image is known as image segmentation. Other examples of image filtering are given in Problems 11 through 14.
URL: https://www.sciencedirect.com/science/article/pii/B9780123849823000080
Digital Signal Processing
Kevin M. Lynch, ... Matthew L. Elwin, in Embedded Computing in C with the PIC32 Microcontroller, 2016
22.3.1 Moving Average Filter
Suppose we have a sensor signal x(n) that has been corrupted by high-frequency noise (Figure 22.9). We would like to find the low-frequency signal underneath the noise.
The simplest filter to try is a moving average filter (MAF). A moving average filter calculates the output z(n) as a running average of the input signal x(n),

(22.6) $z(n) = \dfrac{1}{P+1}\sum_{j=0}^{P} x(n-j)$
i.e., the FIR filter coefficients are b_0 = b_1 = ⋯ = b_P = 1/(P + 1). The output z(n) is a smoothed and delayed version of x(n). The more samples P + 1 we average over, the smoother and more delayed the output. The delay occurs because the output z(n) is a function of only the current and previous inputs x(n − j), 0 ≤ j ≤ P (see Figure 22.9).
To find the frequency response of a third-order, four-sample MAF, we test it on some sinusoidal inputs at different frequencies (Figure 22.10). We find that the phase ϕ of the (reconstructed) output sinusoid relative to the input sinusoid, and the ratio G of their amplitudes, depend on the frequency. For the four test frequencies in Figure 22.10, we get the following table:
Frequency | Gain G | Phase ϕ
---|---|---
0.25 f_N | 0.65 | −67.5°
0.5 f_N | 0 | NA
0.67 f_N | 0.25 | 0°
f_N | 0 | NA
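The table entries can be reproduced by evaluating the four-sample MAF's frequency response directly; the following sketch is a check on the numbers above rather than part of the original text.

b  = ones(1, 4) / 4;                     % b0 = b1 = b2 = b3 = 1/4
fN = 1;                                  % work in units of the Nyquist frequency
f  = [0.25 0.5 0.67 1] * fN;
H  = zeros(size(f));
for k = 0:3
    H = H + b(k+1) * exp(-1j*pi*k*f/fN); % sum of b_k * e^{-j*omega*k}, omega = pi*f/fN
end
G   = abs(H)                             % gains:   0.65   0   ~0.25   0
phi = angle(H) * 180/pi                  % phases: -67.5°  -   ~0°     -  (meaningless where G = 0)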
Testing the response at many different frequencies, we can plot the frequency response in Figure 22.11. Two things to note about the gain plot:
- Gains are usually plotted on a log scale. This allows representation of a much wider range of gains.
- Gains are often expressed in decibels, which are related to gains by the relationship G_dB = 20 log10(G).
Examining Figure 22.11 shows that low frequencies are passed with a gain of G = 1 and no phase shift. The gain drops monotonically as the frequency increases, until it reaches G = 0 (−∞ dB) at the input frequency f = 0.5 f_N. (The plot truncates this dip to a finite depth.) The gain then begins to rise again, before falling once more to G = 0 at f = f_N. The MAF behaves somewhat like a low-pass filter, but not a very good one; high-frequency signals can get through with gains of 0.25 or more. Still, it works reasonably well as a signal smoother for input frequencies below 0.5 f_N.
Given a set of filter coefficients b = [b0 b1 b2 …], we can plot the frequency response in MATLAB using
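a frequency-response command such as the following (the book's exact listing is not reproduced in this excerpt, and freqz requires the Signal Processing Toolbox):

b = ones(1, 4) / 4;   % e.g., the four-sample MAF coefficients from above
freqz(b, 1);          % plots gain (dB) and phase versus normalized frequency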
Causal vs. acausal filters
A filter is called causal if its output is the result of only current and past inputs, i.e., past inputs "cause" the current output. Causal filters are the only option for real-time implementation. If we are post-processing data, however, we can choose an acausal version of the filter, where the outputs at time n are a function of past as well as future inputs. Such acausal filters can eliminate the delay associated with only using past inputs to calculate the current value. For example, a five-sample MAF which calculates the average of the past two inputs, the current input, and the next two inputs is acausal.
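A short sketch of this acausal option (the signal and window length are assumed), using MATLAB's centered convolution:

x = cumsum(randn(1, 500));               % slowly varying test signal (assumed)
b = ones(1, 5) / 5;
zCausal  = filter(b, 1, x);              % causal five-sample MAF: output lags the input
zAcausal = conv(x, b, 'same');           % centered window (two past, current, two future): no delay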
Zero padding
When a filter is first initialized, there are no past inputs. In this case we can assume the nonexistent past inputs were all zero. The output transient caused by this assumption will end at the (P + 1)th input.
URL: https://www.sciencedirect.com/science/article/pii/B9780124201651000226
AI and retinal image analysis at Baidu
Yehui Yang, ... Le Van La, in Computational Retinal Image Analysis, 2019
2.1.1 Focus and clarity assessment
The input fundus image is first converted to greyscale. 3 × 3 and 5 × 5 moving average filters are then applied to generate two filtered images. This step is important because a low-pass filter will affect a "focused" retinal image more than a "blurred" one.
To quantify the difference between the filtered image and the original image, Sobel operators are applied to the aforementioned images (i.e., the greyscale version of the original image, the 3 × 3 filtered image, and the 5 × 5 filtered image). The Sobel filter is a gradient operator that captures focus information [2]. For each image, four Sobel operators (as shown in Fig. 5) are used and the resulting four gradient maps are added to produce a single map. The mean of the resulting feature map describes the focus information of the corresponding image. At this point, three numbers (i.e., the sum of the Sobel feature maps of nonfiltered, 3 × 3 filtered, and 5 × 5 filtered) are generated for each fundus image. The difference between these three numbers is used to ascertain whether the image is in focus.
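A rough sketch of this pipeline is given below; the particular Sobel kernel variants, the use of absolute gradient responses, and the random stand-in image are assumptions rather than details of the Baidu system.

img = double(rand(512, 512));                     % stand-in for the greyscale fundus image
ma3 = conv2(img, ones(3)/9,  'same');             % 3 x 3 moving average
ma5 = conv2(img, ones(5)/25, 'same');             % 5 x 5 moving average

sx  = [1 0 -1; 2 0 -2; 1 0 -1];                   % horizontal Sobel
sy  = sx';                                        % vertical Sobel
sd1 = [0 1 2; -1 0 1; -2 -1 0];                   % diagonal Sobel (assumed variant)
sd2 = rot90(sd1);                                 % the other diagonal
kernels = {sx, sy, sd1, sd2};

focusScore = @(im) mean(mean( ...                 % mean of the summed gradient maps
    abs(conv2(im, kernels{1}, 'same')) + abs(conv2(im, kernels{2}, 'same')) + ...
    abs(conv2(im, kernels{3}, 'same')) + abs(conv2(im, kernels{4}, 'same'))));

scores = [focusScore(img), focusScore(ma3), focusScore(ma5)]  % nonfiltered, 3x3, 5x5
% A well-focused image loses much more gradient energy after low-pass
% filtering than a blurred one, so the drop across these three numbers
% indicates focus.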
URL: https://www.sciencedirect.com/science/article/pii/B9780081028162000204
Source: https://www.sciencedirect.com/topics/engineering/moving-average-filter