Question
I'm an experienced software engineer with some minor college DSP knowledge. I'm working on a smartphone application to process signal data, such as from the microphone (sampled at 44100 Hz) and the accelerometer (sampled at 32-50 Hz). My applications would be, for example, pitch detectors and so forth.
I want to implement a low-pass filter (LPF) on the phone to remove aliased frequencies, particularly for the accelerometer, which has a low sampling rate. However, I am finding a contradiction when trying to apply the fast FFT-based convolution method. Any help would be appreciated.
Here is my line of reasoning:
I am reading a signal, and I want to use a LPF to do anti-aliasing (remove aliased frequencies).
To implement the LPF on my smartphone, I choose to apply an FIR filter (namely, a windowed sinc function) to the time-domain signal. Let x[n] be my signal and f[n] be the coefficients of my filter kernel. So I want to perform convolution between x[n] and f[n], where x[n] is of length N (typically 512) and f[n] is of length M (typically 256).
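As an illustration of that windowed-sinc design step (a minimal NumPy sketch, not the asker's actual code; the function name and parameters are hypothetical):

```python
import numpy as np

def windowed_sinc_lpf(cutoff_hz, fs_hz, m=256):
    """Design an m-tap low-pass FIR kernel by windowing the ideal sinc.

    cutoff_hz: desired cutoff frequency
    fs_hz:     sampling rate of the signal to be filtered
    m:         filter length (number of taps)
    """
    fc = cutoff_hz / fs_hz                 # normalized cutoff (cycles/sample)
    n = np.arange(m) - (m - 1) / 2.0       # center the sinc on the kernel midpoint
    h = 2 * fc * np.sinc(2 * fc * n)       # ideal low-pass impulse response
    h *= np.hamming(m)                     # taper with a Hamming window
    return h / np.sum(h)                   # normalize for unity gain at DC

# e.g. a 10 Hz cutoff for a 50 Hz accelerometer stream
f = windowed_sinc_lpf(cutoff_hz=10.0, fs_hz=50.0, m=256)
```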
I implemented a simple 1D convolution on my smartphone (Android and iPhone). The algorithm is the typical nested loop version and runs in O(N M). It is running too slowly on the smartphone for N=512 and M=256.
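For reference, that typical nested-loop O(N·M) version looks like this (a hypothetical sketch, names assumed):

```python
import numpy as np

def direct_convolve(x, f):
    """Straightforward O(N*M) linear convolution; output length N + M - 1."""
    n_len, m_len = len(x), len(f)
    y = np.zeros(n_len + m_len - 1)
    for i in range(n_len):
        for j in range(m_len):
            y[i + j] += x[i] * f[j]   # accumulate each product into the output
    return y
```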
I then looked at the fast convolution algorithm that uses FFTs and runs in O(N log N). Specifically, the filtered signal is given by: filtered x[n] = IFFT(FFT(x) .* FFT(f)), where FFT is the fft, IFFT is the inverse FFT, and .* is element-by-element multiplication of two arrays.
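One practical detail that formula hides: multiplying FFTs computes *circular* convolution, so both arrays should be zero-padded to at least N + M - 1 points before transforming, or the tail of the result wraps around. A minimal NumPy sketch (hypothetical helper name):

```python
import numpy as np

def fft_convolve(x, f):
    """Linear convolution via FFT: zero-pad to >= N + M - 1 samples to
    avoid circular wrap-around, multiply spectra, inverse-transform."""
    n_out = len(x) + len(f) - 1
    n_fft = 1 << (n_out - 1).bit_length()   # next power of two, for speed
    X = np.fft.rfft(x, n_fft)               # rfft zero-pads to n_fft
    F = np.fft.rfft(f, n_fft)
    y = np.fft.irfft(X * F, n_fft)          # element-wise multiply, then IFFT
    return y[:n_out]                        # keep only the linear-convolution part
```

For long streams one would normally process block-by-block with overlap-add or overlap-save, but the padding idea is the same.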
However, I find a contradiction in that process: IFFT(FFT(x) .* FFT(f)). This requires that I take the FFT of x[n], but x[n] may have aliased frequencies. This is exactly my initial problem from step 1!
So, how can I resolve this contradiction? How can I use fast convolution to implement a LPF if the fast convolution internally requires a LPF?
NOTE: I've been told by some EE guys that some microphones have a hardware-based LPF built in, but I cannot be sure about my smartphone's microphone or its accelerometer.
Answer 1:
To put it simply: calculating FFT(x) does not introduce aliasing.
Aliasing is introduced every time a signal is sampled. I think the root of your confusion is that there are two sampling processes for the audio signal: once to take continuous sound and make it a 44.1 kHz signal, and then again in the downsampling step you want to add.
Say there was a spurious tone at 30 kHz (for instance): it must be rejected by the smartphone's hardware. Once you have those 44.1 kHz samples, you're stuck with whatever alias products got through the sampler. You cannot undo aliasing after sampling (this is not strictly true, but it's true for baseband signals, which is what you are dealing with). You should go ahead and assume that the phone designers got this right, and you won't have to worry about alias products from signal content higher than ~20 kHz.
Which brings us to the second sampling step. You are quite correct that you need to apply another anti-alias filter before you downsample. Any signal content below 20 kHz but above half your new (downsampled) sample rate will alias into the output unless you attenuate it first. The key is that you are calculating FFT(x) BEFORE downsampling, then applying the filter, then downsampling; that order of operations is what gives you alias-protected outputs.
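To make that order of operations concrete, here is a minimal decimation sketch in NumPy (hypothetical helper; it assumes a precomputed low-pass kernel f whose cutoff sits below half the output rate):

```python
import numpy as np

def decimate(x, f, factor):
    """Anti-alias, then downsample: convolve x with the low-pass kernel f
    (cutoff must be below fs / (2 * factor)), then keep every
    factor-th sample."""
    filtered = np.convolve(x, f, mode='same')  # LPF at the ORIGINAL rate
    return filtered[::factor]                  # only now discard samples
```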
Most likely the smartphone has a delta-sigma ADC, which uses a relatively mellow analog anti-alias filter, either 1 or 2 pole, samples at an extremely high rate (64 × 44.1 kHz or higher), and then applies digital filters in its downsampling process. MEMS accelerometers similarly have intrinsic anti-alias protection. If you want to test this, hook a sine-wave source up to an electrodynamic shaker (or a beefy subwoofer cone) and shake your phone at a few kHz: you should see no output on the accelerometer signal. Then drive a tweeter at 30 kHz and see whether the mic shows anything.
Answer 2:
Smartphone microphones always have analog low-pass filters in front of the sampler. Once aliasing has occurred in a signal, it is in general impossible to remove; for this reason, every microphone's A/D converter implements low-pass filtering in the analog domain, before discretization even occurs. Unless you yourself are downsampling or resampling the signal in some way, you should not worry about aliasing. Fast (FFT-based) convolution and time-domain discrete circular convolution are mathematically equivalent, so there is no reason for one to exhibit aliasing if the other does not.
Answer 3:
The low-pass filter must be placed before the ADC, i.e. it must be an analog circuit of some kind. You cannot remove the aliases from a sampled signal because you do not know which part of the signal is due to aliasing; that is the whole point of why aliasing is a problem.
Answer 4:
FFT-based convolution is an optimization for when you need a low-pass filter with a cutoff frequency below that of the anti-aliasing low-pass filter required before sampling -- i.e. two low-pass filters, one in hardware, one in software. This is commonly done when the two filters together can achieve a given filter quality (passband flatness, etc.) better or more cheaply than a single hardware filter before the sampler, or when the sampler itself introduces noise (mostly) in the spectrum above the desired signal.
Answer 5:
No contradiction. X has high frequencies; Y = X*F no longer has high frequencies, because F filters (i.e., multiplies) them out.
Source: https://stackoverflow.com/questions/8156662/how-can-i-use-fast-fft-based-convolution-to-implement-a-lpf-if-the-fast-convolut