How can I use fast FFT-based convolution to implement an LPF if the fast convolution requires an LPF?

南旧 2021-02-06 13:06

I'm an experienced software engineer with some minor college DSP knowledge. I'm working on a smartphone application to process signal data, such as from the microphone (sample

5 Answers
  • 2021-02-06 13:26

    FFT-based convolution is an optimization for when you need a low-pass filter with a cutoff frequency below that of the low-pass filter required for anti-aliasing before sampling, i.e. two low-pass filters: one in hardware, one in software. This is commonly done when the two low-pass filters together can achieve a given filter quality (passband flatness, etc.) better or more cheaply than a single hardware filter before the sampler, or when the sampler itself introduces noise (mostly) in the spectrum above the desired signal.
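
    A minimal sketch of what that software low-pass stage can look like in Python (NumPy/SciPy); the sample rate, cutoff, and tap count here are illustrative assumptions, not values from the question:

        import numpy as np
        from scipy import signal

        fs = 44100          # sample rate in Hz (assumed)
        cutoff = 4000.0     # desired software LPF cutoff in Hz (assumed)
        numtaps = 257       # FIR filter length (assumed)

        # Design an FIR low-pass filter (windowed sinc).
        h = signal.firwin(numtaps, cutoff, fs=fs)

        # x: the already-sampled (and hardware anti-aliased) input signal.
        x = np.random.randn(fs)  # stand-in for one second of microphone data

        # Fast convolution: FFT-based, equivalent to np.convolve(x, h) but much
        # faster for long filters. No extra anti-alias filter is needed here,
        # because x is already a discrete signal.
        y = signal.fftconvolve(x, h, mode="same")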

  • 2021-02-06 13:32

    The low-pass filter must be placed before the ADC, i.e. it must be an analog circuit of some kind. You cannot remove the aliases from the sampled signal, because you do not know which part of the sampled signal is due to aliasing; that is the whole point of why aliasing is a problem.
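
    To see why the aliased content cannot be separated out afterwards, here is a small numerical illustration (the tone frequencies and sample rate are made up for the example): a 5 kHz tone and a 39.1 kHz tone sampled at 44.1 kHz produce the same sample values, so nothing after the sampler can tell them apart.

        import numpy as np

        fs = 44100                      # sample rate in Hz (assumed for the example)
        n = np.arange(1024)             # sample indices
        t = n / fs

        low  = np.cos(2 * np.pi * 5000.0 * t)         # 5 kHz tone
        high = np.cos(2 * np.pi * (fs - 5000.0) * t)  # 39.1 kHz tone, above Nyquist

        # After sampling, the two are indistinguishable: the 39.1 kHz tone has
        # aliased exactly onto 5 kHz. No digital filter can undo this.
        print(np.max(np.abs(low - high)))   # prints a value near machine precision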

  • 2021-02-06 13:34

    No contradiction. X has high frequencies. Y = X·F no longer has high frequencies, because F multiplies them out (which is exactly what filtering means in the frequency domain).
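
    As a toy sketch of that multiplication (using an idealized brick-wall F purely for illustration; a real filter would have a smoother response):

        import numpy as np

        fs = 8000                                  # assumed sample rate in Hz
        x = np.random.randn(fs)                    # some signal with energy at all frequencies

        X = np.fft.rfft(x)                         # X: spectrum of x
        freqs = np.fft.rfftfreq(len(x), d=1 / fs)

        F = (freqs <= 1000.0).astype(float)        # F: pass below 1 kHz, zero above
        Y = X * F                                  # Y = X * F: high frequencies multiplied out
        y = np.fft.irfft(Y, n=len(x))              # back to the time domain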

  • 2021-02-06 13:50

    To put it simply: calculating FFT(x) does not introduce aliasing.

    Aliasing is introduced every time a signal is sampled. I think the root of your confusion is that there are two sampling processes for the audio signal: once to take continuous sound and make it a 44.1 kHz signal, and then again in the downsampling step you want to add.

    Say there was a spurious tone at 30 kHz (for instance): it must be rejected by the smartphone's hardware. Once you have those 44.1 kHz samples, you're stuck with whatever alias products got through the sampler. You cannot undo aliasing after sampling (this is not strictly true, but it's true for baseband signals, which is what you are dealing with). You should go ahead and assume that the phone designers got this right, and you won't have to worry about alias products from signal content higher than ~20 kHz.

    Which brings us to the second sampling step. You are quite correct that you need to apply another anti-alias filter before you downsample. Any signal content below 20 kHz but above half your new (downsampled) sample rate will alias into the output unless you attenuate it first. The key is that you are calculating FFT(x) BEFORE downsampling, then applying the filter, then downsampling; see the sketch at the end of this answer. This is what allows you to get alias-protected outputs.

    Most likely the smartphone has a delta-sigma ADC, which uses a relatively mellow analog anti-alias filter, either 1- or 2-pole, samples at an extremely high rate (64 × 44.1 kHz or higher), and then applies digital filters in its downsampling process. MEMS accelerometers similarly have intrinsic anti-alias protection. If you want to test this, hook a sine-wave source up to an electrodynamic shaker (or a beefy subwoofer cone) and shake your phone at a few kHz: you should see no output on the accelerometer signal. Then drive a tweeter at 30 kHz and see if the mic shows anything.
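
    A sketch of that filter-then-decimate order of operations, with assumed rates (44.1 kHz in, decimate by 4) and an assumed FIR design:

        import numpy as np
        from scipy import signal

        fs_in = 44100                  # incoming sample rate in Hz (assumed)
        decim = 4                      # downsample factor (assumed)
        fs_out = fs_in / decim         # 11025 Hz

        # Anti-alias filter for the SECOND sampling step: cut off below the new Nyquist.
        h = signal.firwin(255, 0.45 * fs_out, fs=fs_in)

        x = np.random.randn(fs_in)     # stand-in for one second of 44.1 kHz samples

        # Filter first (here via fast convolution), THEN throw away samples.
        y_filtered = signal.fftconvolve(x, h, mode="same")
        y_down = y_filtered[::decim]   # alias-protected output at 11.025 kHz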

  • 2021-02-06 13:51

    Microphones on smartphones always have analog low-pass filters prior to sampling. If aliasing has already occurred in a signal, it is in general impossible to remove. For this reason, every microphone's A/D converter has low-pass filtering implemented in the analog domain, before discretization even occurs. Unless you yourself are downsampling or resampling the signal in some way, you should not worry about aliasing. Fast convolution and time-domain discrete circular convolution are mathematically equivalent, so there is no reason for one to introduce aliasing if the other does not.
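
    A quick numerical check of that equivalence (the array lengths are arbitrary): zero-pad both sequences to at least len(x) + len(h) - 1 before taking the FFTs, and the circular convolution computed in the frequency domain matches direct linear convolution to machine precision.

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.standard_normal(1000)     # arbitrary signal
        h = rng.standard_normal(101)      # arbitrary FIR impulse response

        # Direct (time-domain) linear convolution.
        y_direct = np.convolve(x, h)

        # Fast convolution: circular convolution via FFT, with enough zero-padding
        # that the circular wrap-around never overlaps the useful output.
        nfft = len(x) + len(h) - 1
        y_fft = np.fft.irfft(np.fft.rfft(x, nfft) * np.fft.rfft(h, nfft), nfft)

        print(np.max(np.abs(y_direct - y_fft)))   # the two agree to machine precision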
