fft

Frequency Analysis in Python

Submitted by 主宰稳场 on 2019-12-18 11:59:06
Question: I'm trying to use Python to retrieve the dominant frequencies of a live audio input. For the moment I am experimenting with the audio stream from my laptop's built-in microphone, but when testing the following code I am getting very poor results.

    # Read from Mic Input and find the freq's
    import pyaudio
    import numpy as np
    import bge
    import wave

    chunk = 2048
    # use a Blackman window
    window = np.blackman(chunk)

    # open stream
    FORMAT = pyaudio.paInt16
    CHANNELS = 1
    RATE = 1920
    p = pyaudio.PyAudio()
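
For reference, a minimal sketch of the peak-picking step on one captured chunk, assuming the samples are already in a NumPy array and the sample rate is known (the function and variable names are illustrative, not from the question's code):

    import numpy as np

    def dominant_frequency(samples, sample_rate):
        # Taper the chunk with a Blackman window to reduce spectral leakage
        windowed = samples * np.blackman(len(samples))
        # Real-input FFT; bins span 0 .. sample_rate / 2
        spectrum = np.abs(np.fft.rfft(windowed))
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
        # Skip the DC bin when picking the strongest peak
        peak = np.argmax(spectrum[1:]) + 1
        return freqs[peak]

Note that the bin-to-frequency mapping is only meaningful if the rate passed to the analysis matches the rate the stream was actually opened with; a rate as low as 1920 Hz would limit the analysis to frequencies below 960 Hz.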

FFT real/imaginary/abs parts interpretation

Submitted by 匆匆过客 on 2019-12-18 11:17:15
Question: I'm currently learning about the discrete Fourier transform and I'm playing with numpy to understand it better. I tried to plot a "sin x sin x sin" signal and obtained a clean FFT with 4 non-zero points. I naively told myself: "Well, if I plot a "sin + sin + sin + sin" signal with these amplitudes and frequencies, I should obtain the same "sin x sin x sin" signal, right?" Well... not exactly. (First is the "x" signal, second is the "+" signal.) Both share the same amplitudes/frequencies, but are not the
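
Matching amplitudes and frequencies is not enough to reproduce a signal; the phases must match too. A small numpy sketch of that point (the frequencies below are illustrative, not the question's values): a signal is only recovered from its FFT when both magnitude and phase are kept.

    import numpy as np

    t = np.linspace(0, 1, 1000, endpoint=False)
    product = np.sin(2*np.pi*3*t) * np.sin(2*np.pi*5*t) * np.sin(2*np.pi*7*t)

    F = np.fft.fft(product)
    mags, phases = np.abs(F), np.angle(F)

    # Rebuild from magnitude + phase vs. magnitude only (phases zeroed)
    full = np.fft.ifft(mags * np.exp(1j * phases)).real
    mag_only = np.fft.ifft(mags).real

    print(np.allclose(full, product))      # True: magnitude and phase recover the signal
    print(np.allclose(mag_only, product))  # False: magnitudes alone do not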

Implement Hann Window

Submitted by 旧街凉风 on 2019-12-18 10:46:45
Question: I take blocks of incoming data and pass them through fftw to get some spectral information. Everything seems to be working, however I think I'm getting some aliasing issues. I've been trying to work out how to implement a Hann window on my blocks of data. Google has failed me for examples. Any ideas or links I should be looking at?

    double dataIn[2048]
    > /* windowing here? */
    > FFT
    > double freqBins[2048]

Update: Thanks to Oli for pointing out that the issue I'm actually trying to fix is spectral
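
The window is just an element-wise multiply applied to each block before the transform. Since the question targets FFTW in C, the following is only an illustration of where that step sits, written as a numpy sketch (the block length and names are assumptions):

    import numpy as np

    N = 2048
    data_in = np.random.randn(N)      # stand-in for one incoming block

    # Hann window: 0.5 * (1 - cos(2*pi*n / (N-1))), equivalent to np.hanning(N)
    hann = 0.5 * (1.0 - np.cos(2.0 * np.pi * np.arange(N) / (N - 1)))

    spectrum = np.fft.rfft(data_in * hann)   # window first, then transform
    freq_bins = np.abs(spectrum)             # 1025 magnitude bins for a 2048-point real FFT

In C the same thing amounts to multiplying dataIn[i] by the window value for index i before handing the buffer to FFTW.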

Pitch recognition of musical notes on a smart phone

Submitted by 风格不统一 on 2019-12-18 09:56:27
Question: With limited resources such as slower CPUs, code size and RAM, how best to detect the pitch of a musical note, similar to what an electronic or software tuner would do? Should I use:

- Kiss FFT
- FFTW
- Discrete Wavelet Transform
- autocorrelation
- zero crossing analysis
- octave-spaced filters
- other?

In a nutshell, what I am trying to do is to recognize a single musical note, two octaves below middle C to two octaves above, played on any (reasonable) instrument. I'd like to be within 20% of the
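
Of the options listed, autocorrelation is a common low-cost choice for single-note pitch detection on constrained hardware. A minimal sketch of the idea in Python (the search bounds of roughly 65-1047 Hz correspond to two octaves below and above middle C; the function name and defaults are otherwise illustrative):

    import numpy as np

    def detect_pitch(frame, sample_rate, fmin=65.0, fmax=1047.0):
        # Autocorrelation of the frame with itself; the strongest peak at a
        # non-zero lag within the search range gives the pitch period
        frame = frame - np.mean(frame)
        corr = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
        lag_min = int(sample_rate / fmax)
        lag_max = int(sample_rate / fmin)
        lag = np.argmax(corr[lag_min:lag_max]) + lag_min
        return sample_rate / lag

The frame has to span at least one full period of the lowest pitch of interest (about 15 ms at 65 Hz), and on a slow CPU the quadratic np.correlate step would typically be replaced by an FFT-based autocorrelation.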

The result of fft in tensorflow is different from numpy

Submitted by 大憨熊 on 2019-12-18 06:48:18
Question: I want to use the FFT in tensorflow, but I found the result is different when using the FFT functions in numpy and tensorflow respectively, especially when the size of the input array is large.

    import tensorflow as tf
    import numpy as np

    aa = tf.lin_space(1.0, 10000.0, 10000)
    bb = tf.lin_space(1.0, 10000.0, 10000)
    dd = tf.concat([[aa],[bb]], axis = 0)
    c_input = tf.complex(dd[0,:], dd[1,:])
    Spec = tf.fft(c_input)
    sess = tf.Session()
    uuu = sess.run(Spec)
    print(uuu)

    aaa = np.linspace(1.0, 10000.0, 10000)
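
One plausible source of the discrepancy (an assumption, not something confirmed in the excerpt) is precision: tf.complex applied to float32 tensors yields complex64, and tf.fft operates on that single-precision data, whereas np.fft.fft on a float64 ramp works in double precision. A numpy-only sketch of how much error merely rounding such a large ramp to float32 introduces:

    import numpy as np

    n = 10000
    x = np.linspace(1.0, 10000.0, n)       # float64 ramp, as in the numpy path
    x32 = x.astype(np.float32)             # float32 ramp, as tf.lin_space would produce

    spec_f64 = np.fft.fft(x + 1j * x)
    spec_f32_input = np.fft.fft(x32 + 1j * x32)

    # The gap from input rounding alone is already visible for inputs this large
    print(np.max(np.abs(spec_f64 - spec_f32_input)))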

Analyzing seasonality of Google trend time series using FFT

Submitted by 心已入冬 on 2019-12-18 05:04:10
Question: I am trying to evaluate the amplitude spectrum of a Google Trends time series using a fast Fourier transform. If you look at the data for 'diet' in the data provided here, it shows a very strong seasonal pattern: I thought I could analyze this pattern using an FFT, which presumably should have a strong peak for a period of 1 year. However, when I apply an FFT like this (a_gtrend_ham being the time series multiplied with a Hamming window):

    import matplotlib.pyplot as plt
    import numpy as np
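
For a series like this, the useful step after the FFT is converting bin index to period. A minimal sketch assuming weekly samples, which is how multi-year Google Trends data is typically delivered (the stand-in series below is fabricated purely to make the example self-contained):

    import numpy as np

    weeks = np.arange(520)                                  # ~10 years of weekly samples
    a_gtrend_ham = np.hamming(len(weeks)) * np.sin(2 * np.pi * weeks / 52.0)

    spectrum = np.abs(np.fft.rfft(a_gtrend_ham))
    freqs = np.fft.rfftfreq(len(a_gtrend_ham), d=1.0)       # cycles per week

    peak = np.argmax(spectrum[1:]) + 1                      # skip the DC bin
    print("strongest period: %.1f weeks" % (1.0 / freqs[peak]))   # ~52 weeks, i.e. one year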

Low pass filter using FFT instead of convolution implementation

Submitted by 守給你的承諾、 on 2019-12-18 04:54:21
Question: Implementing a low-pass FIR filter, when should one use FFT and IFFT instead of time-domain convolution? The goal is to achieve the lowest CPU time required for real-time calculations. As far as I know, the FFT has about O(n log n) complexity, but convolution in the time domain has O(n²) complexity. To implement a low-pass filter in the frequency domain, one should take the FFT, multiply each value by the filter coefficients (translated into the frequency domain), and then take the IFFT. So, the
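
A minimal numpy sketch of the two paths for a single block, with an illustrative moving-average FIR standing in for a designed low-pass (the lengths and the filter itself are assumptions):

    import numpy as np

    x = np.random.randn(4096)             # input block
    h = np.ones(101) / 101.0              # crude moving-average low-pass FIR

    # Time-domain convolution: roughly O(n * m)
    y_direct = np.convolve(x, h)

    # Frequency domain: zero-pad both to the full linear-convolution length,
    # multiply the spectra, invert: roughly O(n log n)
    n = len(x) + len(h) - 1
    y_fft = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

    print(np.allclose(y_direct, y_fft))   # True, up to floating-point error

For continuous real-time streams the frequency-domain version is usually wrapped in overlap-add or overlap-save so that blocks can be processed as they arrive.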

How to filter FFT data (for audio visualisation)?

Submitted by 半城伤御伤魂 on 2019-12-17 21:52:58
Question: I was looking at this Web Audio API demo, part of this nice book. If you look at the demo, the FFT peaks fall smoothly. I'm trying to do the same with Processing in Java mode using the minim library. I've looked at how this is done with the Web Audio API in the doFFTAnalysis() method and tried to replicate this with minim. I also tried to port how abs() works with the complex type:

    // 26.2.7/3 abs(__z): Returns the magnitude of __z.
    template<typename _Tp>
    inline _Tp
    __complex_abs
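
The smooth fall-off in the Web Audio demo comes from blending each new magnitude frame into the previous one rather than drawing raw magnitudes (this is the effect of the analyser's smoothing constant). A sketch of that per-band exponential smoothing in Python; in Processing/minim the same few lines would run once per FFT frame (the smoothing value is illustrative):

    import numpy as np

    smoothing = 0.8          # closer to 1.0 = slower, smoother decay
    smoothed = None

    def smooth_spectrum(magnitudes):
        # Blend the current FFT magnitudes with the previous smoothed frame,
        # so the drawn spectrum decays gradually instead of flickering
        global smoothed
        current = np.asarray(magnitudes, dtype=float)
        if smoothed is None:
            smoothed = current
        else:
            smoothed = smoothing * smoothed + (1.0 - smoothing) * current
        return smoothed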

Translation from Complex-FFT to Finite-Field-FFT

Submitted by 百般思念 on 2019-12-17 20:58:51
Question: Good afternoon! I am trying to develop an NTT algorithm based on the naive recursive FFT implementation I already have. Consider the following code (coefficients' length, let it be m, is an exact power of two):

    /// <summary>
    /// Calculates the result of the recursive Number Theoretic Transform.
    /// </summary>
    /// <param name="coefficients"></param>
    /// <returns></returns>
    private static BigInteger[] Recursive_NTT_Skeleton(
        IList<BigInteger> coefficients,
        IList<BigInteger> rootsOfUnity,
        int
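
For comparison, the number-theoretic version keeps the complex FFT's butterfly structure and only swaps e^{-2πi/n} for a primitive n-th root of unity modulo a prime. A compact recursive sketch in Python (the modulus 998244353 and generator 3 are standard NTT choices used here for illustration, not values from the question's code):

    def ntt(a, root, mod):
        # Recursive radix-2 NTT: len(a) must be a power of two and `root`
        # a primitive len(a)-th root of unity modulo `mod`.
        n = len(a)
        if n == 1:
            return list(a)
        even = ntt(a[0::2], root * root % mod, mod)
        odd = ntt(a[1::2], root * root % mod, mod)
        out = [0] * n
        w = 1
        for k in range(n // 2):
            t = w * odd[k] % mod
            out[k] = (even[k] + t) % mod
            out[k + n // 2] = (even[k] - t) % mod
            w = w * root % mod
        return out

    MOD = 998244353                        # prime with 2^23 dividing MOD - 1
    root8 = pow(3, (MOD - 1) // 8, MOD)    # primitive 8th root of unity mod MOD
    print(ntt([1, 2, 3, 4, 0, 0, 0, 0], root8, MOD))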

How to get the fundamental frequency using Harmonic Product Spectrum?

Submitted by 对着背影说爱祢 on 2019-12-17 20:40:11
Question: I'm trying to get the pitch from the microphone input. First I decompose the signal from the time domain to the frequency domain through an FFT, applying a Hamming window to the signal before performing the FFT. Then I get the complex results of the FFT. Then I pass the results to the Harmonic Product Spectrum, where the results get downsampled, the downsampled peaks are multiplied together, and the outcome is a value as a complex number. Then what should I do to get the fundamental frequency? public float[]
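
For reference, a minimal Harmonic Product Spectrum sketch in Python (the question's code is Java; the number of harmonics and the function name here are illustrative assumptions). The key point is that the HPS is built from magnitudes rather than raw complex values, and the fundamental comes from the index of the largest product converted back to Hz via sample_rate / frame length:

    import numpy as np

    def fundamental_hps(samples, sample_rate, num_harmonics=5):
        # Magnitude spectrum of the Hamming-windowed frame
        windowed = samples * np.hamming(len(samples))
        spectrum = np.abs(np.fft.rfft(windowed))

        # Multiply the spectrum by downsampled copies of itself so that a
        # true fundamental is reinforced by its harmonics
        hps = spectrum.copy()
        for h in range(2, num_harmonics + 1):
            decimated = spectrum[::h]
            hps[:len(decimated)] *= decimated

        # Only bins covered by every downsampled copy are comparable
        limit = len(spectrum) // num_harmonics
        peak = np.argmax(hps[1:limit]) + 1
        return peak * sample_rate / len(samples)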