Java: How to get current frequency of audio input?


Question


I want to analyse the current frequency of the microphone input to synchronize my LEDs with the music playing. I know how to capture the sound from the microphone, but I don't know about FFT, which I often saw mentioned while searching for a way to get the frequency.

I want to test whether the current volume of a certain frequency is bigger than a set value. The code should look something like this:

 if (frequency > value) {
   LEDs on
 } else {
   LEDs off
 }

My problem is how to implement FFT in Java. For better understanding, here is a link to a YouTube video that shows really well what I'm trying to achieve.

The whole code:

import javax.sound.sampled.*;
import java.io.IOException;

public class Music {

    static AudioFormat format;
    static DataLine.Info info;

    public static void input() {
        format = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED, 44100, 16, 2, 4, 44100, false);

        try {
            info = new DataLine.Info(TargetDataLine.class, format);
            final TargetDataLine targetLine = (TargetDataLine) AudioSystem.getLine(info);
            targetLine.open();

            final AudioInputStream audioStream = new AudioInputStream(targetLine);

            final byte[] buf = new byte[256];

            Thread targetThread = new Thread() {
                public void run() {
                    targetLine.start();
                    try {
                        audioStream.read(buf);
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            };

            targetThread.start();
        } catch (LineUnavailableException e) {
            e.printStackTrace();
        }
    }
}

Edit: I tried using the JavaFX AudioSpectrumListener of the MediaPlayer, which works really well as long as I use a .mp3 file. The problem is that I have to use a byte array in which I store the microphone input. I asked another question about this problem here.


Answer 1:


Using the JavaFFT class from here, you can do something like this:

import javax.sound.sampled.*;

public class AudioLED {

    private static final float NORMALIZATION_FACTOR_2_BYTES = Short.MAX_VALUE + 1.0f;

    public static void main(final String[] args) throws Exception {
        // use only 1 channel, to make this easier
        final AudioFormat format = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED, 44100, 16, 1, 2, 44100, false);
        final DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
        final TargetDataLine targetLine = (TargetDataLine) AudioSystem.getLine(info);
        targetLine.open();
        targetLine.start();
        final AudioInputStream audioStream = new AudioInputStream(targetLine);

        final byte[] buf = new byte[256]; // <--- increase this for higher frequency resolution
        final int numberOfSamples = buf.length / format.getFrameSize();
        final JavaFFT fft = new JavaFFT(numberOfSamples);
        while (true) {
            // in real impl, don't just ignore how many bytes you read
            audioStream.read(buf);
            // the stream represents each sample as two bytes -> decode
            final float[] samples = decode(buf, format);
            final float[][] transformed = fft.transform(samples);
            final float[] realPart = transformed[0];
            final float[] imaginaryPart = transformed[1];
            final double[] magnitudes = toMagnitudes(realPart, imaginaryPart);

            // do something with magnitudes...
        }
    }

    private static float[] decode(final byte[] buf, final AudioFormat format) {
        final float[] fbuf = new float[buf.length / format.getFrameSize()];
        for (int pos = 0; pos < buf.length; pos += format.getFrameSize()) {
            final int sample = format.isBigEndian()
                    ? byteToIntBigEndian(buf, pos, format.getFrameSize())
                    : byteToIntLittleEndian(buf, pos, format.getFrameSize());
            // normalize to [-1, 1) (not strictly necessary, but makes things easier)
            fbuf[pos / format.getFrameSize()] = sample / NORMALIZATION_FACTOR_2_BYTES;
        }
        return fbuf;
    }

    private static double[] toMagnitudes(final float[] realPart, final float[] imaginaryPart) {
        final double[] powers = new double[realPart.length / 2];
        for (int i = 0; i < powers.length; i++) {
            powers[i] = Math.sqrt(realPart[i] * realPart[i] + imaginaryPart[i] * imaginaryPart[i]);
        }
        return powers;
    }

    private static int byteToIntLittleEndian(final byte[] buf, final int offset, final int bytesPerSample) {
        int sample = 0;
        for (int byteIndex = 0; byteIndex < bytesPerSample; byteIndex++) {
            final int aByte = buf[offset + byteIndex] & 0xff;
            sample += aByte << 8 * (byteIndex);
        }
        return sample;
    }

    private static int byteToIntBigEndian(final byte[] buf, final int offset, final int bytesPerSample) {
        int sample = 0;
        for (int byteIndex = 0; byteIndex < bytesPerSample; byteIndex++) {
            final int aByte = buf[offset + byteIndex] & 0xff;
            sample += aByte << (8 * (bytesPerSample - byteIndex - 1));
        }
        return sample;
    }

}

What does the Fourier Transform do?

In very simple terms: While a PCM signal encodes audio in the time domain, a Fourier transformed signal encodes audio in the frequency domain. What does this mean?

In PCM each value encodes an amplitude. You can imagine this like the membrane of a speaker that swings back and forth with certain amplitudes. The position of the speaker membrane is sampled a certain number of times per second (the sampling rate). In your example the sampling rate is 44100 Hz, i.e. 44100 times per second. This is the typical rate for CD quality audio. For your purposes you probably don't need a rate this high.

To transform from the time domain to the frequency domain, you take a certain number of samples (let's say N=1024) and transform them using the fast Fourier transform (FFT). In primers about the Fourier transform you will see a lot of info about the continuous case, but what you need to pay attention to is the discrete case (the discrete Fourier transform, DFT), because we are dealing with digital signals, not analog signals.

So what happens when you transform 1024 samples using the DFT (via its fast implementation, the FFT)? Typically, the samples are real numbers, not complex numbers. But the output of the DFT is complex. This is why you usually get two output arrays from one input array: one array for the real part and one for the imaginary part. Together they form one array of complex numbers. This array represents the frequency spectrum of your input samples. The spectrum is complex because it has to encode two aspects: magnitude (amplitude) and phase.

Imagine a sine wave with amplitude 1. As you might remember from math way back, a sine wave crosses through the origin (0, 0), while a cosine wave cuts the y-axis at (0, 1). Apart from this shift both waves are identical in amplitude and shape. This shift is called phase. In your context we don't care about phase, only about amplitude/magnitude, but the complex numbers you get encode both. To convert one of those complex numbers (r, i) to a simple magnitude value (how loud a certain frequency is), you simply calculate m=sqrt(r*r+i*i). The outcome is always positive. A simple way to understand why this works is to imagine a Cartesian plane: treat (r, i) as a vector on that plane. By the Pythagorean theorem, the length of that vector from the origin is m=sqrt(r*r+i*i).

Now we have magnitudes. But how do they relate to frequencies? Each of the magnitude values corresponds to a certain (linearly spaced) frequency. The first thing to understand is that the output of the FFT is symmetric (mirrored at the midpoint). So of the 1024 complex numbers, only the first 512 are of interest to us. And which frequencies does that cover? Because of the Nyquist–Shannon sampling theorem a signal sampled with SR=44100 Hz cannot contain information about frequencies greater than F=SR/2=22050 Hz (you may realize that this is the upper boundary of human hearing, which is why it was chosen for CDs). So the first 512 complex values you get from the FFT for 1024 samples of a signal sampled at 44100 Hz cover the frequencies 0 Hz - 22050 Hz. Each so-called frequency bin covers 2F/N = SR/N = 22050/512 Hz = 43 Hz (bandwidth of bin).

So the bin for 11025 Hz is right at index 512/2 = 256, and its magnitude will be at m[256].
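The bin arithmetic above can be sketched like this (sample rate and FFT size are the values from the example; `targetFrequency` is just a hypothetical input):

```java
public class BinMath {
    public static void main(String[] args) {
        final float sampleRate = 44100f;
        final int fftSize = 1024;                    // N samples per transform
        final float binWidth = sampleRate / fftSize; // ~43 Hz per bin

        // index of the bin that contains a given frequency
        final float targetFrequency = 11025f;
        final int bin = Math.round(targetFrequency / binWidth);

        System.out.println(bin); // 256 -> look at magnitudes[256]
    }
}
```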

To put this to work in your application you need to understand one more thing: 1024 samples of a 44100 Hz signal cover a very short amount of time, i.e. 23 ms. In that short a time you will see sudden peaks. It's better to aggregate multiple of those 1024-sample windows into one value before thresholding. Alternatively you could use a longer DFT, e.g. N=1024*64; however, I advise against making the DFT very long, as it creates a large computational burden.
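One way to aggregate those 23 ms windows is a simple moving average over the last few magnitude values of the bin you care about, applied before comparing against the threshold. This is just a sketch; the window size of 4 is an arbitrary choice:

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Averages the last windowSize magnitude values for one frequency bin. */
public class MagnitudeSmoother {

    private final Deque<Double> window = new ArrayDeque<>();
    private final int windowSize;
    private double sum;

    public MagnitudeSmoother(final int windowSize) {
        this.windowSize = windowSize;
    }

    /** Add the newest magnitude and return the current average. */
    public double add(final double magnitude) {
        window.addLast(magnitude);
        sum += magnitude;
        if (window.size() > windowSize) {
            sum -= window.removeFirst();
        }
        return sum / window.size();
    }

    public static void main(final String[] args) {
        final MagnitudeSmoother smoother = new MagnitudeSmoother(4);
        smoother.add(0.25);
        smoother.add(0.25);
        smoother.add(0.25);
        // a sudden spike barely moves the average
        System.out.println(smoother.add(4.0)); // 1.1875 instead of 4.0
    }
}
```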




Answer 2:


I think hendrik has the basic plan, but I hear your pain about understanding the process of getting there!

I assume you are getting your byte array via a TargetDataLine and it is returning bytes. Converting the bytes to floats takes a bit of manipulation and depends on the AudioFormat. A typical format has 44100 frames per second, 16-bit encoding (two bytes form one data point) and stereo, which means 4 bytes make up a single frame consisting of a left and a right value.

Example code that shows how to read and handle the incoming stream of individual bytes can be found in the java audio tutorial Using Files and Format Converters. Scroll down to the first "code snippet" in the section "Reading Sound Files". The key point where you would convert the incoming data to floats occurs at the spot marked as follows:

// Here, do something useful with the audio data that's 
// now in the audioBytes array...

At this point you can take the two bytes (assuming 16-bit encoding) and append them into a single short, then scale the value to a normalized float (ranging from -1 to 1). There are several StackOverflow questions that show algorithms for doing this conversion.
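That two-bytes-to-float conversion can be sketched as follows, assuming 16-bit signed little-endian samples as in the formats above:

```java
public class SampleDecoder {

    /** Combine two little-endian bytes into a float in roughly [-1, 1). */
    public static float toFloat(final byte low, final byte high) {
        // low byte is unsigned, high byte carries the sign
        final short sample = (short) ((low & 0xff) | (high << 8));
        return sample / 32768f; // 32768 = Short.MAX_VALUE + 1
    }

    public static void main(final String[] args) {
        System.out.println(toFloat((byte) 0x00, (byte) 0x40)); // 0x4000 -> 0.5
        System.out.println(toFloat((byte) 0x00, (byte) 0x80)); // 0x8000 -> -1.0
    }
}
```

For big-endian data the byte order is simply swapped before combining.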

You may also have to adapt the sample code, since it reads from an AudioInputStream (as in the example) rather than a TargetDataLine, but if that poses a problem, there are also StackOverflow questions that can help with that.

For the FFTFactory recommended by hendrik, I suspect that using the transform method with just a float[] for input will suffice. But I haven't gotten into the details or tried running this myself yet. (It looks promising. I suspect a search might also uncover other FFT libraries with more complete documentation. I recall something being available perhaps from MIT. I'm probably only a couple steps ahead of you technically.)

In any event, at the point above where the conversion happens, you can add to the input array for transform() until it is full, and on that iteration call the transform() method.

Interpreting the output from the method might be best accomplished on a separate thread. I'm thinking, hand off the results of the FFT call, or hand off the transform() call itself via some sort of loose coupling. (Are you familiar with this term and multi-threaded coding?)
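One loose-coupling option (a sketch of my own, not part of hendrik's answer) is a bounded BlockingQueue: the capture/FFT thread offers each magnitude array, and a consumer thread takes them and drives the LEDs:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class FftHandoff {

    public static void main(final String[] args) throws InterruptedException {
        // bounded queue: the capture thread drops frames if the consumer lags
        final BlockingQueue<double[]> queue = new ArrayBlockingQueue<>(8);

        final Thread consumer = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    final double[] magnitudes = queue.take();
                    // threshold one bin here, e.g. to switch the LEDs
                    System.out.println(magnitudes[0] > 1.0 ? "LEDs on" : "LEDs off");
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        // in the real app these would come from the FFT loop
        queue.offer(new double[]{2.0});
        queue.offer(new double[]{0.5});

        Thread.sleep(100);
        consumer.interrupt();
    }
}
```

offer() never blocks the audio thread; it just returns false when the queue is full, which is usually acceptable for a visualization.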

Significant insights into how Java encodes sound and sound formats can be found in tutorials that directly precede the one linked above.

Another great resource, if you want to better understand how to interpret FFT results, can be found as a free download: "The Scientist and Engineer's Guide to Digital Signal Processing"



Source: https://stackoverflow.com/questions/53997426/java-how-to-get-current-frequency-of-audio-input
