Result from processing audio signal with Goertzel algorithm

Submitted by こ雲淡風輕ζ on 2020-01-01 06:00:08

Question


I made a little signal-processing app. It processes an audio signal (Morse code) at a certain frequency with the Goertzel algorithm. The application saves a temporary file to the filesystem and, after recording is finished, starts to detect signals. Now I have a result with a bunch of magnitudes.

I don't really know what to read from those magnitudes. How can I decode the Morse code from them? How should I read them? I tried to find references, but nowhere is it explained what the result is and how to interpret it.

EDIT:

My Morse code application is made with Delphi and uses the Windows Beep function to send signals at a certain frequency. I'm using 1200 Hz for the signals. The pauses between signals and words, and the Morse beeps themselves, are timed as Wikipedia describes. Everything is accurate.

Goertzel.java:

 public class Goertzel {

        private float samplingRate;
        private float targetFrequency;
        private int n;

        private double coeff, Q1, Q2;
        private double sine, cosine;

        public Goertzel(float samplingRate, float targetFrequency, int inN) {
            this.samplingRate = samplingRate;
            this.targetFrequency = targetFrequency;
            n = inN;

            sine = Math.sin(2 * Math.PI * (targetFrequency / samplingRate));
            cosine = Math.cos(2 * Math.PI * (targetFrequency / samplingRate));
            coeff = 2 * cosine;
        }

        public void resetGoertzel() {
            Q1 = 0;
            Q2 = 0;
        }

        public void initGoertzel() {
            int k;
            float floatN;
            double omega;

            floatN = (float) n;
            k = (int) (0.5 + ((floatN * targetFrequency) / samplingRate));
            omega = (2.0 * Math.PI * k) / floatN;
            sine = Math.sin(omega);
            cosine = Math.cos(omega);
            coeff = 2.0 * cosine;

            resetGoertzel();
        }

        /** Advance the second-order Goertzel recurrence by one input sample. */
        public void processSample(double sample) {
            double Q0;

            Q0 = coeff * Q1 - Q2 + sample;
            Q2 = Q1;
            Q1 = Q0;
        }

        public double[] getRealImag(double[] parts) {
            parts[0] = (Q1 - Q2 * cosine);
            parts[1] = (Q2 * sine);

            return parts;
        }

        /** Squared magnitude at the target bin; call after processing a block of n samples. */
        public double getMagnitudeSquared() {
            return (Q1 * Q1 + Q2 * Q2 - Q1 * Q2 * coeff);
        }
    }
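The filter class above can be sanity-checked against a synthetic tone. This standalone sketch inlines the same recurrence (the block size of 205 samples and the test frequencies are arbitrary choices for the demo, not values from the question): a 1200 Hz tone sampled at 8000 Hz should produce a large magnitude at the 1200 Hz bin, while a 600 Hz tone should not.

```java
// Run the Goertzel recurrence over one block of a synthetic sine tone.
// The magnitude is large when the tone falls in the target bin, small otherwise.
public class GoertzelDemo {
    public static double magnitudeFor(double toneHz, double targetHz) {
        int n = 205;                       // block size (arbitrary for this demo)
        double sampleRate = 8000.0;
        int k = (int) (0.5 + n * targetHz / sampleRate);  // nearest DFT bin
        double omega = 2.0 * Math.PI * k / n;
        double coeff = 2.0 * Math.cos(omega);
        double q1 = 0, q2 = 0;
        for (int i = 0; i < n; i++) {
            double sample = Math.sin(2 * Math.PI * toneHz * i / sampleRate);
            double q0 = coeff * q1 - q2 + sample;  // same recurrence as processSample()
            q2 = q1;
            q1 = q0;
        }
        return Math.sqrt(q1 * q1 + q2 * q2 - q1 * q2 * coeff);
    }

    public static void main(String[] args) {
        System.out.println("1200 Hz tone at the 1200 Hz bin: " + magnitudeFor(1200, 1200));
        System.out.println(" 600 Hz tone at the 1200 Hz bin: " + magnitudeFor(600, 1200));
    }
}
```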

SoundCompareActivity.java:

import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import android.app.Activity;
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import android.os.Bundle;
import android.util.Log;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;

    public class SoundCompareActivity extends Activity {

        private static final int RECORDER_SAMPLE_RATE = 8000; // must be at least twice
                                                              // the tone frequency
                                                              // (Nyquist)
        private static final int RECORDER_CHANNELS = AudioFormat.CHANNEL_IN_MONO; // CHANNEL_CONFIGURATION_MONO is deprecated
        private static final int RECORDER_AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT;

        private AudioRecord recorder = null;
        private int bufferSize = 0;
        private Thread recordingThread = null;
        private boolean isRecording = false;

        private Button startRecBtn;
        private Button stopRecBtn;

        /** Called when the activity is first created. */
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);

            startRecBtn = (Button) findViewById(R.id.button1);
            stopRecBtn = (Button) findViewById(R.id.button2);

            startRecBtn.setEnabled(true);
            stopRecBtn.setEnabled(false);

            bufferSize = AudioRecord.getMinBufferSize(RECORDER_SAMPLE_RATE,
                    RECORDER_CHANNELS, RECORDER_AUDIO_ENCODING);

            startRecBtn.setOnClickListener(new OnClickListener() {

                @Override
                public void onClick(View v) {
                    Log.d("SOUNDCOMPARE", "Start Recording");

                    startRecBtn.setEnabled(false);
                    stopRecBtn.setEnabled(true);
                    stopRecBtn.requestFocus();

                    startRecording();
                }
            });

            stopRecBtn.setOnClickListener(new OnClickListener() {

                @Override
                public void onClick(View v) {
                    Log.d("SOUNDCOMPARE", "Stop recording");

                    startRecBtn.setEnabled(true);
                    stopRecBtn.setEnabled(false);
                    startRecBtn.requestFocus();

                    stopRecording();
                }
            });
        }

        private void startRecording() {
            recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                    RECORDER_SAMPLE_RATE, RECORDER_CHANNELS,
                    RECORDER_AUDIO_ENCODING, bufferSize);

            recorder.startRecording();

            isRecording = true;

            recordingThread = new Thread(new Runnable() {

                @Override
                public void run() {
                    writeAudioDataToTempFile();
                }
            }, "AudioRecorder Thread");

            recordingThread.start();
        }

        private String getTempFilename() {
            File file = new File(getFilesDir(), "tempaudio");

            if (!file.exists()) {
                file.mkdirs();
            }

            File tempFile = new File(getFilesDir(), "signal.raw");

            if (tempFile.exists())
                tempFile.delete();

            return (file.getAbsolutePath() + "/" + "signal.raw");
        }

        private void writeAudioDataToTempFile() {
            byte data[] = new byte[bufferSize];
            String filename = getTempFilename();
            FileOutputStream os = null;

            try {
                os = new FileOutputStream(filename);
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            }

            int read = 0;

            if (os != null) {
                while (isRecording) {
                    read = recorder.read(data, 0, bufferSize);

                    if (read > 0) { // negative values are error codes
                        try {
                            os.write(data, 0, read); // write only the bytes actually read
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                    }
                }

                try {
                    os.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }

        private void deleteTempFile() {
            File file = new File(getTempFilename());

            file.delete();
        }

        private void stopRecording() {
            if (recorder != null) {
                isRecording = false;

                recorder.stop();
                recorder.release();

                recorder = null;
                recordingThread = null;
            }

            new MorseDecoder().execute(new File(getTempFilename()));    
        }
    }

MorseDecoder.java:

import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.ShortBuffer;
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.os.AsyncTask;
import android.util.Log;

public class MorseDecoder extends AsyncTask<File, Void, Void> {
    private FileInputStream is = null;

    @Override
    protected Void doInBackground(File... files) {
        int index;
        //double magnitudeSquared; 
        double magnitude; 

        int bufferSize = AudioRecord.getMinBufferSize(8000,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);

        Goertzel g = new Goertzel(8000, 1200, bufferSize);
        g.initGoertzel();

        for (int i = 0; i < files.length; i++) {
            byte[] data = new byte[bufferSize];

            try {
                is = new FileInputStream(files[i]);

                while(is.read(data) != -1) {
                    ShortBuffer sbuf = ByteBuffer.wrap(data).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer();
                    short[] audioShorts = new short[sbuf.capacity()];
                    sbuf.get(audioShorts);

                    float[] audioFloats = new float[audioShorts.length];

                    for (int j = 0; j < audioShorts.length; j++) {
                        audioFloats[j] = ((float)audioShorts[j]) / 0x8000;
                    }

                    for (index = 0; index < audioFloats.length; index++) { 
                        g.processSample(data[index]); 
                    }

                    magnitude = Math.sqrt(g.getMagnitudeSquared());


                    Log.d("SoundCompare", "Relative magnitude = " + magnitude);

                    g.resetGoertzel();
                }

                is.close();

            } catch (FileNotFoundException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

        return null;
    }
}

EDIT2:

I noticed some bugs in the sample processing: I was feeding the raw bytes (data[index]) into the filter instead of the converted float samples, and the magnitude was only computed once per buffer. Changed the code in the while loop:

while (is.read(data) != -1) {
    ShortBuffer sbuf = ByteBuffer.wrap(data).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer();
    short[] audioShorts = new short[sbuf.capacity()];
    sbuf.get(audioShorts);

    float[] audioFloats = new float[audioShorts.length];

    for (int j = 0; j < audioShorts.length; j++) {
        audioFloats[j] = ((float) audioShorts[j]) / 0x8000; // scale 16-bit PCM to [-1, 1)
    }

    for (index = 0; index < audioFloats.length; index++) {
        g.processSample(audioFloats[index]); // the converted float samples, not raw bytes

        magnitude = Math.sqrt(g.getMagnitudeSquared());
        Log.d("SoundCompare", "Relative magnitude = " + magnitude);
    }

    g.resetGoertzel();
}

Regards, evilone


Answer 1:


The output of your Goertzel filter will increase when a tone within its passband is present, and then decrease when the tone is removed. In order to detect pulses of a tone, e.g. morse code, you need some kind of threshold detector on the output of the filter which will just give a boolean value for "tone present" / "tone not present" on a sample-by-sample basis. Try plotting the output values and it should be obvious once you see it in graphical form.
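As a minimal sketch of such a threshold detector (the threshold value here is hypothetical; in practice you would calibrate it against the noise floor measured during silence):

```java
// Turn a stream of Goertzel block magnitudes into tone-present booleans.
// The threshold is a hypothetical value; calibrate it against your noise floor,
// e.g. as a multiple of the median magnitude during silence.
public class ToneDetector {
    private final double threshold;

    public ToneDetector(double threshold) {
        this.threshold = threshold;
    }

    // One boolean per magnitude block: true while the tone is present.
    public boolean[] detect(double[] magnitudes) {
        boolean[] tonePresent = new boolean[magnitudes.length];
        for (int i = 0; i < magnitudes.length; i++) {
            tonePresent[i] = magnitudes[i] > threshold;
        }
        return tonePresent;
    }

    public static void main(String[] args) {
        double[] mags = {0.01, 0.02, 0.9, 0.85, 0.88, 0.03, 0.02};
        boolean[] tone = new ToneDetector(0.5).detect(mags);
        System.out.println(java.util.Arrays.toString(tone));
        // prints [false, false, true, true, true, false, false]
    }
}
```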




Answer 2:


Plot the signal magnitudes on a graph versus time (some CW decoding apps for the PC do this in real-time). Now figure out what the graph for each Morse code symbol should look like. Then study some pattern matching algorithms. If there is enough noise present, you may want to try some statistical pattern matching methods.

Here's the Wikipedia link for proper Morse Code timing.
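As a rough sketch of that pattern-matching step, assuming standard Morse timing (a dash is three dot units, a letter gap is three units, a word gap is seven) and that each boolean flag covers one fixed-length analysis block; the blocksPerDot parameter is a hypothetical calibration value you would estimate from the recording:

```java
// Decode a boolean tone-present sequence (one flag per fixed-length block)
// into dots, dashes, and gaps using standard Morse timing ratios.
public class MorseTiming {
    public static String decode(boolean[] tone, int blocksPerDot) {
        StringBuilder out = new StringBuilder();
        int i = 0;
        while (i < tone.length) {
            boolean level = tone[i];
            int run = 0;
            while (i < tone.length && tone[i] == level) { run++; i++; }  // run length
            double units = (double) run / blocksPerDot;  // duration in dot units
            if (level) {
                out.append(units < 2.0 ? '.' : '-');  // dot ~1 unit, dash ~3 units
            } else if (units >= 2.0 && units < 5.0) {
                out.append(' ');                      // letter gap ~3 units
            } else if (units >= 5.0) {
                out.append(" / ");                    // word gap ~7 units
            }                                         // ~1 unit: gap inside a letter
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // "A" = dot dash: one unit on, one off, three on
        boolean[] a = {true, false, true, true, true};
        System.out.println(decode(a, 1)); // prints .-
    }
}
```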



Source: https://stackoverflow.com/questions/7339763/result-from-processing-audio-signal-with-goertzel-algorithm
