Question
I am new when it comes to audio signal processing.
Currently I have connected device to my PC that sends me audio data from mic/playback track. I have already created host application with usage of Steinberg ASIO SDK 2.3 which connects to the device and in repeating callback returns raw data. Signal is 24bit and frequency can be chosen whatever I like, let's say 44100 hZ, 2pan's, single channel. I have converted this signal also to double <-1.0, 1.0> because I am doing some signal processing on it.
What I would like to do now is add recording functionality to my host. For example, on a button click the incoming data is continuously written out to a WAV file, and when I click another button it stops and saves.
I have already read about WAV files, file formats and bitstream formats (RIFF), and have a rough idea of what a WAV file looks like. I have also checked a lot of forum threads, Stack Overflow threads and CodeProject posts, and everywhere I find something related to the topic, but I can't figure out how to make an ongoing recording in real time. Most of the code I have found converts a data array to WAV after all the modifications are done. I would like to do the conversion on the fly, appending to/expanding the WAV file until I tell it to stop.
For example, could I somehow modify this?
    #include <fstream>

    // Generic helper: write a value of type T to the stream as raw bytes.
    template <typename T>
    void write(std::ofstream& stream, const T& t) {
        stream.write((const char*)&t, sizeof(T));
    }

    // Audio format tag: 1 = integer PCM for any integer sample type...
    template <typename T>
    void writeFormat(std::ofstream& stream) {
        write<short>(stream, 1);
    }

    // ...and 3 = IEEE float when the sample type is float.
    template <>
    void writeFormat<float>(std::ofstream& stream) {
        write<short>(stream, 3);
    }

    template <typename SampleType>
    void writeWAVData(
        char const* outFile,
        SampleType* buf,
        size_t bufSize,        // size of buf in bytes
        int sampleRate,
        short channels)
    {
        std::ofstream stream(outFile, std::ios::binary);
        stream.write("RIFF", 4);
        write<int>(stream, 36 + (int)bufSize);                                 // RIFF chunk size
        stream.write("WAVE", 4);
        stream.write("fmt ", 4);
        write<int>(stream, 16);                                                // fmt chunk size
        writeFormat<SampleType>(stream);                                       // Format
        write<short>(stream, channels);                                        // Channels
        write<int>(stream, sampleRate);                                        // Sample Rate
        write<int>(stream, (int)(sampleRate * channels * sizeof(SampleType))); // Byte rate
        write<short>(stream, (short)(channels * sizeof(SampleType)));          // Frame size
        write<short>(stream, (short)(8 * sizeof(SampleType)));                 // Bits per sample
        stream.write("data", 4);
        write<int>(stream, (int)bufSize);                                      // data chunk size
        stream.write((const char*)buf, bufSize);
    }
And in the callback, something like:

    writeWAVData("mySound.wav", mySampleBuffer, mySampleBufferSize, 44100, 1);
I am grateful for any hint / link / suggestion / form of help.
Answer 1:
The difference between your use case and the code you've seen online is that in your use case you don't know in advance how long the file will end up being, since you don't know when the user will press the stop button.
The way to handle this is to start by writing out the WAV header as usual, but don't worry for now about the values you write for the file-size-specific fields (i.e. the field after "RIFF" and the field after "data"). You can leave those fields set to zero for now.
Then write out the audio samples as you receive them, i.e. appending them to the end of the file.
Finally, after the user has pressed stop and you are about to close the file, you'll need to go back and overwrite those two header-fields with the correct values. You can do this now because at this point you know how many bytes of audio data you wrote into the file. Once you've done that, the file should be well-formed and usable. You can use e.g. ofstream::seekp(fieldOffset, ios_base::beg) to seek back to the appropriate offsets from the top of the file for the fields you need to modify.
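Putting those three steps together, a minimal sketch could look like the following (the StreamingWavWriter, appendSamples and finish names are made up for illustration; it assumes 16-bit integer PCM samples and a little-endian host):

    #include <cstdint>
    #include <fstream>

    class StreamingWavWriter {
    public:
        StreamingWavWriter(const char* path, int sampleRate, short channels)
            : stream_(path, std::ios::binary), dataBytes_(0)
        {
            // Write the full 44-byte header up front, with the two size fields zeroed.
            stream_.write("RIFF", 4);
            writeLE<uint32_t>(0);                              // RIFF chunk size (patched in finish)
            stream_.write("WAVE", 4);
            stream_.write("fmt ", 4);
            writeLE<uint32_t>(16);                             // fmt chunk size
            writeLE<uint16_t>(1);                              // PCM
            writeLE<uint16_t>(channels);
            writeLE<uint32_t>(sampleRate);
            writeLE<uint32_t>(sampleRate * channels * 2);      // byte rate
            writeLE<uint16_t>(channels * 2);                   // frame size
            writeLE<uint16_t>(16);                             // bits per sample
            stream_.write("data", 4);
            writeLE<uint32_t>(0);                              // data chunk size (patched in finish)
        }

        // Call this from the audio callback with each new block of samples.
        void appendSamples(const int16_t* samples, size_t count) {
            stream_.write(reinterpret_cast<const char*>(samples), count * sizeof(int16_t));
            dataBytes_ += static_cast<uint32_t>(count * sizeof(int16_t));
        }

        // Call this when the user presses stop: patch the size fields and close the file.
        void finish() {
            stream_.seekp(4, std::ios_base::beg);              // RIFF chunk size lives at offset 4
            writeLE<uint32_t>(36 + dataBytes_);
            stream_.seekp(40, std::ios_base::beg);             // data chunk size lives at offset 40
            writeLE<uint32_t>(dataBytes_);
            stream_.close();
        }

    private:
        template <typename T>
        void writeLE(T value) {                                // relies on a little-endian host
            stream_.write(reinterpret_cast<const char*>(&value), sizeof(T));
        }

        std::ofstream stream_;
        uint32_t dataBytes_;
    };

Usage would be: create the writer when recording starts, call appendSamples() from the callback (after converting your doubles back to 16-bit integers, or adapt the format fields for float samples), and call finish() when the stop button is pressed. One practical note: doing file I/O directly inside the ASIO callback can block the audio thread, so a common pattern is to push the samples into a queue in the callback and do the actual writing from a separate thread.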
Source: https://stackoverflow.com/questions/28229815/record-convert-audio-data-to-wav-in-real-time