I'm building a fairly simple Android app (SDK revision 14: ICS) that allows users to pick two audio clips at a time (all are RIFF/WAV format, little-endian, signed 16-bit PCM encoding) and combine them in various ways to create new sounds. The most basic method I'm using for this combination is as follows:
//...sound samples are read into memory as raw byte arrays elsewhere
//...offset is currently set to 45 so as to skip the 44-byte header of basic
//RIFF/WAV files
...
//Actual combination method
public byte[] makeChimeraAll(int offset){
    for(int i = offset; i < bigData.length; i++){
        if(i < littleData.length){
            bigData[i] = (byte) (bigData[i] + littleData[i]);
        }
        else{
            //leave bigData alone
        }
    }
    return bigData;
}
The returned byte array can then be played via the AudioTrack class like so:
....
//set the shared bigData to the bigData in the AudioTransmutation object
hMain.setBigData(hMain.getAudioTransmutation().getBigData());

//a SeekBar allows the user to adjust the freq, ranging from 22050 Hz to 44100 Hz
hMain.getAudioProc().playWavFromByteArray(hMain.getBigData(),
        22050 + (22050 * freqSeekSB.getProgress() / 100), 1024);
....
public void playWavFromByteArray(byte[] audio, int sampleRate, int bufferSize){
    int minBufferSize = AudioTrack.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);

    AudioTrack at = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
            AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT,
            minBufferSize, AudioTrack.MODE_STREAM);

    at.play();
    at.write(audio, 0, audio.length);
    at.stop();
    at.release();

    for(int i = 0; i < audio.length; i++){
        Log.d("me", "the byte value at audio index " + i + " is " + audio[i]);
    }
}
The result of a combination and playback using the code above is close to what I want (both samples are still discernible in the resulting hybridized sound) but there are also a lot of cracks, pops, and other noise.
So, three questions: First, am I using AudioTrack correctly? Second, where is endianness accounted for in the AudioTrack configuration? The sounds play fine by themselves, and they sound almost like what I would expect when combined, so the little-endian nature of the RIFF/WAV format seems to be handled somewhere, but I'm not sure where. Finally, what byte value range should I expect to see for signed 16-bit PCM encoding? I would expect to see values ranging from −32768 to 32767 in logcat from the Log.d(...) invocation above, but instead the results tend to be within the range of -100 to 100 (with some outliers beyond that). Could combined byte values beyond the 16-bit range account for the noise, perhaps?
thanks, CCJ
UPDATE: major thanks to Bjorne Roche and William the Coderer! I now read the audio data into short[] structures, the endianness of the DataInputStream is handled using the EndianInputStream from William (http://stackoverflow.com/questions/8028094/java-datainputstream-replacement-for-endianness), and the combination method has been changed to this:
//Audio Chimera methods!
public short[] makeChimeraAll(int offset){
    //bigData and littleData are each short arrays, populated elsewhere
    int intBucket = 0;
    for(int i = offset; i < bigData.length; i++){
        if(i < littleData.length){
            intBucket = bigData[i] + littleData[i];
            if(intBucket > SIGNED_SHORT_MAX){
                intBucket = SIGNED_SHORT_MAX;
            }
            else if (intBucket < SIGNED_SHORT_MIN){
                intBucket = SIGNED_SHORT_MIN;
            }
            bigData[i] = (short) intBucket;
        }
        else{
            //leave bigData alone
        }
    }
    return bigData;
}
The hybrid audio output quality with these improvements is awesome!
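For completeness, the playback side can also stay in shorts end to end; here is a minimal sketch (playWavFromShortArray is a hypothetical counterpart to the playWavFromByteArray above, not the original code, and CHANNEL_OUT_MONO is the non-deprecated equivalent of CHANNEL_CONFIGURATION_MONO):

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

// Play already-decoded 16-bit mono samples. AudioTrack.write(short[], ...) takes
// shorts directly, so no byte-order bookkeeping is needed at this point.
public void playWavFromShortArray(short[] audio, int sampleRate) {
    int minBufferSize = AudioTrack.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);

    AudioTrack at = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
            minBufferSize, AudioTrack.MODE_STREAM);

    at.play();
    at.write(audio, 0, audio.length);
    at.stop();
    at.release();
}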
I am not familiar with Android audio, so I can't answer all your questions, but I can tell you what the fundamental problem is: adding audio data byte-by-byte won't work. Since it sort of works, and judging from your code and the fact that it's the most common format, I'm going to assume you have 16-bit PCM data. Yet everywhere you are dealing with bytes. Bytes are not appropriate for processing audio (unless the audio happens to be 8-bit).
A byte covers approximately ±128 (−128 to 127, to be exact). You say, "I would expect to see values ranging from −32768 to 32767 in logcat from the Log.d(...) invocation above, but instead the results tend to be within the range of -100 to 100 (with some outliers beyond that)." Well, how could you possibly get to that range when you are printing values from a byte array? The correct datatype for 16-bit signed data is short, not byte. If you were printing short values, you'd see the range you expected.
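To make this concrete, here is a small worked example (the sample value 200 is made up for illustration) of how summing raw bytes drops the carry between the low and high byte of a little-endian 16-bit sample:

// Two little-endian 16-bit samples, both equal to 200 (bytes 0xC8, 0x00).
byte[] a = { (byte) 0xC8, 0x00 };
byte[] b = { (byte) 0xC8, 0x00 };

// Byte-wise "mix": the carry out of the low byte never reaches the high byte.
byte lo = (byte) (a[0] + b[0]);                      // (byte)(-56 + -56) = -112 (0x90)
byte hi = (byte) (a[1] + b[1]);                      // 0
short byteWise = (short) ((lo & 0xFF) | (hi << 8));  // 144 -- wrong

// Sample-wise mix: reassemble each sample as a short first, then add.
short sa = (short) ((a[0] & 0xFF) | (a[1] << 8));    // 200
short sb = (short) ((b[0] & 0xFF) | (b[1] << 8));    // 200
short sampleWise = (short) (sa + sb);                // 400 -- correct

The byte-wise result is 144 instead of 400, and a similar error happens on every sample frame where a carry or borrow is lost, which is heard as noise.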
You must convert your bytes to shorts and sum the shorts. That will take care of much of the miscellaneous noise you are hearing. Since you are reading straight off the file, though, why bother converting? Why not read it off the file as a short in the first place, using something like this: http://docs.oracle.com/javase/1.4.2/docs/api/java/io/DataInputStream.html#readShort()
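One caveat if you stay with the standard library: DataInputStream.readShort() assembles its two bytes in big-endian order, while WAV sample data is little-endian, which is why the question's update uses an endian-aware wrapper. A minimal sketch of a byte-swapping read (readLittleEndianShort is an illustrative name, not an existing API):

import java.io.DataInputStream;
import java.io.IOException;

// DataInputStream.readShort() returns its two bytes in big-endian order;
// WAV sample data is little-endian, so swap the bytes after reading.
static short readLittleEndianShort(DataInputStream in) throws IOException {
    return Short.reverseBytes(in.readShort());
}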
The next issue is that you must deal with out-of-range values rather than letting them "wrap around". The simplest solution is to do the summing as integers, "clip" into the short range, and then store the clipped output. That will get rid of your clicks and pops.
In pseudo-code, the entire process will look something like this:
file1 = Open file 1
file2 = Open file 2
output = Open output for writing

numSampleFrames1 = file1.readHeader()
numSampleFrames2 = file2.readHeader()
numSampleFrames = min( numSampleFrames1, numSampleFrames2 )
output.createHeader( numSampleFrames )

for( int i = 0; i < numSampleFrames * channels; ++i ) {
    //read data from file 1
    int a = file1.readShort();
    //read data from file 2, and add it to the data we read from file 1
    a += file2.readShort();
    //clip into range
    if( a > Short.MAX_VALUE )
        a = Short.MAX_VALUE;
    if( a < Short.MIN_VALUE )
        a = Short.MIN_VALUE;
    //write it to the output
    output.writeShort( (short) a );
}
You will get a little distortion from the "clipping" step, but there's no simple way around that, and clipping is MUCH better than wrap-around. (That said, unless your tracks are extremely "hot" and heavy in the low frequencies, the distortion shouldn't be too noticeable. If it is a problem, you can do other things: multiply a by 0.5, for example, and skip the clipping, but then your output will be much quieter, which, on a phone, is probably not what you want.)
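A minimal sketch of that quieter alternative, reusing the bigData/littleData short arrays from the question's update (offset is assumed to be the same header-skipping index):

// Average the two tracks instead of summing them: (a + b) / 2 can never leave
// the 16-bit range, so no clipping is needed, at the cost of overall volume.
for (int i = offset; i < bigData.length && i < littleData.length; i++) {
    bigData[i] = (short) ((bigData[i] + littleData[i]) / 2);
}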
Source: https://stackoverflow.com/questions/11000933/using-androids-audiotrack-to-combine-bytes-of-sound-samples-produces-noise