Question
I would like to take a sound file, convert it into packets, and send it to another computer, so that the other computer can play the packets as they arrive.
I am using AVAudioPlayer to try to play these packets, but I couldn't find a proper way to serialize the data on peer1 so that peer2 can play it.
The scenario is: peer1 has an audio file, splits it into many small packets, puts them into NSData objects, and sends them to peer2. Peer2 receives the packets and plays them one by one, as they arrive.
Does anyone know how to do this, or whether it is even possible?
EDIT:
Here is some code to illustrate what I would like to achieve.
// This code runs on peer1, the one that sends the data
- (void)sendData
{
    int packetId = 0;
    NSString *soundFilePath = [[NSBundle mainBundle] pathForResource:@"myAudioFile" ofType:@"wav"];
    NSData *soundData = [[NSData alloc] initWithContentsOfFile:soundFilePath];
    NSMutableArray *arraySoundData = [[NSMutableArray alloc] init];

    // Splitting the audio into 2 pieces.
    // This is only an illustration; the idea is to split the data
    // into multiple pieces depending on the size of the file to be sent.
    NSRange soundRange;
    soundRange.length = [soundData length]/2;
    soundRange.location = 0;
    [arraySoundData addObject:[soundData subdataWithRange:soundRange]];
    soundRange.length = [soundData length]/2;
    soundRange.location = [soundData length]/2;
    [arraySoundData addObject:[soundData subdataWithRange:soundRange]];

    for (int i = 0; i < [arraySoundData count]; i++)
    {
        NSData *soundPacket = [arraySoundData objectAtIndex:i];
        if (soundPacket == nil)
        {
            NSLog(@"soundData is nil");
            return;
        }

        NSMutableData *message = [[NSMutableData alloc] init];
        NSKeyedArchiver *archiver = [[NSKeyedArchiver alloc] initForWritingWithMutableData:message];
        [archiver encodeInt:packetId++ forKey:PACKET_ID];
        [archiver encodeObject:soundPacket forKey:PACKET_SOUND_DATA];
        [archiver finishEncoding];

        NSError *error = nil;
        [connectionManager sendMessage:message error:&error];
        if (error) NSLog(@"send greeting failed: %@", [error localizedDescription]);

        [message release];
        [archiver release];
    }

    [soundData release];
    [arraySoundData release];
}
// This is the code on peer2 that would receive and play the piece of audio in each packet
- (void)receiveData:(NSData *)data
{
    NSKeyedUnarchiver *unarchiver = [[NSKeyedUnarchiver alloc] initForReadingWithData:data];
    if ([unarchiver containsValueForKey:PACKET_ID])
        NSLog(@"DECODED PACKET_ID: %i", [unarchiver decodeIntForKey:PACKET_ID]);

    if ([unarchiver containsValueForKey:PACKET_SOUND_DATA])
    {
        NSLog(@"DECODED sound");
        NSData *sound = (NSData *)[unarchiver decodeObjectForKey:PACKET_SOUND_DATA];
        if (sound == nil)
        {
            NSLog(@"sound is nil!");
        }
        else
        {
            NSLog(@"sound is not nil!");
            // alloc and init should not be split: -initWithData:error:
            // may return a different object (or nil) than +alloc did
            AVAudioPlayer *audioPlayer = [[AVAudioPlayer alloc] initWithData:sound error:nil];
            if (audioPlayer)
            {
                [audioPlayer prepareToPlay];
                [audioPlayer play];
            }
            else
            {
                NSLog(@"Player couldn't load data");
            }
        }
    }
    [unarchiver release];
}
So, here is what I am trying to achieve. What I really need to know is how to create the packets so that peer2 can play the audio.
It would be a kind of streaming. For now I am not worried about the order in which the packets are received or played; I only need the sound sliced up so that each piece, each slice, can be played without waiting for the whole file to arrive at peer2.
Thanks!
Answer 1:
It seems you are solving the wrong task, because AVAudioPlayer is only capable of playing a whole audio file. Instead, you should use Audio Queue Services from the AudioToolbox framework to play audio on a packet-by-packet basis. In fact, you need not divide the audio file into real sound packets; you can use any data block, as in your own example above, but then you should read the received data chunks using Audio File Services or Audio File Stream Services functions (from AudioToolbox) and feed them to the audio queue buffers.
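For the receiving side, a minimal sketch (not from this answer; the callback names are placeholders) of parsing incoming chunks with Audio File Stream Services could look like the following; each parsed packet run would then be copied into an audio queue buffer and enqueued:

#import <AudioToolbox/AudioToolbox.h>

// Called when the parser discovers a property, e.g. the data format
// needed to create the audio queue.
static void MyPropertyProc(void *inClientData, AudioFileStreamID inStream,
                           AudioFileStreamPropertyID inPropertyID, UInt32 *ioFlags)
{
    if (inPropertyID == kAudioFileStreamProperty_DataFormat) {
        AudioStreamBasicDescription asbd;
        UInt32 size = sizeof(asbd);
        AudioFileStreamGetProperty(inStream, inPropertyID, &size, &asbd);
        // create the AudioQueue with this format
    }
}

// Called whenever complete packets have been parsed out of the bytes
// fed to AudioFileStreamParseBytes.
static void MyPacketsProc(void *inClientData, UInt32 inNumberBytes,
                          UInt32 inNumberPackets, const void *inInputData,
                          AudioStreamPacketDescription *inPacketDescriptions)
{
    // copy inInputData into an AudioQueue buffer and enqueue it
}

// Once, on peer2 (the file type hint may also be 0):
AudioFileStreamID stream;
AudioFileStreamOpen(NULL, MyPropertyProc, MyPacketsProc, kAudioFileWAVEType, &stream);

// Then, for every NSData chunk received from peer1:
// AudioFileStreamParseBytes(stream, [data length], [data bytes], 0);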
If you nevertheless want to divide the audio file into sound packets, you can easily do it with Audio File Services functions. An audio file consists of a header, where properties such as the number of packets, sample rate, and number of channels are stored, followed by the raw sound data.
Use AudioFileOpenURL to open the audio file and fetch its properties with the AudioFileGetProperty function. Basically you need only the kAudioFilePropertyDataFormat and kAudioFilePropertyAudioDataPacketCount properties:
AudioFileID fileID;                 // the identifier for the audio file
CFURLRef fileURL = ...;             // file URL
AudioStreamBasicDescription format; // structure containing audio header info
UInt64 packetsCount;

AudioFileOpenURL(fileURL,
                 0x01, // fsRdPerm, read only
                 0,    // no hint
                 &fileID);

UInt32 sizeOfPlaybackFormatASBDStruct = sizeof format;
AudioFileGetProperty(fileID,
                     kAudioFilePropertyDataFormat,
                     &sizeOfPlaybackFormatASBDStruct,
                     &format);

UInt32 propertySize = sizeof(packetsCount);
AudioFileGetProperty(fileID, kAudioFilePropertyAudioDataPacketCount, &propertySize, &packetsCount);
Then you can read any range of audio packet data with:
OSStatus AudioFileReadPackets (
    AudioFileID                  inAudioFile,
    Boolean                      inUseCache,
    UInt32                       *outNumBytes,
    AudioStreamPacketDescription *outPacketDescriptions,
    SInt64                       inStartingPacket,
    UInt32                       *ioNumPackets,
    void                         *outBuffer
);
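For example, here is a sketch of pulling the first 256 packets into an NSData ready for sending (building on the snippet above; maxPacketSize comes from the kAudioFilePropertyMaximumPacketSize property, as shown in the class below):

UInt32 numPackets = 256;    // packets per network message
UInt32 maxPacketSize = ...; // from kAudioFilePropertyMaximumPacketSize
UInt32 numBytes = 0;        // filled in by the call
void *buffer = malloc(numPackets * maxPacketSize);
AudioStreamPacketDescription descriptions[256];

AudioFileReadPackets(fileID,
                     false,        // no caching
                     &numBytes,    // out: bytes actually read
                     descriptions, // out: per-packet info, needed for VBR formats
                     0,            // start at the first packet
                     &numPackets,  // in: packets requested, out: packets read
                     buffer);

NSData *packetChunk = [NSData dataWithBytes:buffer length:numBytes];
free(buffer);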
Answer 2:
Here is the simplest class to play files with an Audio Queue. Note that you can play it from any point (just set currentPacketNumber):
// AudioFile.h
#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>

@interface AudioFile : NSObject {
    AudioFileID fileID; // the identifier for the audio file to play
    AudioStreamBasicDescription format;
    UInt64 packetsCount;
    UInt32 maxPacketSize;
}

@property (readwrite) AudioFileID fileID;
@property (readwrite) UInt64 packetsCount;
@property (readwrite) UInt32 maxPacketSize;

- (id)initWithURL:(CFURLRef)url;
- (AudioStreamBasicDescription *)audioFormatRef;

@end
// AudioFile.m
#import "AudioFile.h"

@implementation AudioFile

@synthesize fileID;
@synthesize maxPacketSize;
@synthesize packetsCount;

- (id)initWithURL:(CFURLRef)url {
    if (self = [super init]) {
        AudioFileOpenURL(url,
                         0x01, // fsRdPerm, read only
                         0,    // no hint
                         &fileID);

        UInt32 sizeOfPlaybackFormatASBDStruct = sizeof format;
        AudioFileGetProperty(fileID,
                             kAudioFilePropertyDataFormat,
                             &sizeOfPlaybackFormatASBDStruct,
                             &format);

        UInt32 propertySize = sizeof(maxPacketSize);
        AudioFileGetProperty(fileID,
                             kAudioFilePropertyMaximumPacketSize,
                             &propertySize,
                             &maxPacketSize);

        propertySize = sizeof(packetsCount);
        AudioFileGetProperty(fileID, kAudioFilePropertyAudioDataPacketCount, &propertySize, &packetsCount);
    }
    return self;
}

- (AudioStreamBasicDescription *)audioFormatRef {
    return &format;
}

- (void)dealloc {
    AudioFileClose(fileID);
    [super dealloc];
}

@end
// AQPlayer.h
#import <Foundation/Foundation.h>
#import "AudioFile.h"

#define AUDIOBUFFERS_NUMBER 3
#define MAX_PACKET_COUNT 4096

@interface AQPlayer : NSObject {
@public
    AudioQueueRef queue;
    AudioQueueBufferRef buffers[AUDIOBUFFERS_NUMBER];
    NSInteger bufferByteSize;
    AudioStreamPacketDescription packetDescriptions[MAX_PACKET_COUNT];
    AudioFile *audioFile;
    SInt64 currentPacketNumber;
    UInt32 numPacketsToRead;
}

@property (nonatomic) SInt64 currentPacketNumber;
@property (nonatomic, retain) AudioFile *audioFile;

- (id)initWithFile:(NSString *)file;
- (NSInteger)fillBuffer:(AudioQueueBufferRef)buffer;
- (void)play;

@end
// AQPlayer.m
#import "AQPlayer.h"

static void AQOutputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
    AQPlayer *aqp = (AQPlayer *)inUserData;
    [aqp fillBuffer:inBuffer];
}

@implementation AQPlayer

@synthesize currentPacketNumber;
@synthesize audioFile;

- (id)initWithFile:(NSString *)file {
    if (self = [super init]) {
        audioFile = [[AudioFile alloc] initWithURL:(CFURLRef)[NSURL fileURLWithPath:file]];
        currentPacketNumber = 0;
        AudioQueueNewOutput([audioFile audioFormatRef], AQOutputCallback, self,
                            CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &queue);
        bufferByteSize = 4096;
        if (bufferByteSize < audioFile.maxPacketSize) bufferByteSize = audioFile.maxPacketSize;
        numPacketsToRead = bufferByteSize / audioFile.maxPacketSize;
        for (int i = 0; i < AUDIOBUFFERS_NUMBER; i++) {
            AudioQueueAllocateBuffer(queue, bufferByteSize, &buffers[i]);
        }
    }
    return self;
}

- (void)dealloc {
    [audioFile release];
    if (queue) {
        AudioQueueDispose(queue, YES);
        queue = NULL;
    }
    [super dealloc];
}

- (void)play {
    for (int bufferIndex = 0; bufferIndex < AUDIOBUFFERS_NUMBER; ++bufferIndex) {
        [self fillBuffer:buffers[bufferIndex]];
    }
    AudioQueueStart(queue, NULL);
}
- (NSInteger)fillBuffer:(AudioQueueBufferRef)buffer {
    UInt32 numBytes;
    UInt32 numPackets = numPacketsToRead;
    BOOL isVBR = [audioFile audioFormatRef]->mBytesPerPacket == 0 ? YES : NO;

    AudioFileReadPackets(audioFile.fileID,
                         NO,
                         &numBytes,
                         isVBR ? packetDescriptions : 0,
                         currentPacketNumber,
                         &numPackets,
                         buffer->mAudioData);

    if (numPackets > 0) {
        buffer->mAudioDataByteSize = numBytes;
        AudioQueueEnqueueBuffer(queue,
                                buffer,
                                isVBR ? numPackets : 0,
                                isVBR ? packetDescriptions : 0);
        // advance the read position past the packets just enqueued
        currentPacketNumber += numPackets;
    }
    else {
        // end of present data; check whether all packets have been played:
        // if yes, stop playback and dispose of the queue;
        // if no, pause the queue until new data arrives, then start it again
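        //
        // A sketch of that branching, assuming the networking layer
        // maintains a hypothetical allDataReceived flag:
        //
        //   if (allDataReceived) AudioQueueStop(queue, false); // drain queued buffers
        //   else                 AudioQueuePause(queue);       // AudioQueueStart resumes it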
    }
    return numPackets;
}

@end
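Usage is then just a few lines; a minimal sketch (the file name is illustrative, and the thread creating the player needs a running run loop, since AudioQueueNewOutput is bound to CFRunLoopGetCurrent()):

NSString *path = [[NSBundle mainBundle] pathForResource:@"myAudioFile" ofType:@"wav"];
AQPlayer *player = [[AQPlayer alloc] initWithFile:path];
player.currentPacketNumber = 0; // or any packet index to start from
[player play];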
Answer 3:
Apple has already written something that can do this: AUNetSend and AUNetReceive. AUNetSend is an effect AudioUnit that sends audio to an AUNetReceive AudioUnit on another computer.
Unfortunately, these AUs are not available on the iPhone.
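For reference, a hedged sketch of locating and instantiating AUNetSend on the Mac (AUNetReceive is found the same way with kAudioUnitType_Generator / kAudioUnitSubType_NetReceive; wiring the unit into a graph is omitted):

#import <AudioUnit/AudioUnit.h>

AudioComponentDescription desc = {0};
desc.componentType         = kAudioUnitType_Effect;
desc.componentSubType      = kAudioUnitSubType_NetSend;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;

AudioComponent comp = AudioComponentFindNext(NULL, &desc);
AudioUnit netSend;
AudioComponentInstanceNew(comp, &netSend);
AudioUnitInitialize(netSend);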
Source: https://stackoverflow.com/questions/2399498/objective-c-how-to-serialize-audio-file-into-small-packets-that-can-be-played