Question
In my application I need to capture a video and put a watermark on it. The watermark should be text (a timestamp and notes). I saw code using the QTKit framework; however, I read that this framework is not available on the iPhone.
Thanks in advance.
Answer 1:
Use AVFoundation. I would suggest grabbing frames with AVCaptureVideoDataOutput, overlaying each captured frame with the watermark image, and finally writing the captured and processed frames to a file using AVAssetWriter.
Search around Stack Overflow; there are a ton of fantastic examples detailing how to do each of these things. I haven't seen any that give code examples for exactly the effect you want, but you should be able to mix and match pretty easily.
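To make the capture side concrete, here is a minimal Swift sketch of wiring up AVCaptureVideoDataOutput so each frame arrives in a delegate callback. The class name and queue label are my own, and error handling is omitted; treat it as a starting point, not a complete implementation:

import AVFoundation

final class FrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let sampleQueue = DispatchQueue(label: "video.sample.queue")

    func start() throws {
        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        let input = try AVCaptureDeviceInput(device: camera)
        if session.canAddInput(input) { session.addInput(input) }

        let output = AVCaptureVideoDataOutput()
        // BGRA matches the CGBitmapContext conversion shown below.
        output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String:
                                    kCVPixelFormatType_32BGRA]
        output.setSampleBufferDelegate(self, queue: sampleQueue)
        if session.canAddOutput(output) { session.addOutput(output) }

        session.startRunning()
    }

    // Called once per captured frame.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Convert, watermark, and hand off to an AVAssetWriterInput here.
    }
}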
EDIT:
Take a look at these links:
iPhone: AVCaptureSession capture output crashing (AVCaptureVideoDataOutput) - this post might be helpful just by nature of containing relevant code.
AVCaptureVideoDataOutput will return frames as CMSampleBufferRefs. Convert them to CGImageRefs using this code:
- (CGImageRef) imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer // Create a CGImageRef from sample buffer data
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0); // Lock the image buffer

    // Get information about the image (use GetBaseAddress, not GetBaseAddressOfPlane: BGRA buffers are not planar)
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Draw the BGRA pixel data into a bitmap context and snapshot it
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    /* CVBufferRelease(imageBuffer); */ // do not call this! The buffer is not owned here.

    return newImage; // the caller is responsible for calling CGImageRelease()
}
From there you would convert to a UIImage:
UIImage *img = [UIImage imageWithCGImage:yourCGImage];
Then use
[img drawInRect:CGRectMake(x, y, width, height)];
to draw the frame to a context, draw a PNG of the watermark over it, and then add the processed images to your output video using AVAssetWriter. I would suggest adding them in real time so you're not filling up memory with tons of UIImages.
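As an illustration of that overlay step, here is a hedged Swift sketch that stamps a watermark image and a note string onto a single frame. The positioning, font, and margins are placeholders of my own:

import UIKit

func watermarked(frame: UIImage, watermark: UIImage, note: String) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: frame.size)
    return renderer.image { _ in
        frame.draw(in: CGRect(origin: .zero, size: frame.size))
        // Watermark in the bottom-left corner, with a small margin.
        watermark.draw(in: CGRect(x: 8, y: frame.size.height - 65, width: 57, height: 57),
                       blendMode: .normal, alpha: 0.65)
        // Timestamp / notes text in the top-left corner.
        let attributes: [NSAttributedString.Key: Any] = [
            .font: UIFont.systemFont(ofSize: 24),
            .foregroundColor: UIColor.white
        ]
        (note as NSString).draw(at: CGPoint(x: 8, y: 8), withAttributes: attributes)
    }
}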
How do I export UIImage array as a movie? - this post shows how to add the UIImages you have processed to a video for a given duration.
This should get you well on your way to watermarking your videos. Remember to practice good memory management, because leaking images that are coming in at 20-30fps is a great way to crash the app.
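One cheap way to honor that memory advice in Swift is to wrap the per-frame work in an explicit autorelease pool, so temporaries are freed as soon as each frame finishes rather than at the end of the run-loop turn. A minimal sketch:

func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    autoreleasepool {
        // Convert the buffer, draw the watermark, append to the writer...
        // Everything autoreleased in here is released at the closing brace.
    }
}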
Answer 2:
Adding a watermark is much simpler. You just need to use a CALayer and AVVideoCompositionCoreAnimationTool. The code can be copied and assembled in the same order; I have inserted some comments in between for better understanding.
Let's assume you have recorded the video already, so we are going to create the AVURLAsset first:
AVURLAsset *videoAsset = [[AVURLAsset alloc] initWithURL:outputFileURL options:nil];
AVMutableComposition *mixComposition = [AVMutableComposition composition];
AVMutableCompositionTrack *compositionVideoTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVAssetTrack *clipVideoTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
[compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAsset.duration)
                               ofTrack:clipVideoTrack
                                atTime:kCMTimeZero error:nil];
[compositionVideoTrack setPreferredTransform:clipVideoTrack.preferredTransform];
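Note that this copies only the video track into the composition; if your recording has sound, you also need to carry the audio track across or the export will be silent. A hedged sketch of that step in modern Swift, assuming the asset actually contains an audio track:

// Copy the source audio track into the mutable composition as well.
let composition = AVMutableComposition()
if let audioTrack = videoAsset.tracks(withMediaType: .audio).first,
   let compositionAudioTrack = composition.addMutableTrack(withMediaType: .audio,
                                                           preferredTrackID: kCMPersistentTrackID_Invalid) {
    try? compositionAudioTrack.insertTimeRange(CMTimeRange(start: .zero, duration: videoAsset.duration),
                                               of: audioTrack,
                                               at: .zero)
}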
With just this code you would be able to export the video, but we want to add the layer with the watermark first. Please note that some code may seem redundant, but it is necessary for everything to work.
First we create the layer with the watermark image:
UIImage *myImage = [UIImage imageNamed:@"icon.png"];
CALayer *aLayer = [CALayer layer];
aLayer.contents = (id)myImage.CGImage;
aLayer.frame = CGRectMake(5, 25, 57, 57); //Needed for proper display. We are using the app icon (57x57). If you use 0,0 you will not see it
aLayer.opacity = 0.65; //Feel free to alter the alpha here
If we don't want an image and want text instead:
CATextLayer *titleLayer = [CATextLayer layer];
titleLayer.string = @"Text goes here";
titleLayer.font = @"Helvetica"; // CATextLayer accepts a font name string (or a CTFontRef/CGFontRef)
titleLayer.fontSize = videoSize.height / 6; // videoSize is computed in the next snippet; compute it before this point
//titleLayer.shadowOpacity = 0.5; // optional drop shadow
titleLayer.alignmentMode = kCAAlignmentCenter;
titleLayer.bounds = CGRectMake(0, 0, videoSize.width, videoSize.height / 6); // You may need to adjust this for proper display
The following code stacks the layers in the proper order:
CGSize videoSize = [clipVideoTrack naturalSize]; // -[AVAsset naturalSize] is deprecated; read the size from the track
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height);
videoLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height);
[parentLayer addSublayer:videoLayer];
[parentLayer addSublayer:aLayer];
[parentLayer addSublayer:titleLayer]; // ONLY IF WE ADDED TEXT
Now we create the video composition and add the instruction that inserts the layer:
AVMutableVideoComposition* videoComp = [[AVMutableVideoComposition videoComposition] retain];
videoComp.renderSize = videoSize;
videoComp.frameDuration = CMTimeMake(1, 30);
videoComp.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
/// instruction
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, [mixComposition duration]);
AVAssetTrack *videoTrack = [[mixComposition tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVMutableVideoCompositionLayerInstruction* layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
instruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];
videoComp.instructions = [NSArray arrayWithObject: instruction];
And now we are ready to export:
_assetExport = [[AVAssetExportSession alloc] initWithAsset:mixComposition presetName:AVAssetExportPresetMediumQuality]; // or AVAssetExportPresetPassthrough
_assetExport.videoComposition = videoComp;

NSString *videoName = @"mynewwatermarkedvideo.mov";
NSString *exportPath = [NSTemporaryDirectory() stringByAppendingPathComponent:videoName];
NSURL *exportUrl = [NSURL fileURLWithPath:exportPath];

if ([[NSFileManager defaultManager] fileExistsAtPath:exportPath])
{
    [[NSFileManager defaultManager] removeItemAtPath:exportPath error:nil];
}

_assetExport.outputFileType = AVFileTypeQuickTimeMovie;
_assetExport.outputURL = exportUrl;
_assetExport.shouldOptimizeForNetworkUse = YES;

[strRecordedFilename setString:exportPath]; // strRecordedFilename is an NSMutableString defined elsewhere in the author's code

[_assetExport exportAsynchronouslyWithCompletionHandler:^{
    [_assetExport release];
    // YOUR FINALIZATION CODE HERE
}];

[videoAsset release]; // also release the audio asset here if you composed an audio track
Answer 3:
The answer given by @Julio works fine in Objective-C. Here's the same code base for Swift 3.0:
Watermark & generating a SQUARE or CROPPED video like Instagram
Get the output file from the documents directory & create the asset:
//output file
let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first
let outputPath = documentsURL?.appendingPathComponent("squareVideo.mov")
if FileManager.default.fileExists(atPath: (outputPath?.path)!) {
    do {
        try FileManager.default.removeItem(atPath: (outputPath?.path)!)
    }
    catch {
        print("Error deleting file")
    }
}

//input file
let asset = AVAsset.init(url: filePath)
print(asset)
let composition = AVMutableComposition.init()
composition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)
// note: the exporter below works on `asset` directly, so this composition is not strictly needed here

//input clip
let clipVideoTrack = asset.tracks(withMediaType: AVMediaTypeVideo)[0]
Create the layer with the watermark image:
//adding the image layer
let imglogo = UIImage(named: "video_button")
let watermarkLayer = CALayer()
watermarkLayer.contents = imglogo?.cgImage
watermarkLayer.frame = CGRect(x: 5, y: 25, width: 57, height: 57)
watermarkLayer.opacity = 0.85
Or create the layer with text as the watermark instead of an image:
let textLayer = CATextLayer()
textLayer.string = "Nodat"
textLayer.foregroundColor = UIColor.red.cgColor
textLayer.font = CTFontCreateWithName("Helvetica" as CFString, 50, nil) // CATextLayer wants a CTFont/CGFont or a font name string, not a UIFont
textLayer.fontSize = 50
textLayer.alignmentMode = kCAAlignmentCenter
textLayer.bounds = CGRect(x: 5, y: 25, width: 100, height: 20)
The following code stacks the layers in the proper order over the video:
let videoSize = clipVideoTrack.naturalSize
let parentlayer = CALayer()
let videoLayer = CALayer()
parentlayer.frame = CGRect(x: 0, y: 0, width: videoSize.height, height: videoSize.height)
videoLayer.frame = CGRect(x: 0, y: 0, width: videoSize.height, height: videoSize.height)
parentlayer.addSublayer(videoLayer)
parentlayer.addSublayer(watermarkLayer)
parentlayer.addSublayer(textLayer) //for the text layer only
Crop the video to a square format, 300x300 in size:
//make it square
let videoComposition = AVMutableVideoComposition()
videoComposition.renderSize = CGSize(width: 300, height: 300) //change it as per your needs
videoComposition.frameDuration = CMTimeMake(1, 30)
videoComposition.renderScale = 1.0

//Magic line for adding the watermark to the video
videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayers: [videoLayer], in: parentlayer)

let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration) //cover the whole clip rather than a fixed 60-second range
Rotate to portrait:
//rotate to portrait
let transformer = AVMutableVideoCompositionLayerInstruction(assetTrack: clipVideoTrack)
let t1 = CGAffineTransform(translationX: clipVideoTrack.naturalSize.height, y: -(clipVideoTrack.naturalSize.width - clipVideoTrack.naturalSize.height) / 2)
let t2: CGAffineTransform = t1.rotated(by: .pi/2)
let finalTransform: CGAffineTransform = t2
transformer.setTransform(finalTransform, at: kCMTimeZero)
instruction.layerInstructions = [transformer]
videoComposition.instructions = [instruction]
Final step: export the video:
let exporter = AVAssetExportSession.init(asset: asset, presetName: AVAssetExportPresetMediumQuality)
exporter?.outputFileType = AVFileTypeQuickTimeMovie
exporter?.outputURL = outputPath
exporter?.videoComposition = videoComposition

exporter?.exportAsynchronously { () -> Void in // the completion closure takes no parameters
    if exporter?.status == .completed {
        print("Export complete")
        DispatchQueue.main.async(execute: {
            completion(outputPath)
        })
        return
    } else if exporter?.status == .failed {
        print("Export failed - \(String(describing: exporter?.error))")
    }
    completion(nil)
    return
}
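For context, filePath and completion in the snippets above come from the enclosing function, which the answer does not show. A rough sketch of how that wrapper might be declared (the name is hypothetical):

// Hypothetical wrapper; `filePath` is the recorded clip's URL and
// `completion` receives the watermarked output URL (or nil on failure).
func exportSquareWatermarkedVideo(filePath: URL,
                                  completion: @escaping (URL?) -> Void) {
    // ...all of the code above goes here, in order...
}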
This will export the video at a square size with the watermark as text or an image.
Thanks
Answer 4:
Simply download the code and use it. It is on the Apple developer documentation page:
http://developer.apple.com/library/ios/#samplecode/AVSimpleEditoriOS/Listings/AVSimpleEditor_AVSERotateCommand_m.html
Answer 5:
Here's an example in Swift 3 of how to insert both animated (an array of images/slides/frames) and static image watermarks into a recorded video.
It uses CAKeyframeAnimation to animate the frames, and it uses AVMutableCompositionTrack, AVAssetExportSession and AVMutableVideoComposition together with AVMutableVideoCompositionInstruction to combine everything.
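The crucial detail when animating layers for AVVideoCompositionCoreAnimationTool is the timing: beginTime must be AVCoreAnimationBeginTimeAtZero (a literal 0 is treated as "now" and ignored) and the animation must not be removed on completion. A minimal sketch, assuming `frames` is an array of CGImages you already have and `parentlayer`/`asset` are the same as in the answers above:

// Animate an overlay layer through an array of frames for the video's duration.
let overlay = CALayer()
overlay.frame = CGRect(x: 5, y: 25, width: 57, height: 57)

let animation = CAKeyframeAnimation(keyPath: "contents")
animation.values = frames // [CGImage]
animation.duration = CMTimeGetSeconds(asset.duration)
animation.repeatCount = .greatestFiniteMagnitude
animation.beginTime = AVCoreAnimationBeginTimeAtZero // required for video compositions
animation.isRemovedOnCompletion = false
overlay.add(animation, forKey: "contents")

parentlayer.addSublayer(overlay)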
Source: https://stackoverflow.com/questions/7205820/iphone-watermark-on-recorded-video