I have looked around, but nothing provided the answer I was looking for.

I have an AVMutableComposition(). I am trying to add MULTIPLE AVCompositionTracks of a single media type, AVMediaTypeVideo, to this one composition. That is because I am using two different AVMediaTypeVideo sources, and the AVAssets they come from have different CGSizes and preferredTransforms.
So, the only way to apply each asset's specified preferredTransform is to supply them in two different tracks. But, for whatever reason, only the first track actually shows any video; it is as if the second track is never there.

So, I have tried:

1) Using AVMutableVideoCompositionLayerInstructions together with an AVVideoComposition and an AVAssetExportSession. This works fine (I am still tweaking the transforms, but that is OK), except that processing the video takes well over a minute, which is unacceptable in my situation.
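(A minimal sketch of the mismatch, reusing the firstAsset/secondAsset names from the code further down and the same Swift 2-era API; the print statements are only illustrative:)

let firstVideoTrack = firstAsset.tracksWithMediaType(AVMediaTypeVideo)[0]
let secondVideoTrack = secondAsset.tracksWithMediaType(AVMediaTypeVideo)[0]

//The two source tracks typically report different values here, which is why
//forcing them onto one composition track distorts one of them.
print(firstVideoTrack.naturalSize, firstVideoTrack.preferredTransform)
print(secondVideoTrack.naturalSize, secondVideoTrack.preferredTransform)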
2) Using multiple tracks without an AVAssetExportSession, in which case the second track of the same media type never shows up. I could put everything on one track, but then all of the videos end up with the same size and preferredTransform as the first video, which I absolutely do not want because it stretches them in every direction.

So my questions are:

1) How can I apply instructions to just one track without using an AVAssetExportSession? // By FAR the preferred approach (a playback-only sketch follows after these questions).

2) How can I decrease the export time? (I have tried using PresetPassthrough, but you cannot use it if you have an exporter.videoComposition, which is where my instructions live. That is the only place I know of to put instructions; I am not sure whether they can go anywhere else.)
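(One hedged option for question 1, assuming a mixComposition and videoComposition built roughly as in the answer below: an AVVideoComposition can be attached directly to an AVPlayerItem, so the layer instructions are applied at render time during playback, without any AVAssetExportSession. Whether playback-only output fits the use case is a separate matter.)

let playerItem = AVPlayerItem(asset: mixComposition)
//The instructions are applied when the item is rendered for playback; no export pass is needed.
playerItem.videoComposition = videoComposition
let player = AVPlayer(playerItem: playerItem)
player.play()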
Here is some of my code (without the exporter, because I do not need to export anything anywhere; I just want to do more work after the AVMutableComposition has combined the items):
func merge() {
    if let firstAsset = controller.firstAsset, secondAsset = self.asset {
        let mixComposition = AVMutableComposition()

        let firstTrack = mixComposition.addMutableTrackWithMediaType(AVMediaTypeVideo,
                                                                     preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
        do {
            //Don't need now according to not being able to edit first 14seconds.
            if(CMTimeGetSeconds(startTime) == 0) {
                self.startTime = CMTime(seconds: 1/600, preferredTimescale: Int32(600))
            }
            try firstTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, CMTime(seconds: CMTimeGetSeconds(startTime), preferredTimescale: 600)),
                                           ofTrack: firstAsset.tracksWithMediaType(AVMediaTypeVideo)[0],
                                           atTime: kCMTimeZero)
        } catch _ {
            print("Failed to load first track")
        }

        //This secondTrack never appears, doesn't matter what is inside of here, like it is blank space in the video from startTime to endTime (rangeTime of secondTrack)
        let secondTrack = mixComposition.addMutableTrackWithMediaType(AVMediaTypeVideo,
                                                                      preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
        //secondTrack.preferredTransform = self.asset.preferredTransform
        do {
            try secondTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, secondAsset.duration),
                                            ofTrack: secondAsset.tracksWithMediaType(AVMediaTypeVideo)[0],
                                            atTime: CMTime(seconds: CMTimeGetSeconds(startTime), preferredTimescale: 600))
        } catch _ {
            print("Failed to load second track")
        }

        //This part appears again, at endTime which is right after the 2nd track is suppose to end.
        do {
            try firstTrack.insertTimeRange(CMTimeRangeMake(CMTime(seconds: CMTimeGetSeconds(endTime), preferredTimescale: 600), firstAsset.duration-endTime),
                                           ofTrack: firstAsset.tracksWithMediaType(AVMediaTypeVideo)[0],
                                           atTime: CMTime(seconds: CMTimeGetSeconds(endTime), preferredTimescale: 600))
        } catch _ {
            print("failed")
        }

        if let loadedAudioAsset = controller.audioAsset {
            let audioTrack = mixComposition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: 0)
            do {
                try audioTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, firstAsset.duration),
                                               ofTrack: loadedAudioAsset.tracksWithMediaType(AVMediaTypeAudio)[0],
                                               atTime: kCMTimeZero)
            } catch _ {
                print("Failed to load Audio track")
            }
        }
    }
}
EDIT

Apple states: "Indicates instructions for video composition via an NSArray of instances of classes implementing the AVVideoCompositionInstruction protocol. For the first instruction in the array, timeRange.start must be less than or equal to the earliest time for which playback or other processing will be attempted (note that this will typically be kCMTimeZero). For subsequent instructions, timeRange.start must be equal to the prior instruction's end time. The end time of the last instruction must be greater than or equal to the latest time for which playback or other processing will be attempted (note that this will often be the duration of the asset with which the instance of AVVideoComposition is associated)."

This just says that the entire composition has to be layered inside instructions if you decide to use ANY instructions (at least that is my understanding). Why is that? In the following example, how would I apply instructions to, say, only track 2 without changing track 1 or 3 at all:

Track 1 runs from 0-10 seconds, track 2 from 10-20 seconds, and track 3 from 20-30 seconds.

Any explanation of this would probably answer my question (if it is doable).
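(For concreteness, a hedged sketch of what the documented requirement would mean for the 0-10 / 10-20 / 20-30 layout above: every interval gets its own instruction, and the timeRanges are contiguous and cover the whole composition. track1/track2/track3 are placeholders for the composition tracks.)

let instruction1 = AVMutableVideoCompositionInstruction()
instruction1.timeRange = CMTimeRangeMake(kCMTimeZero, CMTime(seconds: 10, preferredTimescale: 600))
instruction1.layerInstructions = [AVMutableVideoCompositionLayerInstruction(assetTrack: track1)]

let instruction2 = AVMutableVideoCompositionInstruction()
instruction2.timeRange = CMTimeRangeMake(CMTime(seconds: 10, preferredTimescale: 600),
                                         CMTime(seconds: 10, preferredTimescale: 600))
//Only this middle instruction would carry the transform you actually care about.
instruction2.layerInstructions = [AVMutableVideoCompositionLayerInstruction(assetTrack: track2)]

let instruction3 = AVMutableVideoCompositionInstruction()
instruction3.timeRange = CMTimeRangeMake(CMTime(seconds: 20, preferredTimescale: 600),
                                         CMTime(seconds: 10, preferredTimescale: 600))
instruction3.layerInstructions = [AVMutableVideoCompositionLayerInstruction(assetTrack: track3)]

let videoComposition = AVMutableVideoComposition()
//Contiguous instructions from kCMTimeZero through 30 seconds, as the docs require.
videoComposition.instructions = [instruction1, instruction2, instruction3]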
Ok, so for my exact problem, I had to apply specific CGAffineTransforms in Swift to get the specific result I wanted. The current code I am posting works with any picture taken/obtained as well as with video.
//This method gets the orientation of the current transform. This method is used below to determine the orientation
func orientationFromTransform(_ transform: CGAffineTransform) -> (orientation: UIImageOrientation, isPortrait: Bool) {
    var assetOrientation = UIImageOrientation.up
    var isPortrait = false
    if transform.a == 0 && transform.b == 1.0 && transform.c == -1.0 && transform.d == 0 {
        assetOrientation = .right
        isPortrait = true
    } else if transform.a == 0 && transform.b == -1.0 && transform.c == 1.0 && transform.d == 0 {
        assetOrientation = .left
        isPortrait = true
    } else if transform.a == 1.0 && transform.b == 0 && transform.c == 0 && transform.d == 1.0 {
        assetOrientation = .up
    } else if transform.a == -1.0 && transform.b == 0 && transform.c == 0 && transform.d == -1.0 {
        assetOrientation = .down
    }

    //Returns the orientation as a variable
    return (assetOrientation, isPortrait)
}

//Method that lays out the instructions for each track I am editing and does the transformation on each individual track to get it lined up properly
func videoCompositionInstructionForTrack(_ track: AVCompositionTrack, _ asset: AVAsset) -> AVMutableVideoCompositionLayerInstruction {
    //This method Returns set of instructions from the initial track

    //Create inital instruction
    let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)

    //This is whatever asset you are about to apply instructions to.
    let assetTrack = asset.tracks(withMediaType: AVMediaTypeVideo)[0]

    //Get the original transform of the asset
    var transform = assetTrack.preferredTransform

    //Get the orientation of the asset and determine if it is in portrait or landscape - I forget which, but either if you take a picture or get it from the camera roll it is ALWAYS determined as landscape at first, I don't recall which one. This method accounts for it.
    let assetInfo = orientationFromTransform(transform)

    //You need a little background to understand this part.
    /* MyAsset is my original video. I need to combine a lot of other segments, according to the user, into this original video.
       So I have to make all the other videos fit this size.
       This is the width and height ratios from the original video divided by the new asset. */
    let width = MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.width/assetTrack.naturalSize.width
    var height = MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.height/assetTrack.naturalSize.height

    //If it is in portrait
    if assetInfo.isPortrait {
        //We actually change the height variable to divide by the width of the old asset instead of the height. This is because of the flip since we determined it is portrait and not landscape.
        height = MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.height/assetTrack.naturalSize.width

        //We apply the transform and scale the image appropriately.
        transform = transform.scaledBy(x: height, y: height)

        //We also have to move the image or video appropriately. Since we scaled it, it could be wayy off on the side, outside the bounds of the viewing.
        let movement = ((1/height)*assetTrack.naturalSize.height)-assetTrack.naturalSize.height

        //This lines it up dead center on the left side of the screen perfectly. Now we want to center it.
        transform = transform.translatedBy(x: 0, y: movement)

        //This calculates how much black there is. Cut it in half and there you go!
        let totalBlackDistance = MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.width-transform.tx
        transform = transform.translatedBy(x: 0, y: -(totalBlackDistance/2)*(1/height))
    } else {
        //Landscape! We don't need to change the variables, it is all defaulted that way (iOS prefers landscape items), so we scale it appropriately.
        transform = transform.scaledBy(x: width, y: height)

        //This is a little complicated haha. So because it is in landscape, the asset fits the height correctly, for me anyway; it was just extra long. Think of this as a ratio. I forgot exactly how I thought this through, but the end product looked like: Answer = ((Original height/current asset height)*(current asset width))/(Original width)
        let scale:CGFloat = ((MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.height/assetTrack.naturalSize.height)*(assetTrack.naturalSize.width))/MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.width
        transform = transform.scaledBy(x: scale, y: 1)

        //The asset can be way off the screen again, so we have to move it back. This time we can have it dead center in the middle, because it wasn't flipped since it was landscape. Again, another long complicated algorithm I derived.
        let movement = ((MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.width-((MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.height/assetTrack.naturalSize.height)*(assetTrack.naturalSize.width)))/2)*(1/MyAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize.height/assetTrack.naturalSize.height)
        transform = transform.translatedBy(x: movement, y: 0)
    }

    //This creates the instruction and returns it so we can apply it to each individual track.
    instruction.setTransform(transform, at: kCMTimeZero)
    return instruction
}
Now with those methods, we can apply the correct and appropriate transforms to our assets and get everything fitting nicely and cleanly.
func merge() {
    if let firstAsset = MyAsset, let newAsset = newAsset {
        //This creates our overall composition, our new video framework
        let mixComposition = AVMutableComposition()

        //One by one you create tracks (could use loop, but I just had 3 cases)
        let firstTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo,
                                                        preferredTrackID: Int32(kCMPersistentTrackID_Invalid))

        //You have to use a try, so need a do
        do {
            //Inserting a timerange into a track. I already calculated my time, I call it startTime. This is where you would put your time. The preferredTimeScale doesn't have to be 600000 haha, I was playing with those numbers. It just allows precision. At is not where it begins within this individual track, but where it starts as a whole. As you notice below my At times are different. You also need to give it which track.
            try firstTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, CMTime(seconds: CMTimeGetSeconds(startTime), preferredTimescale: 600000)),
                                           of: firstAsset.tracks(withMediaType: AVMediaTypeVideo)[0],
                                           at: kCMTimeZero)
        } catch _ {
            print("Failed to load first track")
        }

        //Create the 2nd track
        let secondTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo,
                                                         preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
        do {
            //Apply the 2nd timeRange you have. Also apply the correct track you want
            try secondTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, self.endTime-self.startTime),
                                            of: newAsset.tracks(withMediaType: AVMediaTypeVideo)[0],
                                            at: CMTime(seconds: CMTimeGetSeconds(startTime), preferredTimescale: 600000))
            secondTrack.preferredTransform = newAsset.preferredTransform
        } catch _ {
            print("Failed to load second track")
        }

        //We are not sure we are going to use the third track in my case, because they can edit to the end of the original video, causing us not to use a third track. But if we do, it is the same as the others!
        var thirdTrack:AVMutableCompositionTrack!
        if(self.endTime != controller.realDuration) {
            thirdTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo,
                                                        preferredTrackID: Int32(kCMPersistentTrackID_Invalid))

            //This part appears again, at endTime which is right after the 2nd track is suppose to end.
            do {
                try thirdTrack.insertTimeRange(CMTimeRangeMake(CMTime(seconds: CMTimeGetSeconds(endTime), preferredTimescale: 600000), self.controller.realDuration-endTime),
                                               of: firstAsset.tracks(withMediaType: AVMediaTypeVideo)[0],
                                               at: CMTime(seconds: CMTimeGetSeconds(endTime), preferredTimescale: 600000))
            } catch _ {
                print("failed")
            }
        }

        //Same thing with audio!
        if let loadedAudioAsset = controller.audioAsset {
            let audioTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: 0)
            do {
                try audioTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, self.controller.realDuration),
                                               of: loadedAudioAsset.tracks(withMediaType: AVMediaTypeAudio)[0],
                                               at: kCMTimeZero)
            } catch _ {
                print("Failed to load Audio track")
            }
        }

        //So, now that we have all of these tracks we need to apply those instructions! If we don't, then they could be different sizes. Say my newAsset is 720x1080 and MyAsset is 1440x900 (These are just examples haha), then it would look a tad funky and possibly not show our new asset at all.
        let mainInstruction = AVMutableVideoCompositionInstruction()

        //Make sure the overall time range matches that of the individual tracks, if not, it could cause errors.
        mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, self.controller.realDuration)

        //For each track we made, we need an instruction. Could set up a loop or do them individually as such.
        let firstInstruction = videoCompositionInstructionForTrack(firstTrack, firstAsset)
        //You know, not 100% why this is here. This is 1 thing I did not look into well enough or understand enough to describe to you.
        firstInstruction.setOpacity(0.0, at: startTime)

        //Next Instruction
        let secondInstruction = videoCompositionInstructionForTrack(secondTrack, self.asset)

        //Again, not sure we need 3rd one, but if we do.
        var thirdInstruction:AVMutableVideoCompositionLayerInstruction!
        if(self.endTime != self.controller.realDuration) {
            secondInstruction.setOpacity(0.0, at: endTime)
            thirdInstruction = videoCompositionInstructionForTrack(thirdTrack, firstAsset)
        }

        //Okay, now that we have all these instructions, we tie them into the main instruction we created above.
        mainInstruction.layerInstructions = [firstInstruction, secondInstruction]
        if(self.endTime != self.controller.realDuration) {
            mainInstruction.layerInstructions += [thirdInstruction]
        }

        //We create a video framework now, slightly different than the one above.
        let mainComposition = AVMutableVideoComposition()

        //We apply these instructions to the framework
        mainComposition.instructions = [mainInstruction]

        //How long are our frames, you can change this as necessary
        mainComposition.frameDuration = CMTimeMake(1, 30)

        //This is your render size of the video. 720p, 1080p etc. You set it!
        mainComposition.renderSize = firstAsset.tracks(withMediaType: AVMediaTypeVideo)[0].naturalSize

        //We create an export session (you can't use PresetPassthrough because we are manipulating the transforms of the videos and the quality, so I just set it to highest)
        guard let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality) else { return }

        //Provide type of file, provide the url location you want exported to (I don't have mine posted in this example).
        exporter.outputFileType = AVFileTypeMPEG4
        exporter.outputURL = url

        //Then we tell the exporter to export the video according to our video framework, and it does the work!
        exporter.videoComposition = mainComposition

        //Asynchronous methods FTW!
        exporter.exportAsynchronously(completionHandler: {
            //Do whatever when it finishes!
        })
    }
}
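(A small, hedged sketch of what might go inside the empty completion handler above; .completed, .failed and .cancelled are the standard AVAssetExportSession status values, and outputURL is whatever url you set on the exporter.)

exporter.exportAsynchronously(completionHandler: {
    DispatchQueue.main.async {
        switch exporter.status {
        case .completed:
            //The merged, transformed video is now at exporter.outputURL.
            print("Export finished: \(exporter.outputURL)")
        case .failed, .cancelled:
            //exporter.error describes what went wrong.
            print("Export failed: \(exporter.error)")
        default:
            break
        }
    }
})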
There is a lot going on here, but it all has to be done, for my case anyway! Sorry it took so long to post, and let me know if you have any questions.