
Swift: video is recorded at only one size, but it renders at the wrong size

swift

The goal is to capture full-screen video on the device with Swift. In the code below, video capture appears to happen in full screen (the camera preview uses the full screen while recording), but the video is rendered at a different resolution. Specifically, on an iPhone 5S, capture seems to happen at 320x568 while rendering happens at 320x480.

How do you capture and render full-screen video?

Video capture code:

private func initPBJVision() {
    // Store PBJVision in var for convenience
    let vision = PBJVision.sharedInstance()

    // Configure PBJVision
    vision.delegate = self
    vision.cameraMode = PBJCameraMode.Video
    vision.cameraOrientation = PBJCameraOrientation.Portrait
    vision.focusMode = PBJFocusMode.ContinuousAutoFocus
    vision.outputFormat = PBJOutputFormat.Preset
    vision.cameraDevice = PBJCameraDevice.Back

    // Let taps start/pause recording
    let tapHandler = UITapGestureRecognizer(target: self, action: "doTap:")
    view.addGestureRecognizer(tapHandler)

    // Log status
    print("Configured PBJVision")
}


private func startCameraPreview() {
    // Store PBJVision in var for convenience
    let vision = PBJVision.sharedInstance()

    // Connect PBJVision camera preview to <videoView>
    // -- Get preview width
    let deviceWidth = CGRectGetWidth(view.frame)
    let deviceHeight = CGRectGetHeight(view.frame)

    // -- Configure PBJVision's preview layer
    let previewLayer = vision.previewLayer
    previewLayer.frame = CGRectMake(0, 0, deviceWidth, deviceHeight)
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
    ...
}

Video rendering code:

func exportVideo(fileUrl: NSURL) {
    // Create main composition object
    let videoAsset = AVURLAsset(URL: fileUrl, options: nil)
    let mainComposition = AVMutableComposition()
    let compositionVideoTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))
    let compositionAudioTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))

    // -- Extract and apply video & audio tracks to composition
    let sourceVideoTrack = videoAsset.tracksWithMediaType(AVMediaTypeVideo)[0]
    let sourceAudioTrack = videoAsset.tracksWithMediaType(AVMediaTypeAudio)[0]
    do {
        try compositionVideoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), ofTrack: sourceVideoTrack, atTime: kCMTimeZero)
    } catch {
        print("Error with insertTimeRange. Video error: \(error).")
    }
    do {
        try compositionAudioTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), ofTrack: sourceAudioTrack, atTime: kCMTimeZero)
    } catch {
        print("Error with insertTimeRange. Audio error: \(error).")
    }

    // Add text to video
    // -- Create video composition object
    let renderSize = compositionVideoTrack.naturalSize
    let videoComposition = AVMutableVideoComposition()
    videoComposition.renderSize = renderSize
    videoComposition.frameDuration = CMTimeMake(Int64(1), Int32(videoFrameRate))

    // -- Add instruction to  video composition object
    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, videoAsset.duration)
    let videoLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: compositionVideoTrack)
    instruction.layerInstructions = [videoLayerInstruction]
    videoComposition.instructions = [instruction]

    // -- Define video frame
    let videoFrame = CGRectMake(0, 0, renderSize.width, renderSize.height)
    print("Video Frame: \(videoFrame)")  // <-- Prints frame of 320x480 so render size already wrong here 
    ...
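
For reference, the size the camera actually wrote to disk can be checked directly on the recorded file. Below is a minimal diagnostic sketch in the same Swift 2-era syntax as the code above; recordedFileUrl is a placeholder for the URL PBJVision hands back after recording.

import AVFoundation

func printRecordedVideoSize(recordedFileUrl: NSURL) {
    let asset = AVURLAsset(URL: recordedFileUrl, options: nil)
    guard let track = asset.tracksWithMediaType(AVMediaTypeVideo).first else {
        print("No video track found")
        return
    }
    // naturalSize is the encoded size; preferredTransform carries the rotation
    // for portrait recordings, so width and height may appear swapped.
    print("Encoded size: \(track.naturalSize), transform: \(track.preferredTransform)")
}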


1 Answer


If I understand you correctly, it seems you have overlooked the fact that the device screen size is not the same as the camera preview (and capture) size.

The videoGravity property of your previewLayer only determines how the preview is stretched or fitted inside the layer. It does not affect the capture output.
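
For illustration, the three standard AVLayerVideoGravity values below only change how the live feed is drawn into the preview layer (a small sketch against the previewLayer from startCameraPreview above); none of them change the size of the recorded file.

// Fill the layer, cropping whatever falls outside its bounds (used in the question).
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
// Fit the whole frame inside the layer, adding letterboxing if needed.
// previewLayer.videoGravity = AVLayerVideoGravityResizeAspect
// Stretch the frame to the layer's bounds, ignoring aspect ratio.
// previewLayer.videoGravity = AVLayerVideoGravityResize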

The actual frame size of the output depends on the sessionPreset property of the current AVCaptureSession. As far as I can tell from the PBJVision GitHub repository, its singleton exposes a property for exactly this, called captureSessionPreset. You can change it in your initPBJVision method.
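
A minimal sketch of that change, in the same Swift 2-era syntax as the question (the choice of AVCaptureSessionPresetHigh is only an example; pick whichever preset matches the quality and aspect ratio you need):

// Inside initPBJVision(), alongside the existing configuration.
// PBJVision is assumed to be available via the project's bridging header, as in the question.
let vision = PBJVision.sharedInstance()

// Ask the underlying AVCaptureSession for a higher-resolution output.
vision.captureSessionPreset = AVCaptureSessionPresetHigh

// The preview can keep filling the screen; as noted above, videoGravity only
// affects how the preview is drawn, not the file that gets recorded.
vision.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill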

The possible session preset values are listed in Apple's AVCaptureSession documentation.
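
Which of those presets a given device actually accepts can be probed with AVCaptureSession's canSetSessionPreset(_:). A standalone sketch (this does not touch PBJVision, which manages its own session internally):

import AVFoundation

// Attach the default video device (usually the back camera) so the answers
// reflect that device rather than an empty session.
let session = AVCaptureSession()
if let camera = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo),
    input = try? AVCaptureDeviceInput(device: camera) where session.canAddInput(input) {
    session.addInput(input)
}

let presets = [
    AVCaptureSessionPresetLow,
    AVCaptureSessionPresetMedium,
    AVCaptureSessionPresetHigh,
    AVCaptureSessionPresetPhoto,
    AVCaptureSessionPreset640x480,
    AVCaptureSessionPreset1280x720,
    AVCaptureSessionPreset1920x1080,
    AVCaptureSessionPresetiFrame960x540,
    AVCaptureSessionPresetiFrame1280x720
]
for preset in presets {
    print("\(preset): \(session.canSetSessionPreset(preset))")
}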
