swift test2 #vision


Editor's note: this article was compiled by the cha138.com editors and covers swift test2 #vision; we hope it serves as a useful reference.

import AVFoundation

class VideoCropper {
    /// Crops a video to a square whose side equals the track's natural height,
    /// rotates the (portrait-recorded) frame upright, and exports it as a
    /// QuickTime movie. The callback receives the output URL on success, nil on failure.
    static func cropSquareVideo(inputUrl: URL, outputUrl: URL, callback: @escaping (URL?) -> Void) {

        let asset = AVAsset(url: inputUrl)
        guard let clipVideoTrack = asset.tracks(withMediaType: .video).first else {
            callback(nil)
            return
        }

        // Render a square composition: side = natural height of the source track.
        let videoComposition = AVMutableVideoComposition()
        videoComposition.frameDuration = CMTime(value: 1, timescale: 60)
        videoComposition.renderSize = CGSize(width: clipVideoTrack.naturalSize.height,
                                            height: clipVideoTrack.naturalSize.height)

        // Apply the instruction over the whole asset rather than a fixed 60 seconds.
        let instruction = AVMutableVideoCompositionInstruction()
        instruction.timeRange = CMTimeRange(start: .zero, duration: asset.duration)

        let transformer = AVMutableVideoCompositionLayerInstruction(assetTrack: clipVideoTrack)

        // Portrait footage is stored rotated 90 degrees: translate right by the
        // height, then rotate, so the frame lands upright inside the square render area.
        let translate = CGAffineTransform(translationX: clipVideoTrack.naturalSize.height, y: 0)
        let finalTransform = translate.rotated(by: .pi / 2)
        transformer.setTransform(finalTransform, at: .zero)

        instruction.layerInstructions = [transformer]
        videoComposition.instructions = [instruction]

        guard let exporter = AVAssetExportSession(asset: asset,
                                                 presetName: AVAssetExportPresetHighestQuality) else {
            callback(nil)
            return
        }
        exporter.videoComposition = videoComposition
        exporter.outputFileType = .mov
        exporter.outputURL = outputUrl

        exporter.exportAsynchronously {
            callback(exporter.status == .completed ? outputUrl : nil)
        }
    }
}
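A minimal call-site sketch follows, assuming current Swift (URL rather than NSURL) and that the callback delivers the output URL. The file names are placeholders for this example, not part of the original post; note that AVAssetExportSession fails if a file already exists at outputURL, so any leftover file is removed first.

```swift
import Foundation

// Placeholder paths for this sketch.
let input = URL(fileURLWithPath: "input.mov")
let output = FileManager.default.temporaryDirectory
    .appendingPathComponent("square.mov")

// The exporter refuses to overwrite an existing file, so clear the target first.
try? FileManager.default.removeItem(at: output)

VideoCropper.cropSquareVideo(inputUrl: input, outputUrl: output) { url in
    print("export finished: \(String(describing: url))")
}
```

Because the export runs asynchronously, the callback fires on a background queue; dispatch back to the main queue before touching UI.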

The above is the main content of swift test2 #vision. If it did not solve your problem, the following articles may help:

Accuracy of a Vision/CoreML object recognizer in Swift

Rotating a UIImage for the Google ML Vision framework in Swift 4

Swift Vision Framework - VNRecognizeTextRequest: argument passed to a call that takes no arguments

How to pass extra parameters to the Vision framework?

Classifying images with Vision and CoreML on macOS