Is it possible to change the resolution of captured AR images in a CVPixelBuffer?

Posted: 2019-04-08 16:49:18

Question:

I'm using a pretrained CoreML model in an ARKit app. I'm capturing images from the ARCamera and putting them into a CVPixelBuffer for processing:

let pixelBuffer: CVPixelBuffer? = (sceneView.session.currentFrame?.capturedImage)

ARKit captures pixel buffers in YCbCr format. To render these images correctly on an iPhone's display, you need to access the luma and chroma planes of the pixel buffer and convert the full-range YCbCr values to sRGB using a float4x4 ycbcrToRGBTransform matrix. So I understand how to handle the color.
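For reference, the conversion matrix in question, shown here as a sketch using Swift's simd types, uses the common full-range BT.601 coefficients (the same values appear, column-major, in Apple's ARKit Metal rendering sample, where the multiply is done per-pixel in a fragment shader):

```swift
import simd

// Sketch: full-range YCbCr -> RGB conversion matrix (BT.601 coefficients),
// stored column-major. In practice it is applied per-pixel in a shader.
let ycbcrToRGBTransform = float4x4(
    simd_float4(+1.0000, +1.0000, +1.0000, +0.0000),
    simd_float4(+0.0000, -0.3441, +1.7720, +0.0000),
    simd_float4(+1.4020, -0.7141, +0.0000, +0.0000),
    simd_float4(-0.7010, +0.5291, -0.8860, +1.0000)
)

// Per pixel: rgb = ycbcrToRGBTransform * float4(y, cb, cr, 1.0)
```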

But I'd like to know whether it's possible to change the resolution of the captured AR images in the CVPixelBuffer.

How can I do that? I need the resolution as low as possible for processing.


Answer 1:

Yes, you can do it. Here's how!

import Accelerate
import CoreVideo

/**
 Resizes a CVPixelBuffer to a new width and height.
 */
func resizePixelBuffer(_ pixelBuffer: CVPixelBuffer,
                       width: Int, height: Int) -> CVPixelBuffer? {
    return resizePixelBuffer(pixelBuffer, cropX: 0, cropY: 0,
                             cropWidth: CVPixelBufferGetWidth(pixelBuffer),
                             cropHeight: CVPixelBufferGetHeight(pixelBuffer),
                             scaleWidth: width, scaleHeight: height)
}

func resizePixelBuffer(_ srcPixelBuffer: CVPixelBuffer,
                       cropX: Int,
                       cropY: Int,
                       cropWidth: Int,
                       cropHeight: Int,
                       scaleWidth: Int,
                       scaleHeight: Int) -> CVPixelBuffer? {

    CVPixelBufferLockBaseAddress(srcPixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
    guard let srcData = CVPixelBufferGetBaseAddress(srcPixelBuffer) else {
        print("Error: could not get pixel buffer base address")
        return nil
    }
    let srcBytesPerRow = CVPixelBufferGetBytesPerRow(srcPixelBuffer)
    let offset = cropY*srcBytesPerRow + cropX*4
    var srcBuffer = vImage_Buffer(data: srcData.advanced(by: offset),
                                  height: vImagePixelCount(cropHeight),
                                  width: vImagePixelCount(cropWidth),
                                  rowBytes: srcBytesPerRow)

    let destBytesPerRow = scaleWidth*4
    guard let destData = malloc(scaleHeight*destBytesPerRow) else {
        print("Error: out of memory")
        return nil
    }
    var destBuffer = vImage_Buffer(data: destData,
                                   height: vImagePixelCount(scaleHeight),
                                   width: vImagePixelCount(scaleWidth),
                                   rowBytes: destBytesPerRow)

    let error = vImageScale_ARGB8888(&srcBuffer, &destBuffer, nil, vImage_Flags(0))
    CVPixelBufferUnlockBaseAddress(srcPixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
    if error != kvImageNoError {
        print("Error:", error)
        free(destData)
        return nil
    }

    let releaseCallback: CVPixelBufferReleaseBytesCallback = { _, ptr in
        if let ptr = ptr {
            free(UnsafeMutableRawPointer(mutating: ptr))
        }
    }

    let pixelFormat = CVPixelBufferGetPixelFormatType(srcPixelBuffer)
    var dstPixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreateWithBytes(nil, scaleWidth, scaleHeight,
                                              pixelFormat, destData,
                                              destBytesPerRow, releaseCallback,
                                              nil, nil, &dstPixelBuffer)
    if status != kCVReturnSuccess {
        print("Error: could not create new pixel buffer")
        free(destData)
        return nil
    }
    return dstPixelBuffer
}

Usage:

if let pixelBuffer = sceneView.session.currentFrame?.capturedImage,
   let resizedBuffer = resizePixelBuffer(pixelBuffer, width: 320, height: 480) {
    // Core ML model processing
}

Reference: https://github.com/hollance/CoreMLHelpers/tree/master/CoreMLHelpers
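One caveat worth noting: vImageScale_ARGB8888 assumes an interleaved 4-bytes-per-pixel buffer, while ARKit's capturedImage is a biplanar YCbCr buffer, so the helper above is really meant for BGRA-style buffers. If you hit that mismatch, a Core Image based resize is a simple alternative; the sketch below (the helper name is hypothetical) accepts the YCbCr input and renders a scaled BGRA output:

```swift
import CoreImage
import CoreVideo

// Hypothetical helper: scales any CVPixelBuffer via Core Image,
// which understands ARKit's biplanar YCbCr pixel format.
func scalePixelBuffer(_ src: CVPixelBuffer,
                      width: Int, height: Int,
                      context: CIContext = CIContext()) -> CVPixelBuffer? {
    let scaleX = CGFloat(width)  / CGFloat(CVPixelBufferGetWidth(src))
    let scaleY = CGFloat(height) / CGFloat(CVPixelBufferGetHeight(src))
    let scaled = CIImage(cvPixelBuffer: src)
        .transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))

    var dst: CVPixelBuffer?
    CVPixelBufferCreate(nil, width, height,
                        kCVPixelFormatType_32BGRA, nil, &dst)
    guard let dst = dst else { return nil }
    context.render(scaled, to: dst)  // renders the scaled image into BGRA
    return dst
}
```

Reusing a single CIContext across frames avoids re-creating GPU resources on every capture.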

Comments:

- Sohil, can you explain one phenomenon to me: even with resizePixelBuffer(pixelBuffer, width: 1, height: 1), the CoreML model processing still works fine! Why?
- Do you actually get predictions? That's impossible!
- Please test it; it works fine. I'm using it with VNRecognizedObjectObservation.
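Regarding the 1×1 observation in the comments: a plausible explanation is that when the model is driven through Vision, Vision itself rescales whatever buffer it receives to the model's declared input size (controlled by imageCropAndScaleOption), so an upstream resize mainly affects image quality rather than whether the model runs at all. A sketch, assuming `model` is a compiled Core ML model wrapped in VNCoreMLModel:

```swift
import Vision

// Sketch: Vision resizes the input buffer to the model's declared input
// dimensions before inference, per imageCropAndScaleOption.
func makeRequest(model: VNCoreMLModel) -> VNCoreMLRequest {
    let request = VNCoreMLRequest(model: model) { request, _ in
        let results = request.results as? [VNRecognizedObjectObservation]
        // handle detections here
        _ = results
    }
    request.imageCropAndScaleOption = .scaleFill  // Vision does the scaling
    return request
}
```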
