Projecting the ARKit face tracking 3D mesh to 2D image coordinates
Posted: 2020-10-28 04:19:18

【Question】: I am using ARKit to collect the face mesh 3D vertices. I have read: Mapping image onto 3D face mesh and Tracking and Visualizing Faces.
I have the following struct:
struct CaptureData {
    var vertices: [SIMD3<Float>]

    var verticesformatted: String {
        let verticesDescribed = vertices.map { "\($0.x):\($0.y):\($0.z)" }.joined(separator: "~")
        return "<\(verticesDescribed)>"
    }
}
I have a Start button to capture the vertices:
@IBAction private func startPressed() {
    captureData = [] // Clear data
    currentCaptureFrame = 0 // Initial capture frame
    fpsTimer = Timer.scheduledTimer(withTimeInterval: 1/fps, repeats: true, block: { (timer) -> Void in
        self.recordData()
    })
}

private var fpsTimer = Timer()
private var captureData: [CaptureData] = [CaptureData]()
private var currentCaptureFrame = 0
And a Stop button to stop capturing (and save the data):
@IBAction private func stopPressed() {
    do {
        fpsTimer.invalidate() // Turn off the timer
        let capturedData = captureData.map { $0.verticesformatted }.joined(separator: "")
        let dir: URL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).last! as URL
        let url = dir.appendingPathComponent("facedata.txt")
        try capturedData.appendLineToURL(fileURL: url as URL)
    } catch {
        print("Could not write to file")
    }
}
The function for recording the data:
private func recordData() {
    guard let data = getFrameData() else { return }
    captureData.append(data)
    currentCaptureFrame += 1
}
The function for getting the frame data:
private func getFrameData() -> CaptureData? {
    let arFrame = sceneView?.session.currentFrame!
    guard let anchor = arFrame?.anchors[0] as? ARFaceAnchor else { return nil }
    let vertices = anchor.geometry.vertices
    let data = CaptureData(vertices: vertices)
    return data
}
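As an aside, the `currentFrame!` force-unwrap above can crash when no frame is available yet (e.g. right after the session starts), and `anchors[0]` assumes the face anchor is always first. A slightly safer variant, using the same property names, might look like:

```swift
// A sketch of a safer version: no force-unwraps, and explicitly
// searching the anchor list for the first ARFaceAnchor.
private func getFrameData() -> CaptureData? {
    guard let frame = sceneView?.session.currentFrame,
          let anchor = frame.anchors.compactMap({ $0 as? ARFaceAnchor }).first
    else { return nil }
    return CaptureData(vertices: anchor.geometry.vertices)
}
```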
The ARSCNViewDelegate extension:
extension ViewController: ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor else { return }
        currentFaceAnchor = faceAnchor
        if node.childNodes.isEmpty, let contentNode = selectedContentController.renderer(renderer, nodeFor: faceAnchor) {
            node.addChildNode(contentNode)
        }
        selectedContentController.session = sceneView?.session
        selectedContentController.sceneView = sceneView
    }

    /// - Tag: ARFaceGeometryUpdate
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard anchor == currentFaceAnchor,
              let contentNode = selectedContentController.contentNode,
              contentNode.parent == node
        else { return }
        selectedContentController.session = sceneView?.session
        selectedContentController.sceneView = sceneView
        selectedContentController.renderer(renderer, didUpdate: contentNode, for: anchor)
    }
}
I am trying to use the sample code from Tracking and Visualizing Faces:
// Transform the vertex to the camera coordinate system.
float4 vertexCamera = scn_node.modelViewTransform * _geometry.position;
// Camera projection and perspective divide to get normalized viewport coordinates (clip space).
float4 vertexClipSpace = scn_frame.projectionTransform * vertexCamera;
vertexClipSpace /= vertexClipSpace.w;
// XY in clip space is [-1,1]x[-1,1], so adjust to UV texture coordinates: [0,1]x[0,1].
// Image coordinates are Y-flipped (upper-left origin).
float4 vertexImageSpace = float4(vertexClipSpace.xy * 0.5 + 0.5, 0.0, 1.0);
vertexImageSpace.y = 1.0 - vertexImageSpace.y;
// Apply ARKit's display transform (device orientation * front-facing camera flip).
float4 transformedVertex = displayTransform * vertexImageSpace;
// Output as texture coordinates for use in later rendering stages.
_geometry.texcoords[0] = transformedVertex.xy;
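That shader snippet runs on the GPU per vertex; the same projection chain can also be reproduced on the CPU with ARKit's own API. A hedged sketch (the names `frame`, `faceAnchor`, and `viewportSize` are assumptions standing in for the current ARFrame, the tracked face anchor, and the size of the view showing the camera image):

```swift
import ARKit

// Project one face-mesh vertex into 2D view coordinates.
// ARCamera.projectPoint performs the camera transform, perspective
// divide, and orientation handling that the shader does by hand.
func projectVertex(_ vertex: SIMD3<Float>,
                   faceAnchor: ARFaceAnchor,
                   frame: ARFrame,
                   viewportSize: CGSize) -> CGPoint {
    // Face-mesh vertices are expressed in the anchor's local space,
    // so first move them into world space with the anchor transform.
    let localPoint = SIMD4<Float>(vertex.x, vertex.y, vertex.z, 1)
    let worldPoint = faceAnchor.transform * localPoint
    let world3 = SIMD3<Float>(worldPoint.x, worldPoint.y, worldPoint.z)
    return frame.camera.projectPoint(world3,
                                     orientation: .portrait,
                                     viewportSize: viewportSize)
}
```

This is a sketch rather than a drop-in replacement: the orientation is hard-coded to portrait here, and a real implementation would pass the interface orientation of the hosting view.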
I have also read about projecting points (but I am not sure which approach is more applicable):
func projectPoint(_ point: SCNVector3) -> SCNVector3
My question is: how do I use the sample code above to convert the collected 3D face mesh vertices to 2D image coordinates?
I would like to get the 3D mesh vertices together with their corresponding 2D coordinates.
Currently, I can capture the face mesh points like this: (screenshot omitted)
I would like to convert my mesh points to image coordinates and display them together, like this: (screenshot omitted)
Expected result: (screenshot omitted)
Any suggestions? Thanks in advance!
【Comments】:
【Answer 1】: Maybe you can use the projectPoint function of SCNSceneRenderer.
extension ARFaceAnchor {
    // Struct to store the 3D vertex and the 2D projection point
    struct VerticesAndProjection {
        var vertex: SIMD3<Float>
        var projected: CGPoint
    }

    // Return a struct with vertices and projection
    func verticeAndProjection(to view: ARSCNView) -> [VerticesAndProjection] {
        let points = geometry.vertices.compactMap { (vertex) -> VerticesAndProjection? in
            let col = SIMD4<Float>(SCNVector4())
            let pos = SIMD4<Float>(SCNVector4(vertex.x, vertex.y, vertex.z, 1))
            let pworld = transform * simd_float4x4(col, col, col, pos)
            let vect = view.projectPoint(SCNVector3(pworld.position.x, pworld.position.y, pworld.position.z))
            let p = CGPoint(x: CGFloat(vect.x), y: CGFloat(vect.y))
            return VerticesAndProjection(vertex: vertex, projected: p)
        }
        return points
    }
}
Here is a convenient way to get the position:
extension matrix_float4x4 {
    /// Get the position of the transform matrix.
    public var position: SCNVector3 {
        get {
            return SCNVector3(self[3][0], self[3][1], self[3][2])
        }
    }
}
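As an aside, the matrix-times-matrix construction (`simd_float4x4(col, col, col, pos)`) plus the `position` helper can arguably be collapsed into a single matrix-vector multiply. A possible simplification of the body of the `compactMap` closure, under the same assumptions as the answer's code (`transform` is the anchor transform, `view` the ARSCNView):

```swift
// Transform the anchor-local vertex to world space with one
// matrix-vector multiply, then project it to view coordinates.
let pworld = transform * SIMD4<Float>(vertex.x, vertex.y, vertex.z, 1)
let vect = view.projectPoint(SCNVector3(pworld.x, pworld.y, pworld.z))
let p = CGPoint(x: CGFloat(vect.x), y: CGFloat(vect.y))
```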
If you want to check that the projection is correct, add a debug subview to the ARSCNView instance, and then use a couple of other extensions to draw the 2D points on the view, e.g.:
extension UIView {
    private struct drawCircleProperty {
        static let circleFillColor = UIColor.green
        static let circleStrokeColor = UIColor.black
        static let circleRadius: CGFloat = 3.0
    }

    func drawCircle(point: CGPoint) {
        let circlePath = UIBezierPath(arcCenter: point, radius: drawCircleProperty.circleRadius, startAngle: CGFloat(0), endAngle: CGFloat(Double.pi * 2.0), clockwise: true)
        let shapeLayer = CAShapeLayer()
        shapeLayer.path = circlePath.cgPath
        shapeLayer.fillColor = drawCircleProperty.circleFillColor.cgColor
        shapeLayer.strokeColor = drawCircleProperty.circleStrokeColor.cgColor
        self.layer.addSublayer(shapeLayer)
    }

    func drawCircles(points: [CGPoint]) {
        self.clearLayers()
        for point in points {
            self.drawCircle(point: point)
        }
    }

    func clearLayers() {
        if let subLayers = self.layer.sublayers {
            for subLayer in subLayers {
                subLayer.removeFromSuperlayer()
            }
        }
    }
}
You can then compute the projection and draw the points with:
let points: [ARFaceAnchor.VerticesAndProjection] = faceAnchor.verticeAndProjection(to: sceneView)

// Keep only the projected points
let projected = points.map { $0.projected }

// Draw the points!
self.debugView?.drawCircles(points: projected)
I can see all the 3D vertices projected onto the 2D screen (image generated by https://thispersondoesnotexist.com).
I added this code to the Apple demo project, available here: https://github.com/hugoliv/projectvertices.git
【Discussion】:
Thanks for your reply! For the line let vect = view.projectPoint(SCNVector3(pworld.position.x, pworld.position.y, pworld.position.z)) I get the error: Value of type 'float4x4' (aka 'simd_float4x4') has no member 'position'.
Ah yes, as I said in the post, I gave you a convenient way to extract the position from a simd_float4x4. Don't forget to copy and paste the following extension: extension matrix_float4x4 { /// Get the position of the transform matrix. public var position: SCNVector3 { get { return SCNVector3(self[3][0], self[3][1], self[3][2]) } } }
Thanks! I am still wondering about the points projected to 2D coordinates. How can I verify that the 2D data is correct and valid?
What do you mean by «correct and valid»?
My question is: with projectPoint, what is the 2D output actually? Pixel coordinates? I am able to get the points, but I am not quite sure whether they are valid and actually correspond to the 3D vertices: ***.com/questions/67305259/…
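On the «pixel coordinates» question: SCNSceneRenderer.projectPoint returns coordinates in the view's point system, not pixels of the captured camera image. One way to map a projected view point back to captured-image pixel coordinates is to invert ARKit's display transform. A hedged sketch (assuming `frame` is the current ARFrame, `viewportSize` the view size, and `p` a projected CGPoint in view points):

```swift
import ARKit

// Map a projected view point back to captured-image pixel coordinates.
func viewPointToImagePixel(_ p: CGPoint,
                           frame: ARFrame,
                           viewportSize: CGSize) -> CGPoint {
    // Normalize the view point to [0,1] x [0,1].
    let normalizedView = CGPoint(x: p.x / viewportSize.width,
                                 y: p.y / viewportSize.height)
    // displayTransform maps normalized image coordinates to normalized
    // view coordinates; invert it to go the other way.
    let toImage = frame.displayTransform(for: .portrait,
                                         viewportSize: viewportSize).inverted()
    let normalizedImage = normalizedView.applying(toImage)
    // Scale up to the captured image's pixel dimensions.
    let imageSize = CVImageBufferGetDisplaySize(frame.capturedImage)
    return CGPoint(x: normalizedImage.x * imageSize.width,
                   y: normalizedImage.y * imageSize.height)
}
```

A quick sanity check is to sample the resulting pixels in frame.capturedImage and confirm they land on the face region.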