Custom Camera and Crop Image in Swift
Posted: 2018-03-15 15:00:48

【Question】I built a custom camera and implemented the code below to crop the captured image. I display a guide overlay in the preview layer, and I want to crop out exactly the region of the image that appears inside that guide.
func imageByCropToRect(rect: CGRect, scale: Bool) -> UIImage {
    var rect = rect
    var scaleFactor: CGFloat = 1.0
    if scale {
        scaleFactor = self.scale
        rect.origin.x *= scaleFactor
        rect.origin.y *= scaleFactor
        rect.size.width *= scaleFactor
        rect.size.height *= scaleFactor
    }
    var image: UIImage? = nil
    if rect.size.width > 0 && rect.size.height > 0 {
        let imageRef = self.cgImage!.cropping(to: rect)
        image = UIImage(cgImage: imageRef!, scale: scaleFactor, orientation: self.imageOrientation)
    }
    return image!
}
This code works fine and produces an exact crop when the line below is commented out. However, I want the camera stream to fill the whole screen, so I have to use that line — and with it, the cropped image comes out slightly zoomed in.
(self.previewLayer as! AVCaptureVideoPreviewLayer).videoGravity = AVLayerVideoGravity.resizeAspectFill
How can I fix this? Is the crop code wrong?
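For reference, the mismatch can be reasoned about as follows: with `.resizeAspectFill`, the preview layer scales the image by `max(viewW/imageW, viewH/imageH)` and center-crops it, so a rect taken from view coordinates must be unscaled and offset before being applied to the image. A minimal sketch of that mapping (the function name and the center-crop assumption are mine, not from the question's code):

```swift
import Foundation

// Map a rect expressed in the preview view's coordinates into the
// underlying image's coordinates, assuming the view displays the image
// with aspect-fill (scale to cover, then center-crop).
func aspectFillCropRect(guide: CGRect, viewSize: CGSize, imageSize: CGSize) -> CGRect {
    // Aspect-fill uses the larger of the two scale factors.
    let scale = max(viewSize.width / imageSize.width,
                    viewSize.height / imageSize.height)
    // How much of the scaled image hangs outside the view on each axis.
    let xOffset = (imageSize.width * scale - viewSize.width) / 2
    let yOffset = (imageSize.height * scale - viewSize.height) / 2
    // Undo the offset and the scaling to get back into image space.
    return CGRect(x: (guide.minX + xOffset) / scale,
                  y: (guide.minY + yOffset) / scale,
                  width: guide.width / scale,
                  height: guide.height / scale)
}
```

For example, a 100×100 image filling a 100×200 view is scaled ×2 and loses 50 scaled points on each horizontal side, so a guide covering the whole view maps back to `CGRect(x: 25, y: 0, width: 50, height: 100)` in image coordinates.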
Here is the full class code:
import UIKit
import AVFoundation

class CameraViewController: UIViewController {

    @IBOutlet weak var guideImageView: UIImageView!
    @IBOutlet weak var guidesView: UIView!
    @IBOutlet weak var cameraPreviewView: UIView!
    @IBOutlet weak var cameraButtonView: UIView!
    @IBOutlet weak var captureButton: UIButton!

    var captureSession = AVCaptureSession()
    var previewLayer: CALayer!
    var captureDevice: AVCaptureDevice!

    /// This will be true when the user taps the capture button.
    var takePhoto = false

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        captureSession = AVCaptureSession()
        previewLayer = CALayer()
        takePhoto = false
        requestAuthorization()
    }

    private func userinteractionToButton(_ interaction: Bool) {
        captureButton.isEnabled = interaction
    }

    /// Requests camera authorization; if authorized, starts the camera.
    private func requestAuthorization() {
        switch AVCaptureDevice.authorizationStatus(for: AVMediaType.video) {
        case .authorized:
            prepareCamera()
        case .denied, .restricted, .notDetermined:
            AVCaptureDevice.requestAccess(for: AVMediaType.video, completionHandler: { granted in
                // The completion handler may arrive on a background thread,
                // so always hop to the main thread before touching UI.
                DispatchQueue.main.async {
                    if granted {
                        self.prepareCamera()
                    } else {
                        let alert = UIAlertController(title: "unable_to_access_the_Camera",
                                                      message: "to_enable_access_go_to_setting_privacy_camera_and_turn_on_camera_access_for_this_app",
                                                      preferredStyle: UIAlertControllerStyle.alert)
                        alert.addAction(UIAlertAction(title: "ok", style: .default, handler: { _ in
                            self.navigationController?.popToRootViewController(animated: true)
                        }))
                        self.present(alert, animated: true, completion: nil)
                    }
                }
            })
        }
    }

    /// Checks whether the primary camera is available; if found, assigns
    /// the available device to the AVCaptureDevice.
    private func prepareCamera() {
        // Resets the session.
        self.captureSession.sessionPreset = AVCaptureSession.Preset.photo
        if #available(iOS 10.0, *) {
            let availableDevices = AVCaptureDevice.DiscoverySession(
                deviceTypes: [AVCaptureDevice.DeviceType.builtInWideAngleCamera],
                mediaType: AVMediaType.video,
                position: .back).devices
            self.assignCamera(availableDevices)
        } else {
            // Fallback on earlier versions; still needs testing on iOS 8.
            if let availableDevice = AVCaptureDevice.default(for: AVMediaType.video) {
                self.assignCamera([availableDevice])
            } else {
                self.showAlert()
            }
        }
    }

    /// Assigns the AVCaptureDevice to the respective variable and begins the session.
    ///
    /// - Parameter availableDevices: [AVCaptureDevice]
    private func assignCamera(_ availableDevices: [AVCaptureDevice]) {
        if let device = availableDevices.first {
            captureDevice = device
            beginSession()
        } else {
            self.showAlert()
        }
    }

    /// Configures the camera settings and begins the session; this function
    /// is responsible for showing the live image on the UI.
    private func beginSession() {
        do {
            let captureDeviceInput = try AVCaptureDeviceInput(device: captureDevice)
            captureSession.addInput(captureDeviceInput)
        } catch {
            print(error.localizedDescription)
        }

        let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        self.previewLayer = previewLayer
        self.cameraPreviewView.layer.addSublayer(self.previewLayer)
        self.previewLayer.frame = self.view.layer.frame
        self.previewLayer.frame.origin.y = self.cameraPreviewView.frame.origin.y
        (self.previewLayer as! AVCaptureVideoPreviewLayer).videoGravity = AVLayerVideoGravity.resizeAspectFill
        self.previewLayer.masksToBounds = true
        self.cameraPreviewView.clipsToBounds = true
        captureSession.startRunning()

        self.view.bringSubview(toFront: self.cameraPreviewView)
        self.view.bringSubview(toFront: self.cameraButtonView)
        self.view.bringSubview(toFront: self.guidesView)

        let dataOutput = AVCaptureVideoDataOutput()
        dataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32BGRA)]
        dataOutput.alwaysDiscardsLateVideoFrames = true
        if captureSession.canAddOutput(dataOutput) {
            captureSession.addOutput(dataOutput)
        }
        captureSession.commitConfiguration()

        let queue = DispatchQueue(label: "com.letsappit.camera")
        dataOutput.setSampleBufferDelegate(self, queue: queue)
        self.userinteractionToButton(true)
    }

    /// Gets a UIImage from the given CMSampleBuffer.
    ///
    /// - Parameter buffer: CMSampleBuffer
    /// - Returns: UIImage?
    func getImageFromSampleBuffer(buffer: CMSampleBuffer, orientation: UIImageOrientation) -> UIImage? {
        if let pixelBuffer = CMSampleBufferGetImageBuffer(buffer) {
            let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
            let context = CIContext()
            let imageRect = CGRect(x: 0, y: 0,
                                   width: CVPixelBufferGetWidth(pixelBuffer),
                                   height: CVPixelBufferGetHeight(pixelBuffer))
            if let image = context.createCGImage(ciImage, from: imageRect) {
                return UIImage(cgImage: image, scale: UIScreen.main.scale, orientation: orientation)
            }
        }
        return nil
    }

    /// Destroys the capture session.
    func stopCaptureSession() {
        self.captureSession.stopRunning()
        if let inputs = captureSession.inputs as? [AVCaptureDeviceInput] {
            for input in inputs {
                self.captureSession.removeInput(input)
            }
        }
    }

    func showAlert() {
        let alert = UIAlertController(title: "Unable to access the camera",
                                      message: "It appears that either your device doesn't have a camera or it is broken",
                                      preferredStyle: .alert)
        alert.addAction(UIAlertAction(title: "cancel", style: .cancel, handler: { _ in
            self.navigationController?.dismiss(animated: true, completion: nil)
        }))
        self.present(alert, animated: true, completion: nil)
    }

    @IBAction func didTapClick(_ sender: Any) {
        userinteractionToButton(false)
        takePhoto = true
    }

    override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
        if segue.identifier == "showImage" {
            let vc = segue.destination as! ShowImageViewController
            vc.image = sender as! UIImage
        }
    }
}

extension CameraViewController: AVCaptureVideoDataOutputSampleBufferDelegate {

    func captureOutput(_ captureOutput: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        if connection.isVideoOrientationSupported {
            connection.videoOrientation = .portrait
        }
        if takePhoto {
            takePhoto = false

            // Rotation should be unlocked for this to work.
            var orientation = UIImageOrientation.up
            switch UIDevice.current.orientation {
            case .landscapeLeft:
                orientation = .left
            case .landscapeRight:
                orientation = .right
            case .portraitUpsideDown:
                orientation = .down
            default:
                orientation = .up
            }

            if let image = self.getImageFromSampleBuffer(buffer: sampleBuffer, orientation: orientation) {
                DispatchQueue.main.async {
                    let newImage = image.imageByCropToRect(rect: self.guideImageView.frame, scale: true)
                    self.stopCaptureSession()
                    self.previewLayer.removeFromSuperlayer()
                    self.performSegue(withIdentifier: "showImage", sender: newImage)
                }
            }
        }
    }
}
Here is the view hierarchy diagram.
【Comments】

Keep this in mind in your project: openradar.me/36292067
@Scriptable Please take a look at my code — I am not using UIImagePickerControllerEditedImage.
It's best to explain as much as possible here; I shouldn't have to click somewhere else to understand your question.
@Scriptable I built the camera with AVCaptureSession, CALayer, and AVCaptureDevice, stream the video onto the preview layer, and grab the image from the CMSampleBuffer once the button is pressed. If you have some free time, you could browse the project. Thanks in advance.

【Answer 1】It's not yet clear where the problem lies. I would use the debugger or some print statements to determine whether the issue is the image itself or the view displaying it. Print out the cropped image's size to make sure it is correct.
Then, in ShowImageViewController's viewDidAppear, print out the image view's size and make sure that is correct too.
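Concretely, something along these lines at each hand-off point (the `imageView` name in the last line is an assumption; the other names come from the question's code):

```swift
// Debug sketch: check sizes at each stage before blaming the crop.
// Inside the capture callback, after cropping:
print("guide frame (layer coords):", self.guideImageView.frame)
print("captured size:", image.size, "scale:", image.scale)
print("cropped size:", newImage.size)
// And in ShowImageViewController.viewDidAppear:
print("image view bounds:", imageView.bounds, "contentMode:", imageView.contentMode.rawValue)
```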
【Discussion】

OK, I will try the debugger and look at the view hierarchy there.

【Answer 2】To correct the shrunken cropped image, change your crop function to the following, which takes the image orientation into account:
func croppedInRect(rect: CGRect) -> UIImage? {
    func rad(_ degree: Double) -> CGFloat {
        return CGFloat(degree / 180.0 * .pi)
    }

    var rectTransform: CGAffineTransform
    switch imageOrientation {
    case .left:
        rectTransform = CGAffineTransform(rotationAngle: rad(90)).translatedBy(x: 0, y: -self.size.height)
    case .right:
        rectTransform = CGAffineTransform(rotationAngle: rad(-90)).translatedBy(x: -self.size.width, y: 0)
    case .down:
        rectTransform = CGAffineTransform(rotationAngle: rad(-180)).translatedBy(x: -self.size.width, y: -self.size.height)
    default:
        rectTransform = .identity
    }
    rectTransform = rectTransform.scaledBy(x: self.scale, y: self.scale)

    var cgImage = self.cgImage
    if cgImage == nil {
        // Some UIImages (e.g. ones backed by a CIImage) have no CGImage; render one first.
        let ciContext = CIContext()
        if let ciImage = self.ciImage {
            cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent)
        }
    }
    if let imageRef = cgImage?.cropping(to: rect.applying(rectTransform)) {
        let result = UIImage(cgImage: imageRef, scale: self.scale, orientation: self.imageOrientation)
        return result
    }
    return nil
}
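Since croppedInRect expects a rect in the image's own coordinate space, the guide frame shown on screen still has to be converted before calling it. A hedged usage sketch — `metadataOutputRectConverted(fromLayerRect:)` is the stock AVCaptureVideoPreviewLayer API for this, while the surrounding variable names are illustrative and taken from the question's code:

```swift
// Inside the capture callback, once `image` has been built from the buffer:
// map the guide's layer-space frame to a normalized (0...1) rect — this
// already accounts for the layer's videoGravity — then scale it up to the
// image's point size before cropping.
let layer = self.previewLayer as! AVCaptureVideoPreviewLayer
let normalized = layer.metadataOutputRectConverted(fromLayerRect: self.guideImageView.frame)
let pointRect = CGRect(x: normalized.minX * image.size.width,
                       y: normalized.minY * image.size.height,
                       width: normalized.width * image.size.width,
                       height: normalized.height * image.size.height)
let cropped = image.croppedInRect(rect: pointRect)
```

Note that `croppedInRect` multiplies the rect by `image.scale` internally, so the rect passed in should be in points, as computed above.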