Resize and compress an image before uploading it to Google Cloud Storage
I have written a function that resizes an image to a maximum width and height while preserving the aspect ratio. It also compresses the image based on a compressionQuality parameter. I tested it with a 3024x4032 JPEG at 11.7 MB, using:
maxWidth = 800px
maxHeight = 1200px
compressionQuality = 0.5
The function does reduce the file size from 11.7 MB to 0.51 MB, but the width and height are not reduced correctly. After uploading to Firebase, the image dimensions are 1600x2134px, twice what they should be (800x1066px).

Can you spot the problem?
import UIKit
import Foundation

class ImageEdit {

    static let instance = ImageEdit()

    func resizeAndCompressImageWith(image: UIImage, maxWidth: CGFloat, maxHeight: CGFloat, compressionQuality: CGFloat) -> Data? {
        let horizontalRatio = maxWidth / image.size.width
        let verticalRatio = maxHeight / image.size.height
        let ratio = min(horizontalRatio, verticalRatio)
        let newSize = CGSize(width: image.size.width * ratio, height: image.size.height * ratio)
        var newImage: UIImage

        if #available(iOS 10.0, *) {
            let renderFormat = UIGraphicsImageRendererFormat.default()
            renderFormat.opaque = false
            let renderer = UIGraphicsImageRenderer(size: CGSize(width: newSize.width, height: newSize.height), format: renderFormat)
            newImage = renderer.image { (context) in
                image.draw(in: CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height))
            }
        } else {
            UIGraphicsBeginImageContextWithOptions(CGSize(width: newSize.width, height: newSize.height), true, 0)
            image.draw(in: CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height))
            newImage = UIGraphicsGetImageFromCurrentImageContext()!
            UIGraphicsEndImageContext()
        }

        let data = UIImageJPEGRepresentation(newImage, compressionQuality)
        return data
    }
}
Here is the code that uploads the image to Firebase:
func uploadImageToFirebaseAndReturnImageURL(directory: String, image: UIImage!, maxWidth: CGFloat, maxHeight: CGFloat, compressionQuality: CGFloat, handler: @escaping (_ imageURL: String) -> ()) {
    let imageName = NSUUID().uuidString // create a unique image name
    if let uploadData = ImageEdit.instance.resizeAndCompressImageWith(image: image, maxWidth: maxWidth, maxHeight: maxHeight, compressionQuality: compressionQuality) {
        DB_STORE.child(directory).child(imageName).putData(uploadData, metadata: nil, completion: { (metadata, error) in
            if error != nil {
                print(error ?? "Image upload failed for unknown reason")
                return
            }
            // if the URL exists, return imageURL
            if let imageURL = metadata?.downloadURL()?.absoluteString {
                handler(imageURL)
            }
            return
        })
    }
}
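For reference, a call site for this upload function might look like the following. This is a minimal sketch, not part of the original question: the "profileImages" directory name and the profileImage variable are hypothetical, and DB_STORE is assumed to already point at your configured Firebase Storage root.

    // Hypothetical call site: "profileImages" and profileImage are
    // placeholders; DB_STORE must be a configured Storage reference.
    uploadImageToFirebaseAndReturnImageURL(directory: "profileImages",
                                           image: profileImage,
                                           maxWidth: 800.0,
                                           maxHeight: 1200.0,
                                           compressionQuality: 0.5) { imageURL in
        // Called only after a successful upload
        print("Uploaded image available at: \(imageURL)")
    }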
I copied your code into a test project, added some print statements, and tried resizing an image with an original size of 3360x2108. (Note: I use force unwrapping in this test code, which is not recommended for production code.)

Here is the function that calls the resize code:
func resizeImage() {
    guard let image = UIImage.init(named: "landscape") else {
        return
    }
    print("Original Image Size: width: \(image.size.width) height: \(image.size.height)")
    let _ = ImageEdit.instance.resizeAndCompressImageWith(image: image, maxWidth: 800.0, maxHeight: 1200.0, compressionQuality: 0.5)
}
Here is an updated version of your resize code. I simply added some code at the end that instantiates a few image instances and logs their actual sizes after conversion:
import UIKit

class ImageEdit {

    static let instance = ImageEdit()

    func resizeAndCompressImageWith(image: UIImage, maxWidth: CGFloat, maxHeight: CGFloat, compressionQuality: CGFloat) -> Data? {
        let horizontalRatio = maxWidth / image.size.width
        let verticalRatio = maxHeight / image.size.height
        let ratio = min(horizontalRatio, verticalRatio)
        print("Image Ratio: \(ratio)")
        let newSize = CGSize(width: image.size.width * ratio, height: image.size.height * ratio)
        print("NewSize: \(String(describing: newSize))")
        var newImage: UIImage

        if #available(iOS 10.0, *) {
            print("UIGraphicsImageRendererFormat")
            let renderFormat = UIGraphicsImageRendererFormat.default()
            renderFormat.opaque = false
            let renderer = UIGraphicsImageRenderer(size: CGSize(width: newSize.width, height: newSize.height), format: renderFormat)
            newImage = renderer.image { (context) in
                image.draw(in: CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height))
            }
        } else {
            print("UIGraphicsBeginImageContextWithOptions")
            UIGraphicsBeginImageContextWithOptions(CGSize(width: newSize.width, height: newSize.height), true, 0)
            image.draw(in: CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height))
            newImage = UIGraphicsGetImageFromCurrentImageContext()!
            UIGraphicsEndImageContext()
        }

        print("NewImageSize: width: \(newImage.size.width) height: \(newImage.size.height)")

        let png = UIImagePNGRepresentation(newImage)
        let pngImg = UIImage.init(data: png!)!
        print("PNG - width: \(pngImg.size.width) - height: \(pngImg.size.height)")

        let data = UIImageJPEGRepresentation(newImage, compressionQuality)
        let jpgImg = UIImage.init(data: data!)!
        print("JPG - width: \(jpgImg.size.width) - height: \(jpgImg.size.height)")

        let fullData = UIImageJPEGRepresentation(newImage, 1.0)
        let jpgFull = UIImage.init(data: fullData!)!
        print("JPG FULL - width: \(jpgFull.size.width) - height: \(jpgFull.size.height)")

        return data
    }
}
Running this on an iOS 11 simulator, I see the following in the debugger:
Original Image Size: width: 3360.0 height: 2108.0
Image Ratio: 0.238095238095238
NewSize: (800.0, 501.904761904762)
UIGraphicsImageRendererFormat
NewImageSize: width: 800.0 height: 502.0
PNG - width: 2400.0 - height: 1506.0
JPG - width: 2400.0 - height: 1506.0
JPG FULL - width: 2400.0 - height: 1506.0
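One way to make sense of these numbers (a sketch, not part of the original test) is to log the image's scale alongside its point size: a UIImage reports its size in points, while the backing CGImage stores pixels, and the pixel dimensions equal the point size multiplied by the scale. An 800pt-wide image at scale 3.0 is therefore backed by 2400 pixels.

    // UIImage.size is in points; the backing CGImage is in pixels.
    // pixelWidth == size.width * scale, so 800pt at scale 3.0 -> 2400px.
    func logDimensions(of image: UIImage) {
        print("points: \(image.size.width) x \(image.size.height)")
        print("scale:  \(image.scale)")
        if let cg = image.cgImage {
            print("pixels: \(cg.width) x \(cg.height)")
        }
    }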
If I comment out your

if #available(iOS 10.0, *) {

block, I still see the same measurements.
So it seems that newImage, generated directly from either

UIGraphicsImageRendererFormat

or

UIGraphicsBeginImageContextWithOptions

comes out at the specified size. But for some reason, running that image through

UIImagePNGRepresentation(newImage)

or

UIImageJPEGRepresentation(newImage, compressionQuality)

results in an image 3x larger than the specified size. Even if I update

UIGraphicsBeginImageContextWithOptions(size, false, 0.0)

to

UIGraphicsBeginImageContextWithOptions(size, false, UIScreen.main.scale)

it doesn't seem to matter.
In my test case, UIScreen.main.scale = 3.0.
So converting the image via the PNG or JPEG representation methods appears to multiply its size by UIScreen.main.scale to get the final image size those functions produce.
I am implementing the same functionality for my project, and I believe the result you are currently getting is the expected one. Since neither UIGraphicsImageRendererFormat.scale nor the scale parameter of UIGraphicsBeginImageContextWithOptions is set explicitly, the device's main screen scale is used. By setting the scale to 1.0, you can pass the desired CGSize into these methods and get a UIImage, and UIImagePNG/JPEGRepresentation output, of the size you want.
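Following that explanation, a corrected version of the resize routine could pin the scale to 1.0 in both branches. This is a sketch of the suggested change, not the asker's final code; the function name resizeAtScaleOne is made up for illustration:

    import UIKit

    // Sketch: same resize logic as above, but with the rendering scale
    // forced to 1.0 so the output's pixel size matches newSize exactly.
    func resizeAtScaleOne(image: UIImage, maxWidth: CGFloat, maxHeight: CGFloat, compressionQuality: CGFloat) -> Data? {
        let ratio = min(maxWidth / image.size.width, maxHeight / image.size.height)
        let newSize = CGSize(width: image.size.width * ratio, height: image.size.height * ratio)

        if #available(iOS 10.0, *) {
            let renderFormat = UIGraphicsImageRendererFormat.default()
            renderFormat.opaque = false
            renderFormat.scale = 1.0 // the fix: do not inherit UIScreen.main.scale
            let renderer = UIGraphicsImageRenderer(size: newSize, format: renderFormat)
            let newImage = renderer.image { _ in
                image.draw(in: CGRect(origin: .zero, size: newSize))
            }
            return UIImageJPEGRepresentation(newImage, compressionQuality)
        } else {
            // scale 1.0 instead of 0 (0 means "use the screen scale")
            UIGraphicsBeginImageContextWithOptions(newSize, true, 1.0)
            defer { UIGraphicsEndImageContext() }
            image.draw(in: CGRect(origin: .zero, size: newSize))
            guard let newImage = UIGraphicsGetImageFromCurrentImageContext() else { return nil }
            return UIImageJPEGRepresentation(newImage, compressionQuality)
        }
    }

With this change, passing the question's 3024x4032 input with maxWidth 800 and maxHeight 1200 should yield a JPEG whose pixel dimensions are 800x1066, regardless of the device's screen scale.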