Java Screenshot Optimization
Posted by gg22g2
Editor's note: this article was compiled by the editors at 小常识网 (cha138.com). It covers Java screenshot optimization and will hopefully be of some reference value to you.
I need to take screenshots in Java and hand them to OpenCV for processing. The usual way to capture the screen in Java is:
Robot robot = new Robot();
BufferedImage screenCapture = robot.createScreenCapture(new Rectangle(0, 0, 1920, 1080));
On my machine, however, a full-screen capture takes around 50 ms, and when I shrink the capture region the time drops roughly in proportion to the area. That made me want to find out why.
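To get stable numbers like the 50 ms above, a small timing harness helps. This is a sketch of my own (the `timeMillis` helper and the warm-up loop are not from the original post); `Robot` needs a non-headless environment, so the measured task here is a stand-in array fill — on a real desktop, swap in `robot.createScreenCapture(new Rectangle(0, 0, w, h))` for a few different region sizes to see the roughly linear scaling.

```java
import java.util.Arrays;

public class CaptureTiming {

    // Runs the task once and returns the elapsed wall-clock time in milliseconds.
    static long timeMillis(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        // Stand-in workload: filling a 1920x1080 int[] frame buffer.
        int[] frame = new int[1920 * 1080];
        for (int i = 0; i < 3; i++) {          // warm up the JIT first
            timeMillis(() -> Arrays.fill(frame, 0xFF00FF));
        }
        long elapsed = timeMillis(() -> Arrays.fill(frame, 0x00FF00));
        System.out.println("fill took " + elapsed + " ms");
    }
}
```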
Stepping into createScreenCapture, the work ends up in Robot's private createCompatibleImage method (comments translated, with my notes on the hot spots inline):

private synchronized BufferedImage[]
        createCompatibleImage(Rectangle screenRect, boolean isHiDPI) {

    checkScreenCaptureAllowed();
    checkValidRect(screenRect);

    BufferedImage lowResolutionImage;
    BufferedImage highResolutionImage;
    DataBufferInt buffer;
    WritableRaster raster;
    BufferedImage[] imageArray;

    // The capture API returns pixels as an int[], so this determines
    // which bits hold the R, G and B values
    if (screenCapCM == null) {
        /*
         * Fix for 4285201
         * Create a DirectColorModel equivalent to the default RGB ColorModel,
         * except with no Alpha component.
         */
        screenCapCM = new DirectColorModel(24,
                /* red mask */   0x00FF0000,
                /* green mask */ 0x0000FF00,
                /* blue mask */  0x000000FF);
    }

    int[] bandmasks = new int[3];
    bandmasks[0] = screenCapCM.getRedMask();
    bandmasks[1] = screenCapCM.getGreenMask();
    bandmasks[2] = screenCapCM.getBlueMask();

    // My guess: flush pending rendering so we don't capture a stale frame
    Toolkit.getDefaultToolkit().sync();

    GraphicsConfiguration gc = GraphicsEnvironment
            .getLocalGraphicsEnvironment()
            .getDefaultScreenDevice().
            getDefaultConfiguration();
    gc = SunGraphicsEnvironment.getGraphicsConfigurationAtPoint(
            gc, screenRect.getCenterX(), screenRect.getCenterY());

    AffineTransform tx = gc.getDefaultTransform();
    double uiScaleX = tx.getScaleX();
    double uiScaleY = tx.getScaleY();
    int[] pixels;

    // On my machine uiScaleX and uiScaleY are both 1.x, so the else branch runs
    if (uiScaleX == 1 && uiScaleY == 1) {
        pixels = peer.getRGBPixels(screenRect);
        buffer = new DataBufferInt(pixels, pixels.length);

        bandmasks[0] = screenCapCM.getRedMask();
        bandmasks[1] = screenCapCM.getGreenMask();
        bandmasks[2] = screenCapCM.getBlueMask();

        raster = Raster.createPackedRaster(buffer, screenRect.width,
                screenRect.height, screenRect.width, bandmasks, null);
        SunWritableRaster.makeTrackable(buffer);

        highResolutionImage = new BufferedImage(screenCapCM, raster,
                false, null);
        imageArray = new BufferedImage[1];
        imageArray[0] = highResolutionImage;
    } else {
        Rectangle scaledRect;
        if (peer.useAbsoluteCoordinates()) {
            scaledRect = toDeviceSpaceAbs(gc, screenRect.x,
                    screenRect.y, screenRect.width, screenRect.height);
        } else {
            scaledRect = toDeviceSpace(gc, screenRect.x,
                    screenRect.y, screenRect.width, screenRect.height);
        }
        // The native screenshot call happens here; this is one hot spot
        pixels = peer.getRGBPixels(scaledRect);
        // Build the objects that interpret the data in pixels
        buffer = new DataBufferInt(pixels, pixels.length);
        raster = Raster.createPackedRaster(buffer, scaledRect.width,
                scaledRect.height, scaledRect.width, bandmasks, null);
        SunWritableRaster.makeTrackable(buffer);

        highResolutionImage = new BufferedImage(screenCapCM, raster,
                false, null);

        // Roughly: downscale the high-resolution image to produce the
        // low-resolution one; this drawImage call is also fairly expensive
        lowResolutionImage = new BufferedImage(screenRect.width,
                screenRect.height, highResolutionImage.getType());
        Graphics2D g = lowResolutionImage.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g.setRenderingHint(RenderingHints.KEY_RENDERING,
                RenderingHints.VALUE_RENDER_QUALITY);
        g.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                RenderingHints.VALUE_ANTIALIAS_ON);
        g.drawImage(highResolutionImage, 0, 0,
                screenRect.width, screenRect.height,
                0, 0, scaledRect.width, scaledRect.height, null);
        g.dispose();

        if (!isHiDPI) {
            imageArray = new BufferedImage[1];
            imageArray[0] = lowResolutionImage;
        } else {
            imageArray = new BufferedImage[2];
            imageArray[0] = lowResolutionImage;
            imageArray[1] = highResolutionImage;
        }
    }
    return imageArray;
}
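The masks above mean each pixel is packed as 0x00RRGGBB in a single int. A quick, self-contained check (my own sketch, not part of the JDK source) of how that DirectColorModel reads components out of such a value:

```java
import java.awt.image.DirectColorModel;

public class PackedPixelDemo {
    public static void main(String[] args) {
        // The same 24-bit, alpha-less model the JDK builds for screen captures
        DirectColorModel cm = new DirectColorModel(24,
                0x00FF0000,   // red mask
                0x0000FF00,   // green mask
                0x000000FF);  // blue mask

        int pixel = (0x11 << 16) | (0x22 << 8) | 0x33;  // r=0x11, g=0x22, b=0x33

        System.out.println(cm.getRed(pixel));    // 17
        System.out.println(cm.getGreen(pixel));  // 34
        System.out.println(cm.getBlue(pixel));   // 51
    }
}
```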
I benchmarked pixels = peer.getRGBPixels(scaledRect); on its own: it takes 26 ms, which means the remaining operations are where the optimization potential lies.
My first idea was to check whether OpenCV has anything that can consume pixels where a single int stores one pixel's RGB values, but apparently it does not. So I tried unpacking pixels by hand to extract the r, g, b components:
// Capture
pixels = peer.getRGBPixels(screenRect);

// Unpack the pixels
int length = screenRect.width * screenRect.height;
byte[] imgBytes = new byte[length * 3];
int byteIndex = 0;
for (int i = 0, pixel = 0; i < length; i++) {
    pixel = pixels[i];
    // Each pixel is packed in RGB order, but OpenCV defaults to BGR order
    imgBytes[byteIndex++] = (byte) (pixel);
    pixel = pixel >> 8;
    imgBytes[byteIndex++] = (byte) (pixel);
    imgBytes[byteIndex++] = (byte) (pixel >> 8);
}
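A quick sanity check of that byte order on a known pixel value (my own test harness, not from the post — the loop body is the same unpacking as above, wrapped in a helper method):

```java
public class BgrUnpackCheck {
    // One packed 0x00RRGGBB int per pixel -> three bytes in BGR order
    static byte[] toBgr(int[] pixels) {
        byte[] imgBytes = new byte[pixels.length * 3];
        int byteIndex = 0;
        for (int i = 0; i < pixels.length; i++) {
            int pixel = pixels[i];
            imgBytes[byteIndex++] = (byte) pixel;         // B
            pixel = pixel >> 8;
            imgBytes[byteIndex++] = (byte) pixel;         // G
            imgBytes[byteIndex++] = (byte) (pixel >> 8);  // R
        }
        return imgBytes;
    }

    public static void main(String[] args) {
        // r=0x11, g=0x22, b=0x33 -> expect bytes 0x33, 0x22, 0x11 (BGR)
        byte[] bgr = toBgr(new int[]{0x112233});
        System.out.printf("%02x %02x %02x%n", bgr[0], bgr[1], bgr[2]);  // 33 22 11
    }
}
```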
In my tests this unpacking takes only 3~4 ms, and the resulting byte[] can then simply be handed to a Mat. End to end, capturing one frame and passing it to OpenCV takes 30 ms.
Finally, I tried it with the screen content changing continuously: one capture-plus-transfer cycle takes 35 ms.