How to pass the captured image (Surface View) to another fragment without saving the image locally?
【Posted】: 2018-12-05 00:57:58
【Question】: I currently have a camera app from googlesamples that captures an image and then saves it to local storage. What I want instead is to convert the captured image to a Bitmap and pass it to another view (fragment):
public class Camera2BasicFragment extends Fragment
        implements View.OnClickListener, ActivityCompat.OnRequestPermissionsResultCallback {

    /**
     * Conversion from screen rotation to JPEG orientation.
     */
    private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
    private static final int REQUEST_CAMERA_PERMISSION = 1;
    private static final String FRAGMENT_DIALOG = "dialog";

    static {
        ORIENTATIONS.append(Surface.ROTATION_0, 90);
        ORIENTATIONS.append(Surface.ROTATION_90, 0);
        ORIENTATIONS.append(Surface.ROTATION_180, 270);
        ORIENTATIONS.append(Surface.ROTATION_270, 180);
    }

    private static final String TAG = "Camera2BasicFragment";

    private static final int STATE_PREVIEW = 0;
    private static final int STATE_WAITING_LOCK = 1;
    private static final int STATE_WAITING_PRECAPTURE = 2;
    private static final int STATE_WAITING_NON_PRECAPTURE = 3;
    private static final int STATE_PICTURE_TAKEN = 4;
    private static final int MAX_PREVIEW_WIDTH = 1920;
    private static final int MAX_PREVIEW_HEIGHT = 1080;

    private final TextureView.SurfaceTextureListener mSurfaceTextureListener
            = new TextureView.SurfaceTextureListener() {

        @Override
        public void onSurfaceTextureAvailable(SurfaceTexture texture, int width, int height) {
            openCamera(width, height);
        }

        @Override
        public void onSurfaceTextureSizeChanged(SurfaceTexture texture, int width, int height) {
            configureTransform(width, height);
        }

        @Override
        public boolean onSurfaceTextureDestroyed(SurfaceTexture texture) {
            return true;
        }

        @Override
        public void onSurfaceTextureUpdated(SurfaceTexture texture) {
        }

    };
    private String mCameraId;
    private AutoFitTextureView mTextureView;
    private CameraCaptureSession mCaptureSession;
    private CameraDevice mCameraDevice;
    private Size mPreviewSize;

    private final CameraDevice.StateCallback mStateCallback = new CameraDevice.StateCallback() {

        @Override
        public void onOpened(@NonNull CameraDevice cameraDevice) {
            // This method is called when the camera is opened. We start camera preview here.
            mCameraOpenCloseLock.release();
            mCameraDevice = cameraDevice;
            createCameraPreviewSession();
        }

        @Override
        public void onDisconnected(@NonNull CameraDevice cameraDevice) {
            mCameraOpenCloseLock.release();
            cameraDevice.close();
            mCameraDevice = null;
        }

        @Override
        public void onError(@NonNull CameraDevice cameraDevice, int error) {
            mCameraOpenCloseLock.release();
            cameraDevice.close();
            mCameraDevice = null;
            Activity activity = getActivity();
            if (null != activity) {
                activity.finish();
            }
        }

    };
    private HandlerThread mBackgroundThread;
    private Handler mBackgroundHandler;
    private ImageReader mImageReader;
    private File mFile;

    private final ImageReader.OnImageAvailableListener mOnImageAvailableListener
            = new ImageReader.OnImageAvailableListener() {

        @Override
        public void onImageAvailable(ImageReader reader) {
            mBackgroundHandler.post(new ImageSaver(reader.acquireNextImage(), mFile));
        }

    };

    private CaptureRequest.Builder mPreviewRequestBuilder;
    private CaptureRequest mPreviewRequest;
    private int mState = STATE_PREVIEW;
    private Semaphore mCameraOpenCloseLock = new Semaphore(1);
    private boolean mFlashSupported;
    private int mSensorOrientation;

    private CameraCaptureSession.CaptureCallback mCaptureCallback
            = new CameraCaptureSession.CaptureCallback() {

        private void process(CaptureResult result) {
            switch (mState) {
                case STATE_PREVIEW: {
                    // We have nothing to do when the camera preview is working normally.
                    break;
                }
                case STATE_WAITING_LOCK: {
                    Integer afState = result.get(CaptureResult.CONTROL_AF_STATE);
                    if (afState == null) {
                        captureStillPicture();
                    } else if (CaptureResult.CONTROL_AF_STATE_FOCUSED_LOCKED == afState ||
                            CaptureResult.CONTROL_AF_STATE_NOT_FOCUSED_LOCKED == afState) {
                        // CONTROL_AE_STATE can be null on some devices
                        Integer aeState = result.get(CaptureResult.CONTROL_AE_STATE);
                        if (aeState == null ||
                                aeState == CaptureResult.CONTROL_AE_STATE_CONVERGED) {
                            mState = STATE_PICTURE_TAKEN;
                            captureStillPicture();
                        } else {
                            runPrecaptureSequence();
                        }
                    }
                    break;
                }
                case STATE_WAITING_PRECAPTURE: {
                    // CONTROL_AE_STATE can be null on some devices
                    Integer aeState = result.get(CaptureResult.CONTROL_AE_STATE);
                    if (aeState == null ||
                            aeState == CaptureResult.CONTROL_AE_STATE_PRECAPTURE ||
                            aeState == CaptureRequest.CONTROL_AE_STATE_FLASH_REQUIRED) {
                        mState = STATE_WAITING_NON_PRECAPTURE;
                    }
                    break;
                }
                case STATE_WAITING_NON_PRECAPTURE: {
                    // CONTROL_AE_STATE can be null on some devices
                    Integer aeState = result.get(CaptureResult.CONTROL_AE_STATE);
                    if (aeState == null || aeState != CaptureResult.CONTROL_AE_STATE_PRECAPTURE) {
                        mState = STATE_PICTURE_TAKEN;
                        captureStillPicture();
                    }
                    break;
                }
            }
        }

        @Override
        public void onCaptureProgressed(@NonNull CameraCaptureSession session,
                                        @NonNull CaptureRequest request,
                                        @NonNull CaptureResult partialResult) {
            process(partialResult);
        }

        @Override
        public void onCaptureCompleted(@NonNull CameraCaptureSession session,
                                       @NonNull CaptureRequest request,
                                       @NonNull TotalCaptureResult result) {
            process(result);
        }

    };
    private void showToast(final String text) {
        final Activity activity = getActivity();
        if (activity != null) {
            activity.runOnUiThread(new Runnable() {
                @Override
                public void run() {
                    Toast.makeText(activity, text, Toast.LENGTH_SHORT).show();
                }
            });
        }
    }

    private static Size chooseOptimalSize(Size[] choices, int textureViewWidth,
            int textureViewHeight, int maxWidth, int maxHeight, Size aspectRatio) {

        // Collect the supported resolutions that are at least as big as the preview Surface
        List<Size> bigEnough = new ArrayList<>();
        // Collect the supported resolutions that are smaller than the preview Surface
        List<Size> notBigEnough = new ArrayList<>();
        int w = aspectRatio.getWidth();
        int h = aspectRatio.getHeight();
        for (Size option : choices) {
            if (option.getWidth() <= maxWidth && option.getHeight() <= maxHeight &&
                    option.getHeight() == option.getWidth() * h / w) {
                if (option.getWidth() >= textureViewWidth &&
                        option.getHeight() >= textureViewHeight) {
                    bigEnough.add(option);
                } else {
                    notBigEnough.add(option);
                }
            }
        }

        // Pick the smallest of those big enough. If there is no one big enough, pick the
        // largest of those not big enough.
        if (bigEnough.size() > 0) {
            return Collections.min(bigEnough, new CompareSizesByArea());
        } else if (notBigEnough.size() > 0) {
            return Collections.max(notBigEnough, new CompareSizesByArea());
        } else {
            Log.e(TAG, "Couldn't find any suitable preview size");
            return choices[0];
        }
    }

    public static Camera2BasicFragment newInstance() {
        return new Camera2BasicFragment();
    }
    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container,
                             Bundle savedInstanceState) {
        return inflater.inflate(R.layout.fragment_camera2_basic, container, false);
    }

    @Override
    public void onViewCreated(final View view, Bundle savedInstanceState) {
        view.findViewById(R.id.picture).setOnClickListener(this);
        view.findViewById(R.id.info).setOnClickListener(this);
        mTextureView = (AutoFitTextureView) view.findViewById(R.id.texture);
    }

    @Override
    public void onActivityCreated(Bundle savedInstanceState) {
        super.onActivityCreated(savedInstanceState);
        mFile = new File(getActivity().getExternalFilesDir(null), "pic.jpg");
    }

    @Override
    public void onResume() {
        super.onResume();
        startBackgroundThread();

        // When the screen is turned off and turned back on, the SurfaceTexture is already
        // available, and "onSurfaceTextureAvailable" will not be called. In that case, we can open
        // a camera and start preview from here (otherwise, we wait until the surface is ready in
        // the SurfaceTextureListener).
        if (mTextureView.isAvailable()) {
            openCamera(mTextureView.getWidth(), mTextureView.getHeight());
        } else {
            mTextureView.setSurfaceTextureListener(mSurfaceTextureListener);
        }
    }

    @Override
    public void onPause() {
        closeCamera();
        stopBackgroundThread();
        super.onPause();
    }

    private void requestCameraPermission() {
        if (shouldShowRequestPermissionRationale(Manifest.permission.CAMERA)) {
            new ConfirmationDialog().show(getChildFragmentManager(), FRAGMENT_DIALOG);
        } else {
            requestPermissions(new String[]{Manifest.permission.CAMERA}, REQUEST_CAMERA_PERMISSION);
        }
    }

    @Override
    public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
                                           @NonNull int[] grantResults) {
        if (requestCode == REQUEST_CAMERA_PERMISSION) {
            if (grantResults.length != 1 || grantResults[0] != PackageManager.PERMISSION_GRANTED) {
                ErrorDialog.newInstance(getString(R.string.request_permission))
                        .show(getChildFragmentManager(), FRAGMENT_DIALOG);
            }
        } else {
            super.onRequestPermissionsResult(requestCode, permissions, grantResults);
        }
    }
    /**
     * Sets up member variables related to camera.
     *
     * @param width  The width of available size for camera preview
     * @param height The height of available size for camera preview
     */
    @SuppressWarnings("SuspiciousNameCombination")
    private void setUpCameraOutputs(int width, int height) {
        Activity activity = getActivity();
        CameraManager manager = (CameraManager) activity.getSystemService(Context.CAMERA_SERVICE);
        try {
            for (String cameraId : manager.getCameraIdList()) {
                CameraCharacteristics characteristics
                        = manager.getCameraCharacteristics(cameraId);

                // We don't use a front facing camera in this sample.
                Integer facing = characteristics.get(CameraCharacteristics.LENS_FACING);
                if (facing != null && facing == CameraCharacteristics.LENS_FACING_FRONT) {
                    continue;
                }

                StreamConfigurationMap map = characteristics.get(
                        CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
                if (map == null) {
                    continue;
                }

                // For still image captures, we use the largest available size.
                Size largest = Collections.max(
                        Arrays.asList(map.getOutputSizes(ImageFormat.JPEG)),
                        new CompareSizesByArea());
                mImageReader = ImageReader.newInstance(largest.getWidth(), largest.getHeight(),
                        ImageFormat.JPEG, /*maxImages*/2);
                mImageReader.setOnImageAvailableListener(
                        mOnImageAvailableListener, mBackgroundHandler);

                // Find out if we need to swap dimension to get the preview size relative to sensor
                // coordinate.
                int displayRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
                //noinspection ConstantConditions
                mSensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
                boolean swappedDimensions = false;
                switch (displayRotation) {
                    case Surface.ROTATION_0:
                    case Surface.ROTATION_180:
                        if (mSensorOrientation == 90 || mSensorOrientation == 270) {
                            swappedDimensions = true;
                        }
                        break;
                    case Surface.ROTATION_90:
                    case Surface.ROTATION_270:
                        if (mSensorOrientation == 0 || mSensorOrientation == 180) {
                            swappedDimensions = true;
                        }
                        break;
                    default:
                        Log.e(TAG, "Display rotation is invalid: " + displayRotation);
                }

                Point displaySize = new Point();
                activity.getWindowManager().getDefaultDisplay().getSize(displaySize);
                int rotatedPreviewWidth = width;
                int rotatedPreviewHeight = height;
                int maxPreviewWidth = displaySize.x;
                int maxPreviewHeight = displaySize.y;

                if (swappedDimensions) {
                    rotatedPreviewWidth = height;
                    rotatedPreviewHeight = width;
                    maxPreviewWidth = displaySize.y;
                    maxPreviewHeight = displaySize.x;
                }

                if (maxPreviewWidth > MAX_PREVIEW_WIDTH) {
                    maxPreviewWidth = MAX_PREVIEW_WIDTH;
                }

                if (maxPreviewHeight > MAX_PREVIEW_HEIGHT) {
                    maxPreviewHeight = MAX_PREVIEW_HEIGHT;
                }

                // Danger, W.R.! Attempting to use too large a preview size could exceed the camera
                // bus' bandwidth limitation, resulting in gorgeous previews but the storage of
                // garbage capture data.
                mPreviewSize = chooseOptimalSize(map.getOutputSizes(SurfaceTexture.class),
                        rotatedPreviewWidth, rotatedPreviewHeight, maxPreviewWidth,
                        maxPreviewHeight, largest);

                // We fit the aspect ratio of TextureView to the size of preview we picked.
                int orientation = getResources().getConfiguration().orientation;
                if (orientation == Configuration.ORIENTATION_LANDSCAPE) {
                    mTextureView.setAspectRatio(
                            mPreviewSize.getWidth(), mPreviewSize.getHeight());
                } else {
                    mTextureView.setAspectRatio(
                            mPreviewSize.getHeight(), mPreviewSize.getWidth());
                }

                // Check if the flash is supported.
                Boolean available = characteristics.get(CameraCharacteristics.FLASH_INFO_AVAILABLE);
                mFlashSupported = available == null ? false : available;

                mCameraId = cameraId;
                return;
            }
        } catch (CameraAccessException e) {
            e.printStackTrace();
        } catch (NullPointerException e) {
            // Currently an NPE is thrown when the Camera2API is used but not supported on the
            // device this code runs.
            ErrorDialog.newInstance(getString(R.string.camera_error))
                    .show(getChildFragmentManager(), FRAGMENT_DIALOG);
        }
    }
    /**
     * Opens the camera specified by {@link Camera2BasicFragment#mCameraId}.
     */
    private void openCamera(int width, int height) {
        if (ContextCompat.checkSelfPermission(getActivity(), Manifest.permission.CAMERA)
                != PackageManager.PERMISSION_GRANTED) {
            requestCameraPermission();
            return;
        }
        setUpCameraOutputs(width, height);
        configureTransform(width, height);
        Activity activity = getActivity();
        CameraManager manager = (CameraManager) activity.getSystemService(Context.CAMERA_SERVICE);
        try {
            if (!mCameraOpenCloseLock.tryAcquire(2500, TimeUnit.MILLISECONDS)) {
                throw new RuntimeException("Time out waiting to lock camera opening.");
            }
            manager.openCamera(mCameraId, mStateCallback, mBackgroundHandler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            throw new RuntimeException("Interrupted while trying to lock camera opening.", e);
        }
    }

    /**
     * Closes the current {@link CameraDevice}.
     */
    private void closeCamera() {
        try {
            mCameraOpenCloseLock.acquire();
            if (null != mCaptureSession) {
                mCaptureSession.close();
                mCaptureSession = null;
            }
            if (null != mCameraDevice) {
                mCameraDevice.close();
                mCameraDevice = null;
            }
            if (null != mImageReader) {
                mImageReader.close();
                mImageReader = null;
            }
        } catch (InterruptedException e) {
            throw new RuntimeException("Interrupted while trying to lock camera closing.", e);
        } finally {
            mCameraOpenCloseLock.release();
        }
    }
    /**
     * Starts a background thread and its {@link Handler}.
     */
    private void startBackgroundThread() {
        mBackgroundThread = new HandlerThread("CameraBackground");
        mBackgroundThread.start();
        mBackgroundHandler = new Handler(mBackgroundThread.getLooper());
    }

    /**
     * Stops the background thread and its {@link Handler}.
     */
    private void stopBackgroundThread() {
        mBackgroundThread.quitSafely();
        try {
            mBackgroundThread.join();
            mBackgroundThread = null;
            mBackgroundHandler = null;
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    /**
     * Creates a new {@link CameraCaptureSession} for camera preview.
     */
    private void createCameraPreviewSession() {
        try {
            SurfaceTexture texture = mTextureView.getSurfaceTexture();
            assert texture != null;

            // We configure the size of default buffer to be the size of camera preview we want.
            texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());

            // This is the output Surface we need to start preview.
            Surface surface = new Surface(texture);

            // We set up a CaptureRequest.Builder with the output Surface.
            mPreviewRequestBuilder
                    = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            mPreviewRequestBuilder.addTarget(surface);

            // Here, we create a CameraCaptureSession for camera preview.
            mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
                    new CameraCaptureSession.StateCallback() {

                        @Override
                        public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                            // The camera is already closed
                            if (null == mCameraDevice) {
                                return;
                            }

                            // When the session is ready, we start displaying the preview.
                            mCaptureSession = cameraCaptureSession;
                            try {
                                // Auto focus should be continuous for camera preview.
                                mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,
                                        CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
                                // Flash is automatically enabled when necessary.
                                setAutoFlash(mPreviewRequestBuilder);

                                // Finally, we start displaying the camera preview.
                                mPreviewRequest = mPreviewRequestBuilder.build();
                                mCaptureSession.setRepeatingRequest(mPreviewRequest,
                                        mCaptureCallback, mBackgroundHandler);
                            } catch (CameraAccessException e) {
                                e.printStackTrace();
                            }
                        }

                        @Override
                        public void onConfigureFailed(
                                @NonNull CameraCaptureSession cameraCaptureSession) {
                            showToast("Failed");
                        }
                    }, null
            );
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }
    /**
     * Configures the necessary {@link android.graphics.Matrix} transformation to `mTextureView`.
     * This method should be called after the camera preview size is determined in
     * setUpCameraOutputs and also the size of `mTextureView` is fixed.
     *
     * @param viewWidth  The width of `mTextureView`
     * @param viewHeight The height of `mTextureView`
     */
    private void configureTransform(int viewWidth, int viewHeight) {
        Activity activity = getActivity();
        if (null == mTextureView || null == mPreviewSize || null == activity) {
            return;
        }
        int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
        Matrix matrix = new Matrix();
        RectF viewRect = new RectF(0, 0, viewWidth, viewHeight);
        RectF bufferRect = new RectF(0, 0, mPreviewSize.getHeight(), mPreviewSize.getWidth());
        float centerX = viewRect.centerX();
        float centerY = viewRect.centerY();
        if (Surface.ROTATION_90 == rotation || Surface.ROTATION_270 == rotation) {
            bufferRect.offset(centerX - bufferRect.centerX(), centerY - bufferRect.centerY());
            matrix.setRectToRect(viewRect, bufferRect, Matrix.ScaleToFit.FILL);
            float scale = Math.max(
                    (float) viewHeight / mPreviewSize.getHeight(),
                    (float) viewWidth / mPreviewSize.getWidth());
            matrix.postScale(scale, scale, centerX, centerY);
            matrix.postRotate(90 * (rotation - 2), centerX, centerY);
        } else if (Surface.ROTATION_180 == rotation) {
            matrix.postRotate(180, centerX, centerY);
        }
        mTextureView.setTransform(matrix);
    }
    private void takePicture() {
        lockFocus();
    }

    private void lockFocus() {
        try {
            // This is how to tell the camera to lock focus.
            mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER,
                    CameraMetadata.CONTROL_AF_TRIGGER_START);
            // Tell #mCaptureCallback to wait for the lock.
            mState = STATE_WAITING_LOCK;
            mCaptureSession.capture(mPreviewRequestBuilder.build(), mCaptureCallback,
                    mBackgroundHandler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

    private void runPrecaptureSequence() {
        try {
            // This is how to tell the camera to trigger.
            mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER,
                    CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER_START);
            // Tell #mCaptureCallback to wait for the precapture sequence to be set.
            mState = STATE_WAITING_PRECAPTURE;
            mCaptureSession.capture(mPreviewRequestBuilder.build(), mCaptureCallback,
                    mBackgroundHandler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

    private void captureStillPicture() {
        try {
            final Activity activity = getActivity();
            if (null == activity || null == mCameraDevice) {
                return;
            }
            final CaptureRequest.Builder captureBuilder =
                    mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
            captureBuilder.addTarget(mImageReader.getSurface());

            // Use the same AE and AF modes as the preview.
            captureBuilder.set(CaptureRequest.CONTROL_AF_MODE,
                    CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
            setAutoFlash(captureBuilder);

            int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
            captureBuilder.set(CaptureRequest.JPEG_ORIENTATION, getOrientation(rotation));

            CameraCaptureSession.CaptureCallback CaptureCallback
                    = new CameraCaptureSession.CaptureCallback() {

                @Override
                public void onCaptureCompleted(@NonNull CameraCaptureSession session,
                                               @NonNull CaptureRequest request,
                                               @NonNull TotalCaptureResult result) {
                    showToast("Saved: " + mFile);
                    Log.d(TAG, mFile.toString());
                    unlockFocus();
                }
            };

            mCaptureSession.stopRepeating();
            mCaptureSession.abortCaptures();
            mCaptureSession.capture(captureBuilder.build(), CaptureCallback, null);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }
    @Override
    public void onClick(View view) {
        switch (view.getId()) {
            case R.id.picture: {
                takePicture();
                break;
            }
            case R.id.info: {
                Activity activity = getActivity();
                if (null != activity) {
                    new AlertDialog.Builder(activity)
                            .setMessage(R.string.intro_message)
                            .setPositiveButton(android.R.string.ok, null)
                            .show();
                }
                break;
            }
        }
    }

    private static class ImageSaver implements Runnable {

        private final Image mImage;
        private final File mFile;

        ImageSaver(Image image, File file) {
            mImage = image;
            mFile = file;
        }

        @Override
        public void run() {
            ByteBuffer buffer = mImage.getPlanes()[0].getBuffer();
            byte[] bytes = new byte[buffer.remaining()];
            buffer.get(bytes);
            FileOutputStream output = null;
            try {
                output = new FileOutputStream(mFile);
                output.write(bytes);
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                mImage.close();
                if (null != output) {
                    try {
                        output.close();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            }
        }
    }
I have a ViewFragment in which I want to retrieve the captured image, but how exactly do I implement that? Any suggestions?
【Comments】:
Why not pass the file name to the other fragment as a String extra and then load the file from disk there?
Convert the bitmap to an encoded string, pass that string to your fragment, and decode the string back into a bitmap inside the fragment.
【Answer 1】: You should save it temporarily. If you try to pass the image through a Bundle, you are likely to get a TransactionTooLargeException.
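A minimal sketch of the trade-off described above, assuming the usual android.graphics / android.media imports and the ImageReader already set up in the question's code (the helper name jpegToBitmap is an illustration, not part of the sample):

    // Hypothetical helper: decode the JPEG Image delivered by the ImageReader into a Bitmap
    // entirely in memory, without writing a file.
    private static Bitmap jpegToBitmap(Image image) {
        ByteBuffer buffer = image.getPlanes()[0].getBuffer();
        byte[] bytes = new byte[buffer.remaining()];
        buffer.get(bytes);
        image.close();
        return BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
    }

    // Bitmap implements Parcelable, so in principle it can ride in a Bundle...
    Bundle args = new Bundle();
    args.putParcelable("captured_bitmap", jpegToBitmap(reader.acquireNextImage()));
    // ...but a full-resolution capture usually exceeds the binder transaction limit and
    // crashes with TransactionTooLargeException, which is exactly what this answer warns about.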
【Discussion】:
【参考方案2】:比如可以使用Bitmap
,因为它实现了Parcelable
,所以可以通过bundle传递。
但是!这不是解决方案,因为捆绑最大大小介于两者之间 512 - 1024kb
所以在我看来,最好保存照片,因为它会占用大量内存作为对象,当你在Fragment
中收到链接后,只需删除即可。
【Discussion】:
Good point, thank you. One last question: after I take the picture, where in the code above should I launch the Fragment?
@Critics it looks like right after ImageSaver closes the output stream; at that point your file is ready to be displayed in the next fragment.
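A minimal sketch of that placement, reusing the ViewFragment.newInstance(String) factory sketched above and assuming the host activity has a fragment container with id R.id.container (an assumption, not part of the sample). Because mBackgroundHandler is backed by a single thread, a second Runnable posted right after the ImageSaver only runs once the file has been fully written and closed:

    private final ImageReader.OnImageAvailableListener mOnImageAvailableListener
            = new ImageReader.OnImageAvailableListener() {

        @Override
        public void onImageAvailable(ImageReader reader) {
            mBackgroundHandler.post(new ImageSaver(reader.acquireNextImage(), mFile));
            // Runs after ImageSaver.run() has completed, i.e. the JPEG is on disk.
            mBackgroundHandler.post(new Runnable() {
                @Override
                public void run() {
                    final Activity activity = getActivity();
                    if (activity == null) {
                        return;
                    }
                    activity.runOnUiThread(new Runnable() {
                        @Override
                        public void run() {
                            getFragmentManager().beginTransaction()
                                    .replace(R.id.container,
                                            ViewFragment.newInstance(mFile.getAbsolutePath()))
                                    .addToBackStack(null)
                                    .commit();
                        }
                    });
                }
            });
        }
    };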