How to capture and preview video
Video capture relies on the Camera API. Google introduced camera2 in Android 5.0; to cover all device generations, I will walk through both camera and camera2.
For preview you can use either a SurfaceView or a TextureView.
Basic usage of camera
1. Open the camera
mCamera=Camera.open(mCameraId);
mCameraId is an int: 1 selects the front camera and 0 the back camera (these match the framework constants Camera.CameraInfo.CAMERA_FACING_FRONT and CAMERA_FACING_BACK).
2. Set the preview target
```java
mCamera.setPreviewDisplay(mHolder);
mCamera.setPreviewTexture(mTexture);
```
The former is for SurfaceView preview, the latter for TextureView preview; call whichever matches your view.
3. Set the parameters
```java
Camera.Parameters parameters = mCamera.getParameters();
parameters.setPreviewFormat(ImageFormat.NV21);          // preview data format
parameters.setPictureFormat(ImageFormat.JPEG);          // format of captured pictures
parameters.setPreviewSize(mSize.width, mSize.height);   // preview size
parameters.setPictureSize(nSize.width, nSize.height);   // picture size
```
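The sizes passed above must come from the camera's supported lists or setParameters will fail on some devices. A minimal sketch of how such a size could be chosen: it picks the supported size that matches the target aspect ratio and is closest in area. Sizes are passed as {width, height} pairs so the logic is plain Java; in a real app you would build the list from parameters.getSupportedPreviewSizes() (this helper is an illustration, not part of the original project).

```java
public class SizeChooser {
    // Returns the supported {width, height} pair with the target's aspect
    // ratio and the closest area; falls back to the first entry if no
    // aspect-ratio match exists.
    public static int[] choose(int[][] supported, int targetW, int targetH) {
        int[] best = supported[0];
        long bestDiff = Long.MAX_VALUE;
        for (int[] s : supported) {
            // cross-multiply to compare aspect ratios without floating point
            if (s[0] * targetH != s[1] * targetW) continue;
            long diff = Math.abs((long) s[0] * s[1] - (long) targetW * targetH);
            if (diff < bestDiff) {
                bestDiff = diff;
                best = s;
            }
        }
        return best;
    }
}
```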
4. Set the preview orientation and the orientation of captured pictures
```java
// Set the rotation of captured pictures, otherwise they come out sideways
if (mCameraId == 0) {
    parameters.setRotation(90);  // back camera
} else {
    parameters.setRotation(270); // front camera
}
mCamera.setParameters(parameters);
mCamera.setDisplayOrientation(90); // preview defaults to 0°, measured from the device's left side
```
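The hardcoded 90°/270° above assume a portrait activity. The general rule, taken from the Camera.setDisplayOrientation documentation, can be written as a pure function (deviceRotationDegrees is the current display rotation converted to degrees, sensorOrientation comes from Camera.CameraInfo.orientation):

```java
public class OrientationHelper {
    // Computes the value to pass to setDisplayOrientation(). The front
    // camera needs an extra step to compensate for the mirrored preview.
    public static int getDisplayOrientation(int deviceRotationDegrees,
                                            int sensorOrientation,
                                            boolean frontFacing) {
        if (frontFacing) {
            int result = (sensorOrientation + deviceRotationDegrees) % 360;
            return (360 - result) % 360; // compensate for the mirror
        } else {
            return (sensorOrientation - deviceRotationDegrees + 360) % 360;
        }
    }
}
```

With the usual sensor orientations (90° back, 270° front) and a portrait display (0°), both cameras come out at 90°, which matches the hardcoded value above.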
5. Start the preview
```java
mCamera.setPreviewCallback(this); // receive every preview frame
mCamera.startPreview();
```
Basic usage of camera2
camera2 differs substantially from the old camera API.
Let's start with two diagrams found online.
Explanation of figure 1
Android Device corresponds to our app; Camera Device corresponds to the camera on the phone.
For the app to use the camera, a connection must first be established between the two.
Once the connection is up, the app can send data requests to the camera.
The camera serves each request and returns data to the app.
The app can then process the returned data through various Surfaces.

Explanation of figure 2
Figure 2 shows the key classes used while establishing the connection:

CameraManager
Manages all cameras on the device, e.g. opening one.

CameraDevice.StateCallback
Callbacks for the camera device's state, e.g. to learn whether the camera has opened.

CaptureRequest.Builder
Builder.build() creates a CaptureRequest for preview, still capture, recording, and so on. addTarget(Surface outputTarget) gives each CaptureRequest a place for its data to land.

CameraCaptureSession.StateCallback
mCameraDevice.createCaptureSession creates a session; only once this succeeds are the app and the camera truly connected.

camera2 flow walkthrough
Preview
1. Open the camera through CameraManager
```java
mCameraManager = (CameraManager) mContext.getSystemService(Context.CAMERA_SERVICE);
mCameraManager.openCamera(String.valueOf(mCameraId), new CameraStateCallback(), mCameraHandler);
```
2. In CameraDevice.StateCallback's onOpened callback, grab the camera object and build a preview request
```java
@Override
public void onOpened(@NonNull CameraDevice camera) {
    mCameraDevice = camera;
    startPreview();
}

CaptureRequest.Builder builder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
Surface surface = getSurface();
builder.addTarget(surface);
mCaptureRequest = builder.build();
```
3. Establish a session through the camera object
```java
mCameraDevice.createCaptureSession(Arrays.asList(surface, mPictureReader.getSurface()),
        new CameraSessionCallback(), mCameraHandler);
```
4. In the callback fired when the session is configured, submit the preview request created earlier
```java
public void onConfigured(@NonNull CameraCaptureSession session) {
    try {
        mCameraCaptureSession = session;
        // preview
        mCameraCaptureSession.setRepeatingRequest(mCaptureRequest, null, null);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
```
That completes the preview pipeline.
Taking pictures
Taking a picture is just another CaptureRequest. With the preview flow already in place, we only need to build a new request and submit it through the session created earlier. In camera2, one session can carry many requests, so a single session is enough.

```java
public void takePicture(String path) {
    mPicturePath = path;
    try {
        CaptureRequest.Builder builder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
        builder.addTarget(mPictureReader.getSurface());
        // capture a still picture
        mCameraCaptureSession.capture(builder.build(), null, null);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
```
How mPictureReader is set up
```java
// 2 means the ImageReader can hold at most two frames at once
mPictureReader = ImageReader.newInstance(mCaptureSize.getWidth(), mCaptureSize.getHeight(), ImageFormat.JPEG, 2);
mPictureReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        // turn the frame into a byte array, much like the preview frames
        // delivered by Camera1's PreviewCallback
        savePicture(reader);
    }
}, mCameraHandler);
```
Since we requested the JPEG format, the byte array delivered in onImageAvailable can be written straight to disk as a .jpg file.
```java
private void savePicture(ImageReader reader) {
    Image image = reader.acquireLatestImage();
    ByteBuffer buffer = image.getPlanes()[0].getBuffer();
    byte[] data = new byte[buffer.remaining()];
    buffer.get(data);
    try {
        FileOutputStream fos = new FileOutputStream(mPicturePath);
        fos.write(data);
        fos.close();
        Toast.makeText(mContext, "Picture saved to: " + mPicturePath, Toast.LENGTH_SHORT).show();
    } catch (FileNotFoundException e) {
        e.printStackTrace();
        Log.d(TAG, "Camera2 FileNotFoundException");
    } catch (IOException e) {
        e.printStackTrace();
        Log.d(TAG, "Camera2 IOException");
    } finally {
        image.close();
    }
}
```
Recording video
Recording is also just a request: build a recording request and submit it through the session created earlier. Like preview, recording is driven by calling setRepeatingRequest from the CameraCaptureSession.StateCallback callbacks.

Difference between the two
Preview renders the data onto a SurfaceView or TextureView; recording saves the data into a video file.

Saving the recorded data
Option 1: grab the raw NV21 frames through an ImageReader:

```java
ImageReader.newInstance(width, height, ImageFormat.NV21, 1);
```
Note that camera2 no longer supports NV21: the framework source shows that requesting this format throws an exception.
```java
protected ImageReader(int width, int height, int format, int maxImages, long usage) {
    mWidth = width;
    mHeight = height;
    mFormat = format;
    mMaxImages = maxImages;

    if (width < 1 || height < 1) {
        throw new IllegalArgumentException("The image dimensions must be positive");
    }
    if (mMaxImages < 1) {
        throw new IllegalArgumentException("Maximum outstanding image count must be at least 1");
    }
    if (format == ImageFormat.NV21) {
        throw new IllegalArgumentException("NV21 format is not supported");
    }

    mNumPlanes = ImageUtils.getNumPlanesForFormat(mFormat);
    nativeInit(new WeakReference<>(this), width, height, format, maxImages, usage);
    mSurface = nativeGetSurface();
    mIsReaderValid = true;

    // Estimate the native buffer allocation size and register it so it gets accounted for
    // during GC. Note that this doesn't include the buffers required by the buffer queue
    // itself and the buffers requested by the producer.
    // Only include memory for 1 buffer, since actually accounting for the memory used is
    // complex, and 1 buffer is enough for the VM to treat the ImageReader as being of some
    // size.
    mEstimatedNativeAllocBytes = ImageUtils.getEstimatedNativeAllocBytes(
            width, height, format, /*buffer count*/ 1);
    VMRuntime.getRuntime().registerNativeAllocation(mEstimatedNativeAllocBytes);
}
```
The fix is to switch to the YUV_420_888 format:
mPreviewReader = ImageReader.newInstance(1280, 720, ImageFormat.YUV_420_888, 1);
Then convert the YUV_420_888 frames to NV21; conversion code is easy to find online.
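For reference, here is a minimal conversion sketch. It takes the three planes as plain byte arrays plus the chroma pixel stride, so it is independent of the android.media.Image class (in real code you would read these from image.getPlanes()[i].getBuffer() and getPixelStride()), and it ignores row padding for brevity; production code must also honor rowStride.

```java
public class YuvConverter {
    // NV21 layout: full-resolution Y plane followed by interleaved V/U
    // samples at quarter resolution. chromaPixelStride is 1 for planar
    // chroma data and 2 when U/V are already semi-planar.
    public static byte[] toNv21(byte[] y, byte[] u, byte[] v,
                                int width, int height, int chromaPixelStride) {
        byte[] nv21 = new byte[width * height * 3 / 2];
        System.arraycopy(y, 0, nv21, 0, width * height);
        int chromaPixels = width * height / 4;
        for (int i = 0; i < chromaPixels; i++) {
            nv21[width * height + 2 * i]     = v[i * chromaPixelStride]; // V first
            nv21[width * height + 2 * i + 1] = u[i * chromaPixelStride]; // then U
        }
        return nv21;
    }
}
```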
Option 2: use MediaRecorder to write an encoded, compressed file directly:
mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
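The single line above only picks the container format. A fuller configuration sketch follows: a camera2-oriented setup where MediaRecorder's input Surface would be added as a target of the session's repeating request. The output path, video size, and encoder choices are assumptions for illustration, not values from the original project, and the call order (sources, then output format, then encoders and output file, then prepare) matters.

```java
MediaRecorder recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE); // camera2 feeds frames via a Surface
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
recorder.setVideoSize(1280, 720);   // assumed size
recorder.setOutputFile(outputPath); // outputPath: hypothetical destination
recorder.prepare();
// add recorder.getSurface() as a target of the capture request,
// start the repeating request, then:
recorder.start();
```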
Difference between the two
The former is more flexible: once you have the raw data you can process it however you like. The latter is simpler: no need to encode and compress the data yourself.

Wrapping the two APIs
We have now covered both camera and camera2; after practicing with the code, you should have a grasp of the basics of both.
The question is: which of the two should we use?
Because camera2 arrived in Android 5.0, we still have to fall back to camera on older phones, yet the two APIs look nothing alike. So I wrapped them in a SofarCamera class that exposes one unified interface for both: a single parameter decides whether the old camera or the 5.0 camera2 is used.

While wrapping them I also separated the data path from the UI. You can design your own UI and simply hand over a SurfaceView's holder or a TextureView's texture:
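That single parameter can be derived from the SDK level at runtime. A small sketch (this helper is an illustration, not part of SofarCamera; in an app you would pass android.os.Build.VERSION.SDK_INT):

```java
public class ApiChooser {
    public static final int CAMERA1 = 1; // old API
    public static final int CAMERA2 = 2; // new API

    // camera2 was added in API level 21 (Android 5.0);
    // fall back to the old API below that.
    public static int chooseApi(int sdkInt) {
        return sdkInt >= 21 ? CAMERA2 : CAMERA1;
    }
}
```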
```java
protected SurfaceHolder mHolder;   // SurfaceView preview
protected SurfaceTexture mTexture; // TextureView preview
```
The unified entry point: SofarCamera
```java
public class SofarCamera {

    public static final int CAMERA_FRONT = 1; // front camera
    public static final int CAMERA_BACK = 0;  // back camera
    public static final int CAMERA1 = 1;      // old API
    public static final int CAMERA2 = 2;      // new API

    private Context context;
    private int cameraId;
    private int cameraApi;
    private SurfaceHolder holder;
    private SurfaceTexture texture;
    private BaseCamera baseCamera;

    private SofarCamera(Builder builder) {
        this.context = builder.context;
        this.cameraId = builder.cameraId;
        this.cameraApi = builder.cameraApi;
        this.holder = builder.holder;
        this.texture = builder.texture;
        initCamera();
    }

    private void initCamera() {
        if (cameraApi == CAMERA1) {
            baseCamera = new Camera1();
        } else if (cameraApi == CAMERA2) {
            baseCamera = new Camera2();
        }
        baseCamera.setContext(context);
        baseCamera.setCameraId(cameraId);
        baseCamera.setDisplay(holder);
        baseCamera.setDisplay(texture);
    }

    public void openCamera() {
        baseCamera.openCamera();
    }

    public void destroyCamera() {
        baseCamera.destroyCamera();
    }

    public void switchCamera() {
        if (cameraId == CAMERA_FRONT) {
            cameraId = CAMERA_BACK;
        } else {
            cameraId = CAMERA_FRONT;
        }
        destroyCamera();
        initCamera();
        openCamera();
    }

    public void takePicture(String path) {
        baseCamera.takePicture(path);
    }

    public Builder newBuilder() {
        return new Builder(this);
    }

    public static final class Builder {
        private Context context;
        private int cameraId;
        private int cameraApi;
        private SurfaceHolder holder;
        private SurfaceTexture texture;

        public Builder() {
        }

        public Builder(SofarCamera camera) {
            this.cameraId = camera.cameraId;
            this.cameraApi = camera.cameraApi;
            this.holder = camera.holder;
            this.texture = camera.texture;
        }

        public Builder context(Context context) {
            this.context = context;
            return this;
        }

        public Builder cameraId(int cameraId) {
            this.cameraId = cameraId;
            return this;
        }

        public Builder cameraApi(@CameraApi int cameraApi) {
            this.cameraApi = cameraApi;
            return this;
        }

        public Builder holder(SurfaceHolder holder) {
            this.holder = holder;
            return this;
        }

        public Builder texture(SurfaceTexture texture) {
            this.texture = texture;
            return this;
        }

        public SofarCamera build() {
            return new SofarCamera(this);
        }
    }

    @IntDef({CAMERA1, CAMERA2})
    @Retention(RetentionPolicy.SOURCE)
    public @interface CameraApi {
    }
}
```
The Camera abstraction
Camera1 and Camera2 simply extend this class and implement the required methods.

```java
public abstract class BaseCamera {

    private static final String TAG = "BaseCamera";

    protected Context mContext;
    protected int mCameraId = 1;       // 1 front, 0 back
    protected SurfaceHolder mHolder;   // SurfaceView preview
    protected SurfaceTexture mTexture; // TextureView preview
    protected String mPicturePath;     // where captured pictures are stored

    // SurfaceView preview
    public void setDisplay(SurfaceHolder holder) {
        mHolder = holder;
    }

    // TextureView preview
    public void setDisplay(SurfaceTexture texture) {
        mTexture = texture;
    }

    // select the front or back camera
    public void setCameraId(int cameraId) {
        if (cameraId != 1 && cameraId != 0) {
            Log.d(TAG, "error cameraId:" + cameraId + " BaseCamera cameraId only support 0 or 1 ");
            return;
        }
        mCameraId = cameraId;
    }

    public void setContext(Context context) {
        mContext = context;
    }

    public abstract void openCamera();

    public abstract void destroyCamera();

    public abstract void takePicture(String path);
}
```
How to call it
Using camera
```java
mSofarCamera = new SofarCamera.Builder()
        .context(this)
        .cameraApi(SofarCamera.CAMERA1)
        .cameraId(SofarCamera.CAMERA_BACK)
        .holder(holder)
        .build();
```
Using camera2
```java
mSofarCamera = new SofarCamera.Builder()
        .context(this)
        .cameraApi(SofarCamera.CAMERA2)
        .cameraId(SofarCamera.CAMERA_BACK)
        .holder(holder)
        .build();
```
Closing remarks
For reasons of space, I won't paste the Camera1 and Camera2 implementations here.
You can implement them yourself from the walkthrough above or from other resources; my implementation is on GitHub.
The camera wrapper lives in the video package under libplayer.
The calling code lives under app/demo/media/video.