1 Introduction

The Android source code analyzed in this article comes from Android-x86, version 5.1, so it may differ slightly from the Android system on an actual phone.

2 V4L2

V4L2 is the Linux kernel's driver API for video devices such as cameras: an application opens the camera device with open to get a file descriptor, and can then operate on the camera with read, write, ioctl, and so on. The Android HAL does exactly the same; the code that talks to the device directly lives in /hardware/libcamera/V4L2Camera.cpp. My project uses a virtual camera device, v4l2loopback, but some of the ioctls Android relies on do not behave as expected on that device and need modification, so this section gives an introduction to V4L2, based mainly on the official V4L2 documentation.

V4L2 is a large, comprehensive device API, but a camera needs only a small part of it. The following subsections describe, one by one, the ioctls used by the Android HAL.

2.1 ioctls

VIDIOC_QUERYCAP

Queries the device's capabilities and type; virtually every V4L2 application issues this ioctl right after open to determine what kind of device it is dealing with. The caller passes a v4l2_capability structure to receive the result. For a camera device, the V4L2_CAP_VIDEO_CAPTURE and V4L2_CAP_STREAMING bits of v4l2_capability->capabilities must both be set.

V4L2_CAP_VIDEO_CAPTURE means the device supports video capture, the basic function of a camera.

V4L2_CAP_STREAMING means the device supports streaming I/O, a memory-mapped way of transferring data directly between the kernel and the application.
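
As a minimal sketch of that check (the capability bit values are copied from the linux/videodev2.h UAPI header; the helper name is my own):

```cpp
#include <cstdint>

// Capability bits as defined in linux/videodev2.h (values copied from the
// UAPI header; verify them against your kernel headers).
constexpr uint32_t CAP_VIDEO_CAPTURE = 0x00000001; // V4L2_CAP_VIDEO_CAPTURE
constexpr uint32_t CAP_STREAMING     = 0x04000000; // V4L2_CAP_STREAMING

// A camera usable by the HAL must support both capture and streaming I/O.
bool is_usable_camera(uint32_t capabilities) {
    return (capabilities & CAP_VIDEO_CAPTURE) != 0 &&
           (capabilities & CAP_STREAMING)     != 0;
}
```

V4L2Camera::Open performs exactly this pair of checks on the capabilities returned by VIDIOC_QUERYCAP.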

VIDIOC_ENUM_FMT

Queries the image formats the camera supports. The caller passes a v4l2_fmtdesc structure as the output parameter. For a device that supports multiple formats, set v4l2_fmtdesc->index and call the ioctl repeatedly until it returns EINVAL. On success, v4l2_fmtdesc->pixelformat holds one format the device supports; pixelformat can be:

  • V4L2_PIX_FMT_MJPEG
  • V4L2_PIX_FMT_JPEG
  • V4L2_PIX_FMT_YUYV
  • V4L2_PIX_FMT_YVYU, etc.
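
These pixelformat values are FourCC codes: four ASCII characters packed little-endian into a 32-bit integer. A small sketch of packing and unpacking (the helper names are mine, but the packing matches the kernel's v4l2_fourcc macro):

```cpp
#include <cstdint>
#include <string>

// Pack four characters into a little-endian 32-bit FourCC, as the
// v4l2_fourcc macro does: V4L2_PIX_FMT_YUYV == fourcc('Y','U','Y','V').
constexpr uint32_t fourcc(char a, char b, char c, char d) {
    return uint32_t(a) | uint32_t(b) << 8 | uint32_t(c) << 16 | uint32_t(d) << 24;
}

// Unpack a pixelformat back into a readable string, the way the HAL's
// EnumFrameFormats debug log prints it byte by byte.
std::string fourcc_to_string(uint32_t fmt) {
    std::string s(4, ' ');
    for (int i = 0; i < 4; ++i) s[i] = char((fmt >> (8 * i)) & 0xFF);
    return s;
}
```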

VIDIOC_ENUM_FRAMESIZES

Once the image format is known, the next step is to query the resolutions the device supports in that format, which is what this ioctl does. The caller passes a v4l2_frmsizeenum structure with v4l2_frmsizeenum->pixel_format set to the format being queried and v4l2_frmsizeenum->index set to 0.

After a successful call, v4l2_frmsizeenum->type can be one of three values:

  1. V4L2_FRMSIZE_TYPE_DISCRETE: increment v4l2_frmsizeenum->index and repeat the call until it returns EINVAL to enumerate every resolution supported in this format; each call returns one supported size in v4l2_frmsizeenum->discrete.width and .height.
  2. V4L2_FRMSIZE_TYPE_STEPWISE: only v4l2_frmsizeenum->stepwise is valid, and the ioctl must not be called again with other index values.
  3. V4L2_FRMSIZE_TYPE_CONTINUOUS: a special case of STEPWISE; again only stepwise is valid, and both stepwise.step_width and stepwise.step_height are 1.

The first case is easy to understand: a list of the supported resolutions. STEPWISE means the device supports every size from stepwise.min_width x min_height up to max_width x max_height, in increments of step_width and step_height; CONTINUOUS is the special case where the step is 1, i.e. any size within the range works.
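
To make the STEPWISE case concrete, here is a hypothetical helper (names and the lack of a size cap are my own choices) that expands one min/max/step triple into the discrete sizes it denotes, with width and height varying independently:

```cpp
#include <vector>
#include <utility>

// For V4L2_FRMSIZE_TYPE_STEPWISE the driver reports a single min/max/step
// triple instead of a list: every width in [min_w, max_w] at multiples of
// step_w is supported, and likewise for height. CONTINUOUS is the special
// case step_w == step_h == 1. This sketch expands such a range into
// discrete (width, height) candidates.
std::vector<std::pair<int,int>> expand_stepwise(int min_w, int min_h,
                                                int max_w, int max_h,
                                                int step_w, int step_h) {
    std::vector<std::pair<int,int>> sizes;
    for (int w = min_w; w <= max_w; w += step_w)
        for (int h = min_h; h <= max_h; h += step_h)
            sizes.push_back({w, h});
    return sizes;
}
```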

VIDIOC_ENUM_FRAMEINTERVALS

Once the format and resolution are known, you can further query the frame intervals (and hence the fps) the camera supports at that combination. The caller passes a v4l2_frmivalenum structure with index = 0 and pixel_format, width, and height filled in.

After the call, check v4l2_frmivalenum.type; as before there are DISCRETE, STEPWISE, and CONTINUOUS cases.
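
Note that the ioctl reports frame intervals (seconds per frame) as a numerator/denominator fraction, so fps is the reciprocal. A tiny illustrative conversion (the HAL's EnumFrameIntervals, shown later, simply uses discrete.denominator as the fps, which is correct whenever the numerator is 1):

```cpp
// A frame interval of numerator/denominator seconds corresponds to
// denominator/numerator frames per second; e.g. 1/30 s -> 30 fps.
int interval_to_fps(unsigned numerator, unsigned denominator) {
    if (numerator == 0) return 0;   // guard against bogus driver output
    return static_cast<int>(denominator / numerator);
}
```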

VIDIOC_TRY_FMT/VIDIOC_S_FMT/VIDIOC_G_FMT

These three ioctls set and get the image format. The difference between TRY_FMT and S_FMT is that the former does not change the driver's state.

The usual pattern for setting a format is to fetch the current format with G_FMT first, modify the fields of interest, and then apply it with S_FMT (or probe it with TRY_FMT).

VIDIOC_S_PARM/VIDIOC_G_PARM

Set and get streaming I/O parameters. The caller passes a v4l2_streamparm structure.

VIDIOC_S_JPEGCOMP/VIDIOC_G_JPEGCOMP

Set and get JPEG-related parameters.

VIDIOC_REQBUFS

To exchange image data between a user program and the kernel driver, memory must be allocated. It can be allocated in the kernel and mapped into user space with mmap, or allocated in user space with the driver operating in user-pointer I/O mode. In either case this ioctl performs the initialization. The caller passes a v4l2_requestbuffers structure with type, memory, and count filled in. The ioctl may be called repeatedly to renegotiate the parameters; setting count to 0 frees all the buffers.

VIDIOC_QUERYBUF

After VIDIOC_REQBUFS, this ioctl can be used at any time to query a buffer's current state. The caller passes a v4l2_buffer structure with type and index set; valid index values are [0, count-1], where count is the value returned by VIDIOC_REQBUFS.

VIDIOC_QBUF/VIDIOC_DQBUF

Memory allocated via VIDIOC_REQBUFS cannot be used by the V4L2 driver directly: VIDIOC_QBUF enqueues one frame's buffer onto the driver's incoming queue, and VIDIOC_DQBUF dequeues one frame's buffer. For a CAPTURE device such as a camera, what is enqueued is an empty buffer; once the camera has filled it with captured data, a buffer containing a valid image frame can be dequeued. If the camera has not finished filling a buffer yet, VIDIOC_DQBUF blocks, unless the device was opened with the O_NONBLOCK flag.

VIDIOC_STREAMON/VIDIOC_STREAMOFF

STREAMON starts capture: only after this ioctl does the camera begin capturing images and filling buffers. Conversely, STREAMOFF stops capture, and any frames in the driver that have not yet been dequeued with DQBUF are lost.

3 Pixel formats

Since the lower layers deal with raw images, quite a few pixel formats show up in the camera code, so here is a quick overview of the formats a camera is likely to use.

3.1 RGB

Straightforward: a pixel consists of the three RGB components, each taking 8 bits, so one pixel needs 3 bytes. Some RGB formats use fewer bits for certain channels; in RGB844, for example, green and blue get 4 bits each, so a pixel fits in 2 bytes. RGBA adds an alpha channel, bringing a pixel to 32 bits.
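
For instance, packing a pixel into the 2-byte 8-4-4 layout described above just drops the low bits of green and blue (a sketch with a layout of my choosing; real 16-bit V4L2 RGB formats such as RGB565 allocate the bits differently):

```cpp
#include <cstdint>

// Pack an 8-bit-per-channel RGB triple into a 16-bit value laid out as
// RRRRRRRR GGGG BBBB, discarding the low 4 bits of green and blue.
// The layout is illustrative, not a specific V4L2 format.
uint16_t pack_rgb844(uint8_t r, uint8_t g, uint8_t b) {
    return uint16_t(r) << 8 | uint16_t(g >> 4) << 4 | uint16_t(b >> 4);
}
```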

3.2 YUV

YUV also has three channels: Y is luminance, a weighted sum of the R, G, and B components, while U and V are the differences of the blue and red components, respectively, from the luminance. Y normally takes 8 bits, while U and V can be subsampled, which gives rise to the YUV444, YUV420, YUV411 family of encodings. The three digits in the name describe the sampling ratio of the three channels: YUV444 means 1:1:1, so with 8 bits per channel a pixel takes 24 bits. YUV420 does not mean V is dropped entirely; rather, rows alternate between 4:1:0 and 4:0:1 sampling.

Android's camera preview defaults to YUV420sp. YUV420 comes in two flavors, YUV420p and YUV420sp, which differ in how the U and V data are laid out:

Android_Hardware_Camera_20160605_142139.png

Figure 1: image from http://blog.csdn.net/jefry_xdz/article/details/7931018
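
The difference can be made concrete with buffer arithmetic: both layouts take w*h*3/2 bytes, and only the chroma addressing differs (a sketch, with I420 standing in for YUV420p and NV21 for YUV420sp; the helper names are mine):

```cpp
#include <cstddef>

// Both YUV420p (planar, e.g. I420) and YUV420sp (semi-planar, e.g. NV21)
// store a full-resolution Y plane followed by quarter-resolution chroma,
// so the total buffer size is identical; only the chroma layout differs.
size_t yuv420_buffer_size(size_t w, size_t h) {
    return w * h * 3 / 2;   // Y: w*h, U: w*h/4, V: w*h/4
}

// Planar (I420): after the Y plane comes a separate half-size U plane.
size_t i420_u_offset(size_t w, size_t h, size_t x, size_t y) {
    return w * h + (y / 2) * (w / 2) + (x / 2);
}

// Semi-planar (NV21): after the Y plane comes one interleaved VU plane,
// 2 bytes per chroma pair (V at +0, U at +1).
size_t nv21_vu_offset(size_t w, size_t h, size_t x, size_t y) {
    return w * h + (y / 2) * w + (x / 2) * 2;
}
```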

4 Android Camera

4.1 Hardware

In Android-x86 5.1, the relationships between the HAL-layer camera classes are roughly as shown below:

android_camera_uml.png

SurfaceSize wraps the width/height of a Surface. SurfaceDesc wraps a Surface's width/height plus its fps.

The V4L2Camera class is a wrapper around the V4L2 device driver and controls the V4L2 device directly through ioctls.

CameraParameters encapsulates the camera's parameters; its flatten and unflatten methods amount to serialization and deserialization of a CameraParameters object.
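
flatten's output is a single string of "key1=value1;key2=value2" pairs. A minimal re-implementation of the round trip, for illustration only (this is not the framework's code, just the same wire format):

```cpp
#include <map>
#include <string>
#include <sstream>

// Serialize a key/value map to "key1=value1;key2=value2;..." as
// CameraParameters::flatten does.
std::string flatten(const std::map<std::string, std::string>& params) {
    std::string out;
    for (const auto& kv : params) {
        if (!out.empty()) out += ';';
        out += kv.first + "=" + kv.second;
    }
    return out;
}

// Parse the flattened string back into a map, as unflatten does.
std::map<std::string, std::string> unflatten(const std::string& flat) {
    std::map<std::string, std::string> params;
    std::istringstream ss(flat);
    std::string item;
    while (std::getline(ss, item, ';')) {
        auto eq = item.find('=');
        if (eq != std::string::npos)
            params[item.substr(0, eq)] = item.substr(eq + 1);
    }
    return params;
}
```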

camera_device is effectively an abstract-class-like struct that defines the set of interfaces a camera must implement. CameraHardware inherits from camera_device and represents one camera: it implements the operations Android expects of a camera, such as startPreview. Underneath, it drives the camera device through a V4L2Camera object; every CameraHardware holds one V4L2Camera, plus a CameraParameters object storing the camera's parameters.

CameraFactory is the camera manager class, with a single instance in the whole Android system; it creates the CameraHardware instances by reading a configuration file.

CameraFactory

The CameraFactory class plays the role of administrator of the camera devices: it determines how many cameras the machine has, what their device paths are, their rotation angles, and their facing (front or back). Android-x86 obtains the list of cameras by reading a configuration file:

hardware/libcamera/CameraFactory.cpp

void CameraFactory::parseConfig(const char* configFile)
{
    ALOGD("CameraFactory::parseConfig: configFile = %s", configFile);

    FILE* config = fopen(configFile, "r");
    if (config != NULL) {
        char line[128];
        char arg1[128];
        char arg2[128];
        int  arg3;

        while (fgets(line, sizeof line, config) != NULL) {
            int lineStart = strspn(line, " \t\n\v" );

            if (line[lineStart] == '#')
                continue;

            sscanf(line, "%s %s %d", arg1, arg2, &arg3);
            if (arg3 != 0 && arg3 != 90 && arg3 != 180 && arg3 != 270)
                arg3 = 0;

            if (strcmp(arg1, "front") == 0) {
                newCameraConfig(CAMERA_FACING_FRONT, arg2, arg3);
            } else if (strcmp(arg1, "back") == 0) {
                newCameraConfig(CAMERA_FACING_BACK, arg2, arg3);
            } else {
                ALOGD("CameraFactory::parseConfig: Unrecognized config line '%s'", line);
            }
        }
    } else {
        ALOGD("%s not found, using camera configuration defaults", CONFIG_FILE);
        if (access(DEFAULT_DEVICE_BACK, F_OK) != -1){
            ALOGD("Found device %s", DEFAULT_DEVICE_BACK);
            newCameraConfig(CAMERA_FACING_BACK, DEFAULT_DEVICE_BACK, 0);
        }
        if (access(DEFAULT_DEVICE_FRONT, F_OK) != -1){
            ALOGD("Found device %s", DEFAULT_DEVICE_FRONT);
            newCameraConfig(CAMERA_FACING_FRONT, DEFAULT_DEVICE_FRONT, 0);
        }
    }
}

The configuration file lives at /etc/camera.cfg; each line has the form "front/back path_to_device orientation", e.g. "front /dev/video0 0".
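
The per-line parsing can be condensed into a standalone helper for experimentation (a sketch mirroring parseConfig's sscanf and rotation check; the struct and function names are hypothetical, not from the HAL):

```cpp
#include <cstdio>
#include <cstring>
#include <string>

struct CameraConfig {
    bool        front;       // true = front-facing, false = back-facing
    std::string device;      // e.g. "/dev/video0"
    int         orientation; // 0 / 90 / 180 / 270
};

// Parse one line of /etc/camera.cfg: "front|back <device path> <rotation>".
// As in CameraFactory::parseConfig, an illegal rotation falls back to 0.
// Returns false for comments and unrecognized lines.
bool parse_config_line(const char* line, CameraConfig* out) {
    char facing[128] = {0}, path[128] = {0};
    int rot = 0;
    if (sscanf(line, "%127s %127s %d", facing, path, &rot) < 2)
        return false;
    if (rot != 0 && rot != 90 && rot != 180 && rot != 270)
        rot = 0;
    if (strcmp(facing, "front") == 0) out->front = true;
    else if (strcmp(facing, "back") == 0) out->front = false;
    else return false;
    out->device = path;
    out->orientation = rot;
    return true;
}
```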

Another function worth mentioning is cameraDeviceOpen, which an app goes through when it opens a camera:

 1: int CameraFactory::cameraDeviceOpen(const hw_module_t* module,int camera_id, hw_device_t** device)
 2: {
 3:     ALOGD("CameraFactory::cameraDeviceOpen: id = %d", camera_id);
 4: 
 5:     *device = NULL;
 6: 
 7:     if (!mCamera || camera_id < 0 || camera_id >= getCameraNum()) {
 8:         ALOGE("%s: Camera id %d is out of bounds (%d)",
 9:              __FUNCTION__, camera_id, getCameraNum());
10:         return -EINVAL;
11:     }
12: 
13:     if (!mCamera[camera_id]) {
14:         mCamera[camera_id] = new CameraHardware(module, mCameraDevices[camera_id]);
15:     }
16:     return mCamera[camera_id]->connectCamera(device);
17: }

As line 13 shows, by the time the Android system has booted, the CameraFactory object has already been constructed and has learned from the configuration file how many cameras the machine has and what their device paths are. No CameraHardware object exists at that point, however; each one is created only when the corresponding camera is opened for the first time.

There is exactly one CameraFactory instance in the whole Android system: gCameraFactory, defined in CameraFactory.cpp. camera_module_t contains several function pointers that point at static functions of CameraFactory; calling through these pointers effectively invokes the corresponding methods on the gCameraFactory object:

hardware/libcamera/CameraHal.cpp

camera_module_t HAL_MODULE_INFO_SYM = {
    common: {
         tag:           HARDWARE_MODULE_TAG,
         version_major: 1,
         version_minor: 0,
         id:            CAMERA_HARDWARE_MODULE_ID,
         name:          "Camera Module",
         author:        "The Android Open Source Project",
         methods:       &android::CameraFactory::mCameraModuleMethods,
         dso:           NULL,
         reserved:      {0},
    },
    get_number_of_cameras:  android::CameraFactory::get_number_of_cameras,
    get_camera_info:        android::CameraFactory::get_camera_info,
};

The code above defines a camera_module_t; the corresponding functions are defined as follows:

hardware/libcamera/CameraFactory.cpp

int CameraFactory::device_open(const hw_module_t* module,
                                       const char* name,
                                       hw_device_t** device)
{
    ALOGD("CameraFactory::device_open: name = %s", name);

    /*
     * Simply verify the parameters, and dispatch the call inside the
     * CameraFactory instance.
     */

    if (module != &HAL_MODULE_INFO_SYM.common) {
        ALOGE("%s: Invalid module %p expected %p",
                __FUNCTION__, module, &HAL_MODULE_INFO_SYM.common);
        return -EINVAL;
    }
    if (name == NULL) {
        ALOGE("%s: NULL name is not expected here", __FUNCTION__);
        return -EINVAL;
    }

    int camera_id = atoi(name);
    return gCameraFactory.cameraDeviceOpen(module, camera_id, device);
}

int CameraFactory::get_number_of_cameras(void)
{
    ALOGD("CameraFactory::get_number_of_cameras");
    return gCameraFactory.getCameraNum();
}

int CameraFactory::get_camera_info(int camera_id,
                                           struct camera_info* info)
{
    ALOGD("CameraFactory::get_camera_info");
    return gCameraFactory.getCameraInfo(camera_id, info);
}

camera_device

camera_device is also typedef'd as camera_device_t, which touches on the HAL extension convention. The Android HAL defines three data types, struct hw_module_t, struct hw_module_methods_t, and struct hw_device_t, representing the module, the module's methods, and the device. To extend the HAL with a new kind of device, you implement these three structures; for a camera that means defining camera_module_t and camera_device_t and filling in the function pointers of hw_module_methods_t, which has only one member, open, serving as module initialization. The HAL further requires that the first member of camera_module_t be a hw_module_t and the first member of camera_device_t be a hw_device_t; the remaining members are up to the module.
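
The first-member requirement is what makes the HAL's casts work in practice: a pointer to the embedded hw_device_t is also a pointer to the start of the enclosing struct. A toy mock of the pattern (the types here are simplified stand-ins of my own, not the real headers):

```cpp
// Minimal mock of the HAL convention: the device-specific struct embeds
// its base struct as the FIRST member, so a base pointer handed to the
// framework can be cast back to the full struct.
struct hw_device_t  { int version; };
struct camera_ops   { int (*get_id)(); };
struct camera_device {
    hw_device_t common;   // must be first: enables the cast below
    camera_ops* ops;
    void*       priv;
};

// The framework only sees hw_device_t*; the camera HAL recovers its type.
camera_device* to_camera_device(hw_device_t* dev) {
    return reinterpret_cast<camera_device*>(dev);
}
```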

A more detailed explanation of how this mechanism works can be found here.

In Android-x86, camera_device_t is defined in hardware/libhardware/include/hardware/camera.h

hardware/libhardware/include/hardware/camera.h

typedef struct camera_device {
    hw_device_t common;
    camera_device_ops_t *ops;
    void *priv;
} camera_device_t;

camera_device_ops_t is the set of function interfaces defined by the camera module itself, in the same file; it is too long to quote here.

camera_module_t lives in camera_common.h in the same directory:

hardware/libhardware/include/hardware/camera_common.h

typedef struct camera_module {
    hw_module_t common;
    int (*get_number_of_cameras)(void);
    int (*get_camera_info)(int camera_id, struct camera_info *info);
    int (*set_callbacks)(const camera_module_callbacks_t *callbacks);
    void (*get_vendor_tag_ops)(vendor_tag_ops_t* ops);
    int (*open_legacy)(const struct hw_module_t* module, const char* id,
            uint32_t halVersion, struct hw_device_t** device);

    /* reserved for future use */
    void* reserved[7];
} camera_module_t;

When the framework calls into the HAL, it obtains a hw_device_t via hw_module_t->methods->open, casts it to camera_device_t, and can then call the camera functions in camera_device_t->ops. For the Android-x86 camera, those ops function pointers are assigned in the CameraHardware class, which inherits from camera_device.

CameraHardware

The many interfaces of CameraHardware mainly serve three operations: preview, recording, and taking pictures; most of the remaining functions are preparation such as parameter setup. This section walks through the code flow using preview as the example.

The first step is initializing the CameraHardware object's parameters, done in initDefaultParameters. It calls V4L2Camera's getBestPreviewFmt, getBestPictureFmt, getAvailableSizes, and getAvailableFps to obtain, respectively, the default preview format, the default picture format, and the resolutions and frame rates the camera supports:

hardware/libcamera/CameraHardware.cpp

int pw = MIN_WIDTH;
int ph = MIN_HEIGHT;
int pfps = 30;
int fw = MIN_WIDTH;
int fh = MIN_HEIGHT;
SortedVector<SurfaceSize> avSizes;
SortedVector<int> avFps;

if (camera.Open(mVideoDevice) != NO_ERROR) {
    ALOGE("cannot open device.");
} else {

    // Get the default preview format
    pw = camera.getBestPreviewFmt().getWidth();
    ph = camera.getBestPreviewFmt().getHeight();
    pfps = camera.getBestPreviewFmt().getFps();

    // Get the default picture format
    fw = camera.getBestPictureFmt().getWidth();
    fh = camera.getBestPictureFmt().getHeight();

    // Get all the available sizes
    avSizes = camera.getAvailableSizes();

    // Add some sizes that some specific apps expect to find:
    //  GTalk expects 320x200
    //  Fring expects 240x160
    // And also add standard resolutions found in low end cameras, as
    //  android apps could be expecting to find them
    // The V4LCamera handles those resolutions by choosing the next
    //  larger one and cropping the captured frames to the requested size

    avSizes.add(SurfaceSize(480,320)); // HVGA
    avSizes.add(SurfaceSize(432,320)); // 1.35-to-1, for photos. (Rounded up from 1.3333 to 1)
    avSizes.add(SurfaceSize(352,288)); // CIF
    avSizes.add(SurfaceSize(320,240)); // QVGA
    avSizes.add(SurfaceSize(320,200));
    avSizes.add(SurfaceSize(240,160)); // SQVGA
    avSizes.add(SurfaceSize(176,144)); // QCIF

    // Get all the available Fps
    avFps = camera.getAvailableFps();
}

These parameters are then converted to text form and set on the CameraParameters object:

hardware/libcamera/CameraHardware.cpp

    // Antibanding
    p.set(CameraParameters::KEY_SUPPORTED_ANTIBANDING,"auto");
    p.set(CameraParameters::KEY_ANTIBANDING,"auto");

    // Effects
    p.set(CameraParameters::KEY_SUPPORTED_EFFECTS,"none"); // "none,mono,sepia,negative,solarize"
    p.set(CameraParameters::KEY_EFFECT,"none");

    // Flash modes
    p.set(CameraParameters::KEY_SUPPORTED_FLASH_MODES,"off");
    p.set(CameraParameters::KEY_FLASH_MODE,"off");

    // Focus modes
    p.set(CameraParameters::KEY_SUPPORTED_FOCUS_MODES,"fixed");
    p.set(CameraParameters::KEY_FOCUS_MODE,"fixed");

#if 0
    p.set(CameraParameters::KEY_JPEG_THUMBNAIL_HEIGHT,0);
    p.set(CameraParameters::KEY_JPEG_THUMBNAIL_QUALITY,75);
    p.set(CameraParameters::KEY_SUPPORTED_JPEG_THUMBNAIL_SIZES,"0x0");
    p.set("jpeg-thumbnail-size","0x0");
    p.set(CameraParameters::KEY_JPEG_THUMBNAIL_WIDTH,0);
#endif

    // Picture - Only JPEG supported
    p.set(CameraParameters::KEY_SUPPORTED_PICTURE_FORMATS,CameraParameters::PIXEL_FORMAT_JPEG); // ONLY jpeg
    p.setPictureFormat(CameraParameters::PIXEL_FORMAT_JPEG);
    p.set(CameraParameters::KEY_SUPPORTED_PICTURE_SIZES, szs);
    p.setPictureSize(fw,fh);
    p.set(CameraParameters::KEY_JPEG_QUALITY, 85);

    // Preview - Supporting yuv422i-yuyv,yuv422sp,yuv420sp, defaulting to yuv420sp, as that is the android Defacto default
    p.set(CameraParameters::KEY_SUPPORTED_PREVIEW_FORMATS,"yuv422i-yuyv,yuv422sp,yuv420sp,yuv420p"); // All supported preview formats
    p.setPreviewFormat(CameraParameters::PIXEL_FORMAT_YUV422SP); // For compatibility sake ... Default to the android standard
    p.set(CameraParameters::KEY_SUPPORTED_PREVIEW_FPS_RANGE, fpsranges);
    p.set(CameraParameters::KEY_SUPPORTED_PREVIEW_FRAME_RATES, fps);
    p.setPreviewFrameRate( pfps );
    p.set(CameraParameters::KEY_SUPPORTED_PREVIEW_SIZES, szs);
    p.setPreviewSize(pw,ph);

    // Video - Supporting yuv422i-yuyv,yuv422sp,yuv420sp and defaulting to yuv420p
    p.set("video-size-values"/*CameraParameters::KEY_SUPPORTED_VIDEO_SIZES*/, szs);
    p.setVideoSize(pw,ph);
    p.set(CameraParameters::KEY_VIDEO_FRAME_FORMAT, CameraParameters::PIXEL_FORMAT_YUV420P);
    p.set("preferred-preview-size-for-video", "640x480");

    // supported rotations
    p.set("rotation-values","0");
    p.set(CameraParameters::KEY_ROTATION,"0");

    // scenes modes
    p.set(CameraParameters::KEY_SUPPORTED_SCENE_MODES,"auto");
    p.set(CameraParameters::KEY_SCENE_MODE,"auto");

    // white balance
    p.set(CameraParameters::KEY_SUPPORTED_WHITE_BALANCE,"auto");
    p.set(CameraParameters::KEY_WHITE_BALANCE,"auto");

    // zoom
    p.set(CameraParameters::KEY_SMOOTH_ZOOM_SUPPORTED,"false");
    p.set("max-video-continuous-zoom", 0 );
    p.set(CameraParameters::KEY_ZOOM, "0");
    p.set(CameraParameters::KEY_MAX_ZOOM, "100");
    p.set(CameraParameters::KEY_ZOOM_RATIOS, "100");
    p.set(CameraParameters::KEY_ZOOM_SUPPORTED, "false");

    // missing parameters for Camera2
    p.set(CameraParameters::KEY_FOCAL_LENGTH, 4.31);
    p.set(CameraParameters::KEY_HORIZONTAL_VIEW_ANGLE, 90);
    p.set(CameraParameters::KEY_VERTICAL_VIEW_ANGLE, 90);
    p.set(CameraParameters::KEY_SUPPORTED_JPEG_THUMBNAIL_SIZES, "640x480,0x0");

Once this preparation is done, CameraHardware::startPreview is called. The function is only three lines: it takes a lock and then calls startPreviewLocked, where all of the preview work happens.

hardware/libcamera/CameraHardware.cpp

status_t CameraHardware::startPreviewLocked()
{
    ALOGD("CameraHardware::startPreviewLocked");

    // Preview runs on a dedicated thread; these lines check whether it is
    // already running. Normally this if is not taken.
    if (mPreviewThread != 0) {
        ALOGD("CameraHardware::startPreviewLocked: preview already running");
        return NO_ERROR;
    }

    // Get the preview width and height from CameraParameters.
    int width, height;
    // If we are recording, use the recording video size instead of the preview size
    if (mRecordingEnabled && mMsgEnabled & CAMERA_MSG_VIDEO_FRAME) {
        mParameters.getVideoSize(&width, &height);
    } else {
        mParameters.getPreviewSize(&width, &height);
    }

    // Get the preview fps from CameraParameters
    int fps = mParameters.getPreviewFrameRate();
    ALOGD("CameraHardware::startPreviewLocked: Open, %dx%d", width, height);

    // Open the camera device via V4L2Camera's Open function
    status_t ret = camera.Open(mVideoDevice);
    if (ret != NO_ERROR) {
        ALOGE("Failed to initialize Camera");
        return ret;
    }
    ALOGD("CameraHardware::startPreviewLocked: Init");

    // Initialize the camera device via V4L2Camera's Init function
    ret = camera.Init(width, height, fps);
    if (ret != NO_ERROR) {
        ALOGE("Failed to setup streaming");
        return ret;
    }

    // The requested preview size may not be supported by the camera;
    // the size the camera actually uses is retrieved below.
    /* Retrieve the real size being used */
    camera.getSize(width, height);
    ALOGD("CameraHardware::startPreviewLocked: effective size: %dx%d",width, height);

    // Save the size actually in use
    // If we are recording, use the recording video size instead of the preview size
    if (mRecordingEnabled && mMsgEnabled & CAMERA_MSG_VIDEO_FRAME) {
        /* Store it as the video size to use */
        mParameters.setVideoSize(width, height);
    } else {
        /* Store it as the preview size to use */
        mParameters.setPreviewSize(width, height);
    }

    /* And reinit the memory heaps to reflect the real used size if needed */
    initHeapLocked();
    ALOGD("CameraHardware::startPreviewLocked: StartStreaming");

    // Start the camera capturing via V4L2Camera.StartStreaming
    ret = camera.StartStreaming();
    if (ret != NO_ERROR) {
        ALOGE("Failed to start streaming");
        return ret;
    }

    // Initialize the preview window
    // setup the preview window geometry in order to use it to zoom the image
    if (mWin != 0) {
        ALOGD("CameraHardware::setPreviewWindow - Negotiating preview format");
        NegotiatePreviewFormat(mWin);
    }

    ALOGD("CameraHardware::startPreviewLocked: starting PreviewThread");

    // Start a thread to handle the preview work
    mPreviewThread = new PreviewThread(this);

    ALOGD("CameraHardware::startPreviewLocked: O - this:0x%p",this);

    return NO_ERROR;
}

Now look at PreviewThread. The class is trivial: it just calls CameraHardware's previewThread method, which computes a wait time from the fps, calls V4L2Camera's GrabRawFrame to grab an image from the camera device, converts it to a supported image format, and finally hands it to the display window for rendering.
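
The fps-derived wait can be sketched as a frame-period computation (my own simplification of the idea, not the exact HAL arithmetic):

```cpp
// Pace frame grabs by sleeping one frame period between captures,
// derived from the configured preview frame rate.
long frame_period_us(int fps) {
    if (fps <= 0) fps = 30;        // fall back to a sane default
    return 1000000L / fps;         // period in microseconds
}
```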

V4L2Camera

The V4L2Camera class is a wrapper around the V4L2 device. Below is an analysis of its most-used interfaces: Open, Init, StartStreaming, GrabRawFrame, EnumFrameIntervals, EnumFrameSizes, and EnumFrameFormats.

  • Open

    The logic of Open is fairly simple: it obtains a file descriptor for the camera device through the open system call, then issues the VIDIOC_QUERYCAP ioctl to query the device's capabilities. Since the device must be a camera, the V4L2_CAP_VIDEO_CAPTURE bit is required to be set, hence the check. Finally it calls EnumFrameFormats to obtain the image formats the camera supports.

    hardware/libcamera/V4L2Camera.cpp

    int V4L2Camera::Open (const char *device)
    {
        int ret;
    
        /* Close the previous instance, if any */
        Close();
    
        memset(videoIn, 0, sizeof (struct vdIn));
    
        if ((fd = open(device, O_RDWR)) == -1) {
            ALOGE("ERROR opening V4L interface: %s", strerror(errno));
            return -1;
        }
    
        ret = ioctl (fd, VIDIOC_QUERYCAP, &videoIn->cap);
        if (ret < 0) {
            ALOGE("Error opening device: unable to query device.");
            return -1;
        }
    
        if ((videoIn->cap.capabilities & V4L2_CAP_VIDEO_CAPTURE) == 0) {
            ALOGE("Error opening device: video capture not supported.");
            return -1;
        }
    
        if (!(videoIn->cap.capabilities & V4L2_CAP_STREAMING)) {
            ALOGE("Capture device does not support streaming i/o");
            return -1;
        }
    
        /* Enumerate all available frame formats */
        EnumFrameFormats();
    
        return ret;
    }
    
  • EnumFrameFormats

    This function obtains the camera's image formats, resolutions, and frame rates. As section 2.1 explained, supported resolutions are only meaningful for a given image format, and fps in turn only for a given format and resolution; so once this function has run, everything the camera supports (formats, resolutions, fps) is known. It also fills in m_BestPreviewFmt and m_BestPictureFmt, which are later used as the default preview and picture settings.

    hardware/libcamera/V4L2Camera.cpp

    bool V4L2Camera::EnumFrameFormats()
    {
        ALOGD("V4L2Camera::EnumFrameFormats");
        struct v4l2_fmtdesc fmt;
    
        // Start with no modes
        m_AllFmts.clear();
    
        memset(&fmt, 0, sizeof(fmt));
        fmt.index = 0;
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    
        // Enumerate all image formats the device supports
        while (ioctl(fd,VIDIOC_ENUM_FMT, &fmt) >= 0) {
            fmt.index++;
            ALOGD("{ pixelformat = '%c%c%c%c', description = '%s' }",
                    fmt.pixelformat & 0xFF, (fmt.pixelformat >> 8) & 0xFF,
                    (fmt.pixelformat >> 16) & 0xFF, (fmt.pixelformat >> 24) & 0xFF,
                    fmt.description);
    
            // Query the resolutions and frame rates supported in this format
            //enumerate frame sizes for this pixel format
            if (!EnumFrameSizes(fmt.pixelformat)) {
                ALOGE("  Unable to enumerate frame sizes.");
            }
        };
    
        // At this point all formats, resolutions, and fps values have been collected.
    
        // Now, select the best preview format and the best PictureFormat
        m_BestPreviewFmt = SurfaceDesc();
        m_BestPictureFmt = SurfaceDesc();
    
        unsigned int i;
        for (i=0; i<m_AllFmts.size(); i++) {
            SurfaceDesc s = m_AllFmts[i];
    
            // Choose the best picture settings: for still pictures fps barely matters,
            // so pick the SurfaceDesc with the largest resolution for m_BestPictureFmt
            // Prioritize size over everything else when taking pictures. use the
            // least fps possible, as that usually means better quality
            if ((s.getSize()  > m_BestPictureFmt.getSize()) ||
                (s.getSize() == m_BestPictureFmt.getSize() && s.getFps() < m_BestPictureFmt.getFps() )
                ) {
                m_BestPictureFmt = s;
            }
    
            // Choose the best preview settings: for preview, fps weighs more,
            // so prefer the SurfaceDesc with the highest fps for m_BestPreviewFmt
            // Prioritize fps, then size when doing preview
            if ((s.getFps()  > m_BestPreviewFmt.getFps()) ||
                (s.getFps() == m_BestPreviewFmt.getFps() && s.getSize() > m_BestPreviewFmt.getSize() )
                ) {
                m_BestPreviewFmt = s;
            }
    
        }
    
        return true;
    }
    
  • EnumFrameSizes

    This function queries the resolutions the device supports for the given pixfmt.

    hardware/libcamera/V4L2Camera.cpp

    bool V4L2Camera::EnumFrameSizes(int pixfmt)
    {
        ALOGD("V4L2Camera::EnumFrameSizes: pixfmt: 0x%08x",pixfmt);
        int ret=0;
        int fsizeind = 0;
        struct v4l2_frmsizeenum fsize;
    
        // Fill in the v4l2_frmsizeenum
        memset(&fsize, 0, sizeof(fsize));
        fsize.index = 0;
        fsize.pixel_format = pixfmt;
        // Call the VIDIOC_ENUM_FRAMESIZES ioctl in a loop to enumerate all supported resolutions
        while (ioctl(fd, VIDIOC_ENUM_FRAMESIZES, &fsize) >= 0) {
            fsize.index++;
            // Branch on the type of the result
            if (fsize.type == V4L2_FRMSIZE_TYPE_DISCRETE) {
                ALOGD("{ discrete: width = %u, height = %u }",
                    fsize.discrete.width, fsize.discrete.height);
    
                // Counts how many DISCRETE resolutions the device supports
                fsizeind++;
    
                // Go on to query the frame rates supported at this resolution
                if (!EnumFrameIntervals(pixfmt,fsize.discrete.width, fsize.discrete.height))
                    ALOGD("  Unable to enumerate frame intervals");
            } else if (fsize.type == V4L2_FRMSIZE_TYPE_CONTINUOUS) { // For CONTINUOUS or STEPWISE, do nothing
                ALOGD("{ continuous: min { width = %u, height = %u } .. "
                    "max { width = %u, height = %u } }",
                    fsize.stepwise.min_width, fsize.stepwise.min_height,
                    fsize.stepwise.max_width, fsize.stepwise.max_height);
                ALOGD("  will not enumerate frame intervals.\n");
            } else if (fsize.type == V4L2_FRMSIZE_TYPE_STEPWISE) {
                ALOGD("{ stepwise: min { width = %u, height = %u } .. "
                    "max { width = %u, height = %u } / "
                    "stepsize { width = %u, height = %u } }",
                    fsize.stepwise.min_width, fsize.stepwise.min_height,
                    fsize.stepwise.max_width, fsize.stepwise.max_height,
                    fsize.stepwise.step_width, fsize.stepwise.step_height);
                ALOGD("  will not enumerate frame intervals.");
            } else {
                ALOGE("  fsize.type not supported: %d\n", fsize.type);
                ALOGE("     (Discrete: %d   Continuous: %d  Stepwise: %d)",
                    V4L2_FRMSIZE_TYPE_DISCRETE,
                    V4L2_FRMSIZE_TYPE_CONTINUOUS,
                    V4L2_FRMSIZE_TYPE_STEPWISE);
            }
        }
    
        // If the device reports no DISCRETE resolutions, probe it with VIDIOC_TRY_FMT;
        // any resolution the device accepts is treated as supported
        if (fsizeind == 0) {
            /* ------ gspca doesn't enumerate frame sizes ------ */
            /*       negotiate with VIDIOC_TRY_FMT instead       */
            static const struct {
                int w,h;
            } defMode[] = {
                {800,600},
                {768,576},
                {768,480},
                {720,576},
                {720,480},
                {704,576},
                {704,480},
                {640,480},
                {352,288},
                {320,240}
            };
    
            unsigned int i;
            for (i = 0 ; i < (sizeof(defMode) / sizeof(defMode[0])); i++) {
    
                fsizeind++;
                struct v4l2_format fmt;
                fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
                fmt.fmt.pix.width = defMode[i].w;
                fmt.fmt.pix.height = defMode[i].h;
                fmt.fmt.pix.pixelformat = pixfmt;
                fmt.fmt.pix.field = V4L2_FIELD_ANY;
    
                if (ioctl(fd,VIDIOC_TRY_FMT, &fmt) >= 0) {
                    ALOGD("{ ?GSPCA? : width = %u, height = %u }\n", fmt.fmt.pix.width, fmt.fmt.pix.height);
    
                    // Add the mode descriptor
                    m_AllFmts.add( SurfaceDesc( fmt.fmt.pix.width, fmt.fmt.pix.height, 25 ) );
                }
            }
        }
    
        return true;
    }
    

    As the code shows, Android only honors DISCRETE resolutions; CONTINUOUS and STEPWISE merely produce a log entry and are otherwise ignored.

  • EnumFrameIntervals

    This function uses the VIDIOC_ENUM_FRAMEINTERVALS ioctl to query the frame rates the device supports at the given image format and resolution.

    hardware/libcamera/V4L2Camera.cpp

    bool V4L2Camera::EnumFrameIntervals(int pixfmt, int width, int height)
    {
        ALOGD("V4L2Camera::EnumFrameIntervals: pixfmt: 0x%08x, w:%d, h:%d",pixfmt,width,height);
    
        struct v4l2_frmivalenum fival;
        int list_fps=0;
        // Set up the parameters
        memset(&fival, 0, sizeof(fival));
        fival.index = 0;
        fival.pixel_format = pixfmt;
        fival.width = width;
        fival.height = height;
    
        ALOGD("\tTime interval between frame: ");
        // Call the ioctl in a loop to enumerate all supported frame intervals
        while (ioctl(fd,VIDIOC_ENUM_FRAMEINTERVALS, &fival) >= 0)
        {
            fival.index++;
            // Again only DISCRETE is honored
            if (fival.type == V4L2_FRMIVAL_TYPE_DISCRETE) {
                ALOGD("%u/%u", fival.discrete.numerator, fival.discrete.denominator);
                // Build a SurfaceDesc and add it to the member m_AllFmts
                m_AllFmts.add( SurfaceDesc( width, height, fival.discrete.denominator ) );
                list_fps++;
            } else if (fival.type == V4L2_FRMIVAL_TYPE_CONTINUOUS) {
                ALOGD("{min { %u/%u } .. max { %u/%u } }",
                    fival.stepwise.min.numerator, fival.stepwise.min.denominator,
                    fival.stepwise.max.numerator, fival.stepwise.max.denominator);
                break;
            } else if (fival.type == V4L2_FRMIVAL_TYPE_STEPWISE) {
                ALOGD("{min { %u/%u } .. max { %u/%u } / "
                    "stepsize { %u/%u } }",
                    fival.stepwise.min.numerator, fival.stepwise.min.denominator,
                    fival.stepwise.max.numerator, fival.stepwise.max.denominator,
                    fival.stepwise.step.numerator, fival.stepwise.step.denominator);
                break;
            }
        }
    
        // Assume at least 1fps
        if (list_fps == 0) {
            m_AllFmts.add( SurfaceDesc( width, height, 1 ) );
        }
    
        return true;
    }
    
  • Init

    Init is the most complex method in the V4L2Camera class.

    hardware/libcamera/V4L2Camera.cpp

    int V4L2Camera::Init(int width, int height, int fps)
    {
        ALOGD("V4L2Camera::Init");
    
        /* Initialize the capture to the specified width and height */
        static const struct {
            int fmt;            /* PixelFormat */
            int bpp;            /* bytes per pixel */
            int isplanar;       /* If format is planar or not */
            int allowscrop;     /* If we support cropping with this pixel format */
        } pixFmtsOrder[] = {
            {V4L2_PIX_FMT_YUYV,     2,0,1},
            {V4L2_PIX_FMT_YVYU,     2,0,1},
            {V4L2_PIX_FMT_UYVY,     2,0,1},
            {V4L2_PIX_FMT_YYUV,     2,0,1},
            {V4L2_PIX_FMT_SPCA501,  2,0,0},
            {V4L2_PIX_FMT_SPCA505,  2,0,0},
            {V4L2_PIX_FMT_SPCA508,  2,0,0},
            {V4L2_PIX_FMT_YUV420,   0,1,0},
            {V4L2_PIX_FMT_YVU420,   0,1,0},
            {V4L2_PIX_FMT_NV12,     0,1,0},
            {V4L2_PIX_FMT_NV21,     0,1,0},
            {V4L2_PIX_FMT_NV16,     0,1,0},
            {V4L2_PIX_FMT_NV61,     0,1,0},
            {V4L2_PIX_FMT_Y41P,     0,0,0},
            {V4L2_PIX_FMT_SGBRG8,   0,0,0},
            {V4L2_PIX_FMT_SGRBG8,   0,0,0},
            {V4L2_PIX_FMT_SBGGR8,   0,0,0},
            {V4L2_PIX_FMT_SRGGB8,   0,0,0},
            {V4L2_PIX_FMT_BGR24,    3,0,1},
            {V4L2_PIX_FMT_RGB24,    3,0,1},
            {V4L2_PIX_FMT_MJPEG,    0,1,0},
            {V4L2_PIX_FMT_JPEG,     0,1,0},
            {V4L2_PIX_FMT_GREY,     1,0,1},
            {V4L2_PIX_FMT_Y16,      2,0,1},
        };
    
        int ret;
    
        // If no formats, break here
        if (m_AllFmts.isEmpty()) {
            ALOGE("No video formats available");
            return -1;
        }
    
        // Try to get the closest match ...
        SurfaceDesc closest;
        int closestDArea = -1;
        int closestDFps = -1;
        unsigned int i;
        int area = width * height;
        for (i = 0; i < m_AllFmts.size(); i++) {
            SurfaceDesc sd = m_AllFmts[i];
    
            // Always choose a bigger or equal surface
            if (sd.getWidth() >= width &&
                sd.getHeight() >= height) {
    
                int difArea = sd.getArea() - area;
                int difFps = my_abs(sd.getFps() - fps);
    
                ALOGD("Trying format: (%d x %d), Fps: %d [difArea:%d, difFps:%d, cDifArea:%d, cDifFps:%d]",sd.getWidth(),sd.getHeight(),sd.getFps(), difArea, difFps, closestDArea, closestDFps);
    
                // From the resolutions the camera supports, pick one whose width and
                // height are both >= the requested size with the smallest area difference;
                // among ties, prefer the smallest fps difference. The winner goes into closest.
                if (closestDArea < 0 ||
                    difArea < closestDArea ||
                    (difArea == closestDArea && difFps < closestDFps)) {
    
                    // Store approximation
                    closestDArea = difArea;
                    closestDFps = difFps;
    
                    // And the new surface descriptor
                    closest = sd;
                }
            }
        }
    
        // No supported resolution has both dimensions >= the requested size
        if (closestDArea == -1) {
            ALOGE("Size not available: (%d x %d)",width,height);
            return -1;
        }
    
        // closest is now the SurfaceDesc nearest to the requested parameters
        ALOGD("Selected format: (%d x %d), Fps: %d",closest.getWidth(),closest.getHeight(),closest.getFps());
    
        // If closest does not exactly match the requested size, the frames will need cropping
        // Check if we will have to crop the captured image
        bool crop = width != closest.getWidth() || height != closest.getHeight();
    
        // Iterate through pixel formats from best to worst
        ret = -1;
        for (i=0; i < (sizeof(pixFmtsOrder) / sizeof(pixFmtsOrder[0])); i++) {
    
            // If we will need to crop, make sure to only select formats we can crop...
            // Proceed only if no crop is needed, or this format can be cropped
            if (!crop || pixFmtsOrder[i].allowscrop) {
    
                memset(&videoIn->format,0,sizeof(videoIn->format));
                videoIn->format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
                videoIn->format.fmt.pix.width = closest.getWidth();
                videoIn->format.fmt.pix.height = closest.getHeight();
                videoIn->format.fmt.pix.pixelformat = pixFmtsOrder[i].fmt;
    
                // Probe this pixel format with VIDIOC_TRY_FMT (does not change device state)
                ret = ioctl(fd, VIDIOC_TRY_FMT, &videoIn->format);
                // On success, verify the driver kept closest's width and height
                if (ret >= 0 &&
                    videoIn->format.fmt.pix.width ==  (uint)closest.getWidth() &&
                    videoIn->format.fmt.pix.height == (uint)closest.getHeight()) {
                    break;
                }
            }
        }
        if (ret < 0) {
            ALOGE("Open: VIDIOC_TRY_FMT Failed: %s", strerror(errno));
            return ret;
        }
    
        // Now actually apply the format
        /* Set the format */
        memset(&videoIn->format,0,sizeof(videoIn->format));
        videoIn->format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        videoIn->format.fmt.pix.width = closest.getWidth();
        videoIn->format.fmt.pix.height = closest.getHeight();
        videoIn->format.fmt.pix.pixelformat = pixFmtsOrder[i].fmt;
        ret = ioctl(fd, VIDIOC_S_FMT, &videoIn->format);
        if (ret < 0) {
            ALOGE("Open: VIDIOC_S_FMT Failed: %s", strerror(errno));
            return ret;
        }
    
    
        // Query the format actually in effect
        /* Query for the effective video format used */
        memset(&videoIn->format,0,sizeof(videoIn->format));
        videoIn->format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        ret = ioctl(fd, VIDIOC_G_FMT, &videoIn->format);
        if (ret < 0) {
            ALOGE("Open: VIDIOC_G_FMT Failed: %s", strerror(errno));
            return ret;
        }
    
        /* Note VIDIOC_S_FMT may change width and height. */
    
        /* Buggy driver paranoia. */
        // bytesperline is also used later for the crop offset
        unsigned int min = videoIn->format.fmt.pix.width * 2;
        if (videoIn->format.fmt.pix.bytesperline < min)
            videoIn->format.fmt.pix.bytesperline = min;
        min = videoIn->format.fmt.pix.bytesperline * videoIn->format.fmt.pix.height;
        if (videoIn->format.fmt.pix.sizeimage < min)
            videoIn->format.fmt.pix.sizeimage = min;
    
        /* Store the pixel formats we will use */
        videoIn->outWidth           = width;
        videoIn->outHeight          = height;
        videoIn->outFrameSize       = width * height << 1; // Calculate the expected output framesize in YUYV
        videoIn->capBytesPerPixel   = pixFmtsOrder[i].bpp;
    
        /* Now calculate cropping margins, if needed, rounding to even */
        int startX = ((closest.getWidth() - width) >> 1) & (-2);
        int startY = ((closest.getHeight() - height) >> 1) & (-2);
    
        /* Avoid crashing if the mode found is smaller than the requested */
        if (startX < 0) {
            videoIn->outWidth += startX;
            startX = 0;
        }
        if (startY < 0) {
            videoIn->outHeight += startY;
            startY = 0;
        }
    
        /* Calculate the starting offset into each captured frame */
        videoIn->capCropOffset = (startX * videoIn->capBytesPerPixel) +
                (videoIn->format.fmt.pix.bytesperline * startY);
    
        ALOGI("Cropping from origin: %dx%d - size: %dx%d  (offset:%d)",
            startX,startY,
            videoIn->outWidth,videoIn->outHeight,
            videoIn->capCropOffset);
    
        /* sets video device frame rate */
        memset(&videoIn->params,0,sizeof(videoIn->params));
        videoIn->params.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        videoIn->params.parm.capture.timeperframe.numerator = 1;
        videoIn->params.parm.capture.timeperframe.denominator = closest.getFps();
    
        /* Set the framerate. If it fails, it won't be fatal */
        if (ioctl(fd,VIDIOC_S_PARM,&videoIn->params) < 0) {
            ALOGE("VIDIOC_S_PARM error: Unable to set %d fps", closest.getFps());
        }
    
        /* Gets video device defined frame rate (not real - consider it a maximum value) */
        if (ioctl(fd,VIDIOC_G_PARM,&videoIn->params) < 0) {
            ALOGE("VIDIOC_G_PARM - Unable to get timeperframe");
        }
    
        ALOGI("Actual format: (%d x %d), Fps: %d, pixfmt: '%c%c%c%c', bytesperline: %d",
            videoIn->format.fmt.pix.width,
            videoIn->format.fmt.pix.height,
            videoIn->params.parm.capture.timeperframe.denominator,
            videoIn->format.fmt.pix.pixelformat & 0xFF, (videoIn->format.fmt.pix.pixelformat >> 8) & 0xFF,
            (videoIn->format.fmt.pix.pixelformat >> 16) & 0xFF, (videoIn->format.fmt.pix.pixelformat >> 24) & 0xFF,
            videoIn->format.fmt.pix.bytesperline);
    
        /* Configure JPEG quality, if dealing with those formats */
        if (videoIn->format.fmt.pix.pixelformat == V4L2_PIX_FMT_JPEG ||
            videoIn->format.fmt.pix.pixelformat == V4L2_PIX_FMT_MJPEG) {
    
            // Set the JPEG quality to 100%
            /* Get the compression format */
            ioctl(fd,VIDIOC_G_JPEGCOMP, &videoIn->jpegcomp);
    
            /* Set to maximum */
            videoIn->jpegcomp.quality = 100;
    
            /* Try to set it (note: the stock code tests ">= 0" here, which runs
               the error path on success; "< 0" is the intended check) */
            if(ioctl(fd,VIDIOC_S_JPEGCOMP, &videoIn->jpegcomp) < 0)
            {
                ALOGE("VIDIOC_S_COMP:");
                if(errno == EINVAL)
                {
                    videoIn->jpegcomp.quality = -1; //not supported
                    ALOGE("   compression control not supported\n");
                }
            }
    
            /* gets video stream jpeg compression parameters */
            if(ioctl(fd,VIDIOC_G_JPEGCOMP, &videoIn->jpegcomp) >= 0) {
                ALOGD("VIDIOC_G_COMP:\n");
                ALOGD("    quality:      %i\n", videoIn->jpegcomp.quality);
                ALOGD("    APPn:         %i\n", videoIn->jpegcomp.APPn);
                ALOGD("    APP_len:      %i\n", videoIn->jpegcomp.APP_len);
                ALOGD("    APP_data:     %s\n", videoIn->jpegcomp.APP_data);
                ALOGD("    COM_len:      %i\n", videoIn->jpegcomp.COM_len);
                ALOGD("    COM_data:     %s\n", videoIn->jpegcomp.COM_data);
                ALOGD("    jpeg_markers: 0x%x\n", videoIn->jpegcomp.jpeg_markers);
            } else {
                ALOGE("VIDIOC_G_COMP:");
                if(errno == EINVAL) {
                    videoIn->jpegcomp.quality = -1; //not supported
                    ALOGE("   compression control not supported\n");
                }
            }
        }
    
        /* Check if camera can handle NB_BUFFER buffers */
        memset(&videoIn->rb,0,sizeof(videoIn->rb));
        videoIn->rb.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        videoIn->rb.memory = V4L2_MEMORY_MMAP;
        videoIn->rb.count = NB_BUFFER; // defined as 4 in V4L2Camera.h
    
        // Ask the driver to allocate the buffers
        ret = ioctl(fd, VIDIOC_REQBUFS, &videoIn->rb);
        if (ret < 0) {
            ALOGE("Init: VIDIOC_REQBUFS failed: %s", strerror(errno));
            return ret;
        }
    
        for (int i = 0; i < NB_BUFFER; i++) {
    
            memset (&videoIn->buf, 0, sizeof (struct v4l2_buffer));
            videoIn->buf.index = i;
            videoIn->buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            videoIn->buf.memory = V4L2_MEMORY_MMAP;
    
            ret = ioctl (fd, VIDIOC_QUERYBUF, &videoIn->buf);
            if (ret < 0) {
                ALOGE("Init: Unable to query buffer (%s)", strerror(errno));
                return ret;
            }
    
            // mmap the buffer the driver just allocated into user space
            videoIn->mem[i] = mmap (0,
                                    videoIn->buf.length,
                                    PROT_READ | PROT_WRITE,
                                    MAP_SHARED,
                                    fd,
                                    videoIn->buf.m.offset);
    
            if (videoIn->mem[i] == MAP_FAILED) {
                ALOGE("Init: Unable to map buffer (%s)", strerror(errno));
                return -1;
            }
    
            ret = ioctl(fd, VIDIOC_QBUF, &videoIn->buf);
            if (ret < 0) {
                ALOGE("Init: VIDIOC_QBUF Failed");
                return -1;
            }
    
            nQueued++;
        }
    
        // Reserve temporary buffers, if they will be needed
        size_t tmpbuf_size=0;
        switch (videoIn->format.fmt.pix.pixelformat)
        {
            case V4L2_PIX_FMT_JPEG:
            case V4L2_PIX_FMT_MJPEG:
            case V4L2_PIX_FMT_UYVY:
            case V4L2_PIX_FMT_YVYU:
            case V4L2_PIX_FMT_YYUV:
            case V4L2_PIX_FMT_YUV420: // only needs 3/2 bytes per pixel but we alloc 2 bytes per pixel
            case V4L2_PIX_FMT_YVU420: // only needs 3/2 bytes per pixel but we alloc 2 bytes per pixel
            case V4L2_PIX_FMT_Y41P:   // only needs 3/2 bytes per pixel but we alloc 2 bytes per pixel
            case V4L2_PIX_FMT_NV12:
            case V4L2_PIX_FMT_NV21:
            case V4L2_PIX_FMT_NV16:
            case V4L2_PIX_FMT_NV61:
            case V4L2_PIX_FMT_SPCA501:
            case V4L2_PIX_FMT_SPCA505:
            case V4L2_PIX_FMT_SPCA508:
            case V4L2_PIX_FMT_GREY:
            case V4L2_PIX_FMT_Y16:
    
            case V4L2_PIX_FMT_YUYV:
                //  YUYV doesn't need a temp buffer but we will set it if/when
                //  video processing disable control is checked (bayer processing).
                //            (logitech cameras only)
                break;
    
            case V4L2_PIX_FMT_SGBRG8: //0
            case V4L2_PIX_FMT_SGRBG8: //1
            case V4L2_PIX_FMT_SBGGR8: //2
            case V4L2_PIX_FMT_SRGGB8: //3
                // Raw 8 bit bayer
                // when grabbing use:
                //    bayer_to_rgb24(bayer_data, RGB24_data, width, height, 0..3)
                //    rgb2yuyv(RGB24_data, pFrameBuffer, width, height)
    
                // alloc a temp buffer for converting to YUYV
                // rgb buffer for decoding bayer data
                tmpbuf_size = videoIn->format.fmt.pix.width * videoIn->format.fmt.pix.height * 3;
                if (videoIn->tmpBuffer)
                    free(videoIn->tmpBuffer);
                videoIn->tmpBuffer = (uint8_t*)calloc(1, tmpbuf_size);
                if (!videoIn->tmpBuffer) {
                    ALOGE("couldn't calloc %lu bytes of memory for frame buffer\n",
                        (unsigned long) tmpbuf_size);
                    return -ENOMEM;
                }
    
    
                break;
    
            case V4L2_PIX_FMT_RGB24: //rgb or bgr (8-8-8)
            case V4L2_PIX_FMT_BGR24:
                break;
    
            default:
                ALOGE("Should never arrive (1)- exit fatal !!\n");
                return -1;
        }
    
        return 0;
    }
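Init's closing ALOGI decodes the pixel format fourcc by shifting out one byte at a time. As a stand-alone illustration (the `fourcc`/`fourccToString` helpers below are hypothetical, not part of the source):

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Hypothetical helpers: pack and unpack a V4L2-style fourcc, mirroring the
// byte-shifting done by Init's final ALOGI when it prints the pixel format.
static uint32_t fourcc(char a, char b, char c, char d) {
    return (uint32_t)(uint8_t)a | ((uint32_t)(uint8_t)b << 8) |
           ((uint32_t)(uint8_t)c << 16) | ((uint32_t)(uint8_t)d << 24);
}

static std::string fourccToString(uint32_t f) {
    std::string s(4, ' ');
    s[0] = (char)(f & 0xFF);         // lowest byte first, exactly as the
    s[1] = (char)((f >> 8) & 0xFF);  // ALOGI format string extracts them
    s[2] = (char)((f >> 16) & 0xFF);
    s[3] = (char)((f >> 24) & 0xFF);
    return s;
}
```

For example, V4L2_PIX_FMT_YUYV is fourcc('Y','U','Y','V'), i.e. 0x56595559.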
    

    In summary, Init does the following:

    1. From the resolutions and frame rates the device supports, pick the combination that is no smaller than what the caller requested and closest to it, then configure the device with it. Because the resolution actually used may be larger than requested, also compute a crop offset so the excess can be trimmed off whenever a frame is consumed.
    2. If the device delivers JPEG or MJPEG, set the compression quality to 100%.
    3. Ask the driver to allocate buffers, map them into user space as videoIn->mem, and queue them on the device's input queue. At this point the camera is ready to capture and is just waiting for the streamon command.
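The resolution-selection step can be sketched as a pure function; this is a simplification of the loop at the top of Init, and the `Mode` struct and `findClosest` name are illustrative only:

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// Simplified version of Init's selection loop: choose the smallest supported
// mode whose width and height are both >= the requested size; ties on area
// are broken by the closest frame rate.
struct Mode { int w, h, fps; };

// Returns the index of the chosen mode, or -1 if nothing is large enough.
int findClosest(const std::vector<Mode>& modes, int width, int height, int fps) {
    int best = -1, bestDArea = -1, bestDFps = -1;
    const int area = width * height;
    for (size_t i = 0; i < modes.size(); i++) {
        const Mode& m = modes[i];
        if (m.w < width || m.h < height) continue;   // never pick a smaller surface
        int dArea = m.w * m.h - area;
        int dFps  = std::abs(m.fps - fps);
        if (bestDArea < 0 || dArea < bestDArea ||
            (dArea == bestDArea && dFps < bestDFps)) {
            bestDArea = dArea; bestDFps = dFps; best = (int)i;
        }
    }
    return best;
}
```

For a device offering 320x240@30, 640x480@30 and 640x480@15, a request for 600x400@20 selects 640x480@15: only the 640x480 modes are big enough, and 15 fps is closer to 20 than 30 is.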
  • StartStreaming

    StartStreaming is simple: apart from issuing the VIDIOC_STREAMON ioctl, it only sets videoIn's isStreaming flag:

    hardware/libcamera/V4L2Camera.cpp

    int V4L2Camera::StartStreaming ()
    {
        enum v4l2_buf_type type;
        int ret;
    
        if (!videoIn->isStreaming) {
            type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    
            ret = ioctl (fd, VIDIOC_STREAMON, &type);
            if (ret < 0) {
                ALOGE("StartStreaming: Unable to start capture: %s", strerror(errno));
                return ret;
            }
    
            videoIn->isStreaming = true;
        }
    
        return 0;
    }
    

    Once this call returns, the camera is capturing.

  • GrabRawFrame

    After StartStreaming, the captured frames still need to be fetched; that is what GrabRawFrame does.

    hardware/libcamera/V4L2Camera.cpp

    void V4L2Camera::GrabRawFrame (void *frameBuffer, int maxSize)
    {
        LOG_FRAME("V4L2Camera::GrabRawFrame: frameBuffer:%p, len:%d",frameBuffer,maxSize);
        int ret;
    
        /* DQ */
        // Dequeue one filled frame with VIDIOC_DQBUF
        memset(&videoIn->buf,0,sizeof(videoIn->buf));
        videoIn->buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        videoIn->buf.memory = V4L2_MEMORY_MMAP;
        ret = ioctl(fd, VIDIOC_DQBUF, &videoIn->buf);
        if (ret < 0) {
            ALOGE("GrabPreviewFrame: VIDIOC_DQBUF Failed");
            return;
        }
    
        nDequeued++;
    
        // Calculate the stride of the output image (YUYV) in bytes
        int strideOut = videoIn->outWidth << 1;
    
        // And the pointer to the start of the image
        // Add the crop offset computed in Init so the image is trimmed to the size
        // requested at Init time; src points at the start of the (cropped) image
        uint8_t* src = (uint8_t*)videoIn->mem[videoIn->buf.index] + videoIn->capCropOffset;
    
        LOG_FRAME("V4L2Camera::GrabRawFrame - Got Raw frame (%dx%d) (buf:%d@0x%p, len:%d)",videoIn->format.fmt.pix.width,videoIn->format.fmt.pix.height,videoIn->buf.index,src,videoIn->buf.bytesused);
    
        /* Avoid crashing! - Make sure there is enough room in the output buffer! */
        if (maxSize < videoIn->outFrameSize) {
    
            ALOGE("V4L2Camera::GrabRawFrame: Insufficient space in output buffer: Required: %d, Got %d - DROPPING FRAME",videoIn->outFrameSize,maxSize);
    
        } else {
    
            // Handle each pixel format; the converted image ends up in frameBuffer
            switch (videoIn->format.fmt.pix.pixelformat)
            {
                case V4L2_PIX_FMT_JPEG:
                case V4L2_PIX_FMT_MJPEG:
                    if(videoIn->buf.bytesused <= HEADERFRAME1) {
                        // Prevent crash on empty image
                        ALOGE("Ignoring empty buffer ...\n");
                        break;
                    }
    
                    if (jpeg_decode((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight) < 0) {
                        ALOGE("jpeg decode errors\n");
                        break;
                    }
                    break;
    
                case V4L2_PIX_FMT_UYVY:
                    uyvy_to_yuyv((uint8_t*)frameBuffer, strideOut,
                                 src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_YVYU:
                    yvyu_to_yuyv((uint8_t*)frameBuffer, strideOut,
                                 src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_YYUV:
                    yyuv_to_yuyv((uint8_t*)frameBuffer, strideOut,
                                 src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_YUV420:
                    yuv420_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_YVU420:
                    yvu420_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_NV12:
                    nv12_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_NV21:
                    nv21_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_NV16:
                    nv16_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_NV61:
                    nv61_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_Y41P:
                    y41p_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_GREY:
                    grey_to_yuyv((uint8_t*)frameBuffer, strideOut,
                                src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_Y16:
                    y16_to_yuyv((uint8_t*)frameBuffer, strideOut,
                                src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_SPCA501:
                    s501_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_SPCA505:
                    s505_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_SPCA508:
                    s508_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_YUYV:
                    {
                        int h;
                        uint8_t* pdst = (uint8_t*)frameBuffer;
                        uint8_t* psrc = src;
                        int ss = videoIn->outWidth << 1;
                        for (h = 0; h < videoIn->outHeight; h++) {
                            memcpy(pdst,psrc,ss);
                            pdst += strideOut;
                            psrc += videoIn->format.fmt.pix.bytesperline;
                        }
                    }
                    break;
    
                case V4L2_PIX_FMT_SGBRG8: //0
                    bayer_to_rgb24 (src,(uint8_t*) videoIn->tmpBuffer, videoIn->outWidth, videoIn->outHeight, 0);
                    rgb_to_yuyv ((uint8_t*) frameBuffer, strideOut,
                                (uint8_t*)videoIn->tmpBuffer, videoIn->outWidth*3, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_SGRBG8: //1
                    bayer_to_rgb24 (src,(uint8_t*) videoIn->tmpBuffer, videoIn->outWidth, videoIn->outHeight, 1);
                    rgb_to_yuyv ((uint8_t*) frameBuffer, strideOut,
                                (uint8_t*)videoIn->tmpBuffer, videoIn->outWidth*3, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_SBGGR8: //2
                    bayer_to_rgb24 (src,(uint8_t*) videoIn->tmpBuffer, videoIn->outWidth, videoIn->outHeight, 2);
                    rgb_to_yuyv ((uint8_t*) frameBuffer, strideOut,
                                (uint8_t*)videoIn->tmpBuffer, videoIn->outWidth*3, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_SRGGB8: //3
                    bayer_to_rgb24 (src,(uint8_t*) videoIn->tmpBuffer, videoIn->outWidth, videoIn->outHeight, 3);
                    rgb_to_yuyv ((uint8_t*) frameBuffer, strideOut,
                                (uint8_t*)videoIn->tmpBuffer, videoIn->outWidth*3, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_RGB24:
                    rgb_to_yuyv((uint8_t*) frameBuffer, strideOut,
                                src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_BGR24:
                    bgr_to_yuyv((uint8_t*) frameBuffer, strideOut,
                                src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                default:
                    ALOGE("error grabbing: unknown format: %i\n", videoIn->format.fmt.pix.pixelformat);
                    break;
            }
    
            LOG_FRAME("V4L2Camera::GrabRawFrame - Copied frame to destination 0x%p",frameBuffer);
        }
    
        // Re-queue the buffer so it can be reused
        /* And Queue the buffer again */
        ret = ioctl(fd, VIDIOC_QBUF, &videoIn->buf);
        if (ret < 0) {
            ALOGE("GrabPreviewFrame: VIDIOC_QBUF Failed");
            return;
        }
    
        nQueued++;
    
        LOG_FRAME("V4L2Camera::GrabRawFrame - Queued buffer");
    
    }
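Taken together, the crop offset computed in Init and the YUYV row-copy branch above boil down to the following self-contained sketch (`cropYuyv` is hypothetical; the real code precomputes capCropOffset once in Init and takes bytesperline from the driver):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical sketch of the YUYV crop path: the frame is captured at
// capW x capH, but the caller asked for outW x outH, so we skip a byte
// offset into the frame and then copy outW*2 bytes per row.
void cropYuyv(const uint8_t* cap, int capW, int capH,
              uint8_t* out, int outW, int outH) {
    int bytesPerLine = capW * 2;               // YUYV: 2 bytes per pixel
    int startX = ((capW - outW) >> 1) & (-2);  // center crop, rounded to even
    int startY = ((capH - outH) >> 1) & (-2);
    const uint8_t* src = cap + startX * 2 + startY * bytesPerLine; // capCropOffset
    int strideOut = outW * 2;
    for (int h = 0; h < outH; h++) {
        std::memcpy(out + h * strideOut, src + h * bytesPerLine, strideOut);
    }
}
```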
    

4.2 Framework

The Java SDK layer

The hardware layer can be analyzed bottom-up: first V4L2Camera, then CameraHardware, then CameraFactory. Going bottom-up through the framework would involve far too much code, so we start instead from the camera portion of the SDK. The HAL and the framework discussed so far are C++ and implement Android's lower layers, but apps are developed against the Java SDK. Java can call C++ through JNI; let's see how the SDK does it.

Consider a snippet that starts a camera preview:

Camera cam = Camera.open();           // get a camera instance
cam.setPreviewDisplay(surfaceHolder); // set the preview surface
cam.startPreview();                   // start previewing

The first line opens the default camera; Camera.open(1) would open another one. These methods are defined in Camera.java in the SDK. open is:

frameworks/base/core/java/android/hardware/Camera.java

public static Camera open(int cameraId) {
    return new Camera(cameraId);
}

public static Camera open() {
    int numberOfCameras = getNumberOfCameras();
    CameraInfo cameraInfo = new CameraInfo();
    for (int i = 0; i < numberOfCameras; i++) {
        getCameraInfo(i, cameraInfo);
        if (cameraInfo.facing == CameraInfo.CAMERA_FACING_BACK) {
            return new Camera(i);
        }
    }
    return null;
}

As you can see, the parameterless open actually opens the first back-facing camera; either way, open returns a Camera object. Here we meet a familiar function, getNumberOfCameras: in the HAL's camera_module_t, besides the mandatory hw_module_t, there are two function pointers, get_number_of_cameras and get_camera_info, so getNumberOfCameras presumably ends up calling get_number_of_cameras. Let's look at it:

/**
 * Returns the number of physical cameras available on this device.
 */
public native static int getNumberOfCameras();

Camera.java only declares this method as native, so we have to find its corresponding JNI definition.

frameworks/base/core/jni/android_hardware_Camera.cpp

static jint android_hardware_Camera_getNumberOfCameras(JNIEnv *env, jobject thiz)
{
    return Camera::getNumberOfCameras();
}

Next comes the C++ Camera class, which already lives in the Android framework; getNumberOfCameras is actually defined in its base class CameraBase:

frameworks/av/camera/CameraBase.cpp

template <typename TCam, typename TCamTraits>
int CameraBase<TCam, TCamTraits>::getNumberOfCameras() {
    const sp<ICameraService> cs = getCameraService();

    if (!cs.get()) {
        // as required by the public Java APIs
        return 0;
    }
    return cs->getNumberOfCameras();
}

This simply fetches the CameraService and invokes its getNumberOfCameras. Next, look at how getCameraService obtains that service:

frameworks/av/camera/CameraBase.cpp

template <typename TCam, typename TCamTraits>
const sp<ICameraService>& CameraBase<TCam, TCamTraits>::getCameraService()
{
    Mutex::Autolock _l(gLock);
    if (gCameraService.get() == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16(kCameraServiceName));
            if (binder != 0) {
                break;
            }
            ALOGW("CameraService not published, waiting...");
            usleep(kCameraServicePollDelay);
        } while(true);
        if (gDeathNotifier == NULL) {
            gDeathNotifier = new DeathNotifier();
        }
        binder->linkToDeath(gDeathNotifier);
        gCameraService = interface_cast<ICameraService>(binder);
    }
    ALOGE_IF(gCameraService == 0, "no CameraService!?");
    return gCameraService;
}

gCameraService is a singleton of type sp&lt;ICameraService&gt;: the first call to this function initializes it, and every later call simply returns it. During initialization, defaultServiceManager yields a service manager sm, and sm->getService fetches the CameraService. defaultServiceManager lives in frameworks/native/libs/binder/IServiceManager.cpp and is part of binder IPC, which is beyond the scope of this article; I may cover it in a future post.
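The lazy, lock-protected initialization used by getCameraService can be illustrated with a minimal stand-alone sketch (`Service` and `getServiceSingleton` are hypothetical stand-ins; the real code performs a binder lookup instead of constructing the object directly):

```cpp
#include <cassert>
#include <memory>
#include <mutex>

// Stand-in for the remote service interface.
struct Service { int id = 42; };

// Process-wide instance: created on first use under a lock,
// simply returned on every later call.
std::shared_ptr<Service>& getServiceSingleton() {
    static std::mutex lock;
    static std::shared_ptr<Service> instance;
    std::lock_guard<std::mutex> guard(lock);    // serialize first-time creation
    if (!instance) {
        instance = std::make_shared<Service>(); // in CameraBase this is the
    }                                           // binder getService() lookup
    return instance;
}
```

Calling it twice returns the very same object, which is the property the gCameraService cache provides.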

After open come setPreviewDisplay and startPreview. Both are likewise native and implemented similarly, so we will only look at startPreview:

frameworks/base/core/jni/android_hardware_Camera.cpp

static void android_hardware_Camera_startPreview(JNIEnv *env, jobject thiz)
{
    ALOGV("startPreview");
    sp<Camera> camera = get_native_camera(env, thiz, NULL);
    if (camera == 0) return;

    if (camera->startPreview() != NO_ERROR) {
        jniThrowRuntimeException(env, "startPreview failed");
        return;
    }
}

This code first obtains a Camera object and then calls startPreview on it. get_native_camera is implemented as follows:

sp<Camera> get_native_camera(JNIEnv *env, jobject thiz, JNICameraContext** pContext)
{
    sp<Camera> camera;
    Mutex::Autolock _l(sLock);
    JNICameraContext* context = reinterpret_cast<JNICameraContext*>(env->GetLongField(thiz, fields.context));
    if (context != NULL) {
        camera = context->getCamera();
    }
    ALOGV("get_native_camera: context=%p, camera=%p", context, camera.get());
    if (camera == 0) {
        jniThrowRuntimeException(env,
                "Camera is being used after Camera.release() was called");
    }

    if (pContext != NULL) *pContext = context;
    return camera;
}

This function uses env->GetLongField to retrieve a pointer to a JNICameraContext object, from which getCamera yields the Camera object. That pointer was stored earlier in native_setup:

 1: static jint android_hardware_Camera_native_setup(JNIEnv *env, jobject thiz,
 2:     jobject weak_this, jint cameraId, jint halVersion, jstring clientPackageName)
 3: {
 4:     // Convert jstring to String16
 5:     const char16_t *rawClientName = env->GetStringChars(clientPackageName, NULL);
 6:     jsize rawClientNameLen = env->GetStringLength(clientPackageName);
 7:     String16 clientName(rawClientName, rawClientNameLen);
 8:     env->ReleaseStringChars(clientPackageName, rawClientName);
 9: 
10:     sp<Camera> camera;
11:     if (halVersion == CAMERA_HAL_API_VERSION_NORMAL_CONNECT) {
12:         // Default path: hal version is don't care, do normal camera connect.
13:         camera = Camera::connect(cameraId, clientName,
14:                 Camera::USE_CALLING_UID);
15:     } else {
16:         jint status = Camera::connectLegacy(cameraId, halVersion, clientName,
17:                 Camera::USE_CALLING_UID, camera);
18:         if (status != NO_ERROR) {
19:             return status;
20:         }
21:     }
22: 
23:     if (camera == NULL) {
24:         return -EACCES;
25:     }
26: 
27:     // make sure camera hardware is alive
28:     if (camera->getStatus() != NO_ERROR) {
29:         return NO_INIT;
30:     }
31: 
32:     jclass clazz = env->GetObjectClass(thiz);
33:     if (clazz == NULL) {
34:         // This should never happen
35:         jniThrowRuntimeException(env, "Can't find android/hardware/Camera");
36:         return INVALID_OPERATION;
37:     }
38: 
39:     // We use a weak reference so the Camera object can be garbage collected.
40:     // The reference is only used as a proxy for callbacks.
41:     sp<JNICameraContext> context = new JNICameraContext(env, weak_this, clazz, camera);
42:     context->incStrong((void*)android_hardware_Camera_native_setup);
43:     camera->setListener(context);
44: 
45:     // save context in opaque field
46:     env->SetLongField(thiz, fields.context, (jlong)context.get());
47:     return NO_ERROR;
48: }

Note line 13: Camera::connect returns a Camera object; here we finally cross from the SDK layer into the framework layer.

class Camera

frameworks/av/camera/Camera.cpp defines a Camera class. As the previous section showed, the SDK layer talks to this class directly through JNI and thereby enters the framework layer, so this class can be considered the entry point into the framework.

Camera multiply inherits from CameraBase and BnCameraClient, and CameraBase is a class template:

frameworks/av/include/camera/CameraBase.h

template <typename TCam>
struct CameraTraits {
};

template <typename TCam, typename TCamTraits = CameraTraits<TCam> >
class CameraBase : public IBinder::DeathRecipient
{
public:
    typedef typename TCamTraits::TCamListener       TCamListener;
    typedef typename TCamTraits::TCamUser           TCamUser;
    typedef typename TCamTraits::TCamCallbacks      TCamCallbacks;
    typedef typename TCamTraits::TCamConnectService TCamConnectService;
};

Besides templates, this uses explicit (full) template specialization: when Camera derives from CameraBase, TCam is Camera and TCamTraits defaults to CameraTraits&lt;Camera&gt;. The specialization actually used, however, is not the empty CameraTraits in CameraBase.h but the one defined in Camera.h; otherwise CameraTraits would be empty and there would be no TCamTraits::TCamListener and friends:

frameworks/av/include/camera/Camera.h

template <>
struct CameraTraits<Camera>
{
    typedef CameraListener        TCamListener;
    typedef ICamera               TCamUser;
    typedef ICameraClient         TCamCallbacks;
    typedef status_t (ICameraService::*TCamConnectService)(const sp<ICameraClient>&,
                                                           int, const String16&, int,
                                                           /*out*/
                                                           sp<ICamera>&);
    static TCamConnectService     fnConnectService;
};
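The traits mechanism can be reproduced in miniature. The names below (`Traits`, `Base`, `MyCamera`, `TListener`) are illustrative, but the structure mirrors CameraBase/CameraTraits: an empty primary template plus a full specialization that supplies the dependent typedefs:

```cpp
#include <cassert>
#include <string>
#include <type_traits>

template <typename TCam> struct Traits {};   // primary template: empty

class MyCamera;                              // forward declaration

template <> struct Traits<MyCamera> {        // full specialization supplies the types
    typedef std::string TListener;
};

template <typename TCam, typename TTraits = Traits<TCam> >
class Base {
public:
    typedef typename TTraits::TListener TListener;  // only compiles because the
    TListener listener;                             // specialization above exists
};

class MyCamera : public Base<MyCamera> {};
```

If the Traits&lt;MyCamera&gt; specialization were removed, instantiating Base&lt;MyCamera&gt; would fail because the empty primary template has no TListener, which is exactly why Camera.h must provide CameraTraits&lt;Camera&gt;.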

mediaserver

mediaserver is a standalone process with its own main function, started as a daemon at system boot. It manages the multimedia services Android apps need, such as audio, video playback, and the camera, and communicates with apps through Android's binder mechanism.

frameworks/av/media/mediaserver/main_mediaserver.cpp

int main(int argc __unused, char** argv)
{
    signal(SIGPIPE, SIG_IGN);
    char value[PROPERTY_VALUE_MAX];
    bool doLog = (property_get("ro.test_harness", value, "0") > 0) && (atoi(value) == 1);
    pid_t childPid;
    if (doLog && (childPid = fork()) != 0) {
        // ... omitted
    } else {
        // all other services
        if (doLog) {
            prctl(PR_SET_PDEATHSIG, SIGKILL);   // if parent media.log dies before me, kill me also
            setpgid(0, 0);                      // but if I die first, don't kill my parent
        }
        sp<ProcessState> proc(ProcessState::self());
        sp<IServiceManager> sm = defaultServiceManager();
        ALOGI("ServiceManager: %p", sm.get());
        AudioFlinger::instantiate();
        MediaPlayerService::instantiate();
        CameraService::instantiate();
        AudioPolicyService::instantiate();
        SoundTriggerHwService::instantiate();
        registerExtensions();
        ProcessState::self()->startThreadPool();
        IPCThreadState::self()->joinThreadPool();
    }
}

In main, each of the major services is brought up via instantiate. We care about the CameraService::instantiate() line, which initializes the camera service, so next we examine the CameraService class. Its declaration alone runs to about five hundred lines and defines inner classes such as BasicClient and Client, yet the instantiate called from main appears nowhere in it. Since CameraService inherits from four classes, BinderService&lt;CameraService&gt;, BnCameraService, DeathRecipient, and camera_module_callbacks_t, instantiate presumably comes from one of them, and indeed it is found in BinderService.h:

frameworks/native/include/binder/BinderService.h

template<typename SERVICE>
class BinderService
{
public:
    static status_t publish(bool allowIsolated = false) {
        sp<IServiceManager> sm(defaultServiceManager());
        return sm->addService(
                String16(SERVICE::getServiceName()),
                new SERVICE(), allowIsolated);
    }

    static void publishAndJoinThreadPool(bool allowIsolated = false) {
        publish(allowIsolated);
        joinThreadPool();
    }

    static void instantiate() { publish(); }

    static status_t shutdown() { return NO_ERROR; }

private:
    static void joinThreadPool() {
        sp<ProcessState> ps(ProcessState::self());
        ps->startThreadPool();
        ps->giveThreadPoolName();
        IPCThreadState::self()->joinThreadPool();
    }
};

instantiate calls publish, which first obtains the process-wide IServiceManager instance and registers a new service with it. Since CameraService inherits BinderService&lt;CameraService&gt;, the registration effectively amounts to:

sm->addService(
    String16(CameraService::getServiceName()),
    new CameraService(), allowIsolated);

So far, main has merely registered the CameraService. When is it actually used? That is where the last two lines of main come in:

ProcessState::self()->startThreadPool();
IPCThreadState::self()->joinThreadPool();

From here on we are into binder IPC, which is outside the scope of this article; see my other post.

(Figure: android_camera_framework_uml.png, a UML overview of the camera framework classes)

CameraHardwareInterface

CameraService

  • BasicClient
  • Client
