Android NDK Development in Detail: Camera — Multi-camera API

  • Difference between logical and physical cameras
  • Multi-camera API
  • Multiple streams simultaneously
  • Creating a session with multiple physical cameras
  • Using a pair of physical cameras
  • Zoom example use case
  • Lens distortion


Note: This page covers the Camera2 package. Unless your app requires specific low-level features from Camera2, we recommend using CameraX. Both CameraX and Camera2 support Android 5.0 (API level 21) and higher.

Multi-camera support was introduced in Android 9 (API level 28). Since its release, devices supporting the API have come to market. Many multi-camera use cases are tightly coupled with a specific hardware configuration. In other words, not all use cases are compatible with every device, which makes multi-camera features a good candidate for Play Feature Delivery.

Some typical use cases include:

Zoom: switching cameras depending on the crop region or desired focal length.
Depth: using multiple cameras to build a depth map.
Bokeh: using inferred depth information to simulate a DSLR-like narrow focus range.

Difference between logical and physical cameras

Understanding the multi-camera API also requires understanding the difference between logical and physical cameras. For reference, consider a device with three back-facing cameras. In this example, each of the three back cameras is considered a physical camera. A logical camera is then a grouping of two or more of those physical cameras. The output of the logical camera can be a stream that comes from one of the underlying physical cameras, or a fused stream coming from more than one underlying physical camera simultaneously. Either way, the stream is handled by the camera Hardware Abstraction Layer (HAL).

Many phone manufacturers develop first-party camera applications, which usually come pre-installed on their devices. To use all of the hardware's capabilities, these apps may use private or hidden APIs, or receive special treatment from the driver implementation that other applications don't have access to. Some devices implement the concept of a logical camera by providing a fused stream of frames from the different physical cameras, but only to certain privileged applications. Often, only one of the physical cameras is exposed to the framework. The situation for third-party developers before Android 9 is illustrated in the following diagram:


Figure 1. Camera capabilities typically available only to privileged applications

Beginning in Android 9, private APIs are no longer allowed in Android apps. With the inclusion of multi-camera support in the framework, Android best practices strongly recommend that phone manufacturers expose a logical camera for all physical cameras facing the same direction. The following is what third-party developers should expect to see on devices running Android 9 and higher:


Figure 2. Full developer access to all camera devices starting in Android 9

What the logical camera provides depends entirely on the OEM implementation of the camera HAL. For example, a device like the Pixel 3 implements its logical camera in such a way that it chooses one of its physical cameras based on the requested focal length and crop region.
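Because the HAL's selection logic is proprietary and device-specific, the following is only an illustrative, framework-free toy of the kind of choice a HAL might make: picking whichever physical camera's focal length best matches a request. `CameraSelector` is a hypothetical name, not an Android API.

```java
// Toy model of HAL-side camera selection (illustrative only, not an
// Android API): pick the physical camera whose advertised focal length
// is closest to the requested one.
final class CameraSelector {
    // Returns the index of the focal length closest to the requested value.
    static int pickPhysicalCamera(float[] focalLengths, float requestedFocalLength) {
        int best = 0;
        for (int i = 1; i < focalLengths.length; i++) {
            if (Math.abs(focalLengths[i] - requestedFocalLength)
                    < Math.abs(focalLengths[best] - requestedFocalLength)) {
                best = i;
            }
        }
        return best;
    }
}
```

A real HAL would also weigh the crop region, lighting, and per-sensor capabilities; this sketch captures only the focal-length dimension.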

Multi-camera API

The new API adds the following new constants, classes, and methods:

CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA
CameraCharacteristics.getPhysicalCameraIds()
CameraCharacteristics.getAvailablePhysicalCameraRequestKeys()
CameraDevice.createCaptureSession(SessionConfiguration config)
CameraCharacteristics.LOGICAL_MULTI_CAMERA_SENSOR_SYNC_TYPE
New fields in OutputConfiguration and SessionConfiguration

Due to a change in the Android Compatibility Definition Document (CDD), the multi-camera API also comes with certain expectations from developers. Devices with dual cameras existed before Android 9, but opening more than one camera at a time involved trial and error. On Android 9 and higher, multi-camera gives a set of rules that specify when you can open a pair of physical cameras that are part of the same logical camera.

In most cases, devices running Android 9 and higher expose all physical cameras (excluding less common sensor types such as infrared), along with an easier-to-use logical camera. For every combination of streams that is guaranteed to work, one stream belonging to a logical camera can be replaced by two streams from the underlying physical cameras.

Multiple streams simultaneously

Use multiple camera streams simultaneously covers the rules for using multiple streams in a single camera. One notable addition is that the same rules apply to multiple cameras. CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA explains how to replace a logical YUV_420_888 or raw stream with two physical streams. That is, each stream of type YUV or RAW can be replaced with two streams of identical type and size. You can start with a camera stream of the following guaranteed configuration for single-camera devices:

Stream 1: YUV type, MAXIMUM size, from logical camera id = 0

Then, a device with multi-camera support lets you create a session replacing that logical YUV stream with two physical streams:

Stream 1: YUV type, MAXIMUM size, from physical camera id = 1
Stream 2: YUV type, MAXIMUM size, from physical camera id = 2

You can replace a YUV or RAW stream with two equivalent streams if and only if those two cameras are part of a logical camera grouping, as listed under CameraCharacteristics.getPhysicalCameraIds().
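The replacement rule can be sketched as a small, framework-free model. `StreamConfig` and `StreamRules` are hypothetical names for illustration, not Android APIs; the point is only that type and size carry over unchanged, one copy per physical camera:

```java
import java.util.Arrays;
import java.util.List;

// Minimal model of a requested camera stream (illustrative only).
final class StreamConfig {
    final String cameraId;
    final String format;   // e.g. "YUV_420_888" or "RAW"
    final String size;     // e.g. "MAXIMUM"

    StreamConfig(String cameraId, String format, String size) {
        this.cameraId = cameraId;
        this.format = format;
        this.size = size;
    }
}

final class StreamRules {
    // Replaces one logical YUV/RAW stream with two physical streams of
    // identical format and size, one per physical camera in the grouping.
    static List<StreamConfig> replaceWithPhysical(
            StreamConfig logical, String physicalId1, String physicalId2) {
        if (!logical.format.startsWith("YUV") && !logical.format.equals("RAW")) {
            throw new IllegalArgumentException("Only YUV or RAW streams can be replaced");
        }
        return Arrays.asList(
                new StreamConfig(physicalId1, logical.format, logical.size),
                new StreamConfig(physicalId2, logical.format, logical.size));
    }
}
```

For example, replacing the guaranteed logical stream above yields the two physical YUV MAXIMUM streams from cameras 1 and 2.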

The guarantees provided by the framework are only the minimum required to get frames from more than one physical camera simultaneously. Additional streams are supported on most devices, sometimes even allowing multiple physical camera devices to be opened independently. Because this is not a hard guarantee from the framework, doing so requires per-device testing and tuning through trial and error.

Creating a session with multiple physical cameras

When using physical cameras on a multi-camera-enabled device, open a single CameraDevice (the logical camera) and interact with it within a single session. Create the single session using the API CameraDevice.createCaptureSession(SessionConfiguration config), which was added in API level 28. The session configuration has a number of output configurations, each of which has a set of output targets and, optionally, a desired physical camera ID.


Figure 3. SessionConfiguration and OutputConfiguration model

Capture requests have output targets associated with them. The framework determines which physical (or logical) camera a request is sent to based on which output targets are attached. If an output target corresponds to one of the output targets that was sent as an output configuration along with a physical camera ID, then that physical camera receives and processes the request.
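The routing behavior can be modeled as a plain lookup: targets registered with a physical camera ID go to that camera, everything else falls back to the logical camera. `RequestRouter` and its method names are hypothetical, chosen only to mirror OutputConfiguration.setPhysicalCameraId() conceptually:

```java
import java.util.HashMap;
import java.util.Map;

// Framework-free sketch of request routing (illustrative, not an Android API).
final class RequestRouter {
    private final Map<String, String> targetToPhysicalId = new HashMap<>();
    private final String logicalId;

    RequestRouter(String logicalId) {
        this.logicalId = logicalId;
    }

    // Conceptually mirrors OutputConfiguration.setPhysicalCameraId():
    // bind an output target to a specific physical camera.
    void bind(String target, String physicalId) {
        targetToPhysicalId.put(target, physicalId);
    }

    // Returns the camera that receives and processes requests for this target;
    // unbound targets are handled by the logical camera.
    String route(String target) {
        return targetToPhysicalId.getOrDefault(target, logicalId);
    }
}
```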

Using a pair of physical cameras

Another addition to the camera APIs for multi-camera is the ability to identify logical cameras and find the physical cameras behind them. You can define a function to help identify potential pairs of physical cameras that can be used to replace one of the logical camera streams:
Kotlin

/**
 * Helper class used to encapsulate a logical camera and two underlying
 * physical cameras
 */
data class DualCamera(val logicalId: String, val physicalId1: String, val physicalId2: String)

fun findDualCameras(manager: CameraManager, facing: Int? = null): List<DualCamera> {
    val dualCameras = mutableListOf<DualCamera>()

    // Iterate over all the available camera characteristics
    manager.cameraIdList.map {
        Pair(manager.getCameraCharacteristics(it), it)
    }.filter {
        // Filter by cameras facing the requested direction
        facing == null || it.first.get(CameraCharacteristics.LENS_FACING) == facing
    }.filter {
        // Filter by logical cameras
        // CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA requires API >= 28
        it.first.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)!!.contains(
            CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA)
    }.forEach {
        // All possible pairs from the list of physical cameras are valid results
        // NOTE: There could be N physical cameras as part of a logical camera grouping
        // getPhysicalCameraIds() requires API >= 28
        val physicalCameras = it.first.physicalCameraIds.toTypedArray()
        for (idx1 in physicalCameras.indices) {
            for (idx2 in (idx1 + 1) until physicalCameras.size) {
                dualCameras.add(DualCamera(
                    it.second, physicalCameras[idx1], physicalCameras[idx2]))
            }
        }
    }

    return dualCameras
}

Java

/**
 * Helper class used to encapsulate a logical camera and two underlying
 * physical cameras
 */
final class DualCamera {
    final String logicalId;
    final String physicalId1;
    final String physicalId2;

    DualCamera(String logicalId, String physicalId1, String physicalId2) {
        this.logicalId = logicalId;
        this.physicalId1 = physicalId1;
        this.physicalId2 = physicalId2;
    }
}

List<DualCamera> findDualCameras(CameraManager manager, Integer facing) {
    List<DualCamera> dualCameras = new ArrayList<>();

    List<String> cameraIdList;
    try {
        cameraIdList = Arrays.asList(manager.getCameraIdList());
    } catch (CameraAccessException e) {
        e.printStackTrace();
        cameraIdList = new ArrayList<>();
    }

    // Iterate over all the available camera characteristics
    cameraIdList.stream()
            .map(id -> {
                try {
                    CameraCharacteristics characteristics = manager.getCameraCharacteristics(id);
                    return new Pair<>(characteristics, id);
                } catch (CameraAccessException e) {
                    e.printStackTrace();
                    return null;
                }
            })
            .filter(pair -> {
                // Filter by cameras facing the requested direction
                return (pair != null) &&
                        (facing == null || facing.equals(pair.first.get(CameraCharacteristics.LENS_FACING)));
            })
            .filter(pair -> {
                // Filter by logical cameras
                // CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA requires API >= 28
                IntPredicate logicalMultiCameraPred =
                        arg -> arg == CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA;
                return Arrays.stream(pair.first.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES))
                        .anyMatch(logicalMultiCameraPred);
            })
            .forEach(pair -> {
                // All possible pairs from the list of physical cameras are valid results
                // NOTE: There could be N physical cameras as part of a logical camera grouping
                // getPhysicalCameraIds() requires API >= 28
                String[] physicalCameras = pair.first.getPhysicalCameraIds().toArray(new String[0]);
                for (int idx1 = 0; idx1 < physicalCameras.length; idx1++) {
                    for (int idx2 = idx1 + 1; idx2 < physicalCameras.length; idx2++) {
                        dualCameras.add(
                                new DualCamera(pair.second, physicalCameras[idx1], physicalCameras[idx2]));
                    }
                }
            });

    return dualCameras;
}

The state handling of the physical cameras is controlled by the logical camera. To open a "dual camera", open the logical camera corresponding to the physical cameras:
Kotlin

fun openDualCamera(cameraManager: CameraManager,
                   dualCamera: DualCamera,
                   // AsyncTask is deprecated beginning API 30
                   executor: Executor = AsyncTask.SERIAL_EXECUTOR,
                   callback: (CameraDevice) -> Unit) {

    // openCamera() with an Executor requires API >= 28
    cameraManager.openCamera(
        dualCamera.logicalId, executor, object : CameraDevice.StateCallback() {
            override fun onOpened(device: CameraDevice) = callback(device)
            // Omitting for brevity...
            override fun onError(device: CameraDevice, error: Int) = onDisconnected(device)
            override fun onDisconnected(device: CameraDevice) = device.close()
        })
}

Java

interface CameraDeviceCallback {
    void callback(CameraDevice cameraDevice);
}

void openDualCamera(CameraManager cameraManager,
                    DualCamera dualCamera,
                    Executor executor,
                    CameraDeviceCallback cameraDeviceCallback) {

    // openCamera() with an Executor requires API >= 28
    cameraManager.openCamera(dualCamera.logicalId, executor, new CameraDevice.StateCallback() {
        @Override
        public void onOpened(@NonNull CameraDevice cameraDevice) {
            cameraDeviceCallback.callback(cameraDevice);
        }

        @Override
        public void onDisconnected(@NonNull CameraDevice cameraDevice) {
            cameraDevice.close();
        }

        @Override
        public void onError(@NonNull CameraDevice cameraDevice, int error) {
            onDisconnected(cameraDevice);
        }
    });
}

Other than selecting which camera to open, the process is the same as opening a camera on earlier versions of Android. Creating a capture session using the new session configuration API tells the framework to associate certain targets with specific physical camera IDs:
Kotlin

/**
 * Helper type definition that encapsulates 3 sets of output targets:
 *
 *   1. Logical camera
 *   2. First physical camera
 *   3. Second physical camera
 */
typealias DualCameraOutputs =
        Triple<MutableList<Surface>?, MutableList<Surface>?, MutableList<Surface>?>

fun createDualCameraSession(cameraManager: CameraManager,
                            dualCamera: DualCamera,
                            targets: DualCameraOutputs,
                            // AsyncTask is deprecated beginning API 30
                            executor: Executor = AsyncTask.SERIAL_EXECUTOR,
                            callback: (CameraCaptureSession) -> Unit) {

    // Create 3 sets of output configurations: one for the logical camera, and
    // one for each of the physical cameras.
    val outputConfigsLogical = targets.first?.map { OutputConfiguration(it) }
    val outputConfigsPhysical1 = targets.second?.map {
        OutputConfiguration(it).apply { setPhysicalCameraId(dualCamera.physicalId1) } }
    val outputConfigsPhysical2 = targets.third?.map {
        OutputConfiguration(it).apply { setPhysicalCameraId(dualCamera.physicalId2) } }

    // Put all the output configurations into a single flat array
    val outputConfigsAll = arrayOf(
        outputConfigsLogical, outputConfigsPhysical1, outputConfigsPhysical2)
        .filterNotNull().flatMap { it }

    // Instantiate a session configuration that can be used to create a session
    val sessionConfiguration = SessionConfiguration(
        SessionConfiguration.SESSION_REGULAR,
        outputConfigsAll, executor, object : CameraCaptureSession.StateCallback() {
            override fun onConfigured(session: CameraCaptureSession) = callback(session)
            // Omitting for brevity...
            override fun onConfigureFailed(session: CameraCaptureSession) = session.device.close()
        })

    // Open the logical camera using the previously defined function
    openDualCamera(cameraManager, dualCamera, executor = executor) {

        // Finally create the session and return via callback
        it.createCaptureSession(sessionConfiguration)
    }
}

Java

/**
 * Helper class definition that encapsulates 3 sets of output targets:
 *
 *   1. Logical camera
 *   2. First physical camera
 *   3. Second physical camera
 */
final class DualCameraOutputs {
    private final List<Surface> logicalCamera;
    private final List<Surface> firstPhysicalCamera;
    private final List<Surface> secondPhysicalCamera;

    public DualCameraOutputs(List<Surface> logicalCamera, List<Surface> firstPhysicalCamera,
                             List<Surface> secondPhysicalCamera) {
        this.logicalCamera = logicalCamera;
        this.firstPhysicalCamera = firstPhysicalCamera;
        this.secondPhysicalCamera = secondPhysicalCamera;
    }

    public List<Surface> getLogicalCamera() {
        return logicalCamera;
    }

    public List<Surface> getFirstPhysicalCamera() {
        return firstPhysicalCamera;
    }

    public List<Surface> getSecondPhysicalCamera() {
        return secondPhysicalCamera;
    }
}

interface CameraCaptureSessionCallback {
    void callback(CameraCaptureSession cameraCaptureSession);
}

void createDualCameraSession(CameraManager cameraManager,
                             DualCamera dualCamera,
                             DualCameraOutputs targets,
                             Executor executor,
                             CameraCaptureSessionCallback cameraCaptureSessionCallback) {

    // Create 3 sets of output configurations: one for the logical camera, and
    // one for each of the physical cameras. Each set may be null.
    List<OutputConfiguration> outputConfigsLogical = targets.getLogicalCamera() == null ? null :
            targets.getLogicalCamera().stream()
                    .map(OutputConfiguration::new)
                    .collect(Collectors.toList());
    List<OutputConfiguration> outputConfigsPhysical1 = targets.getFirstPhysicalCamera() == null ? null :
            targets.getFirstPhysicalCamera().stream()
                    .map(s -> {
                        OutputConfiguration outputConfiguration = new OutputConfiguration(s);
                        outputConfiguration.setPhysicalCameraId(dualCamera.physicalId1);
                        return outputConfiguration;
                    })
                    .collect(Collectors.toList());
    List<OutputConfiguration> outputConfigsPhysical2 = targets.getSecondPhysicalCamera() == null ? null :
            targets.getSecondPhysicalCamera().stream()
                    .map(s -> {
                        OutputConfiguration outputConfiguration = new OutputConfiguration(s);
                        outputConfiguration.setPhysicalCameraId(dualCamera.physicalId2);
                        return outputConfiguration;
                    })
                    .collect(Collectors.toList());

    // Put all the output configurations into a single flat list
    List<OutputConfiguration> outputConfigsAll = Stream.of(
                    outputConfigsLogical, outputConfigsPhysical1, outputConfigsPhysical2)
            .filter(Objects::nonNull)
            .flatMap(Collection::stream)
            .collect(Collectors.toList());

    // Instantiate a session configuration that can be used to create a session
    SessionConfiguration sessionConfiguration = new SessionConfiguration(
            SessionConfiguration.SESSION_REGULAR,
            outputConfigsAll, executor, new CameraCaptureSession.StateCallback() {
        @Override
        public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
            cameraCaptureSessionCallback.callback(cameraCaptureSession);
        }
        // Omitting for brevity...
        @Override
        public void onConfigureFailed(@NonNull CameraCaptureSession cameraCaptureSession) {
            cameraCaptureSession.getDevice().close();
        }
    });

    // Open the logical camera using the previously defined function
    openDualCamera(cameraManager, dualCamera, executor, (CameraDevice c) ->
            // Finally create the session and return via callback
            c.createCaptureSession(sessionConfiguration));
}

To learn about the supported stream combinations, see createCaptureSession. The stream combinations are for multiple streams on a single logical camera. The compatibility extends to using the same configuration and replacing one of those streams with two streams from two physical cameras that are part of the same logical camera.

With the camera session ready, dispatch the desired capture requests. Each target of a capture request receives its data from its associated physical camera, if one is in use, or falls back to the logical camera.

Zoom example use case

It is possible to merge physical cameras into a single stream so that users can switch between the different physical cameras and experience a different field of view, effectively capturing a different "zoom level".

Figure 4. Example of swapping cameras for the zoom level use case


First, pick the pair of physical cameras you want users to switch between. For maximum effect, you can pick the pair of cameras that provide the minimum and maximum focal lengths available.
Kotlin

fun findShortLongCameraPair(manager: CameraManager, facing: Int? = null): DualCamera? {

    return findDualCameras(manager, facing).map {
        val characteristics1 = manager.getCameraCharacteristics(it.physicalId1)
        val characteristics2 = manager.getCameraCharacteristics(it.physicalId2)

        // Query the focal lengths advertised by each physical camera
        val focalLengths1 = characteristics1.get(
            CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS) ?: floatArrayOf(0F)
        val focalLengths2 = characteristics2.get(
            CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS) ?: floatArrayOf(0F)

        // Compute the largest difference between min and max focal lengths between cameras
        val focalLengthsDiff1 = focalLengths2.maxOrNull()!! - focalLengths1.minOrNull()!!
        val focalLengthsDiff2 = focalLengths1.maxOrNull()!! - focalLengths2.minOrNull()!!

        // Return the pair of camera IDs and the difference between min and max focal lengths
        if (focalLengthsDiff1 < focalLengthsDiff2) {
            Pair(DualCamera(it.logicalId, it.physicalId1, it.physicalId2), focalLengthsDiff1)
        } else {
            Pair(DualCamera(it.logicalId, it.physicalId2, it.physicalId1), focalLengthsDiff2)
        }

        // Return only the pair with the largest difference, or null if no pairs are found
    }.maxByOrNull { it.second }?.first
}

Java

// Utility functions to find min/max value in float[]
float findMax(float[] array) {
    float max = Float.NEGATIVE_INFINITY;
    for (float cur : array)
        max = Math.max(max, cur);
    return max;
}

float findMin(float[] array) {
    float min = Float.POSITIVE_INFINITY;
    for (float cur : array)
        min = Math.min(min, cur);
    return min;
}

DualCamera findShortLongCameraPair(CameraManager manager, Integer facing) {
    return findDualCameras(manager, facing).stream()
            .map(c -> {
                CameraCharacteristics characteristics1;
                CameraCharacteristics characteristics2;
                try {
                    characteristics1 = manager.getCameraCharacteristics(c.physicalId1);
                    characteristics2 = manager.getCameraCharacteristics(c.physicalId2);
                } catch (CameraAccessException e) {
                    e.printStackTrace();
                    return null;
                }

                // Query the focal lengths advertised by each physical camera
                float[] focalLengths1 = characteristics1.get(
                        CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS);
                float[] focalLengths2 = characteristics2.get(
                        CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS);

                // Compute the largest difference between min and max focal lengths between cameras
                float focalLengthsDiff1 = findMax(focalLengths2) - findMin(focalLengths1);
                float focalLengthsDiff2 = findMax(focalLengths1) - findMin(focalLengths2);

                // Return the pair of camera IDs and the difference between min and max focal lengths
                if (focalLengthsDiff1 < focalLengthsDiff2) {
                    return new Pair<>(new DualCamera(c.logicalId, c.physicalId1, c.physicalId2), focalLengthsDiff1);
                } else {
                    return new Pair<>(new DualCamera(c.logicalId, c.physicalId2, c.physicalId1), focalLengthsDiff2);
                }
            })
            .filter(Objects::nonNull)
            // Return only the pair with the largest difference, or null if no pairs are found
            .max(Comparator.comparing(pair -> pair.second))
            .map(pair -> pair.first)
            .orElse(null);
}

A sound architecture is to have two SurfaceViews, one for each stream. These SurfaceViews are swapped based on user interaction so that only one is visible at any given time.

The following code shows how to open the logical camera, configure the camera outputs, create a camera session, and start two preview streams:
Kotlin

val cameraManager: CameraManager = ...

// Get the two output targets from the activity / fragment
val surface1 = ...  // from SurfaceView
val surface2 = ...  // from SurfaceView

val dualCamera = findShortLongCameraPair(cameraManager)!!
val outputTargets = DualCameraOutputs(
    null, mutableListOf(surface1), mutableListOf(surface2))

// Here you open the logical camera, configure the outputs and create a session
createDualCameraSession(cameraManager, dualCamera, targets = outputTargets) { session ->

  // Create a single request which has one target for each physical camera
  // NOTE: Each target receives frames from only its associated physical camera
  val requestTemplate = CameraDevice.TEMPLATE_PREVIEW
  val captureRequest = session.device.createCaptureRequest(requestTemplate).apply {
    arrayOf(surface1, surface2).forEach { addTarget(it) }
  }.build()

  // Set the sticky request for the session and you are done
  session.setRepeatingRequest(captureRequest, null, null)
}

Java

CameraManager manager = ...;

// Get the two output targets from the activity / fragment
Surface surface1 = ...;  // from SurfaceView
Surface surface2 = ...;  // from SurfaceView

DualCamera dualCamera = findShortLongCameraPair(manager, null);
DualCameraOutputs outputTargets = new DualCameraOutputs(
        null, Collections.singletonList(surface1), Collections.singletonList(surface2));

// Here you open the logical camera, configure the outputs and create a session
// AsyncTask is deprecated beginning API 30
createDualCameraSession(manager, dualCamera, outputTargets, AsyncTask.SERIAL_EXECUTOR, (session) -> {
    // Create a single request which has one target for each physical camera
    // NOTE: Each target receives frames from only its associated physical camera
    CaptureRequest.Builder captureRequestBuilder;
    try {
        captureRequestBuilder = session.getDevice().createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        Arrays.asList(surface1, surface2).forEach(captureRequestBuilder::addTarget);

        // Set the sticky request for the session and you are done
        session.setRepeatingRequest(captureRequestBuilder.build(), null, null);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
});

All that is left to do is provide a UI for the user to switch between the two surfaces, such as a button or double-tapping the SurfaceView. You could even perform some form of scene analysis and switch between the two streams automatically.
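The swap itself is just a piece of UI state. As a minimal, framework-free sketch (`PreviewSwitcher` is a hypothetical name; wiring it to actual SurfaceView visibility is left to the activity):

```java
// Tracks which of the two preview surfaces is visible; flip on each tap.
// Illustrative only: the activity maps the returned index (0 or 1) to the
// corresponding SurfaceView's visibility.
final class PreviewSwitcher {
    private boolean firstVisible = true;

    // Called on user interaction (e.g. button tap or double-tap);
    // returns the index of the surface that should now be visible.
    int toggle() {
        firstVisible = !firstVisible;
        return firstVisible ? 0 : 1;
    }

    int visibleIndex() {
        return firstVisible ? 0 : 1;
    }
}
```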

Lens distortion

All lenses produce a certain amount of distortion. In Android, you can query the distortion created by a lens using CameraCharacteristics.LENS_DISTORTION, which replaces the now-deprecated CameraCharacteristics.LENS_RADIAL_DISTORTION. For logical cameras, the distortion is minimal, and your application can use the frames more or less as they come from the camera. For physical cameras, the lens configurations can be very different, particularly on wide-angle lenses.

Some devices may implement automatic distortion correction via CaptureRequest.DISTORTION_CORRECTION_MODE. Distortion correction defaults to being on for most devices.
Kotlin

val cameraSession: CameraCaptureSession = ...

// Use still capture template to build the capture request
val captureRequest = cameraSession.device.createCaptureRequest(
    CameraDevice.TEMPLATE_STILL_CAPTURE
)

// Determine if this device supports distortion correction
val characteristics: CameraCharacteristics = ...
val supportsDistortionCorrection = characteristics.get(
    CameraCharacteristics.DISTORTION_CORRECTION_AVAILABLE_MODES
)?.contains(
    CameraMetadata.DISTORTION_CORRECTION_MODE_HIGH_QUALITY
) ?: false

if (supportsDistortionCorrection) {
    captureRequest.set(
        CaptureRequest.DISTORTION_CORRECTION_MODE,
        CameraMetadata.DISTORTION_CORRECTION_MODE_HIGH_QUALITY
    )
}

// Add output target, set other capture request parameters...

// Dispatch the capture request
cameraSession.capture(captureRequest.build(), ...)

Java

CameraCaptureSession cameraSession = ...;

// Use still capture template to build the capture request
CaptureRequest.Builder captureRequestBuilder = null;
try {
    captureRequestBuilder = cameraSession.getDevice().createCaptureRequest(
            CameraDevice.TEMPLATE_STILL_CAPTURE
    );
} catch (CameraAccessException e) {
    e.printStackTrace();
}

// Determine if this device supports distortion correction
CameraCharacteristics characteristics = ...;
boolean supportsDistortionCorrection = Arrays.stream(
                characteristics.get(
                        CameraCharacteristics.DISTORTION_CORRECTION_AVAILABLE_MODES
                ))
        .anyMatch(i -> i == CameraMetadata.DISTORTION_CORRECTION_MODE_HIGH_QUALITY);

if (supportsDistortionCorrection) {
    captureRequestBuilder.set(
            CaptureRequest.DISTORTION_CORRECTION_MODE,
            CameraMetadata.DISTORTION_CORRECTION_MODE_HIGH_QUALITY
    );
}

// Add output target, set other capture request parameters...

// Dispatch the capture request
cameraSession.capture(captureRequestBuilder.build(), ...);

Setting a capture request to this mode can impact the frame rate that the camera can produce. You might choose to only set distortion correction on still image captures.

Content and code samples on this page are subject to the licenses described in the Content License. Java and OpenJDK are registered trademarks of Oracle and/or its affiliates.

Last updated (UTC): 2023-11-08.