How to Take Pictures Using the Camera in Android

The Android framework includes support for various cameras and camera features available on devices, allowing you to capture pictures and videos in your applications. This document discusses a quick, simple approach to image and video capture and outlines an advanced approach for creating custom camera experiences for your users.

Note: This page describes the Camera class, which has been deprecated. We recommend using the CameraX Jetpack library or, for specific use cases, the camera2 class. Both CameraX and Camera2 work on Android 5.0 (API level 21) and higher.

Considerations

Before enabling your application to use cameras on Android devices, you should consider a few questions about how your app intends to use this hardware feature.

  • Camera Requirement - Is the use of a camera so important to your application that you do not want your application installed on a device that does not have a camera? If so, you should declare the camera requirement in your manifest.
  • Quick Picture or Customized Camera - How will your application use the camera? Are you just interested in snapping a quick picture or video clip, or will your application provide a new way to use cameras? For getting a quick snap or clip, consider Using Existing Camera Apps. For developing a customized camera feature, check out the Building a Camera App section.
  • Foreground Services Requirement - When does your app interact with the camera? On Android 9 (API level 28) and later, apps running in the background cannot access the camera. Therefore, you should use the camera either when your app is in the foreground or as part of a foreground service.
  • Storage - Are the images or videos your application generates intended to be visible only to your application, or shared so that other applications such as Gallery or other media and social apps can use them? Do you want the pictures and videos to be available even if your application is uninstalled? Check out the Saving Media Files section to see how to implement these options.

The basics

The Android framework supports capturing images and video through the android.hardware.camera2 API or camera Intent. Here are the relevant classes:

android.hardware.camera2
This package is the primary API for controlling device cameras. It can be used to take pictures or videos when you are building a camera application.
Camera
This class is the older, deprecated API for controlling device cameras.
SurfaceView
This class is used to present a live camera preview to the user.
MediaRecorder
This class is used to record video from the camera.
Intent
An intent action type of MediaStore.ACTION_IMAGE_CAPTURE or MediaStore.ACTION_VIDEO_CAPTURE can be used to capture images or videos without directly using the Camera object.

Manifest declarations

Before starting development on your application with the Camera API, you should make sure your manifest has the appropriate declarations to allow use of camera hardware and other related features.

  • Camera Permission - Your application must request permission to use a device camera.
    <uses-permission android:name="android.permission.CAMERA" />            

    Note: If you are using the camera by invoking an existing camera app, your application does not need to request this permission.

  • Camera Features - Your application must also declare use of camera features, for example:
    <uses-feature android:name="android.hardware.camera" />

    For a list of camera features, see the manifest Features Reference.

    Adding camera features to your manifest causes Google Play to prevent your application from being installed on devices that do not include a camera or do not support the camera features you specify. For more information about using feature-based filtering with Google Play, see Google Play and Feature-Based Filtering.

    If your application can use a camera or camera feature for proper operation, but does not require it, you should specify this in the manifest by including the android:required attribute and setting it to false:

    <uses-feature android:name="android.hardware.camera" android:required="false" />
  • Storage Permission - Your application can save images or videos to the device's external storage (SD card) if it targets Android 10 (API level 29) or lower and specifies the following in the manifest.
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
  • Audio Recording Permission - For recording audio with video capture, your application must request the audio capture permission.
    <uses-permission android:name="android.permission.RECORD_AUDIO" />
  • Location Permission - If your application tags images with GPS location information, you must request the ACCESS_FINE_LOCATION permission. Note that, if your app targets Android 5.0 (API level 21) or higher, you also need to declare that your app uses the device's GPS:

    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
    ...
    <!-- Needed only if your app targets Android 5.0 (API level 21) or higher. -->
    <uses-feature android:name="android.hardware.location.gps" />

    For more information about getting user location, see Location Strategies.

Using existing camera apps

A quick way to enable taking pictures or videos in your application without a lot of extra code is to use an Intent to invoke an existing Android camera application. The details are described in the training lessons Taking Photos Simply and Recording Videos Simply.
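For instance, a minimal sketch of launching an existing camera app with MediaStore.ACTION_IMAGE_CAPTURE might look like the following. The request code is an arbitrary value chosen for this example; the captured thumbnail is returned in the result intent's extras, and a full-size image requires passing a writable location in EXTRA_OUTPUT as described in the Saving Media Files section.

Kotlin

const val REQUEST_IMAGE_CAPTURE = 1 // arbitrary request code for this example

fun dispatchTakePictureIntent(activity: Activity) {
    val takePictureIntent = Intent(MediaStore.ACTION_IMAGE_CAPTURE)
    // Only start the intent if a camera app is available to handle it
    if (takePictureIntent.resolveActivity(activity.packageManager) != null) {
        activity.startActivityForResult(takePictureIntent, REQUEST_IMAGE_CAPTURE)
    }
}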

Building a camera app

Some developers may require a camera user interface that is customized to the look of their application or provides special features. Writing your own picture-taking code can provide a more compelling experience for your users.

Note: The following guide is for the older, deprecated Camera API. For new or advanced camera applications, the newer android.hardware.camera2 API is recommended.

The general steps for creating a custom camera interface for your application are as follows:

  • Detect and Access Camera - Create code to check for the existence of cameras and request access.
  • Create a Preview Class - Create a camera preview class that extends SurfaceView and implements the SurfaceHolder.Callback interface. This class previews the live images from the camera.
  • Build a Preview Layout - Once you have the camera preview class, create a view layout that incorporates the preview and the user interface controls you want.
  • Setup Listeners for Capture - Connect listeners for your interface controls to start image or video capture in response to user actions, such as pressing a button.
  • Capture and Save Files - Set up the code for capturing pictures or videos and saving the output.
  • Release the Camera - After using the camera, your application must properly release it for use by other applications.

Camera hardware is a shared resource that must be carefully managed so your application does not collide with other applications that may also want to use it. The following sections discuss how to detect camera hardware, how to request access to a camera, how to capture pictures or video, and how to release the camera when your application is done using it.

Caution: Remember to release the Camera object by calling Camera.release() when your application is done using it! If your application does not properly release the camera, all subsequent attempts to access the camera, including those by your own application, will fail and may cause your or other applications to be shut down.

Detecting camera hardware

If your application does not specifically require a camera using a manifest declaration, you should check to see if a camera is available at runtime. To perform this check, use the PackageManager.hasSystemFeature() method, as shown in the example code below:

Kotlin

/** Check if this device has a camera */
private fun checkCameraHardware(context: Context): Boolean {
    if (context.packageManager.hasSystemFeature(PackageManager.FEATURE_CAMERA)) {
        // this device has a camera
        return true
    } else {
        // no camera on this device
        return false
    }
}

Java

/** Check if this device has a camera */
private boolean checkCameraHardware(Context context) {
    if (context.getPackageManager().hasSystemFeature(PackageManager.FEATURE_CAMERA)){
        // this device has a camera
        return true;
    } else {
        // no camera on this device
        return false;
    }
}

Android devices can have multiple cameras, for example a back-facing camera for photography and a front-facing camera for video calls. Android 2.3 (API Level 9) and later allows you to check the number of cameras available on a device using the Camera.getNumberOfCameras() method.

Accessing cameras

If you have determined that the device on which your application is running has a camera, you must request to access it by getting an instance of Camera (unless you are using an intent to access the camera).

To access the primary camera, use the Camera.open() method and be sure to catch any exceptions, as shown in the code below:

Kotlin

/** A safe way to get an instance of the Camera object. */
fun getCameraInstance(): Camera? {
    return try {
        Camera.open() // attempt to get a Camera instance
    } catch (e: Exception) {
        // Camera is not available (in use or does not exist)
        null // returns null if camera is unavailable
    }
}

Java

/** A safe way to get an instance of the Camera object. */
public static Camera getCameraInstance(){
    Camera c = null;
    try {
        c = Camera.open(); // attempt to get a Camera instance
    }
    catch (Exception e){
        // Camera is not available (in use or does not exist)
    }
    return c; // returns null if camera is unavailable
}

Caution: Always check for exceptions when using Camera.open(). Failing to check for exceptions if the camera is in use or does not exist will cause your application to be shut down by the system.

On devices running Android 2.3 (API Level 9) or higher, you can access specific cameras using Camera.open(int). The example code above will access the first, back-facing camera on a device with more than one camera.
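As an illustration, the following sketch uses Camera.getNumberOfCameras() and Camera.CameraInfo to locate and open a front-facing camera. The helper name is only for this example, and it follows the same exception-handling caution as above.

Kotlin

/** Try to open the first front-facing camera, or return null if none can be opened. */
fun openFrontFacingCamera(): Camera? {
    val cameraInfo = Camera.CameraInfo()
    for (cameraId in 0 until Camera.getNumberOfCameras()) {
        Camera.getCameraInfo(cameraId, cameraInfo)
        if (cameraInfo.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
            return try {
                Camera.open(cameraId) // attempt to get this specific Camera instance
            } catch (e: Exception) {
                null // camera is in use or otherwise unavailable
            }
        }
    }
    return null // no front-facing camera found
}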

Checking camera features

Once you obtain access to a camera, you can get further information about its capabilities using the Camera.getParameters() method and checking the returned Camera.Parameters object for supported capabilities. When using API Level 9 or higher, use Camera.getCameraInfo() to determine whether a camera is on the front or back of the device, and the orientation of the image.
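For example, a brief sketch of inspecting a few capabilities, assuming camera is an open Camera instance and TAG is defined elsewhere, might look like this:

Kotlin

val params: Camera.Parameters = camera.parameters

// Picture sizes supported by the hardware; pick from this list rather than arbitrary values.
val pictureSizes: List<Camera.Size> = params.supportedPictureSizes

// Flash modes; this list may be null on devices without a flash unit.
val flashModes: List<String>? = params.supportedFlashModes

Log.d(TAG, "Supported picture sizes: ${pictureSizes.size}, " +
        "flash available: ${flashModes?.contains(Camera.Parameters.FLASH_MODE_ON) == true}")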

Creating a preview class

For users to effectively take pictures or video, they must be able to see what the device camera sees. A camera preview class is a SurfaceView that can display the live image data coming from a camera, so users can frame and capture a picture or video.

The following example code demonstrates how to create a basic camera preview class that can be included in a View layout. This class implements SurfaceHolder.Callback in order to capture the callback events for creating and destroying the view, which are needed for assigning the camera preview input.

Kotlin

/** A basic Camera preview class */
class CameraPreview(
        context: Context,
        private val mCamera: Camera
) : SurfaceView(context), SurfaceHolder.Callback {

    private val mHolder: SurfaceHolder = holder.apply {
        // Install a SurfaceHolder.Callback so we get notified when the
        // underlying surface is created and destroyed.
        addCallback(this@CameraPreview)
        // deprecated setting, but required on Android versions prior to 3.0
        setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS)
    }

    override fun surfaceCreated(holder: SurfaceHolder) {
        // The Surface has been created, now tell the camera where to draw the preview.
        mCamera.apply {
            try {
                setPreviewDisplay(holder)
                startPreview()
            } catch (e: IOException) {
                Log.d(TAG, "Error setting camera preview: ${e.message}")
            }
        }
    }

    override fun surfaceDestroyed(holder: SurfaceHolder) {
        // empty. Take care of releasing the Camera preview in your activity.
    }

    override fun surfaceChanged(holder: SurfaceHolder, format: Int, w: Int, h: Int) {
        // If your preview can change or rotate, take care of those events here.
        // Make sure to stop the preview before resizing or reformatting it.
        if (mHolder.surface == null) {
            // preview surface does not exist
            return
        }

        // stop preview before making changes
        try {
            mCamera.stopPreview()
        } catch (e: Exception) {
            // ignore: tried to stop a non-existent preview
        }

        // set preview size and make any resize, rotate or
        // reformatting changes here

        // start preview with new settings
        mCamera.apply {
            try {
                setPreviewDisplay(mHolder)
                startPreview()
            } catch (e: Exception) {
                Log.d(TAG, "Error starting camera preview: ${e.message}")
            }
        }
    }
}

Java

/** A basic Camera preview class */
public class CameraPreview extends SurfaceView implements SurfaceHolder.Callback {
    private SurfaceHolder mHolder;
    private Camera mCamera;

    public CameraPreview(Context context, Camera camera) {
        super(context);
        mCamera = camera;

        // Install a SurfaceHolder.Callback so we get notified when the
        // underlying surface is created and destroyed.
        mHolder = getHolder();
        mHolder.addCallback(this);
        // deprecated setting, but required on Android versions prior to 3.0
        mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
    }

    public void surfaceCreated(SurfaceHolder holder) {
        // The Surface has been created, now tell the camera where to draw the preview.
        try {
            mCamera.setPreviewDisplay(holder);
            mCamera.startPreview();
        } catch (IOException e) {
            Log.d(TAG, "Error setting camera preview: " + e.getMessage());
        }
    }

    public void surfaceDestroyed(SurfaceHolder holder) {
        // empty. Take care of releasing the Camera preview in your activity.
    }

    public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
        // If your preview can change or rotate, take care of those events here.
        // Make sure to stop the preview before resizing or reformatting it.

        if (mHolder.getSurface() == null){
          // preview surface does not exist
          return;
        }

        // stop preview before making changes
        try {
            mCamera.stopPreview();
        } catch (Exception e){
          // ignore: tried to stop a non-existent preview
        }

        // set preview size and make any resize, rotate or
        // reformatting changes here

        // start preview with new settings
        try {
            mCamera.setPreviewDisplay(mHolder);
            mCamera.startPreview();

        } catch (Exception e){
            Log.d(TAG, "Error starting camera preview: " + e.getMessage());
        }
    }
}

If you want to set a specific size for your camera preview, set this in the surfaceChanged() method as noted in the comments above. When setting preview size, you must use values from getSupportedPreviewSizes(). Do not set arbitrary values in the setPreviewSize() method.
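As a sketch of how that selection might be done, the helper below (an illustrative name, not a framework method) picks the supported preview size whose area is closest to the surface dimensions passed to surfaceChanged():

Kotlin

/** Choose the supported preview size whose area is closest to the requested width and height. */
fun chooseBestPreviewSize(params: Camera.Parameters, width: Int, height: Int): Camera.Size {
    return params.supportedPreviewSizes.minByOrNull { size ->
        Math.abs(size.width * size.height - width * height)
    } ?: params.previewSize
}

// Inside surfaceChanged(), after stopping the preview (a sketch):
// val params = mCamera.parameters
// val best = chooseBestPreviewSize(params, w, h)
// params.setPreviewSize(best.width, best.height)
// mCamera.parameters = params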

Note: With the introduction of the Multi-Window feature in Android 7.0 (API level 24) and higher, you can no longer assume the aspect ratio of the preview is the same as your activity, even after calling setDisplayOrientation(). Depending on the window size and aspect ratio, you may have to fit a wide camera preview into a portrait-oriented layout, or vice versa, using a letterbox layout.

Placing preview in a layout

A camera preview class, such as the example shown in the previous section, must be placed in the layout of an activity along with other user interface controls for taking a picture or video. This section shows you how to build a basic layout and activity for the preview.

The following layout code provides a very basic view that can be used to display a camera preview. In this example, the FrameLayout element is meant to be the container for the camera preview class. This layout type is used so that additional picture information or controls can be overlaid on the live camera preview images.

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="horizontal"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    >
  <FrameLayout
    android:id="@+id/camera_preview"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:layout_weight="1"
    />

  <Button
    android:id="@+id/button_capture"
    android:text="Capture"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_gravity="center"
    />
</LinearLayout>

On most devices, the default orientation of the camera preview is landscape. This example layout specifies a horizontal (landscape) layout, and the code below fixes the orientation of the application to landscape. For simplicity in rendering a camera preview, you should change your application's preview activity orientation to landscape by adding the following to your manifest.

<activity android:name=".CameraActivity"
          android:label="@string/app_name"
          android:screenOrientation="landscape">
          <!-- configure this activity to use landscape orientation -->

    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>

Note: A camera preview does not have to be in landscape mode. Starting in Android 2.2 (API Level 8), you can use the setDisplayOrientation() method to set the rotation of the preview image. In order to change preview orientation as the user re-orients the phone, within the surfaceChanged() method of your preview class, first stop the preview with Camera.stopPreview(), change the orientation, and then start the preview again with Camera.startPreview().
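A minimal sketch of that stop-reorient-restart sequence inside surfaceChanged(), assuming the fields from the preview class above and a fixed 90-degree rotation for a portrait activity, might look like this:

Kotlin

try {
    mCamera.stopPreview()
} catch (e: Exception) {
    // ignore: tried to stop a non-existent preview
}
try {
    mCamera.setDisplayOrientation(90) // rotate the preview image for a portrait activity
    mCamera.setPreviewDisplay(mHolder)
    mCamera.startPreview()
} catch (e: Exception) {
    Log.d(TAG, "Error restarting camera preview: ${e.message}")
}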

In the activity for your camera view, add your preview class to the FrameLayout element shown in the example above. Your camera activity must also ensure that it releases the camera when it is paused or shut down. The following example shows how to modify a camera activity to attach the preview class shown in Creating a preview class.

Kotlin

class CameraActivity : Activity() {

    private var mCamera: Camera? = null
    private var mPreview: CameraPreview? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        // Create an instance of Camera
        mCamera = getCameraInstance()

        mPreview = mCamera?.let {
            // Create our Preview view
            CameraPreview(this, it)
        }

        // Set the Preview view as the content of our activity.
        mPreview?.also {
            val preview: FrameLayout = findViewById(R.id.camera_preview)
            preview.addView(it)
        }
    }
}

Java

public class CameraActivity extends Activity {

    private Camera mCamera;
    private CameraPreview mPreview;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        // Create an instance of Camera
        mCamera = getCameraInstance();

        // Create our Preview view and set it as the content of our activity.
        mPreview = new CameraPreview(this, mCamera);
        FrameLayout preview = (FrameLayout) findViewById(R.id.camera_preview);
        preview.addView(mPreview);
    }
}

Note: The getCameraInstance() method in the example above refers to the example method shown in Accessing cameras.

Capturing pictures

Once you have built a preview class and a view layout in which to display it, you are ready to start capturing images with your application. In your application code, you must set up listeners for your user interface controls to respond to a user action by taking a picture.

In order to retrieve a picture, use the Camera.takePicture() method. This method takes three parameters which receive data from the camera. In order to receive data in a JPEG format, you must implement a Camera.PictureCallback interface to receive the image data and write it to a file. The following code shows a basic implementation of the Camera.PictureCallback interface to save an image received from the camera.

Kotlin

private val mPicture = Camera.PictureCallback { data, _ ->
    val pictureFile: File = getOutputMediaFile(MEDIA_TYPE_IMAGE) ?: run {
        Log.d(TAG, ("Error creating media file, check storage permissions"))
        return@PictureCallback
    }

    try {
        val fos = FileOutputStream(pictureFile)
        fos.write(data)
        fos.close()
    } catch (e: FileNotFoundException) {
        Log.d(TAG, "File not found: ${e.message}")
    } catch (e: IOException) {
        Log.d(TAG, "Error accessing file: ${e.message}")
    }
}

Java

private PictureCallback mPicture = new PictureCallback() {

    @Override
    public void onPictureTaken(byte[] data, Camera camera) {

        File pictureFile = getOutputMediaFile(MEDIA_TYPE_IMAGE);
        if (pictureFile == null){
            Log.d(TAG, "Error creating media file, check storage permissions");
            return;
        }

        try {
            FileOutputStream fos = new FileOutputStream(pictureFile);
            fos.write(data);
            fos.close();
        } catch (FileNotFoundException e) {
            Log.d(TAG, "File not found: " + e.getMessage());
        } catch (IOException e) {
            Log.d(TAG, "Error accessing file: " + e.getMessage());
        }
    }
};

Trigger capturing an image by calling the Camera.takePicture() method. The following example code shows how to call this method from a button View.OnClickListener.

Kotlin

val captureButton: Button = findViewById(R.id.button_capture)
captureButton.setOnClickListener {
    // get an image from the camera
    mCamera?.takePicture(null, null, mPicture)
}

Java

// Add a listener to the Capture button
Button captureButton = (Button) findViewById(R.id.button_capture);
captureButton.setOnClickListener(
    new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            // get an image from the camera
            mCamera.takePicture(null, null, mPicture);
        }
    }
);

Note: The mPicture member in the above example refers to the example code shown earlier in this section.

Caution: Remember to release the Camera object by calling Camera.release() when your application is done using it! For information about how to release the camera, see Releasing the camera.

Capturing videos

Video capture using the Android framework requires careful management of the Camera object and coordination with the MediaRecorder class. When recording video with Camera, you must manage the Camera.lock() and Camera.unlock() calls to allow MediaRecorder access to the camera hardware, in addition to the Camera.open() and Camera.release() calls.

Note: Starting with Android 4.0 (API level 14), the Camera.lock() and Camera.unlock() calls are managed for you automatically.

Unlike taking pictures with a device camera, capturing video requires a very particular call order. You must follow a specific order of execution to successfully prepare for and capture video with your application, as detailed below.

  1. Open Camera - Use Camera.open() to get an instance of the camera object.
  2. Connect Preview - Prepare a live camera image preview by connecting a SurfaceView to the camera using Camera.setPreviewDisplay().
  3. Start Preview - Call Camera.startPreview() to begin displaying the live camera images.
  4. Start Recording Video - The following steps must be completed in order to successfully record video:
    1. Unlock the Camera - Unlock the camera for use by MediaRecorder by calling Camera.unlock().
    2. Configure MediaRecorder - Call the following MediaRecorder methods in this order. For more information, see the MediaRecorder reference documentation.
      1. setCamera() - Set the camera to be used for video capture; use your application's current instance of Camera.
      2. setAudioSource() - Set the audio source; use MediaRecorder.AudioSource.CAMCORDER.
      3. setVideoSource() - Set the video source; use MediaRecorder.VideoSource.CAMERA.
      4. Set the video output format and encoding. For Android 2.2 (API Level 8) and higher, use the MediaRecorder.setProfile method, and get a profile instance using CamcorderProfile.get(). For versions of Android prior to 2.2, you must set the video output format and encoding parameters:
        1. setOutputFormat() - Set the output format; specify the default setting or MediaRecorder.OutputFormat.MPEG_4.
        2. setAudioEncoder() - Set the audio encoding type; specify the default setting or MediaRecorder.AudioEncoder.AMR_NB.
        3. setVideoEncoder() - Set the video encoding type; specify the default setting or MediaRecorder.VideoEncoder.MPEG_4_SP.
      5. setOutputFile() - Set the output file; use getOutputMediaFile(MEDIA_TYPE_VIDEO).toString() from the example method in the Saving Media Files section.
      6. setPreviewDisplay() - Specify the SurfaceView preview layout element for your application. Use the same object you specified for Connect Preview.

      Caution: You must call these MediaRecorder configuration methods in this order, otherwise your application will encounter errors and the recording will fail.

    3. Prepare MediaRecorder - Prepare the MediaRecorder with the provided configuration settings by calling MediaRecorder.prepare().
    4. Start MediaRecorder - Start recording video by calling MediaRecorder.start().
  5. Stop Recording Video - Call the following methods in order to successfully complete a video recording:
    1. Stop MediaRecorder - Stop recording video by calling MediaRecorder.stop().
    2. Reset MediaRecorder - Optionally, remove the configuration settings from the recorder by calling MediaRecorder.reset().
    3. Release MediaRecorder - Release the MediaRecorder by calling MediaRecorder.release().
    4. Lock the Camera - Lock the camera so that future MediaRecorder sessions can use it by calling Camera.lock(). Starting with Android 4.0 (API level 14), this call is not required unless the MediaRecorder.prepare() call fails.
  6. Stop the Preview - When your activity has finished using the camera, stop the preview using Camera.stopPreview().
  7. Release Camera - Release the camera so that other applications can use it by calling Camera.release().

Note: It is possible to use MediaRecorder without creating a camera preview first and skip the first few steps of this process. However, since users typically prefer to see a preview before starting a recording, that process is not discussed here.

Tip: If your application is typically used for recording video, set setRecordingHint(boolean) to true prior to starting your preview. This setting can help reduce the time it takes to start recording.
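A short sketch of applying that hint, assuming mCamera has already been opened and the preview has not yet been started (setRecordingHint() is available on Android 4.0, API level 14, and higher):

Kotlin

val params = mCamera?.parameters
params?.setRecordingHint(true) // tell the camera this session is intended for video recording
mCamera?.parameters = params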

Configuring MediaRecorder

When using the MediaRecorder class to record video, you must perform configuration steps in a specific order and then call the MediaRecorder.prepare() method to check and implement the configuration. The following example code demonstrates how to properly configure and prepare the MediaRecorder class for video recording.

Kotlin

private fun prepareVideoRecorder(): Boolean {
    mediaRecorder = MediaRecorder()

    mCamera?.let { camera ->
        // Step 1: Unlock and set camera to MediaRecorder
        camera.unlock()

        mediaRecorder?.run {
            setCamera(camera)

            // Step 2: Set sources
            setAudioSource(MediaRecorder.AudioSource.CAMCORDER)
            setVideoSource(MediaRecorder.VideoSource.CAMERA)

            // Step 3: Set a CamcorderProfile (requires API Level 8 or higher)
            setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH))

            // Step 4: Set output file
            setOutputFile(getOutputMediaFile(MEDIA_TYPE_VIDEO).toString())

            // Step 5: Set the preview output
            setPreviewDisplay(mPreview?.holder?.surface)

            // Step 6: Prepare configured MediaRecorder
            return try {
                prepare()
                true
            } catch (e: IllegalStateException) {
                Log.d(TAG, "IllegalStateException preparing MediaRecorder: ${e.message}")
                releaseMediaRecorder()
                false
            } catch (e: IOException) {
                Log.d(TAG, "IOException preparing MediaRecorder: ${e.message}")
                releaseMediaRecorder()
                false
            }
        }
    }
    return false
}

Java

private boolean prepareVideoRecorder(){

    mCamera = getCameraInstance();
    mediaRecorder = new MediaRecorder();

    // Step 1: Unlock and set camera to MediaRecorder
    mCamera.unlock();
    mediaRecorder.setCamera(mCamera);

    // Step 2: Set sources
    mediaRecorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
    mediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);

    // Step 3: Set a CamcorderProfile (requires API Level 8 or higher)
    mediaRecorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH));

    // Step 4: Set output file
    mediaRecorder.setOutputFile(getOutputMediaFile(MEDIA_TYPE_VIDEO).toString());

    // Step 5: Set the preview output
    mediaRecorder.setPreviewDisplay(mPreview.getHolder().getSurface());

    // Step 6: Prepare configured MediaRecorder
    try {
        mediaRecorder.prepare();
    } catch (IllegalStateException e) {
        Log.d(TAG, "IllegalStateException preparing MediaRecorder: " + e.getMessage());
        releaseMediaRecorder();
        return false;
    } catch (IOException e) {
        Log.d(TAG, "IOException preparing MediaRecorder: " + e.getMessage());
        releaseMediaRecorder();
        return false;
    }
    return true;
}

Prior to Android 2.2 (API Level 8), you must set the output format and encoding parameters directly, instead of using CamcorderProfile. This approach is demonstrated in the following code:

Kotlin

    // Step 3: Set output format and encoding (for versions prior to API Level 8)
    mediaRecorder?.apply {
        setOutputFormat(MediaRecorder.OutputFormat.MPEG_4)
        setAudioEncoder(MediaRecorder.AudioEncoder.DEFAULT)
        setVideoEncoder(MediaRecorder.VideoEncoder.DEFAULT)
    }

Java

    // Step 3: Set output format and encoding (for versions prior to API Level 8)
    mediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
    mediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.DEFAULT);
    mediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.DEFAULT);

The following video recording parameters for MediaRecorder are given default settings; however, you may want to adjust these settings for your application, as shown in the sketch after this list:

  • setVideoEncodingBitRate()
  • setVideoSize()
  • setVideoFrameRate()
  • setAudioEncodingBitRate()
  • setAudioChannels()
  • setAudioSamplingRate()
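The following is an illustrative sketch only; the numeric values are assumptions, and these calls belong after the sources and profile (or output format and encoders) have been configured and before prepare():

Kotlin

mediaRecorder?.apply {
    setVideoEncodingBitRate(5_000_000) // roughly 5 Mbps; an illustrative value
    setVideoFrameRate(30)              // must be a frame rate the camera supports
    setVideoSize(1280, 720)            // must be a size the camera supports
    setAudioEncodingBitRate(128_000)
    setAudioChannels(2)
    setAudioSamplingRate(44_100)
}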

Starting and stopping MediaRecorder

When starting and stopping video recording using the MediaRecorder class, you must follow a specific order, as listed below.

  1. Unlock the camera with Camera.unlock()
  2. Configure MediaRecorder as shown in the code example above
  3. Start recording using MediaRecorder.start()
  4. Record the video
  5. Stop recording using MediaRecorder.stop()
  6. Release the media recorder with MediaRecorder.release()
  7. Lock the camera using Camera.lock()

The following example code demonstrates how to wire up a button to properly start and stop video recording using the camera and the MediaRecorder class.

Note: When completing a video recording, do not release the camera or else your preview will be stopped.

Kotlin

var isRecording = false
val captureButton: Button = findViewById(R.id.button_capture)
captureButton.setOnClickListener {
    if (isRecording) {
        // stop recording and release camera
        mediaRecorder?.stop() // stop the recording
        releaseMediaRecorder() // release the MediaRecorder object
        mCamera?.lock() // take camera access back from MediaRecorder

        // inform the user that recording has stopped
        setCaptureButtonText("Capture")
        isRecording = false
    } else {
        // initialize video camera
        if (prepareVideoRecorder()) {
            // Camera is available and unlocked, MediaRecorder is prepared,
            // now you can start recording
            mediaRecorder?.start()

            // inform the user that recording has started
            setCaptureButtonText("Stop")
            isRecording = true
        } else {
            // prepare didn't work, release the camera
            releaseMediaRecorder()
            // inform user
        }
    }
}

Java

private boolean isRecording = false;

// Add a listener to the Capture button
Button captureButton = (Button) findViewById(R.id.button_capture);
captureButton.setOnClickListener(
    new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            if (isRecording) {
                // stop recording and release camera
                mediaRecorder.stop();  // stop the recording
                releaseMediaRecorder(); // release the MediaRecorder object
                mCamera.lock();         // take camera access back from MediaRecorder

                // inform the user that recording has stopped
                setCaptureButtonText("Capture");
                isRecording = false;
            } else {
                // initialize video camera
                if (prepareVideoRecorder()) {
                    // Camera is available and unlocked, MediaRecorder is prepared,
                    // now you can start recording
                    mediaRecorder.start();

                    // inform the user that recording has started
                    setCaptureButtonText("Stop");
                    isRecording = true;
                } else {
                    // prepare didn't work, release the camera
                    releaseMediaRecorder();
                    // inform user
                }
            }
        }
    }
);

Note: In the above example, the prepareVideoRecorder() method refers to the example code shown in Configuring MediaRecorder. This method takes care of locking the camera, and configuring and preparing the MediaRecorder instance.

Releasing the camera

Cameras are a resource that is shared by applications on a device. Your application can make use of the camera after getting an instance of Camera, and you must be particularly careful to release the camera object when your application stops using it, and as soon as your application is paused (Activity.onPause()). If your application does not properly release the camera, all subsequent attempts to access the camera, including those by your own application, will fail and may cause your or other applications to be shut down.

To release an instance of the Camera object, use the Camera.release() method, as shown in the example code below.

Kotlin

class CameraActivity : Activity() {
    private var mCamera: Camera? = null
    private var preview: SurfaceView? = null
    private var mediaRecorder: MediaRecorder? = null

    override fun onPause() {
        super.onPause()
        releaseMediaRecorder() // if you are using MediaRecorder, release it first
        releaseCamera() // release the camera immediately on pause event
    }

    private fun releaseMediaRecorder() {
        mediaRecorder?.reset() // clear recorder configuration
        mediaRecorder?.release() // release the recorder object
        mediaRecorder = null
        mCamera?.lock() // lock camera for later use
    }

    private fun releaseCamera() {
        mCamera?.release() // release the camera for other applications
        mCamera = null
    }
}

Java

public class CameraActivity extends Activity {
    private Camera mCamera;
    private SurfaceView preview;
    private MediaRecorder mediaRecorder;

    ...

    @Override
    protected void onPause() {
        super.onPause();
        releaseMediaRecorder();       // if you are using MediaRecorder, release it first
        releaseCamera();              // release the camera immediately on pause event
    }

    private void releaseMediaRecorder(){
        if (mediaRecorder != null) {
            mediaRecorder.reset();   // clear recorder configuration
            mediaRecorder.release(); // release the recorder object
            mediaRecorder = null;
            mCamera.lock();           // lock camera for later use
        }
    }

    private void releaseCamera(){
        if (mCamera != null){
            mCamera.release();        // release the camera for other applications
            mCamera = null;
        }
    }
}

Caution: If your application does not properly release the camera, all subsequent attempts to access the camera, including those by your own application, will fail and may cause your or other applications to be shut down.

Saving media files

Media files created by users, such as pictures and videos, should be saved to a device's external storage directory (SD card) to conserve system space and to allow users to access these files without their device. There are many possible directory locations to save media files on a device, however there are only two standard locations you should consider as a developer:

  • Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES) - This method returns the standard, shared and recommended location for saving pictures and videos. This directory is shared (public), so other applications can easily find, read, change and delete files saved in this location. If your application is uninstalled by the user, media files saved to this location will not be removed. To avoid interfering with users' existing pictures and videos, you should create a sub-directory for your application's media files within this directory, as shown in the code sample below. This method is available in Android 2.2 (API Level 8); for equivalent calls in earlier API versions, see Saving Shared Files.
  • Context.getExternalFilesDir(Environment.DIRECTORY_PICTURES) - This method returns a standard location for saving pictures and videos which are associated with your application. If your application is uninstalled, any files saved in this location are removed. Security is not enforced for files in this location and other applications may read, change and delete them.

The following example code demonstrates how to create a File or Uri location for a media file that can be used when invoking a device's camera with an Intent or as part of building a camera app.

Kotlin

val MEDIA_TYPE_IMAGE = 1
val MEDIA_TYPE_VIDEO = 2

/** Create a file Uri for saving an image or video */
private fun getOutputMediaFileUri(type: Int): Uri {
    return Uri.fromFile(getOutputMediaFile(type))
}

/** Create a File for saving an image or video */
private fun getOutputMediaFile(type: Int): File? {
    // To be safe, you should check that the SDCard is mounted
    // using Environment.getExternalStorageState() before doing this.

    val mediaStorageDir = File(
            Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES),
            "MyCameraApp"
    )
    // This location works best if you want the created images to be shared
    // between applications and persist after your app has been uninstalled.

    // Create the storage directory if it does not exist
    mediaStorageDir.apply {
        if (!exists()) {
            if (!mkdirs()) {
                Log.d("MyCameraApp", "failed to create directory")
                return null
            }
        }
    }

    // Create a media file name
    val timeStamp = SimpleDateFormat("yyyyMMdd_HHmmss").format(Date())
    return when (type) {
        MEDIA_TYPE_IMAGE -> {
            File("${mediaStorageDir.path}${File.separator}IMG_$timeStamp.jpg")
        }
        MEDIA_TYPE_VIDEO -> {
            File("${mediaStorageDir.path}${File.separator}VID_$timeStamp.mp4")
        }
        else -> null
    }
}

Java

public static final int MEDIA_TYPE_IMAGE = 1;
public static final int MEDIA_TYPE_VIDEO = 2;

/** Create a file Uri for saving an image or video */
private static Uri getOutputMediaFileUri(int type){
      return Uri.fromFile(getOutputMediaFile(type));
}

/** Create a File for saving an image or video */
private static File getOutputMediaFile(int type){
    // To be safe, you should check that the SDCard is mounted
    // using Environment.getExternalStorageState() before doing this.

    File mediaStorageDir = new File(Environment.getExternalStoragePublicDirectory(
              Environment.DIRECTORY_PICTURES), "MyCameraApp");
    // This location works best if you want the created images to be shared
    // between applications and persist after your app has been uninstalled.

    // Create the storage directory if it does not exist
    if (! mediaStorageDir.exists()){
        if (! mediaStorageDir.mkdirs()){
            Log.d("MyCameraApp", "failed to create directory");
            return null;
        }
    }

    // Create a media file name
    String timeStamp = new SimpleDateFormat("yyyyMMdd_HHmmss").format(new Date());
    File mediaFile;
    if (type == MEDIA_TYPE_IMAGE){
        mediaFile = new File(mediaStorageDir.getPath() + File.separator +
        "IMG_"+ timeStamp + ".jpg");
    } else if(type == MEDIA_TYPE_VIDEO) {
        mediaFile = new File(mediaStorageDir.getPath() + File.separator +
        "VID_"+ timeStamp + ".mp4");
    } else {
        return null;
    }

    return mediaFile;
}

Note: Environment.getExternalStoragePublicDirectory() is available in Android 2.2 (API Level 8) or higher. If you are targeting devices with earlier versions of Android, use Environment.getExternalStorageDirectory() instead. For more information, see Saving Shared Files.

To make the URI support work profiles, first convert the file URI to a content URI. Then, add the content URI to EXTRA_OUTPUT of an Intent.
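A rough sketch of that flow is shown below. It assumes the AndroidX core library's FileProvider is declared in your manifest with the authority "com.example.myapp.fileprovider", and that the code runs inside an Activity; the authority string and request code are illustrative assumptions.

Kotlin

val photoFile: File? = getOutputMediaFile(MEDIA_TYPE_IMAGE)
photoFile?.let { file ->
    // Convert the file URI to a content URI using the FileProvider declared in the manifest.
    val contentUri: Uri = FileProvider.getUriForFile(
            this, "com.example.myapp.fileprovider", file)

    val takePictureIntent = Intent(MediaStore.ACTION_IMAGE_CAPTURE).apply {
        putExtra(MediaStore.EXTRA_OUTPUT, contentUri)
        addFlags(Intent.FLAG_GRANT_WRITE_URI_PERMISSION)
    }
    startActivityForResult(takePictureIntent, 1 /* arbitrary request code */)
}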

For more information about saving files on an Android device, see Data Storage.

Camera features

Android supports a wide array of camera features you can control with your camera application, such as picture format, flash mode, focus settings, and many more. This section lists the common camera features, and briefly discusses how to use them. Most camera features can be accessed and set using the Camera.Parameters object. However, there are several important features that require more than simple settings in Camera.Parameters. These features are covered in the following sections:

  • Metering and focus areas
  • Face detection
  • Time lapse video

For general information about how to use features that are controlled through Camera.Parameters, review the Using camera features section. For more detailed information about how to use features controlled through the camera parameters object, follow the links in the feature list below to the API reference documentation.

Table 1. Common camera features sorted by the Android API Level in which they were introduced.

Feature API Level Description
Face Detection 14 Identify human faces within a picture and use them for focus, metering and white balance
Metering Areas 14 Specify one or more areas within an image for calculating white balance
Focus Areas 14 Set one or more areas within an image to use for focus
White Balance Lock 14 Stop or start automatic white balance adjustments
Exposure Lock 14 Stop or start automatic exposure adjustments
Video Snapshot 14 Take a picture while shooting video (frame grab)
Time Lapse Video 11 Record frames with set delays to record a time lapse video
Multiple Cameras 9 Support for more than one camera on a device, including front-facing and back-facing cameras
Focus Distance 9 Reports distances between the camera and objects that appear to be in focus
Zoom 8 Set image magnification
Exposure Compensation 8 Increase or decrease the light exposure level
GPS Data 5 Include or omit geographic location data with the image
White Balance 5 Set the white balance mode, which affects color values in the captured image
Focus Mode 5 Set how the camera focuses on a subject, such as automatic, fixed, macro or infinity
Scene Mode 5 Apply a preset mode for specific types of photography situations, such as night, beach, snow or candlelight scenes
JPEG Quality 5 Set the compression level for a JPEG image, which increases or decreases image output file quality and size
Flash Mode 5 Turn flash on, off, or use automatic setting
Color Effects 5 Apply a color effect to the captured image, such as black and white, sepia tone or negative
Anti-Banding 5 Reduces the effect of banding in color gradients due to JPEG compression
Picture Format 1 Specify the file format for the picture
Picture Size 1 Specify the pixel dimensions of the saved picture

Note: These features are not supported on all devices due to hardware differences and software implementation. For information on checking the availability of features on the device where your application is running, see Checking feature availability.

Checking feature availability

The first thing to understand when setting out to use camera features on Android devices is that not all camera features are supported on all devices. In addition, devices that support a particular feature may support it to different levels or with different options. Therefore, part of your decision process as you develop a camera application is to decide what camera features you want to support and to what level. After making that decision, you should plan on including code in your camera application that checks to see if device hardware supports those features and fails gracefully if a feature is not available.

You can check the availability of camera features by getting an instance of a camera's parameters object and checking the relevant methods. The following code sample shows you how to obtain a Camera.Parameters object and check if the camera supports the autofocus feature:

Kotlin

val params: Camera.Parameters? = camera?.parameters
val focusModes: List<String>? = params?.supportedFocusModes
if (focusModes?.contains(Camera.Parameters.FOCUS_MODE_AUTO) == true) {
    // Autofocus mode is supported
}

Java

// get Camera parameters
Camera.Parameters params = camera.getParameters();

List<String> focusModes = params.getSupportedFocusModes();
if (focusModes.contains(Camera.Parameters.FOCUS_MODE_AUTO)) {
  // Autofocus mode is supported
}

You can use the technique shown above for most camera features. The Camera.Parameters object provides a getSupported...(), is...Supported() or getMax...() method to determine if (and to what extent) a feature is supported.
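For example, a brief sketch of the is...Supported()/getMax...() pattern applied to zoom, assuming camera is an open Camera instance, could look like this:

Kotlin

val params = camera?.parameters
if (params != null && params.isZoomSupported) {
    val maxZoom = params.maxZoom // largest valid zoom index on this device
    params.zoom = maxZoom / 2    // zoom roughly halfway in
    camera?.parameters = params
}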

If your application requires certain camera features in order to function properly, you can require them through additions to your application manifest. When you declare the use of specific camera features, such as flash and auto-focus, Google Play restricts your application from being installed on devices which do not support these features. For a list of camera features that can be declared in your app manifest, see the manifest Features Reference.

Using camera features

Most camera features are activated and controlled using a Camera.Parameters object. You obtain this object by first getting an instance of the Camera object, calling the getParameters() method, changing the returned parameter object and then setting it back into the camera object, as demonstrated in the following example code:

Kotlin

val params: Camera.Parameters? = camera?.parameters
params?.focusMode = Camera.Parameters.FOCUS_MODE_AUTO
camera?.parameters = params

Java

// get Camera parameters
Camera.Parameters params = camera.getParameters();
// set the focus mode
params.setFocusMode(Camera.Parameters.FOCUS_MODE_AUTO);
// set Camera parameters
camera.setParameters(params);

This technique works for nearly all camera features, and most parameters can be changed at any time after you have obtained an instance of the Camera object. Changes to parameters are typically visible to the user immediately in the application's camera preview. On the software side, parameter changes may take several frames to actually take effect as the camera hardware processes the new instructions and then sends updated image data.

Important: Some camera features cannot be changed at will. In particular, changing the size or orientation of the camera preview requires that you first stop the preview, change the preview size, and then restart the preview. Starting with Android 4.0 (API Level 14), preview orientation can be changed without restarting the preview.

Other camera features require more code in order to implement, including:

  • Metering and focus areas
  • Face detection
  • Time lapse video

A quick outline of how to implement these features is provided in the following sections.

Metering and focus areas

In some photographic scenarios, automatic focusing and light metering may not produce the desired results. Starting with Android 4.0 (API Level 14), your camera application can provide additional controls to allow your app or users to specify areas in an image to use for determining focus or light level settings and pass these values to the camera hardware for use in capturing images or video.

Areas for metering and focus work very similarly to other camera features, in that you control them through methods in the Camera.Parameters object. The following code demonstrates setting two light metering areas for an instance of Camera:

Kotlin

// Create an instance of Camera
camera = getCameraInstance()

// set Camera parameters
val params: Camera.Parameters? = camera?.parameters

params?.apply {
    if (maxNumMeteringAreas > 0) { // check that metering areas are supported
        meteringAreas = ArrayList<Camera.Area>().apply {
            val areaRect1 = Rect(-100, -100, 100, 100) // specify an area in center of image
            add(Camera.Area(areaRect1, 600)) // set weight to 60%
            val areaRect2 = Rect(800, -1000, 1000, -800) // specify an area in upper right of image
            add(Camera.Area(areaRect2, 400)) // set weight to 40%
        }
    }
    camera?.parameters = this
}

Java

// Create an instance of Camera
camera = getCameraInstance();

// set Camera parameters
Camera.Parameters params = camera.getParameters();

if (params.getMaxNumMeteringAreas() > 0){ // check that metering areas are supported
    List<Camera.Area> meteringAreas = new ArrayList<Camera.Area>();

    Rect areaRect1 = new Rect(-100, -100, 100, 100);    // specify an area in center of image
    meteringAreas.add(new Camera.Area(areaRect1, 600)); // set weight to 60%
    Rect areaRect2 = new Rect(800, -1000, 1000, -800);  // specify an area in upper right of image
    meteringAreas.add(new Camera.Area(areaRect2, 400)); // set weight to 40%
    params.setMeteringAreas(meteringAreas);
}

camera.setParameters(params);

The Camera.Area object contains two data parameters: a Rect object for specifying an area within the camera's field of view and a weight value, which tells the camera what level of importance this area should be given in light metering or focus calculations.

The Rect field in a Camera.Area object describes a rectangular shape mapped on a 2000 x 2000 unit grid. The coordinates -1000, -1000 represent the top, left corner of the camera image, and coordinates 1000, 1000 represent the bottom, right corner of the camera image, as shown in the illustration below.

Figure 1. The red lines illustrate the coordinate system for specifying a Camera.Area within a camera preview. The blue box shows the location and shape of a camera area with the Rect values 333,333,667,667.

The bounds of this coordinate system always correspond to the outer edge of the image visible in the camera preview and do not shrink or expand with the zoom level. Similarly, rotation of the image preview using Camera.setDisplayOrientation() does not remap the coordinate system.
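As an illustration, the sketch below converts a tap on the preview view into a focus area on that grid. It assumes the preview is neither rotated nor mirrored (a real app must also account for both), and the helper name is only for this example.

Kotlin

/** Map a touch point on the preview view to a 200 x 200 focus Rect on the -1000..1000 grid. */
fun tapToFocusRect(x: Float, y: Float, viewWidth: Int, viewHeight: Int): Rect {
    // Scale view coordinates to the -1000..1000 camera coordinate system.
    val centerX = (x / viewWidth * 2000 - 1000).toInt()
    val centerY = (y / viewHeight * 2000 - 1000).toInt()
    val half = 100 // half the edge length of the focus rectangle
    val left = (centerX - half).coerceIn(-1000, 1000 - 2 * half)
    val top = (centerY - half).coerceIn(-1000, 1000 - 2 * half)
    return Rect(left, top, left + 2 * half, top + 2 * half)
}

// Applying the result as a single focus area (a sketch):
// val params = camera.parameters
// if (params.maxNumFocusAreas > 0) {
//     params.focusAreas = listOf(Camera.Area(tapToFocusRect(event.x, event.y, width, height), 1000))
//     camera.parameters = params
// }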

Face detection

For pictures that include people, faces are usually the most important part of the picture, and should be used for determining both focus and white balance when capturing an image. The Android 4.0 (API Level 14) framework provides APIs for identifying faces and calculating picture settings using face recognition technology.

Note: While the face detection feature is running, setWhiteBalance(String), setFocusAreas(List<Camera.Area>) and setMeteringAreas(List<Camera.Area>) have no effect.

Using the face detection feature in your camera application requires a few general steps:

  • Check that face detection is supported on the device
  • Create a face detection listener
  • Add the face detection listener to your camera object
  • Start face detection after preview (and after every preview restart)

The face detection feature is not supported on all devices. You can check that this feature is supported by calling getMaxNumDetectedFaces(). An example of this check is shown in the startFaceDetection() sample method below.

In order to be notified of and respond to the detection of a face, your camera application must set a listener for face detection events. To do this, you must create a listener class that implements the Camera.FaceDetectionListener interface, as shown in the example code below.

Kotlin

internal class MyFaceDetectionListener : Camera.FaceDetectionListener {

    override fun onFaceDetection(faces: Array<Camera.Face>, camera: Camera) {
        if (faces.isNotEmpty()) {
            Log.d("FaceDetection", ("face detected: ${faces.size} " +
                    "Face 1 Location X: ${faces[0].rect.centerX()} " +
                    "Y: ${faces[0].rect.centerY()}"))
        }
    }
}

Java

class MyFaceDetectionListener implements Camera.FaceDetectionListener {

    @Override
    public void onFaceDetection(Face[] faces, Camera camera) {
        if (faces.length > 0){
            Log.d("FaceDetection", "face detected: " + faces.length +
                    " Face 1 Location X: " + faces[0].rect.centerX() +
                    " Y: " + faces[0].rect.centerY());
        }
    }
}

After creating this class, you then set it into your application's Camera object, as shown in the example code below:

Kotlin

camera?.setFaceDetectionListener(MyFaceDetectionListener())

Java

camera.setFaceDetectionListener(new MyFaceDetectionListener());            

Your application must start the face detection function each time you start (or restart) the camera preview. Create a method for starting face detection so you can call it as needed, as shown in the example code below.

Kotlin

fun startFaceDetection() {
    // Try starting Face Detection
    val params = mCamera?.parameters

    // start face detection only *after* preview has started
    params?.apply {
        if (maxNumDetectedFaces > 0) {
            // camera supports face detection, so can start it:
            mCamera?.startFaceDetection()
        }
    }
}

Java

public void startFaceDetection(){
    // Try starting Face Detection
    Camera.Parameters params = mCamera.getParameters();

    // start face detection only *after* preview has started
    if (params.getMaxNumDetectedFaces() > 0){
        // camera supports face detection, so can start it:
        mCamera.startFaceDetection();
    }
}

You must start face detection each time you start (or restart) the camera preview. If you use the preview class shown in Creating a preview class, add your startFaceDetection() method to both the surfaceCreated() and surfaceChanged() methods in your preview class, as shown in the sample code below.

Kotlin

override fun surfaceCreated(holder: SurfaceHolder) {
    try {
        mCamera.setPreviewDisplay(holder)
        mCamera.startPreview()

        startFaceDetection() // start face detection feature
    } catch (e: IOException) {
        Log.d(TAG, "Error setting camera preview: ${e.message}")
    }
}

override fun surfaceChanged(holder: SurfaceHolder, format: Int, w: Int, h: Int) {
    if (holder.surface == null) {
        // preview surface does not exist
        Log.d(TAG, "holder.getSurface() == null")
        return
    }
    try {
        mCamera.stopPreview()
    } catch (e: Exception) {
        // ignore: tried to stop a non-existent preview
        Log.d(TAG, "Error stopping camera preview: ${e.message}")
    }
    try {
        mCamera.setPreviewDisplay(holder)
        mCamera.startPreview()

        startFaceDetection() // re-start face detection feature
    } catch (e: Exception) {
        // ignore: tried to stop a non-existent preview
        Log.d(TAG, "Error starting camera preview: ${e.message}")
    }
}

Java

public void surfaceCreated(SurfaceHolder holder) {
    try {
        mCamera.setPreviewDisplay(holder);
        mCamera.startPreview();

        startFaceDetection(); // start face detection feature

    } catch (IOException e) {
        Log.d(TAG, "Error setting camera preview: " + e.getMessage());
    }
}

public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {

    if (holder.getSurface() == null){
        // preview surface does not exist
        Log.d(TAG, "holder.getSurface() == null");
        return;
    }

    try {
        mCamera.stopPreview();

    } catch (Exception e){
        // ignore: tried to stop a non-existent preview
        Log.d(TAG, "Error stopping camera preview: " + e.getMessage());
    }

    try {
        mCamera.setPreviewDisplay(holder);
        mCamera.startPreview();

        startFaceDetection(); // re-start face detection feature

    } catch (Exception e){
        // ignore: tried to stop a non-existent preview
        Log.d(TAG, "Error starting camera preview: " + e.getMessage());
    }
}

Note: Remember to call this method after calling startPreview(). Do not attempt to start face detection in the onCreate() method of your camera app's main activity, as the preview is not available at this point in your application's execution.

Time lapse video

Time lapse video allows users to create video clips that combine pictures taken a few seconds or minutes apart. This feature uses MediaRecorder to record the images for a time lapse sequence.

To record a time lapse video with MediaRecorder, you must configure the recorder object as if you are recording a normal video, setting the captured frames per second to a low number and using one of the time lapse quality settings, as shown in the code example below.

Kotlin

mediaRecorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_TIME_LAPSE_HIGH))
mediaRecorder.setCaptureRate(0.1) // capture a frame every 10 seconds

Java

// Step 3: Set a CamcorderProfile (requires API Level 8 or higher)
mediaRecorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_TIME_LAPSE_HIGH));
...
// Step 5.5: Set the video capture rate to a low number
mediaRecorder.setCaptureRate(0.1); // capture a frame every 10 seconds

These settings must be done as part of a larger configuration process for MediaRecorder. For a full configuration code example, see Configuring MediaRecorder. Once the configuration is complete, you start the video recording as if you were recording a normal video clip. For more information about configuring and running MediaRecorder, see Capturing videos. A rough sketch of where the time lapse settings fit into the overall sequence follows below.
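As an illustration only, the following sketch shows where the two time lapse settings above fit into a typical MediaRecorder setup with the deprecated Camera API. The helper name prepareTimeLapseRecorder and the parameters camera, holder, and outputPath are assumptions, not part of the original guide, and error handling is omitted for brevity.

Kotlin

import android.hardware.Camera
import android.media.CamcorderProfile
import android.media.MediaRecorder
import android.view.SurfaceHolder

// Sketch only: configures a MediaRecorder for time lapse capture.
fun prepareTimeLapseRecorder(camera: Camera, holder: SurfaceHolder, outputPath: String): MediaRecorder {
    camera.unlock() // let MediaRecorder take control of the camera hardware

    return MediaRecorder().apply {
        setCamera(camera)
        // No audio source: the time lapse CamcorderProfiles are video-only.
        setVideoSource(MediaRecorder.VideoSource.CAMERA)
        setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_TIME_LAPSE_HIGH))
        setCaptureRate(0.1) // capture a frame every 10 seconds
        setOutputFile(outputPath)
        setPreviewDisplay(holder.surface)
        prepare() // may throw if the configuration order is wrong
    }
}

Once prepared, you would call start() to begin recording and, when finished, stop() and release() on the recorder before reclaiming the camera with Camera.lock(), as described in Capturing videos.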

The Camera2Video and HdrViewfinder samples further demonstrate the use of the APIs covered on this page.

Camera fields that require permission

Apps running on Android 10 (API level 29) or higher must have the CAMERA permission in order to access the values of the following fields that the getCameraCharacteristics() method returns (a permission check is sketched after the list):

  • LENS_POSE_ROTATION
  • LENS_POSE_TRANSLATION
  • LENS_INTRINSIC_CALIBRATION
  • LENS_RADIAL_DISTORTION
  • LENS_POSE_REFERENCE
  • LENS_DISTORTION
  • LENS_INFO_HYPERFOCAL_DISTANCE
  • LENS_INFO_MINIMUM_FOCUS_DISTANCE
  • SENSOR_REFERENCE_ILLUMINANT1
  • SENSOR_REFERENCE_ILLUMINANT2
  • SENSOR_CALIBRATION_TRANSFORM1
  • SENSOR_CALIBRATION_TRANSFORM2
  • SENSOR_COLOR_TRANSFORM1
  • SENSOR_COLOR_TRANSFORM2
  • SENSOR_FORWARD_MATRIX1
  • SENSOR_FORWARD_MATRIX2
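
As a hedged illustration, the sketch below checks for the CAMERA permission before reading one of these gated fields through the camera2 API. The function name readMinimumFocusDistance and the use of ContextCompat are assumptions for this example, not part of the original guide.

Kotlin

import android.Manifest
import android.content.Context
import android.content.pm.PackageManager
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager
import androidx.core.content.ContextCompat

// Sketch only: returns null if the permission-gated field cannot be read.
fun readMinimumFocusDistance(context: Context, cameraId: String): Float? {
    // On Android 10+ this field is only available when CAMERA permission is granted.
    if (ContextCompat.checkSelfPermission(context, Manifest.permission.CAMERA)
            != PackageManager.PERMISSION_GRANTED) {
        return null
    }
    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    val characteristics = manager.getCameraCharacteristics(cameraId)
    return characteristics.get(CameraCharacteristics.LENS_INFO_MINIMUM_FOCUS_DISTANCE)
}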

Additional sample code

To download sample apps, see the Camera2Basic sample and Official CameraX sample app.

Source: https://developer.android.com/guide/topics/media/camera
