Binocular Stereo Vision - ZED 2 Examples for Beginners (C++/Linux) - Part 2

Posted on the Techblog of HaoWANG

项目地址:https://github.com/stereolabs/zed-examples

官方文档:https://www.stereolabs.com/docs/


 

目录

Tutorial 4: Positional tracking with the ZED

Prerequisites

Build the program

Code overview

Create a camera

Enable positional tracking

Capture pose data

Inertial Data

Tutorial 5: Spatial mapping with the ZED

Prerequisites

Build the program

Code overview

Create a camera

Enable positional tracking

Enable spatial mapping

Capture data

Extract mesh

Disable modules and exit

Tutorial 6: Object Detection with the ZED 2

Prerequisites

Build the program

Code overview

Create a camera

Enable Object detection

Capture data

Disable modules and exit

Tutorial 7: Getting sensors data from ZED Mini and ZED2

Prerequisites

Build the program

Code overview

Create a camera

Sensors data capture

Process data

Close camera and exit


Tutorial 4: Positional tracking with the ZED

This tutorial shows how to use the ZED as a positional tracker. The program will loop until 1000 positions have been grabbed. We assume that you have followed previous tutorials.

Prerequisites

  • Windows 10, Ubuntu LTS, L4T
  • ZED SDK and its dependencies (CUDA)

Build the program

Download the sample and follow the instructions below:

Build for Windows

  • Create a "build" folder in the source folder
  • Open cmake-gui and select the source and build folders
  • Generate the Visual Studio Win64 solution
  • Open the resulting solution and change configuration to Release
  • Build solution

Build for Linux

Open a terminal in the sample directory and execute the following commands:

    mkdir build
    cd build
    cmake ..
    make

Code overview



#include <sl/Camera.hpp>

using namespace std;
using namespace sl;

int main(int argc, char **argv) {

    // Create a ZED camera object
    Camera zed;

    // Set configuration parameters
    InitParameters init_parameters;
    init_parameters.camera_resolution = RESOLUTION::HD720; // Use HD720 video mode (default fps: 60)
    init_parameters.coordinate_system = COORDINATE_SYSTEM::RIGHT_HANDED_Y_UP; // Use a right-handed Y-up coordinate system
    init_parameters.coordinate_units = UNIT::METER; // Set units in meters
    init_parameters.sensors_required = true;
    
    // Open the camera
    auto returned_state = zed.open(init_parameters);
    if (returned_state != ERROR_CODE::SUCCESS) {
        cout << "Error " << returned_state << ", exit program.\n";
        return EXIT_FAILURE;
    }

    // Enable positional tracking with default parameters
    PositionalTrackingParameters tracking_parameters;
    returned_state = zed.enablePositionalTracking(tracking_parameters);
    if (returned_state != ERROR_CODE::SUCCESS) {
        cout << "Error " << returned_state << ", exit program.\n";
        return EXIT_FAILURE;
    }


    // Track the camera position during 1000 frames
    int i = 0;
    Pose zed_pose;

    // Check if the camera has an IMU (available on ZED Mini and ZED 2)
    bool zed_has_imu = zed.getCameraInformation().sensors_configuration.isSensorAvailable(sl::SENSOR_TYPE::GYROSCOPE);
    SensorsData sensor_data;

    while (i < 1000) {
        if (zed.grab() == ERROR_CODE::SUCCESS) {

            // Get the pose of the left eye of the camera with reference to the world frame
            zed.getPosition(zed_pose, REFERENCE_FRAME::WORLD); 

            // get the translation information
            auto zed_translation = zed_pose.getTranslation();
            // get the orientation information
            auto zed_orientation = zed_pose.getOrientation();
            // get the timestamp
            auto ts = zed_pose.timestamp.getNanoseconds();

            // Display the translation and timestamp
            cout << "Camera Translation: {" << zed_translation << "}, Orientation: {" << zed_orientation << "}, timestamp: " << ts << " ns\n";
            
            // Display IMU data
            if (zed_has_imu) {
                 // Get IMU data at the time the image was captured
                zed.getSensorsData(sensor_data, TIME_REFERENCE::IMAGE);

                //get filtered orientation quaternion
                auto imu_orientation = sensor_data.imu.pose.getOrientation();
                // get raw acceleration
                auto acceleration = sensor_data.imu.linear_acceleration;

                cout << "IMU Orientation: {" << imu_orientation << "}, Acceleration: {" << acceleration << "}\n";
            }
            i++;
        }
    }

    // Disable positional tracking and close the camera
    zed.disablePositionalTracking();
    zed.close();
    return EXIT_SUCCESS;
}

Create a camera

As in previous tutorials, we create, configure and open the ZED.

// Create a ZED camera object
Camera zed;

// Set configuration parameters
InitParameters init_params;
init_params.camera_resolution = RESOLUTION::HD720; // Use HD720 video mode (default fps: 60)
init_params.coordinate_system = COORDINATE_SYSTEM::RIGHT_HANDED_Y_UP; // Use a right-handed Y-up coordinate system
init_params.coordinate_units = UNIT::METER; // Set units in meters

// Open the camera
ERROR_CODE err = zed.open(init_params);
if (err != ERROR_CODE::SUCCESS)
    exit(-1);

Enable positional tracking

Once the camera is opened, we must enable the positional tracking module in order to get the position and orientation of the ZED.

// Enable positional tracking with default parameters
sl::PositionalTrackingParameters tracking_parameters;
err = zed.enablePositionalTracking(tracking_parameters);
if (err != ERROR_CODE::SUCCESS)
    exit(-1);

In the above example, we leave the default tracking parameters. For the list of available parameters, check the online documentation.

Capture pose data

Now that the ZED is opened and the positional tracking enabled, we create a loop to grab and retrieve the camera position.

The camera position is given by the class sl::Pose. This class contains the translation and orientation of the camera, as well as image timestamp and tracking confidence (quality).
A pose is always expressed in a reference frame. The SDK provides two reference frames: REFERENCE_FRAME::WORLD and REFERENCE_FRAME::CAMERA.
It is not the purpose of this tutorial to go into the details of these reference frames; read the documentation for more information.
In the example, we get the device position in the World Frame.

// Track the camera position during 1000 frames
int i = 0;
sl::Pose zed_pose;
while (i < 1000) {
    if (zed.grab() == ERROR_CODE::SUCCESS) {

        zed.getPosition(zed_pose, REFERENCE_FRAME::WORLD); // Get the pose of the left eye of the camera with reference to the world frame

        // Display the translation and timestamp
        printf("Translation: Tx: %.3f, Ty: %.3f, Tz: %.3f, Timestamp: %llu\n", zed_pose.getTranslation().tx, zed_pose.getTranslation().ty, zed_pose.getTranslation().tz, (unsigned long long)zed_pose.timestamp.getNanoseconds());

        // Display the orientation quaternion
        printf("Orientation: Ox: %.3f, Oy: %.3f, Oz: %.3f, Ow: %.3f\n\n", zed_pose.getOrientation().ox, zed_pose.getOrientation().oy, zed_pose.getOrientation().oz, zed_pose.getOrientation().ow);

        i++;
    }
}

Inertial Data

If a ZED Mini is open, we can access the inertial data from its integrated IMU.

bool zed_mini = (zed.getCameraInformation().camera_model == MODEL::ZED_M);

First, we test that the opened camera is a ZED Mini; then we display some useful IMU data, such as the orientation quaternion and the linear acceleration.

if (zed_mini) { // Display IMU data

    // Get IMU data synchronized with the image frame
    SensorsData sensor_data;
    zed.getSensorsData(sensor_data, TIME_REFERENCE::IMAGE);
    auto imu_orientation = sensor_data.imu.pose.getOrientation();

    // Filtered orientation quaternion
    printf("IMU Orientation: Ox: %.3f, Oy: %.3f, Oz: %.3f, Ow: %.3f\n", imu_orientation.ox,
            imu_orientation.oy, imu_orientation.oz, imu_orientation.ow);
    // Raw acceleration
    printf("IMU Acceleration: x: %.3f, y: %.3f, z: %.3f\n", sensor_data.imu.linear_acceleration.x,
            sensor_data.imu.linear_acceleration.y, sensor_data.imu.linear_acceleration.z);
}

The loop runs until the ZED has been tracked for 1000 frames. We display the camera translation (in meters) in the console window, then close the camera before exiting the application.

// Disable positional tracking and close the camera
zed.disablePositionalTracking();
zed.close();
return 0;

You can now use the ZED as an inside-out positional tracker. Read the next tutorial to learn how to use spatial mapping.

CMAKE_MINIMUM_REQUIRED(VERSION 2.4)
PROJECT(ZED_Tutorial_4)

option(LINK_SHARED_ZED "Link with the ZED SDK shared executable" ON)

if (NOT LINK_SHARED_ZED AND MSVC)
    message(FATAL_ERROR "LINK_SHARED_ZED OFF : ZED SDK static libraries not available on Windows")
endif()

if(COMMAND cmake_policy)
	cmake_policy(SET CMP0003 OLD)
	cmake_policy(SET CMP0015 OLD)
endif(COMMAND cmake_policy)

if (NOT CMAKE_BUILD_TYPE OR CMAKE_BUILD_TYPE STREQUAL "")
SET(CMAKE_BUILD_TYPE "RelWithDebInfo")
endif()

SET(EXECUTABLE_OUTPUT_PATH ".")    

find_package(ZED 3 REQUIRED)
find_package(CUDA ${ZED_CUDA_VERSION} EXACT REQUIRED)

include_directories(${CUDA_INCLUDE_DIRS})
include_directories(${ZED_INCLUDE_DIRS})

link_directories(${ZED_LIBRARY_DIR})
link_directories(${CUDA_LIBRARY_DIRS})

ADD_EXECUTABLE(${PROJECT_NAME} main.cpp)
add_definitions(-std=c++14 -O3)

if (LINK_SHARED_ZED)
    SET(ZED_LIBS ${ZED_LIBRARIES} ${CUDA_CUDA_LIBRARY} ${CUDA_CUDART_LIBRARY})
else()
    SET(ZED_LIBS ${ZED_STATIC_LIBRARIES} ${CUDA_CUDA_LIBRARY} ${CUDA_LIBRARY})
endif()

TARGET_LINK_LIBRARIES(${PROJECT_NAME} ${ZED_LIBS})

Tutorial 5: Spatial mapping with the ZED

This tutorial shows how to use the spatial mapping module with the ZED. It loops until 500 frames are grabbed, extracts a mesh, filters it and saves it as an OBJ file.
We assume that you have followed previous tutorials.

Prerequisites

  • Windows 10, Ubuntu LTS, L4T
  • ZED SDK and its dependencies (CUDA)

Build the program

Download the sample and follow the instructions below:

Build for Windows

  • Create a "build" folder in the source folder
  • Open cmake-gui and select the source and build folders
  • Generate the Visual Studio Win64 solution
  • Open the resulting solution and change configuration to Release
  • Build solution

Build for Linux

Open a terminal in the sample directory and execute the following commands:

mkdir build
cd build
cmake ..
make

Code overview


#include <sl/Camera.hpp>

using namespace std;
using namespace sl;

int main(int argc, char **argv) {

    // Create a ZED camera object
    Camera zed;

    // Set configuration parameters
    InitParameters init_parameters;
    init_parameters.camera_resolution = RESOLUTION::HD720; // Use HD720 video mode (default fps: 60)
    init_parameters.coordinate_system = COORDINATE_SYSTEM::RIGHT_HANDED_Y_UP; // Use a right-handed Y-up coordinate system
    init_parameters.coordinate_units = UNIT::METER; // Set units in meters

    // Open the camera
    auto returned_state = zed.open(init_parameters);
    if (returned_state != ERROR_CODE::SUCCESS) {
        cout << "Error " << returned_state << ", exit program.\n";
        return EXIT_FAILURE;
    }

    // Enable positional tracking with default parameters. Positional tracking needs to be enabled before using spatial mapping
    sl::PositionalTrackingParameters tracking_parameters;
    returned_state = zed.enablePositionalTracking(tracking_parameters);
    if (returned_state != ERROR_CODE::SUCCESS) {
        cout << "Error " << returned_state << ", exit program.\n";
        return EXIT_FAILURE;
    }

    // Enable spatial mapping
    sl::SpatialMappingParameters mapping_parameters;
    returned_state = zed.enableSpatialMapping(mapping_parameters);
    if (returned_state != ERROR_CODE::SUCCESS) {
        cout << "Error " << returned_state << ", exit program.\n";
        return EXIT_FAILURE;
    }
    
    // Grab data during 500 frames
    int i = 0;
    sl::Mesh mesh; // Create a mesh object
    while (i < 500) {
        // For each new grab, mesh data is updated 
        if (zed.grab() == ERROR_CODE::SUCCESS) {
            // In the background, spatial mapping will use newly retrieved images, depth and pose to update the mesh
            sl::SPATIAL_MAPPING_STATE mapping_state = zed.getSpatialMappingState();

            // Print spatial mapping state
            cout << "\rImages captured: " << i << " / 500  ||  Spatial mapping state: " << mapping_state << "\t" << flush;
            i++;
        }
    }
    cout << endl;
    // Extract, filter and save the mesh in an OBJ file
    cout << "Extracting Mesh...\n";
    zed.extractWholeSpatialMap(mesh); // Extract the whole mesh
    cout << "Filtering Mesh...\n";
    mesh.filter(sl::MeshFilterParameters::MESH_FILTER::LOW); // Filter the mesh (remove unnecessary vertices and faces)
    cout << "Saving Mesh...\n";
    mesh.save("mesh.obj"); // Save the mesh in an OBJ file
    
    // Disable tracking and mapping and close the camera
    zed.disableSpatialMapping();
    zed.disablePositionalTracking();
    zed.close();
    return EXIT_SUCCESS;
}

Create a camera

As in previous tutorials, we create, configure and open the ZED. In this example, we choose a right-handed coordinate system with the Y axis up, since it is the most common convention in 3D viewing software (MeshLab, for example).

// Create a ZED camera object
Camera zed;

// Set configuration parameters
InitParameters init_params;
init_params.camera_resolution = RESOLUTION::HD720; // Use HD720 video mode (default fps: 60)
init_params.coordinate_system = COORDINATE_SYSTEM::RIGHT_HANDED_Y_UP; // Use a right-handed Y-up coordinate system
init_params.coordinate_units = UNIT::METER; // Set units in meters

// Open the camera
ERROR_CODE err = zed.open(init_params);
if (err != ERROR_CODE::SUCCESS)
    exit(-1);

Enable positional tracking

Spatial mapping requires positional tracking to be active. Therefore, as in Tutorial 4 - Positional tracking, we need to enable the tracking module first.

sl::PositionalTrackingParameters tracking_parameters;
err = zed.enablePositionalTracking(tracking_parameters);
if (err != ERROR_CODE::SUCCESS)
    exit(-1);

Enable spatial mapping

Now that tracking is enabled, we need to enable the spatial mapping module. You will see that it is very similar to enabling positional tracking: we create a SpatialMappingParameters object and pass it to the enableSpatialMapping() function.

sl::SpatialMappingParameters mapping_parameters;
err = zed.enableSpatialMapping(mapping_parameters);
if (err != ERROR_CODE::SUCCESS)
    exit(-1);

It is not the purpose of this tutorial to go into the details of the SpatialMappingParameters class, but you will find more information in the API documentation.

The spatial mapping is now activated.

Capture data

Spatial mapping does not require any extra function call in the grab loop. The ZED SDK checks whether a new image, depth map and position can be ingested by the mapping module and automatically launches the computation asynchronously.
This means you simply have to grab images to have a mesh built in the background.
In this tutorial, we grab 500 frames and then stop the loop to extract the mesh.

// Grab data during 500 frames
int i = 0;
sl::Mesh mesh; // Create a mesh object
while (i < 500) {
    if (zed.grab() == ERROR_CODE::SUCCESS) {
        // In the background, spatial mapping will use new images, depth and pose to create and update the mesh. No specific functions are required here
        sl::SPATIAL_MAPPING_STATE mapping_state = zed.getSpatialMappingState();

        // Print spatial mapping state
        std::cout << "\rImages captured: " << i << " / 500  ||  Spatial mapping state: " << mapping_state << "                     " << std::flush;

        i++;
    }
}

Extract mesh

We have now grabbed 500 frames and the mesh has been built in the background. Now we need to extract it.
First, we need a mesh object to manipulate it: a sl::Mesh. Then we launch the extraction with Camera::extractWholeSpatialMap(). This function blocks until the mesh is available.

zed.extractWholeSpatialMap(mesh); // Extract the whole mesh

We now have a mesh. This mesh can be filtered (if needed) to remove duplicate vertices and unneeded faces, which makes the mesh lighter to manipulate.
Since we are manipulating the mesh, this function is a member function of sl::Mesh.

mesh.filter(sl::MeshFilterParameters::MESH_FILTER::LOW); // Filter the mesh (remove unnecessary vertices and faces)

You can see that filter() takes a filtering parameter, which allows you to fine-tune the processing. Again, more information on the filtering parameters is given in the API documentation.

You can now save the mesh as an obj file for external manipulation:

mesh.save("mesh.obj"); // Save the mesh in an obj file

Disable modules and exit

Once the mesh is extracted and saved, don't forget to disable the modules and close the camera before exiting the program.
Since spatial mapping requires positional tracking, always disable spatial mapping before disabling tracking.

// Disable tracking and mapping and close the camera
zed.disableSpatialMapping();
zed.disablePositionalTracking();
zed.close();
return 0;

And this is it!

You can now map your environment with the ZED.

CMAKE_MINIMUM_REQUIRED(VERSION 2.4)
PROJECT(ZED_Tutorial_5)

option(LINK_SHARED_ZED "Link with the ZED SDK shared executable" ON)

if (NOT LINK_SHARED_ZED AND MSVC)
    message(FATAL_ERROR "LINK_SHARED_ZED OFF : ZED SDK static libraries not available on Windows")
endif()

if(COMMAND cmake_policy)
	cmake_policy(SET CMP0003 OLD)
	cmake_policy(SET CMP0015 OLD)
endif(COMMAND cmake_policy)

if (NOT CMAKE_BUILD_TYPE OR CMAKE_BUILD_TYPE STREQUAL "")
SET(CMAKE_BUILD_TYPE "RelWithDebInfo")
endif()

SET(EXECUTABLE_OUTPUT_PATH ".")

find_package(ZED 3 REQUIRED)
find_package(CUDA ${ZED_CUDA_VERSION} EXACT REQUIRED)

include_directories(${CUDA_INCLUDE_DIRS})
include_directories(${ZED_INCLUDE_DIRS})

link_directories(${ZED_LIBRARY_DIR})
link_directories(${CUDA_LIBRARY_DIRS})

ADD_EXECUTABLE(${PROJECT_NAME} main.cpp)
add_definitions(-std=c++14 -O3)

if (LINK_SHARED_ZED)
    SET(ZED_LIBS ${ZED_LIBRARIES} ${CUDA_CUDA_LIBRARY} ${CUDA_CUDART_LIBRARY})
else()
    SET(ZED_LIBS ${ZED_STATIC_LIBRARIES} ${CUDA_CUDA_LIBRARY} ${CUDA_LIBRARY})
endif()

TARGET_LINK_LIBRARIES(${PROJECT_NAME} ${ZED_LIBS})

Tutorial 6: Object Detection with the ZED 2

This tutorial shows how to use the object detection module with the ZED 2.
We assume that you have followed previous tutorials.

Prerequisites

  • Windows 10, Ubuntu LTS, L4T
  • ZED SDK and its dependencies (CUDA)

Build the program

Download the sample and follow the instructions below:

Build for Windows

  • Create a "build" folder in the source folder
  • Open cmake-gui and select the source and build folders
  • Generate the Visual Studio Win64 solution
  • Open the resulting solution and change configuration to Release
  • Build solution

Build for Linux

Open a terminal in the sample directory and execute the following commands:

mkdir build
cd build
cmake ..
make

Code overview


/*********************************************************************************
 ** This sample demonstrates how to use the object detection module             **
 **      with the ZED SDK and display the result                                **
 *********************************************************************************/

 // Standard includes
#include <iostream>
#include <fstream>

// ZED includes
#include <sl/Camera.hpp>

// Using std and sl namespaces
using namespace std;
using namespace sl;

int main(int argc, char** argv) {
    // Create ZED objects
    Camera zed;
    InitParameters init_parameters;
    init_parameters.camera_resolution = RESOLUTION::HD720;
    init_parameters.depth_mode = DEPTH_MODE::PERFORMANCE;
    init_parameters.coordinate_units = UNIT::METER;
    init_parameters.sdk_verbose = true;

    // Open the camera
    auto returned_state = zed.open(init_parameters);
    if (returned_state != ERROR_CODE::SUCCESS) {
        cout << "Error " << returned_state << ", exit program.\n";
        return EXIT_FAILURE;
    }

    // Define the Objects detection module parameters
    ObjectDetectionParameters detection_parameters;
    // run detection for each Camera grab
    detection_parameters.image_sync = true;
    // track detected objects across time and space
    detection_parameters.enable_tracking = true;
    // compute a binary mask for each object, aligned on the left image
    detection_parameters.enable_mask_output = true; // designed to give the person pixel mask

    // If you want to have object tracking you need to enable positional tracking first
    if (detection_parameters.enable_tracking)
        zed.enablePositionalTracking();    

    cout << "Object Detection: Loading Module..." << endl;
    returned_state = zed.enableObjectDetection(detection_parameters);
    if (returned_state != ERROR_CODE::SUCCESS) {
        cout << "Error " << returned_state << ", exit program.\n";
        zed.close();
        return EXIT_FAILURE;
    }
    // detection runtime parameters
    ObjectDetectionRuntimeParameters detection_parameters_rt;
    // detection output
    Objects objects;
    cout << setprecision(3);

    int nb_detection = 0;
    while (nb_detection < 100) {

        if (zed.grab() == ERROR_CODE::SUCCESS) {
            zed.retrieveObjects(objects, detection_parameters_rt);

            if (objects.is_new) {
                cout << objects.object_list.size() << " Object(s) detected\n\n";
                if (!objects.object_list.empty()) {

                    auto first_object = objects.object_list.front();

                    cout << "First object attributes :\n";
                    cout << " Label '" << first_object.label << "' (conf. "
                        << first_object.confidence << "/100)\n";

                    if (detection_parameters.enable_tracking)
                        cout << " Tracking ID: " << first_object.id << " tracking state: " <<
                        first_object.tracking_state << " / " << first_object.action_state << "\n";

                    cout << " 3D position: " << first_object.position <<
                        " Velocity: " << first_object.velocity << "\n";

                    cout << " 3D dimensions: " << first_object.dimensions << "\n";

                    if (first_object.mask.isInit())
                        cout << " 2D mask available\n";

                    cout << " Bounding Box 2D \n";
                    for (auto it : first_object.bounding_box_2d)
                        cout << "    " << it << "\n";

                    cout << " Bounding Box 3D \n";
                    for (auto it : first_object.bounding_box)
                        cout << "    " << it << "\n";

                    cout << "\nPress 'Enter' to continue...\n";
                    cin.ignore();
                }
                nb_detection++;
            }
        }
    }
    zed.close();
    return EXIT_SUCCESS;
}

Create a camera

As in previous tutorials, we create, configure and open the ZED 2. Please note that the ZED 1 is not compatible with the object detection module.

This module uses the GPU to perform deep neural network computations. On platforms with a limited amount of memory, such as the Jetson Nano, it is advised to disable the GUI to improve performance and avoid memory overflow.

// Create ZED objects
Camera zed;
InitParameters initParameters;
initParameters.camera_resolution = RESOLUTION::HD720;
initParameters.depth_mode = DEPTH_MODE::PERFORMANCE;
initParameters.sdk_verbose = true;

// Open the camera
ERROR_CODE zed_error = zed.open(initParameters);
if (zed_error != ERROR_CODE::SUCCESS) {
    std::cout << "Error " << zed_error << ", exit program.\n";
    return 1; // Quit if an error occurred
}

Enable Object detection

We will define the object detection parameters. Notice that object tracking requires positional tracking in order to track objects in the world reference frame.

// Define the Objects detection module parameters
ObjectDetectionParameters detection_parameters;
detection_parameters.enable_tracking = false;
detection_parameters.enable_mask_output = false;
detection_parameters.image_sync = false;

// Object tracking requires the positional tracking module
if (detection_parameters.enable_tracking)
	zed.enablePositionalTracking();

Then we can start the module; it will load the detection model. This operation can take a few seconds. The first time the module is used, the model is optimized for the hardware, which takes longer; this optimization is done only once.

std::cout << "Object Detection: Loading Module..." << std::endl;
zed_error = zed.enableObjectDetection(detection_parameters);
if (zed_error != ERROR_CODE::SUCCESS) {
    std::cout << "Error " << zed_error << ", exit program.\n";
    zed.close();
    return 1;
}

The object detection is now activated.

Capture data

The object confidence threshold can be adjusted at runtime to select only the relevant objects, depending on the scene complexity. With image_sync enabled, each grab call feeds the image into the AI module, which outputs the detections for that frame.

// Detection runtime parameters
ObjectDetectionRuntimeParameters detection_parameters_rt;
detection_parameters_rt.detection_confidence_threshold = 40;

// Detection output
Objects objects;

while (zed.grab() == ERROR_CODE::SUCCESS) {
	zed_error = zed.retrieveObjects(objects, detection_parameters_rt);

	if (objects.is_new) {
		std::cout << objects.object_list.size() << " Object(s) detected ("
				<< zed.getCurrentFPS() << " FPS)" << std::endl;
	}
}

Disable modules and exit

Once the program is over, the modules can be disabled and the camera closed. This step is optional, since zed.close() takes care of disabling all the modules. This function is also called automatically by the destructor if necessary.

// Disable object detection and close the camera
zed.disableObjectDetection();
zed.close();
return 0;

And this is it!

You can now detect objects in 3D with the ZED 2.


Tutorial 7: Getting sensors data from ZED Mini and ZED2

This tutorial shows how to retrieve sensor data from the ZED Mini and ZED 2. Contrary to the other samples, this one does not focus on images or depth information but on the embedded sensors. It loops for 5 seconds, printing the retrieved sensor values to the console.
We assume that you have followed previous tutorials.

Prerequisites

  • Windows 10, Ubuntu LTS, L4T
  • ZED SDK and its dependencies (CUDA)

Build the program

Download the sample and follow the instructions below:

Build for Windows

  • Create a "build" folder in the source folder
  • Open cmake-gui and select the source and build folders
  • Generate the Visual Studio Win64 solution
  • Open the resulting solution and change configuration to Release
  • Build solution

Build for Linux

Open a terminal in the sample directory and execute the following commands:

mkdir build
cd build
cmake ..
make

Code overview


#include <sl/Camera.hpp>

using namespace std;
using namespace sl;

// Basic structure to compare timestamps of a sensor. Determines whether a specific sensor's data has been updated.
struct TimestampHandler {

    // Compare the new timestamp to the last valid one. If it is higher, save it as new reference.
    inline bool isNew(Timestamp& ts_curr, Timestamp& ts_ref) {
        bool new_ = ts_curr > ts_ref;
        if (new_) ts_ref = ts_curr;
        return new_;
    }
    // Specific function for IMUData.
    inline bool isNew(SensorsData::IMUData& imu_data) {
        return isNew(imu_data.timestamp, ts_imu);
    }
    // Specific function for MagnetometerData.
    inline bool isNew(SensorsData::MagnetometerData& mag_data) {
        return isNew(mag_data.timestamp, ts_mag);
    }
    // Specific function for BarometerData.
    inline bool isNew(SensorsData::BarometerData& baro_data) {
        return isNew(baro_data.timestamp, ts_baro);
    }

    Timestamp ts_imu = 0, ts_baro = 0, ts_mag = 0; // Initial values
};


// Function to display sensor parameters.
void printSensorConfiguration(SensorParameters& sensor_parameters) {
    if (sensor_parameters.isAvailable) {
        cout << "*****************************" << endl;
        cout << "Sensor Type: " << sensor_parameters.type << endl;
        cout << "Max Rate: "    << sensor_parameters.sampling_rate << SENSORS_UNIT::HERTZ << endl;
        cout << "Range: ["      << sensor_parameters.range << "] " << sensor_parameters.sensor_unit << endl;
        cout << "Resolution: "  << sensor_parameters.resolution << " " << sensor_parameters.sensor_unit << endl;
        if (isfinite(sensor_parameters.noise_density)) cout << "Noise Density: " << sensor_parameters.noise_density <<" "<< sensor_parameters.sensor_unit<<"/√Hz"<<endl;
        if (isfinite(sensor_parameters.random_walk)) cout << "Random Walk: " << sensor_parameters.random_walk <<" "<< sensor_parameters.sensor_unit<<"/s/√Hz"<<endl;
    }
}


int main(int argc, char **argv) {

    // Create a ZED camera object.
    Camera zed;

    // Set configuration parameters.
    InitParameters init_parameters;
    init_parameters.depth_mode = DEPTH_MODE::NONE; // No depth computation required here.

    // Open the camera.
    auto returned_state = zed.open(init_parameters);
    if (returned_state != ERROR_CODE::SUCCESS) {
        cout << "Error " << returned_state << ", exit program.\n";
        return EXIT_FAILURE;
    }

    // Check camera model.
    auto info = zed.getCameraInformation();
    MODEL cam_model = info.camera_model;
    if (cam_model == MODEL::ZED) {
        cout << "This tutorial only works with ZED 2 and ZED-M cameras. The ZED does not have additional sensors.\n";
        return EXIT_FAILURE;
    }

    // Display camera information (model, serial number, firmware versions).
    cout << "Camera Model: " << cam_model << endl;
    cout << "Serial Number: " << info.serial_number << endl;
    cout << "Camera Firmware: " << info.camera_configuration.firmware_version << endl;
    cout << "Sensors Firmware: " << info.sensors_configuration.firmware_version << endl;

    // Display sensors configuration (imu, barometer, magnetometer).
    printSensorConfiguration(info.sensors_configuration.accelerometer_parameters);
    printSensorConfiguration(info.sensors_configuration.gyroscope_parameters);
    printSensorConfiguration(info.sensors_configuration.magnetometer_parameters);
    printSensorConfiguration(info.sensors_configuration.barometer_parameters);

    // Used to store sensors data.
    SensorsData sensors_data;

    // Used to store sensors timestamps and check if new data is available.
    TimestampHandler ts;

    // Retrieve sensors data during 5 seconds.
    auto start_time = std::chrono::high_resolution_clock::now();
    int count = 0;
    double elapse_time = 0;

    while (elapse_time < 5000) {

        // Depending on your camera model, different sensors are available.
        // They do not run at the same rate: therefore, to not miss any new samples we iterate as fast as possible
        // and compare timestamps to determine when a given sensor's data has been updated.
        // NOTE: There is no need to acquire images with grab(). getSensorsData runs in a separate internal capture thread.
        if (zed.getSensorsData(sensors_data, TIME_REFERENCE::CURRENT) == ERROR_CODE::SUCCESS) {

            // Check if a new IMU sample is available. IMU is the sensor with the highest update frequency.
            if (ts.isNew(sensors_data.imu)) {
                cout << "Sample " << count++ << "\n";
                cout << " - IMU:\n";
                cout << " \t Orientation: {" << sensors_data.imu.pose.getOrientation() << "}\n";
                cout << " \t Acceleration: {" << sensors_data.imu.linear_acceleration << "} [m/sec^2]\n";
                cout << " \t Angular Velocity: {" << sensors_data.imu.angular_velocity << "} [deg/sec]\n";

                // Check if Magnetometer data has been updated.
                if (ts.isNew(sensors_data.magnetometer))
                    cout << " - Magnetometer\n \t Magnetic Field: {" << sensors_data.magnetometer.magnetic_field_calibrated << "} [uT]\n";

                // Check if Barometer data has been updated.
                if (ts.isNew(sensors_data.barometer))
                    cout << " - Barometer\n \t Atmospheric pressure: " << sensors_data.barometer.pressure << " [hPa]\n";
            }
        }

        // Compute the elapsed time since the beginning of the main loop.
        elapse_time = std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::high_resolution_clock::now() - start_time).count();
    }

    // Close camera
    zed.close();
    return EXIT_SUCCESS;
}

Create a camera

As in previous tutorials, we create, configure and open the ZED camera. Since we do not need depth information here, we can disable its computation to save processing power.

    // Create a ZED camera object
    Camera zed;

    // Set configuration parameters
    InitParameters init_parameters;
    // no depth computation required here
    init_parameters.depth_mode = DEPTH_MODE::NONE;

    // Open the camera
    ERROR_CODE err = zed.open(init_parameters);
    if (err != ERROR_CODE::SUCCESS) {
        cout << "Error " << err << ", exit program.\n";
        return -1;
    }

Sensors data capture

Depending on your camera model, different sensors are available. To simplify retrieval, the SDK provides a single class, SensorsData, that encapsulates all sensor data.

    SensorsData sensors_data;
    double elapse_time = 0;
    while (elapse_time < 5000)
    {

        if (zed.getSensorsData(sensors_data, TIME_REFERENCE::CURRENT) == ERROR_CODE::SUCCESS) 
        {

        [...]

        }
    }        

Process data

As mentioned above, the sensors run at different frequencies, and their readings are all stored in a single class. This means that between two getSensorsData calls, some sensors may not have new data to provide. To handle this, each sensor reports the timestamp of its data; by checking whether a given timestamp is newer than the previous one, we know whether the data is new.

In this sample we use a basic class TimestampHandler to store timestamp and check for data update.

    TimestampHandler ts;
    if (ts.isNew(sensors_data.imu)) {
        // sensors_data.imu contains new data
    }

If the data have been updated, we display them:

    cout << " - IMU:\n";
    // Filtered orientation quaternion
    cout << " \t Orientation: {" << sensors_data.imu.pose.getOrientation() << "}\n";

    // Filtered acceleration
    cout << " \t Acceleration: {" << sensors_data.imu.linear_acceleration << "} [m/sec^2]\n";

    // Filtered angular velocities
    cout << " \t Angular Velocities: {" << sensors_data.imu.angular_velocity << "} [deg/sec]\n";

    // Check if Magnetometer data has been updated
    if (ts.isNew(sensors_data.magnetometer))
        // Filtered magnetic fields
        cout << " - Magnetometer\n \t Magnetic Field: {" << sensors_data.magnetometer.magnetic_field_calibrated << "} [uT]\n";

    // Check if Barometer data has been updated
    if (ts.isNew(sensors_data.barometer))
        // Atmospheric pressure
        cout << " - Barometer\n \t Atmospheric pressure: " << sensors_data.barometer.pressure << " [hPa]\n";

You do not have to worry about your camera model to access the sensor fields: if a sensor is not available, its data will contain NaN values and its timestamp will be 0.

Depending on your camera model and firmware, different sensors can send their temperature. To access it you can iterate over sensors and check if the data is available:

    cout << " - Temperature\n";
    float temperature;
    for (int s = 0; s < static_cast<int>(SensorsData::TemperatureData::SENSOR_LOCATION::LAST); s++) {
        auto sensor_loc = static_cast<SensorsData::TemperatureData::SENSOR_LOCATION>(s);
        if (sensors_data.temperature.get(sensor_loc, temperature) == ERROR_CODE::SUCCESS)
            cout << " \t " << sensor_loc << ": " << temperature << "C\n";
    }

Close camera and exit

Once the data are extracted, don't forget to close the camera before exiting the program.

    // Close camera
    zed.close();
    return 0;

And this is it!

You can now get all the sensor data from ZED-M and ZED2 cameras.

For reference, here is the CMakeLists.txt used to build this tutorial:

CMAKE_MINIMUM_REQUIRED(VERSION 2.4)
PROJECT(ZED_Tutorial_7)

option(LINK_SHARED_ZED "Link with the ZED SDK shared executable" ON)

if (NOT LINK_SHARED_ZED AND MSVC)
    message(FATAL_ERROR "LINK_SHARED_ZED OFF : ZED SDK static libraries not available on Windows")
endif()

if(COMMAND cmake_policy)
	cmake_policy(SET CMP0003 OLD)
	cmake_policy(SET CMP0015 OLD)
endif(COMMAND cmake_policy)

if (NOT CMAKE_BUILD_TYPE OR CMAKE_BUILD_TYPE STREQUAL "")
SET(CMAKE_BUILD_TYPE "RelWithDebInfo")
endif()

SET(EXECUTABLE_OUTPUT_PATH ".")

find_package(ZED 3 REQUIRED)
find_package(CUDA ${ZED_CUDA_VERSION} EXACT REQUIRED)

include_directories(${CUDA_INCLUDE_DIRS})
include_directories(${ZED_INCLUDE_DIRS})

link_directories(${ZED_LIBRARY_DIR})
link_directories(${CUDA_LIBRARY_DIRS})

ADD_EXECUTABLE(${PROJECT_NAME} main.cpp)
add_definitions(-std=c++14 -O3)

if (LINK_SHARED_ZED)
    SET(ZED_LIBS ${ZED_LIBRARIES} ${CUDA_CUDA_LIBRARY} ${CUDA_CUDART_LIBRARY})
else()
    SET(ZED_LIBS ${ZED_STATIC_LIBRARIES} ${CUDA_CUDA_LIBRARY} ${CUDA_LIBRARY})
endif()

TARGET_LINK_LIBRARIES(${PROJECT_NAME} ${ZED_LIBS})
