Apollo: How to visualize obstacle in Apollo 3.5?

Created on 5 Feb 2019 · 14 comments · Source: ApolloAuto/apollo

System information

  • OS Platform and Distribution (Linux Ubuntu 16.04):
  • Apollo installed from (source):
  • Apollo version (3.5):

Steps to reproduce the issue:

  • We changed obs_enable_visualization in perception/production/conf/perception, and both output_final_obstacles and enable_visualization in perception/production/conf/perception/camera, to true.
  • Then we launched perception_camera using cyber_launch start /apollo/modules/perception/production/launch/perception_camera.launch
  • We also played a converted record containing the front_6mm and front_12mm camera channels.
  • Through cyber_monitor, we can see that obstacle info is published on the /perception/obstacles channel with detailed fields.
    However, no visualization window appears, and nothing shows up inside Dreamview.
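
For reference, the flag changes described in the steps above amount to something like the following. The file names and exact keys here are recalled from an Apollo 3.5 checkout and may differ slightly in yours:

```
# modules/perception/production/conf/perception/perception.flag
--obs_enable_visualization=true

# modules/perception/production/conf/perception/camera/fusion_camera_detection_component.pb.txt
output_final_obstacles : true
enable_visualization : true
```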

Is this desired behavior? Or how can I visually see obstacles detected from camera sensors?

Thank you so much.

Supporting materials (screenshots, command lines, code/script snippets):

Labels: Perception, Help wanted

Most helpful comment

We added visualization code in fusion component. Please check modules/perception/camera/tools/offline/visualizer.cc

All 14 comments

We would also like to figure this out.

In 2.5/3.0 there was an OpenGL visualizer.

There appears to be a visualizer in the code somewhere that we are trying to use.

https://github.com/ApolloAuto/apollo/issues/6656

We have managed to create our own OpenCV visualizer, though it is still a work in progress. We would like to hear from the Apollo team, but in the meantime we could share our humble bounding-box drawer.

@snuffysasa Thank you so much for your quick reply. I am very interested in getting your bounding box drawer. Where can I get it?

I can likely share it tomorrow.

This is what @snuffysasa was referring to:

In perception/onboard/component/BUILD, make the camera binary target look like this:

cc_binary(
    name = "libperception_component_camera.so",
    linkopts = [
        "-lopencv_core",
        "-lopencv_imgproc",
        "-lopencv_highgui",
        "-lboost_system",
    ],
    linkshared = True,
    linkstatic = False,
    deps = [
        ":perception_component_inner_camera",
        "//modules/perception/camera/common",
    ],
)

Then, in fusion_camera_detection_component.cc, add the following:

```
+#include "opencv2/opencv.hpp"

...

int FusionCameraDetectionComponent::InternalProc(
    const std::shared_ptr<apollo::drivers::Image> &in_message,
    const std::string &camera_name, apollo::common::ErrorCode *error_code,
    SensorFrameMessage *prefused_message,
    apollo::perception::PerceptionObstacles *out_message) {

  ...

  if (!camera_obstacle_pipeline_->Perception(camera_perception_options_,
                                             &camera_frame)) {
    ...
  }

  ...

+  // Wrap the incoming image buffer in a cv::Mat (no copy).
+  uint8_t *data = const_cast<uint8_t *>(
+      reinterpret_cast<const uint8_t *>(in_message->data().data()));
+  cv::Mat cv_img(image_height_, image_width_, CV_8UC3, data,
+                 static_cast<size_t>(0));
+
+  // Draw bounding boxes in green.
+  for (auto obj : camera_frame.tracked_objects) {
+    auto &box = obj->camera_supplement.box;
+    if (box.xmin < box.xmax && box.ymin < box.ymax) {
+      cv::Point pt1(static_cast<int>(box.xmax), static_cast<int>(box.ymax));
+      cv::Point pt2(static_cast<int>(box.xmin), static_cast<int>(box.ymin));
+      cv::rectangle(cv_img, pt1, pt2, cv::Scalar(0, 255, 0), 3);
+    }
+  }
+
+  // Draw lane lines in red, sampled from the cubic image-space curve.
+  int num_of_points = 30;
+  for (auto lane : camera_frame.lane_objects) {
+    double x_start = lane.curve_image_coord.x_start;
+    double x_end = lane.curve_image_coord.x_end;
+    // AERROR << "x start, x end " << x_start << " " << x_end;
+    double a_ = lane.curve_image_coord.a;
+    double b_ = lane.curve_image_coord.b;
+    double c_ = lane.curve_image_coord.c;
+    double d_ = lane.curve_image_coord.d;
+
+    double iter_x = (x_end - x_start) / static_cast<double>(num_of_points);
+    double prev_x = 0.0;
+    double prev_y = 0.0;
+
+    for (int i = 0; i < num_of_points; i++) {
+      double y_current = (a_ * pow(x_start, 3)) + (b_ * pow(x_start, 2)) +
+                         (c_ * x_start) + d_;
+
+      // Draw a segment from the previous sample to the current one.
+      if (i != 0) {
+        // AERROR << "prev x y " << prev_x << " " << prev_y;
+        cv::Point pt1_(static_cast<int>(prev_y), static_cast<int>(prev_x));
+        cv::Point pt2_(static_cast<int>(y_current), static_cast<int>(x_start));
+        cv::line(cv_img, pt1_, pt2_, cv::Scalar(255, 0, 0), 1);
+      }
+
+      // Store the current values before advancing.
+      prev_x = x_start;
+      prev_y = y_current;
+      x_start += iter_x;
+    }
+  }
+
+  cv::imshow("Image", cv_img);
+  cv::waitKey(30);
```

@DevMohamedMI Thank you so much! I will try it.

@snuffysasa @DevMohamedMI It works, I appreciate your help!

Closing this issue as it appears to be resolved. Thanks!

We added visualization code in fusion component. Please check modules/perception/camera/tools/offline/visualizer.cc

@techoe

Thank you.

@DevMohamedMI I updated the BUILD and .cc files as described above, rebuilt with build_gpu, and ran perception_camera.launch, which is just dag_streaming_perception_camera.dag.

I then ran the vanilla record file with cyber_recorder play -f docs/demo_guide/demo_3.5.record --loop but I am not seeing any visualization.

Forgive my ignorance, but I was expecting some kind of popup window based on cv::imshow.

I also started Dreamview but not seeing any changes there from previous.

Any thoughts on this?

@techoe Can you please elaborate on how to use the visualizer.cc code? How do I incorporate this into the record play? What is the expected output?

@techoe my apologies, that was a very silly question! I found the bazel BUILD file and built the offline_obstacle_pipeline binary. A couple of issues still had to be resolved, like two bad symlinks for the data and conf directories, and a hard-coded path with /home/<some-user>/cybertron in the flags file.

For anyone coming here who would like a quick answer, you can run the offline obstacle visualizer tool as follows (it will pull sample images from the testdata directory for processing):

1) Make sure to build the perception module:

$ bash docker/scripts/dev_start.sh
$ bash docker/scripts/dev_into.sh
$ cd /apollo/
$ bash apollo.sh build_gpu

2) Then build the offline obstacle tool:

# from /apollo on container
$ bazel build //modules/perception/camera/tools/offline:offline_obstacle_pipeline

3) Run the offline tool:

$ bazel-bin/modules/perception/camera/tools/offline/offline_obstacle_pipeline

4) You'll see some errors like:

- unable to find sensor_meta file
- failed to load intrinsic file

I forget the specific steps to resolve these, but it's not hard. Just search for sensor_meta.pt, find the path, and make sure the flag file that's being used has the correct path.
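
If it helps, the path fix boiled down to rewriting the stale absolute path in the gflags file. The snippet below is an illustration only: the file and the replacement path are examples, not the exact Apollo layout, so adapt them to what `find` reports in your container.

```shell
# Demo of the fix on a throwaway flag file (the real one lives under
# modules/perception/camera/tools/offline/ in the Apollo tree).
mkdir -p /tmp/offline_demo
echo '--params_dir=/home/someuser/cybertron/params' > /tmp/offline_demo/demo.flag

# Replace the stale /home/<user>/cybertron prefix with an in-container path.
sed -i 's|/home/[^/]*/cybertron|/apollo/modules/perception/production/data/perception/camera|' \
    /tmp/offline_demo/demo.flag

cat /tmp/offline_demo/demo.flag
# → --params_dir=/apollo/modules/perception/production/data/perception/camera/params
```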

@techoe I am getting this error when I follow your guide:
Any Idea what I can do?

E0806 16:15:17.537537 20013 transform_server.cc:49] [] load velodyne128 extrisic file error, YAML exception:bad file
F0806 16:15:17.537578 20013 offline_obstacle_pipeline.cc:165] Check failed: transform_server.Init(camera_names, FLAGS_params_dir)


I have the same problem. Have you solved it?

