I set `obs_enable_visualization` in perception/production/conf/perception, and `output_final_obstacles` and `enable_visualization` in perception/production/conf/perception/camera, to true. Then I ran `cyber_launch start /apollo/modules/perception/production/launch/perception_camera.launch` and can see the /perception/obstacles channel with detailed info, but no visualization appears. Is this the desired behavior? Or how can I visually see obstacles detected from camera sensors?
Thank you so much.
We would also like to figure this out.
In 2.5/3.0 there was an openGL visualizer.
There appears to be a visualizer in the code somewhere that we are trying to use.
https://github.com/ApolloAuto/apollo/issues/6656
We have managed to create our own OpenCV visualizer, however it is a work in progress. Would like to hear from the Apollo team. But we could perhaps share our humble bounding box drawer.
@snuffysasa Thank you so much for your quick reply. I am very interested in getting your bounding box drawer. Where can I get it?
I can likely share it tomorrow.
This is what @snuffysasa was referring to
In perception/onboard/component/BUILD, make the camera binary target look like this:
```
cc_binary(
    name = "libperception_component_camera.so",
    linkopts = [
        "-lopencv_core",
        "-lopencv_imgproc",
        "-lopencv_highgui",
        "-lboost_system",
    ],
    linkshared = True,
    linkstatic = False,
    deps = [
        ":perception_component_inner_camera",
        "//modules/perception/camera/common",
    ],
)
```
then in fusion_camera_detection_component.cc add the following
```
+#include <opencv2/opencv.hpp>  // NOTE: include reconstructed; angle-bracket text was lost in the post
...
int FusionCameraDetectionComponent::InternalProc(
    const std::shared_ptr<apollo::drivers::Image> &in_message,
    const std::string &camera_name, apollo::common::ErrorCode *error_code,
    SensorFrameMessage *prefused_message,
    apollo::perception::PerceptionObstacles *out_message) {
  ...
  if (!camera_obstacle_pipeline_->Perception(camera_perception_options_,
                                             &camera_frame)) {
    ...
  }
  ...
+  // Wrap the incoming image buffer in a cv::Mat (no copy).
+  // NOTE: the expressions after const_cast/static_cast were lost in the
+  // original post; this reconstruction is a guess, verify it against your
+  // Apollo version.
+  uint8_t *data = const_cast<uint8_t *>(
+      reinterpret_cast<const uint8_t *>(in_message->data().data()));
+  cv::Mat cv_img(image_height_, image_width_, CV_8UC3, data,
+                 static_cast<size_t>(in_message->step()));
+  // drawing bounding boxes - green
+  for (const auto &obj : camera_frame.tracked_objects) {
+    // box field name reconstructed from Apollo's base::Object; verify it.
+    const auto &box = obj->camera_supplement.box;
+    cv::Point pt1(static_cast<int>(box.xmin), static_cast<int>(box.ymin));
+    cv::Point pt2(static_cast<int>(box.xmax), static_cast<int>(box.ymax));
+    cv::rectangle(cv_img, pt1, pt2, cv::Scalar(0, 255, 0), 3);
+  }
+  // drawing lane lines - red
+  int num_of_points = 30;
+  for (const auto &lane : camera_frame.lane_objects) {
+    // cubic curve y = a*x^3 + b*x^2 + c*x + d in image coordinates
+    double a_ = lane.curve_image_coord.a;
+    double b_ = lane.curve_image_coord.b;
+    double c_ = lane.curve_image_coord.c;
+    double d_ = lane.curve_image_coord.d;
+    double x_start = lane.curve_image_coord.x_start;
+    double x_end = lane.curve_image_coord.x_end;
+    double iter_x = (x_end - x_start) / static_cast<double>(num_of_points);
+    double prev_x = x_start;
+    double prev_y = (a_ * pow(x_start, 3)) + (b_ * pow(x_start, 2)) +
+                    (c_ * x_start) + d_;
+    for (int i = 1; i <= num_of_points; ++i) {
+      double x_current = x_start + i * iter_x;
+      double y_current = (a_ * pow(x_current, 3)) + (b_ * pow(x_current, 2)) +
+                         (c_ * x_current) + d_;
+      // draw line segment, then update values and store the old ones
+      cv::line(cv_img,
+               cv::Point(static_cast<int>(prev_x), static_cast<int>(prev_y)),
+               cv::Point(static_cast<int>(x_current), static_cast<int>(y_current)),
+               cv::Scalar(0, 0, 255), 2);
+      prev_x = x_current;
+      prev_y = y_current;
+    }
+  }
+  cv::imshow("Image", cv_img);
+  cv::waitKey(30);
```
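The lane-drawing loop above just samples the cubic image-space curve y = a·x³ + b·x² + c·x + d at evenly spaced x values between x_start and x_end, then connects consecutive samples with line segments. A minimal standalone sketch of that sampling (Python for brevity; the coefficient values are made up):

```python
def sample_lane_points(a, b, c, d, x_start, x_end, num_of_points=30):
    """Evaluate y = a*x^3 + b*x^2 + c*x + d at evenly spaced x values."""
    step = (x_end - x_start) / num_of_points
    points = []
    for i in range(num_of_points + 1):
        x = x_start + i * step
        y = a * x**3 + b * x**2 + c * x + d
        points.append((x, y))
    return points

# Hypothetical coefficients: a gently sloping lane line in image coordinates.
pts = sample_lane_points(0.0, 0.0, 0.5, 100.0, 0.0, 600.0)
print(pts[0], pts[-1])  # → (0.0, 100.0) (600.0, 400.0)
```

Each consecutive pair of points would then be connected with cv::line, as in the snippet above.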
@DevMohamedMI Thank you so much! I will try it.
@snuffysasa @DevMohamedMI It works, I appreciate your help!
Closing this issue as it appears to be resolved. Thanks!
We added visualization code in the fusion component. Please check modules/perception/camera/tools/offline/visualizer.cc
@techoe
Thank you.
@DevMohamedMI I updated the BUILD and cc files as described above, rebuilt with build_gpu, and ran perception_camera.launch, which just runs dag_streaming_perception_camera.dag.
I then played the vanilla record file with cyber_recorder play -f docs/demo_guide/demo_3.5.record --loop, but I am not seeing any visualization.
Forgive my ignorance, but I was expecting some kind of popup window from cv::imshow.
I also started Dreamview but am not seeing any changes there from before.
Any thoughts on this?
@techoe Can you please elaborate on how to use the visualizer.cc code? How do I incorporate this into the record play? What is the expected output?
@techoe my apologies, that was a very silly question! I found the Bazel BUILD file and built the offline_obstacle_pipeline binary. A couple of issues still had to be resolved, like two bad symlinks for the data and conf directories, and a hard-coded path with /home/<some-user>/cybertron in the flags file.
For anyone coming here who would like a quick answer, you can run the offline obstacle visualizer tool as follows (it will pull sample images from the testdata directory for processing):
1) make sure to build the perception module with
$ bash docker/scripts/dev_start.sh
$ bash docker/scripts/dev_into.sh
$ cd /apollo/
$ bash apollo.sh build_gpu
2) then build the offline obstacle tool with
# from /apollo on container
$ bazel build //modules/perception/camera/tools/offline:offline_obstacle_pipeline
3) run the offline tool
$ bazel-bin/modules/perception/camera/tools/offline/offline_obstacle_pipeline
4) You'll see some errors like:
- unable to find sensor_meta file
- failed to load intrinsic file
I forget the specific steps to resolve these, but it's not hard. Just search for sensor_meta.pt, find the path, and make sure the flag file being used has the correct path.
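The file search above can also be scripted; here is a small sketch (the /apollo path in the usage comment is an assumption for the dev container, and the helper name is our own):

```python
from pathlib import Path

def find_files(root, filename):
    """Return all paths under `root` whose basename equals `filename`, sorted."""
    return sorted(Path(root).rglob(filename))

# Example usage inside the container:
# for p in find_files("/apollo/modules/perception", "sensor_meta.pt"):
#     print(p)
```

Once the file is found, update the path in whichever .flag file the offline tool loads.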
@techoe I am getting this error when I follow your guide:
Any Idea what I can do?
E0806 16:15:17.537537 20013 transform_server.cc:49] [] load velodyne128 extrisic file error, YAML exception:bad file
F0806 16:15:17.537578 20013 offline_obstacle_pipeline.cc:165] Check failed: transform_server.Init(camera_names, FLAGS_params_dir)
I have the same problem. Have you solved it?