Apollo: Run Offline Perception Visualizer with VLP-16?

Created on 5 Feb 2018  ·  12 comments  ·  Source: ApolloAuto/apollo

Is it possible to run the Offline Perception Visualizer with the VLP-16 lidar?

Perception Question


All 12 comments

I have done a test with a VLP-16 lidar; it can support running the offline Perception Visualizer. You can refer to https://github.com/ApolloAuto/apollo/blob/master/docs/howto/how_to_run_offline_perception_visualizer.md

Did you modify any of the steps in the "How to Run Offline Perception Visualizer" guide?

I tried following those steps but ran into a lot of problems/errors

I don't use the ROS bag data. If you don't have a GPS device, you can follow these steps:
1. Change the query_time to ros::Time::now() in compensator.cpp:
bool Compensator::query_pose_affine_from_tf2(const double& timestamp,
                                             Eigen::Affine3d& pose) {
  ros::Time query_time(timestamp);
  std::string err_string;
  // Changed: query the latest transform with ros::Time::now() instead of query_time.
  if (!tf2_buffer_.canTransform("world", child_frame_id_, ros::Time::now(),
                                ros::Duration(tf_timeout_), &err_string)) {
    ROS_WARN_STREAM("Can not find transform. "
                    << std::fixed << timestamp
                    << " Error info: " << err_string);
    return false;
  }

  geometry_msgs::TransformStamped stamped_transform;

  try {
    // Changed: look up the transform at ros::Time::now() as well.
    stamped_transform =
        tf2_buffer_.lookupTransform("world", child_frame_id_, ros::Time::now());
  } catch (tf2::TransformException& ex) {
    ROS_ERROR_STREAM(ex.what());
    return false;
  }

  tf::transformMsgToEigen(stamped_transform.transform, pose);
  // ROS_DEBUG_STREAM("pose matrix : " << pose);
  return true;
}

2. Publish the static TF from novatel to the world coordinate system:
    rosrun tf static_transform_publisher 0 0 0 0 0 0 novatel world 100
3. roslaunch velodyne export_pcd.launch to generate the pcd files in /apollo/data/pcd

4. cd /apollo/modules/perception/tool && python gen_pose_file.py /apollo/data/pcd to generate the pose.txt and stamp.txt files in /apollo/data/pcd

5. cd /apollo
bazel build -c opt --cxxopt=-DUSE_CAFFE_GPU //modules/perception/tool/offline_visualizer_tool:offline_lidar_visualizer_tool

6. /apollo/bazel-bin/modules/perception/tool/offline_visualizer_tool/offline_lidar_visualizer_tool

Thanks for the steps, I tried them but I'm a bit confused about some of them from step 3 onwards:

  • For step 3: there was something about removing a legacy stamp in the pcd folder?
  • For step 4: are you just generating a pose.txt file and stamp.txt file with this command?
  • For step 5: bazel build -c opt --cxxopt=-DUSE_CAFFE_GPU works fine, but what does //modules/perception/tool/offline_visualizer_tool:offline_lidar_visualizer_tool mean? Is this just showing where the lidar visualizer tool is?
  • For step 6: does this command actually start running the visualizer tool? Just with a ./ and no bash or anything?

I am sorry I did not express it clearly. Let me give the details of the steps:
Step 1: Change the query_time to ros::Time::now() in compensator.cpp /// refer to the first comment

Step 2: $ rosrun tf static_transform_publisher 0 0 0 0 0 0 novatel world 100 /// publish the static TF (a code sketch of what this command does follows the steps below)

Step 3:
(1) $ cd /apollo/data/pcd
(2) $ rm * /// delete all files in the pcd folder; this takes care of removing the legacy stamp file in the pcd folder
(3) $ roslaunch velodyne export_pcd.launch /// generates the pcd files, stamp.txt and pose.txt in the pcd folder

Step 4:
(1) $ cd /apollo/modules/perception/tool
(2) $ python gen_pose_file.py /apollo/data/pcd /// generates the pose files from pose.txt
Note: Steps 3 and 4 cannot run at the same time; stop Step 3 before starting Step 4.

Step 5:
(1) $ cd /apollo
(2) There are two ways to build the offline perception visualizer (CPU or GPU):
$ bazel build -c opt //modules/perception/tool/offline_visualizer_tool:offline_lidar_visualizer_tool /// CPU build
$ bazel build -c opt --cxxopt=-DUSE_CAFFE_GPU //modules/perception/tool/offline_visualizer_tool:offline_lidar_visualizer_tool /// GPU build; this is one command, not two

Step 6: Open the /apollo/modules/perception/tool/offline_visualizer_tool/conf/offline_lidar_perception_test.flag file and make these changes:
(1) --enable_hdmap_input=false
(2) --onboard_roi_filter=DummyROIFilter
(3) --onboard_tracker=DummyTracker

Step 7: Run the visualizer with the offline perception simulation:
$ /apollo/bazel-bin/modules/perception/tool/offline_visualizer_tool/offline_lidar_visualizer_tool
/// use this full-path command to run the visualizer with offline perception, not "./"
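
For reference, the rosrun command in Step 2 just publishes a fixed identity transform from the novatel frame to the world frame. A rough equivalent in code, assuming a standard ROS/tf2 setup inside the Apollo container (the node and file names are placeholders, not part of Apollo), would be:

// static_novatel_world.cpp -- illustrative sketch of what
// `rosrun tf static_transform_publisher 0 0 0 0 0 0 novatel world 100` does.
#include <ros/ros.h>
#include <tf2_ros/static_transform_broadcaster.h>
#include <geometry_msgs/TransformStamped.h>

int main(int argc, char** argv) {
  ros::init(argc, argv, "static_novatel_world");
  ros::NodeHandle nh;
  tf2_ros::StaticTransformBroadcaster broadcaster;

  geometry_msgs::TransformStamped t;
  t.header.stamp = ros::Time::now();
  t.header.frame_id = "novatel";  // parent frame
  t.child_frame_id = "world";     // child frame
  // Zero translation and identity rotation, matching the six zeros in the command.
  t.transform.rotation.w = 1.0;

  broadcaster.sendTransform(t);
  ros::spin();
  return 0;
}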

Thank you for the clarification!

I had to use roslaunch velodyne export_pcd_offline.launch to generate the pose files, and I had to use

bazel build -c opt //modules/perception/tool/offline_visualizer_tool:offline_lidar_visualizer_tool

in order to run

/apollo/bazel-bin/modules/perception/tool/offline_visualizer_tool/offline_lidar_visualizer_tool

However after running that I get:

I0213 14:54:43.275192 104 layer_factory.hpp:79] Creating layer input
I0213 14:54:43.275722 104 net.cpp:94] Creating Layer input
I0213 14:54:43.275754 104 net.cpp:402] input -> data
F0213 14:54:43.276754 104 syncedmem.hpp:18] Check failed: error == cudaSuccess (35 vs. 0) CUDA driver version is insufficient for CUDA runtime version
*** Check failure stack trace: ***
Aborted (core dumped)

I think this may be because I don't have the NVIDIA driver installed properly?
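
A quick way to confirm that is to print the two versions that the failed check in syncedmem.hpp compares. This is only a sketch, assuming the CUDA toolkit inside the container can compile a small test program (the file name check_cuda_version.cu is just a placeholder):

// check_cuda_version.cu -- small standalone check, not part of Apollo.
// Build with: nvcc check_cuda_version.cu -o check_cuda_version
#include <cuda_runtime.h>
#include <cstdio>

int main() {
  int driver_version = 0;
  int runtime_version = 0;
  cudaDriverGetVersion(&driver_version);    // version supported by the installed NVIDIA driver
  cudaRuntimeGetVersion(&runtime_version);  // version of the CUDA runtime linked into the binary
  std::printf("driver: %d, runtime: %d\n", driver_version, runtime_version);
  // The "driver version is insufficient" abort means driver < runtime here.
  return driver_version < runtime_version ? 1 : 0;
}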

I followed the instructions here (How to Run Perception Module on Your Computer) to install the NVIDIA drivers and think I did it correctly, since I was able to get the "Launched module perception" message after running ./scripts/perception.sh start

However I'm not sure if step 6 is correct:

After I commit a docker image with the NVIDIA driver setup (in step 5) I try to start the image using ./docker/scripts/dev_start.sh NEW_DOCKER_IMAGE_TAG but I get a warning: Unknown option: [my new docker image tag]

I looked into the dev_start.sh file and it looks like the only valid options are

  • -C (pull docker image from China mirror)
  • -h (display help)
  • -t (specify which version of a docker image to pull)
  • -l (use local docker image)

I tried -t with my new tag, but it looks like I would need to push my local docker image to the apolloauto/apollo repository in docker first, and I don't have permission to do that.

I tried -l too, but it looks like using this option uses a preset local docker image, not necessarily the one that I want to use with all the NVIDIA driver/CUDA installations I need.

Any advice on what to do?

You can run this command ($ docker images) to see if there is a NEW_DOCKER_IMAGE_TAG label. If there isn't, the commit failed; commit again. For example:
(1) $ docker ps /// obtain the CONTAINER_ID
(2) $ docker commit CONTAINER_ID apolloauto/apollo:perception_nvidia_driver /// commit with the perception_nvidia_driver tag
(3) $ docker images /// check whether perception_nvidia_driver is listed; if it is, the commit succeeded

@vivianistan - It should be

./docker/scripts/dev_start.sh -t NEW_DOCKER_IMAGE_TAG -l

@luo2880399 Hi luo, I've noticed that in the script export_pcd_offline.launch, it says

<!-- Play a bag only contains topics '/apollo/sensor/velodyne64/VelodyneScanUnified' and '/tf' to make this launch file work -->

so, how do you deal with the topic data /apollo/sensor/velodyne64/VelodyneScanUnified?
I intend to try the Apollo perception module with 3D lidar object detection. I downloaded the sample lidar data from the Baidu Apollo platform. The downloaded data is in .bin files, so I load the data with PCL and publish it on the topic /apollo/sensor/velodyne64/compensator/PointCloud2, which is one of the input topics for perception.
I also made some changes to perception.conf:

--enable_hdmap_input=false
--onboard_roi_filter=DummyROIFilter
--onboard_tracker=DummyTracker

Then I start perception, but when I echo the topic /apollo/perception/obstacles, there's an error about the ROS publisher/subscriber connection.

Do you have any ideas on how to run the Apollo perception module with only lidar data? Thank you!
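
For what it's worth, here is a minimal sketch of the kind of publisher described above, assuming the .bin files hold flat float32 x/y/z/intensity records (KITTI-style); the file path, frame id, and publish rate below are placeholders, not anything defined by Apollo:

// bin_cloud_publisher.cpp -- illustrative sketch only, not Apollo code.
#include <fstream>
#include <ros/ros.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>
#include <sensor_msgs/PointCloud2.h>

int main(int argc, char** argv) {
  ros::init(argc, argv, "bin_cloud_publisher");
  ros::NodeHandle nh;
  ros::Publisher pub = nh.advertise<sensor_msgs::PointCloud2>(
      "/apollo/sensor/velodyne64/compensator/PointCloud2", 1);

  // Read one frame; each record is assumed to be four float32 values: x, y, z, intensity.
  std::ifstream in("/apollo/data/sample/000000.bin", std::ios::binary);
  pcl::PointCloud<pcl::PointXYZI> cloud;
  float buf[4];
  while (in.read(reinterpret_cast<char*>(buf), sizeof(buf))) {
    pcl::PointXYZI p;
    p.x = buf[0]; p.y = buf[1]; p.z = buf[2]; p.intensity = buf[3];
    cloud.push_back(p);
  }

  sensor_msgs::PointCloud2 msg;
  pcl::toROSMsg(cloud, msg);
  msg.header.frame_id = "velodyne64";  // placeholder frame id

  ros::Rate rate(10);  // republish the same frame at 10 Hz
  while (ros::ok()) {
    msg.header.stamp = ros::Time::now();
    pub.publish(msg);
    rate.sleep();
  }
  return 0;
}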

@ytzhao Perception fails for you because you have no TF data. If you cannot provide TF data, you can modify /apollo/modules/perception/obstacle/onboard/lidar_process_subnode.cc like this:
1. Comment out this code in the function void LidarProcessSubnode::OnPointCloud():
/* if (!GetVelodyneTrans(kTimeStamp, velodyne_trans.get())) {
  AERROR << "failed to get trans at timestamp: " << kTimeStamp;
  error_code_ = common::PERCEPTION_ERROR_TF;
  return false;
} */
2. Add this in its place:
(*velodyne_trans) << 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1;
3. If perception then runs successfully, run (bash scripts/diagnostics.sh) to look at the topic data.
4. You can also check the log in /apollo/data/log/perception.info.
Note: you must install the NVIDIA driver in the Apollo docker container first.
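
Putting the two edits together, the relevant part of OnPointCloud() would look roughly like the sketch below; this assumes velodyne_trans is the Eigen::Matrix4d shared pointer already declared in that function, and it simply replaces the TF lookup with a fixed identity pose:

// Sketch of the edit inside LidarProcessSubnode::OnPointCloud()
// (lidar_process_subnode.cc); only for running without TF/GPS data.

// 1. The original TF lookup is commented out:
// if (!GetVelodyneTrans(kTimeStamp, velodyne_trans.get())) {
//   AERROR << "failed to get trans at timestamp: " << kTimeStamp;
//   error_code_ = common::PERCEPTION_ERROR_TF;
//   return false;
// }

// 2. A fixed identity lidar-to-world pose is used instead
//    (Eigen comma initializer, row by row):
(*velodyne_trans) << 1, 0, 0, 0,
                     0, 1, 0, 0,
                     0, 0, 1, 0,
                     0, 0, 0, 1;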

Closing as the issue seems to be resolved.
