Greetings,
I have some comments about the current Lidar model. These were discussed already in #1018 and #835.
In order to collect a full 360-degree sweep in one frame, the suggestion was to run the server with a fixed time-step.
However, basing the lidar functionality on the fixed time-step of the physics engine is not fully reliable, as there are situations where the graphics rendering engine's frame rate drops. The link below also discusses the potential problems that can arise from fixing the time-step.
https://forums.unrealengine.com/community/community-content-tools-and-tutorials/87505-using-a-fixed-physics-timestep-in-unreal-engine-free-the-physics-approach
Further, RayCastLidar.cpp reveals that the nested for loop iterates only through the channels, not through the horizontal sweep.
Replacing the following parts of the loop results in a full 360-degree sweep, making the lidar independent of the time-step.
````cpp
void ARayCastLidar::ReadPoints(const float DeltaTime)
{
  const uint32 ChannelCount = Description.Channels;
  const uint32 PointsToScanWithOneLaser =
      FMath::RoundHalfFromZero(
          Description.PointsPerSecond * DeltaTime / float(ChannelCount));

  if (PointsToScanWithOneLaser == 0u)
  {
    UE_LOG(
        LogCarla,
        Warning,
        TEXT("%s: no points requested this frame, try increasing the number of points per second."),
        *GetName());
    return;
  }

  check(ChannelCount == LaserAngles.Num());

  const float CurrentHorizontalAngle = LidarMeasurement.GetHorizontalAngle();
  const float AngleDistanceOfTick = Description.RotationFrequency * 360.0f * DeltaTime;
  const float AngleDistanceOfLaserMeasure = AngleDistanceOfTick / PointsToScanWithOneLaser;

  LidarMeasurement.Reset(ChannelCount * PointsToScanWithOneLaser);

  for (auto Channel = 0u; Channel < ChannelCount; ++Channel)
  {
    // Sweep the full circle at a fixed 1-degree step instead of
    // AngleDistanceOfLaserMeasure * i, so one frame covers 360 degrees.
    for (auto i = 0u; i < 360; ++i)
    {
      FVector Point;
      const float Angle = CurrentHorizontalAngle + i;
      if (ShootLaser(Channel, Angle, Point))
      {
        // actor_string carries the per-point semantic label
        // (set elsewhere in this modified version).
        LidarMeasurement.WritePoint(Channel, Point, actor_string, Angle);
      }
    }
  }

  const float HorizontalAngle = std::fmod(CurrentHorizontalAngle + AngleDistanceOfTick, 360.0f);
  LidarMeasurement.SetHorizontalAngle(HorizontalAngle);
}
````
Now the loop makes a full 360-degree sweep, and a single frame captures the complete 360-degree scene.
I have been playing around with this new model and have not found any bugs yet. I would like to know if someone has already tried this and stumbled upon a bug elsewhere because of it.
Thank You.
Have you checked the fov result of your output?
@user025, if you are referring to the horizontal FOV, it's a complete 360-degree sweep.
@vishal-prgmr I'm referring to the vertical FOV, but I soon found it's unrelated.
But I do have some questions.
Since you set the second loop to a fixed range, the total number of laser points becomes a fixed number, channels * 360, rather than points_per_second / rotation_frequency.
And the horizontal angular resolution is fixed at 1 degree instead of AngleDistanceOfLaserMeasure, which becomes finer as points_per_second grows.
The result of this method might be too sparse to simulate a typical lidar.
Yes, the cloud is now sparse, and the resolution is also fixed at 1 degree.
Replacing DeltaTime in the calculations with 1 / Description.RotationFrequency might be a better solution.
Will do this and get back.
@vishal-prgmr I just want to add that getting a full 360° (or even partial) scan in one time step is not really a good model for a scanning LiDAR.
Why: in a real LiDAR, the vehicle and world objects move during the scan over the scene. Especially during turns and at high speeds, this leads to motion artifacts in the point cloud (straight objects can look bent). Keep in mind that getting a full frame at once, in one timestep, neglects these effects. I would rather advocate a much higher sampling rate for the LiDAR, in terms of how often partial measurements are performed, and only scan smaller parts of the scene (defined by angle/second (frequency)).
Maybe this could help: #266 - still unclear if possible.
One LiDAR frame is accumulated over time (unlike a camera, where you get one whole image per frame at one timestep). Also, FPS can vary, so displaying LiDAR data over camera frames naturally leads to the effect that only part of the 360° has been scanned for a given image (partly addressing #1018 and #835).
@weasei, I agree with every point you raised. As you mentioned, scanning small sectors at a high sampling rate and accumulating the scans over one complete rotation would render a 360-degree sweep. I will also take the motion of the ego car into account.
@weasei, my use case is to generate a pointwise-labelled point cloud. That is why I am interested in obtaining a full sweep at every timestep.
@user025, the point cloud becomes even sparser after replacing DeltaTime. However, the resolution can be changed by changing the increment step of the loop.
A real Velodyne style lidar will sweep at a rate of between 5-20 sweeps a second, which is definitely slow enough to see significant changes due to motion. You can compensate for the ego car's motion but not for motion of other actors in the scene. So in a real system you will simply not get a full 360 degree point cloud that represents one point in time. You would also get a slower frame rate than you would get from a video camera.
@vishal-prgmr if you replace DeltaTime in CARLA's implementation, the resolution can be increased by setting a larger points_per_second attribute in the client. It is read as Description.PointsPerSecond on the server side.
@user025, thank you.
Like I mentioned above, for my use case, increasing the resolution by changing the loop increment produces a Velodyne-like result. I will close this issue now.