Thank you for publishing this great driving simulator code.
I am trying to map depth points (u, v, depth) from pixel coordinates to world coordinates (x, y, z).
To do this, the camera intrinsic parameters are necessary. I found that CameraFOV (horizontal field of view) and ImageSize (in CarlaSettings.ini) are the parameters for calculating the camera intrinsic matrix. I assumed the following:
Focus_length = ImageSizeX / (2 * tan(CameraFOV * π / 360))
Center_X = ImageSizeX / 2
Center_Y = ImageSizeY / 2
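For reference, here is a minimal sketch (assuming NumPy and the standard pinhole model; `intrinsic_matrix` is a name I made up, and the 800×600 / 90° values are just illustrative) of building the intrinsic matrix from those formulas:

```python
import numpy as np

def intrinsic_matrix(image_w, image_h, fov_deg):
    """Pinhole intrinsic matrix from image size and horizontal FOV (degrees)."""
    # Focal length in pixels: image_w / (2 * tan(FOV / 2))
    f = image_w / (2.0 * np.tan(np.radians(fov_deg) / 2.0))
    cx = image_w / 2.0  # principal point, x
    cy = image_h / 2.0  # principal point, y
    return np.array([[f,   0.0, cx],
                     [0.0, f,   cy],
                     [0.0, 0.0, 1.0]])

K = intrinsic_matrix(800, 600, 90.0)  # e.g. an 800x600 image with 90 deg FOV
```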
But I found this intrinsic matrix is not accurate enough, because the 3D points (transformed from the depth points (u, v, depth) by this matrix) show some distortion.
Could you tell me how to get camera intrinsic matrix?
(I now think the per-pixel focal lengths (fx, fy) or distortion parameters are necessary for this.)
[P.S.]
To confirm whether my implementation is correct, please tell me the definition of the depth value in each image pixel. I think this depth is the distance from the camera coordinate origin to each point
(not simply the z value in camera coordinates).
Hello @syinari0123,
Sorry for the late response.
Yes, the depth is perpendicular to the projection plane.
We've been doing some experiments and have obtained results without noticeable distortion.
Your code looks correct to me, and the intrinsic matrix should be the following one:
K = [[f, 0, Cu],
[0, f, Cv],
[0, 0, 1 ]]
Where Cu and Cv represent the center point of the image.
Given a 2D point p2d = [u, v, 1], your world point position P = [X, Y, Z] will be:
P = ( inv(K) * p2d ) * depth
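That back-projection could be sketched like this (NumPy assumed; `pixel_to_3d` is a hypothetical helper, and the K values match an 800×600 image with 90° FOV):

```python
import numpy as np

def pixel_to_3d(u, v, depth, K):
    """Back-project pixel (u, v) with planar depth into 3D camera-frame coordinates."""
    p2d = np.array([u, v, 1.0])
    return depth * (np.linalg.inv(K) @ p2d)  # P = (inv(K) * p2d) * depth

K = np.array([[400.0,   0.0, 400.0],
              [  0.0, 400.0, 300.0],
              [  0.0,   0.0,   1.0]])

# A pixel at the principal point projects straight along the optical axis.
P = pixel_to_3d(400.0, 300.0, 10.0, K)  # -> [0, 0, 10]
```

Note this gives the point in the camera frame; getting world coordinates additionally requires the camera's extrinsic pose (position and rotation).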
We're working on an example integrating the point cloud with the Python client.
I hope we are going to release it soon :)
Due to #58 we can't get the correct camera world rotation, but it will be fixed in the next release.
Thank you for replying.
I modified my code as you suggested, and I was able to get the correct result! Thank you! :)
I had thought the depth was the ray length from the camera origin to each point (like a LIDAR sensor's value, i.e. "r" in a polar coordinate system), which is what caused the distortion in my code.
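To illustrate the distinction between the two conventions (a hypothetical sketch, not CARLA code): a LIDAR-style ray length r can be converted to the planar depth z used above by scaling along the pixel's viewing ray.

```python
import numpy as np

def ray_to_planar_depth(u, v, r, K):
    """Convert ray length r (camera origin to point) into planar depth z."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray with z-component 1
    # P = z * ray and ||P|| = r, so z = r / ||ray||
    return r / np.linalg.norm(ray)

K = np.array([[400.0,   0.0, 400.0],
              [  0.0, 400.0, 300.0],
              [  0.0,   0.0,   1.0]])

# On the optical axis the two conventions agree: z == r.
z = ray_to_planar_depth(400.0, 300.0, 10.0, K)  # -> 10.0
```

Off the optical axis, z is strictly smaller than r, which is exactly the distortion described above if the two are confused.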
Glad it worked! :)