I am trying to use openMVG_main_ComputeStructureFromKnownPoses
to compute structure from a calibrated camera mounted on a rotating beam 18 cm long. It takes 5 pictures evenly spaced over a 60 deg sweep, with the camera always oriented normal to the circular arc through which it rotates. I define the axis of rotation as the origin of this system, and a Python script calculates the Cartesian coordinates of the camera center and the rotation matrix of the camera at each frame. Since I only rotate the camera about the z-axis I've defined, the rotation matrix I use is:
Rot_z = [[cos(θ)  -sin(θ)  0]
         [sin(θ)   cos(θ)  0]
         [   0        0    1]]
Here is the pose data calculated for the motion I described at each of the 5 frames:
----- FRAME: 0 -----
COORDS:
[ 9. 15.58846 0. ]
ROT_MTX:
[[ 0.5 -0.86603 0. ]
[ 0.86603 0.5 0. ]
[ 0. 0. 1. ]]
----- FRAME: 1 -----
COORDS:
[ 4.65874 17.38666 0. ]
ROT_MTX:
[[ 0.25882 -0.96593 0. ]
[ 0.96593 0.25882 0. ]
[ 0. 0. 1. ]]
----- FRAME: 2 -----
COORDS:
[ 0. 18. 0.]
ROT_MTX:
[[ 0. -1. 0.]
[ 1. 0. 0.]
[ 0. 0. 1.]]
----- FRAME: 3 -----
COORDS:
[ -4.65874 17.38666 0. ]
ROT_MTX:
[[-0.25882 -0.96593 0. ]
[ 0.96593 -0.25882 0. ]
[ 0. 0. 1. ]]
----- FRAME: 4 -----
COORDS:
[ -9. 15.58846 0. ]
ROT_MTX:
[[-0.5 -0.86603 0. ]
[ 0.86603 -0.5 0. ]
[ 0. 0. 1. ]]
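The pose table above can be reproduced with a short NumPy sketch (assumptions from the description: an 18 cm beam, 5 frames, and a 60 deg sweep running from 60° to 120° so that it is centered on the +y axis, which matches frame 2 sitting at [0, 18, 0]):

```python
import numpy as np

BEAM_LENGTH = 18.0                    # cm, radius of the camera's circular path
N_FRAMES = 5
SWEEP_START, SWEEP_END = 60.0, 120.0  # degrees; 60 deg sweep centered on +y

for frame, theta_deg in enumerate(np.linspace(SWEEP_START, SWEEP_END, N_FRAMES)):
    theta = np.radians(theta_deg)
    c, s = np.cos(theta), np.sin(theta)
    # Camera center on the arc; the rotation axis is the world origin
    coords = BEAM_LENGTH * np.array([c, s, 0.0])
    # Rotation about the z-axis only
    rot_z = np.array([[c,  -s,  0.0],
                      [s,   c,  0.0],
                      [0.0, 0.0, 1.0]])
    print(f"----- FRAME: {frame} -----")
    print("COORDS:")
    print(np.round(coords, 5))
    print("ROT_MTX:")
    print(np.round(rot_z, 5))
```

Running this prints the same five frames listed above (frame 0 at [9, 15.58846, 0], frame 2 at [0, 18, 0], and so on).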
After I generate an sfm_data.json with the proper intrinsics, I edit the generated file by hand to add the extrinsics for each view. Here's how I formatted the data in the json file:
When I pass this json to openMVG_main_ComputeStructureFromKnownPoses,
I get no errors but also no tracks and no landmarks; the resulting robust.ply contains no points. When I pass the same json to openMVG_main_GlobalSfM,
I get a "rotation averaging" failure.
I think this issue arises from me entering my extrinsic parameters incorrectly. I could not find any specification in the docs for how the extrinsics should be expressed in the sfm_data.json
file. Could the developers clarify this? As you can see, I entered the columns of the rotation matrix in each entry of "rotation"; is that correct?
Thank you for the support!
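One likely source of the mismatch: openMVG's Pose3 stores the rotation as a world-to-camera matrix alongside the camera center, so a point maps into the camera frame as x_cam = R · (X − C). If the matrix you computed describes the camera's orientation *in the world* (camera-to-world), it needs to be transposed before being written into sfm_data.json. A sketch of that convention check, reusing the hypothetical frame-0 pose from the rig description:

```python
import numpy as np

# Hypothetical frame-0 pose (60 deg position on the 18 cm arc).
theta = np.radians(60.0)
c, s = np.cos(theta), np.sin(theta)
R_cam_to_world = np.array([[c,  -s,  0.0],
                           [s,   c,  0.0],
                           [0.0, 0.0, 1.0]])
C = 18.0 * np.array([c, s, 0.0])   # camera center in world coordinates

# Assumption: openMVG expects the world-to-camera rotation in the json,
# which is the transpose (inverse) of a camera-to-world orientation.
R_world_to_cam = R_cam_to_world.T

def to_camera_frame(X):
    """Map a world point into the camera frame: x_cam = R * (X - C)."""
    return R_world_to_cam @ (np.asarray(X) - C)

# Sanity check: the camera center must land on the camera-frame origin.
print(np.round(to_camera_frame(C), 10))   # -> [0. 0. 0.]
```

Exporting the frustums (as suggested below with main_ExportCameraFrustums) is the quickest way to confirm which convention your file actually encodes.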
main_ExportCameraFrustums
@pmoulon thank you for the tip! Exporting the camera frustums revealed that my extrinsics aren't representative of the motion of my camera. I'll experiment with the rotation matrices until they are correct and report back to this thread tomorrow.
I was able to add my rotation matrices to the json file correctly; I now see the proper motion when I export the frustums. I still have the no-tracks issue, but I'll open a new thread for that separate problem.
Hello, I want to use the intrinsic and pose parameters to project 3D points to 2D points. I use the standard formula, but I get a wrong answer; can you help me? In addition, the focal length in the json file is not equal to the given focal length.
You can use the OpenMVG data structures.
You can use Project and Residual:
https://github.com/openMVG/openMVG/blob/master/src/openMVG/cameras/Camera_Intrinsics.hpp#L84
https://github.com/openMVG/openMVG/blob/master/src/openMVG/cameras/Camera_Intrinsics.hpp#L104
Vec2 projection = intrinsic->Project(X);
double residual = intrinsic->Residual(X, x);
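For reference, the standard formula in question can be sketched outside OpenMVG as well. A minimal NumPy version, assuming a simple pinhole model with no distortion, a world-to-camera rotation R, and a camera center C (all values below are made up for illustration):

```python
import numpy as np

def project(K, R, C, X):
    """Pinhole projection of a world point X:
    x_homogeneous = K * R * (X - C), then divide by the last coordinate.
    K: 3x3 intrinsics, R: world-to-camera rotation, C: camera center."""
    x_cam = R @ (np.asarray(X, dtype=float) - np.asarray(C, dtype=float))
    x_img = K @ x_cam
    return x_img[:2] / x_img[2]

# Toy example: focal 1000 px, principal point (320, 240), identity pose.
K = np.array([[1000.0,    0.0, 320.0],
              [   0.0, 1000.0, 240.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)
C = np.zeros(3)

# A point on the optical axis projects to the principal point.
print(project(K, R, C, [0.0, 0.0, 10.0]))   # -> [320. 240.]
```

If your projected points land far from the observations, the usual suspects are a transposed rotation (camera-to-world vs. world-to-camera) or using the translation t = -R·C where the center C is expected, or vice versa.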
In addition, the focal length in json file is not equal to the given focal length.
How much does it differ?
If you ran an SfM process, this is normal: SfM and bundle adjustment (BA) will refine your focal length estimate.