Openmvg: SFM with initialization?

Created on 9 Oct 2015 · 18 comments · Source: openMVG/openMVG

I wonder if there is any way to give the initial camera poses to SFM reconstruction.

question


All 18 comments

You can use the API for that.
You must:
-> create an SfM_Data container with your data (views, intrinsics and poses),
-> then compute your features, run openMVG_main_ComputeStructureFromKnownPoses, and add a BA refinement step.

See https://github.com/openMVG/openMVG/issues/388
And see here for a user who successfully loaded their own camera positions (screenshot): https://github.com/openMVG/openMVG/issues/262#issuecomment-125641762

Thanks for the reply!
Actually, my own pose data is inaccurate, and I want to use SfM to refine these poses rather than use them for structure computation.

Ok, so the best approach will be:

  • create your SfM_Data with your poses,
  • compute features and matches,
  • create a new binary that joins all the matches into tracks, does a BA, cleans the found structure, and redoes a BA.
    I can help on the last point (the new binary).

hi! I'm also very interested in this last point.
In my workflow I'm exporting to MVE.

I've got two example jobs, job#1 and job#2. They share the same camera intrinsics and extrinsics but, of course, not the same image data. 20 images each.

I run job#1 with feature extraction, matching, BA etc. and everything is fine: I get an sfm_data.json of about 20 MB and I am able to export to MVE.
I reuse the same camera intrinsics and extrinsics in job#2, skip the BA, use openMVG_main_ComputeStructureFromKnownPoses with the sfm_data.json from job#1, and get a sparse point cloud (reco.ply) from the image dataset of job#2.
Here I'm stuck: I cannot separate the camera intrinsics and extrinsics from job#1 and use the matches from job#2, and I am also not able to refine with a BA.

@pmoulon, you wrote that you could help on that topic. Do you have any suggestions?

Hi @aph3xtwin,
From what I understand, you have a fixed camera setup, you want to calibrate it and reuse the existing calibration later on.
So, you want to use the calibration from scene job#1 for a new scene => job#2.

The basic idea is to share the data stored in sfm_data.json between the two scenes.
One will provide the refined values (job#1/SfMResult.sfm_data.json) that you will feed into job#2/sfm_data.json.

One way to do that is:

  1. use SfMInit with the images from job#2
  2. copy/paste the intrinsics & extrinsics data from the SfM step of job#1 into the sfm_data.json file
  3. run computeFeatures & computeMatches on the sfm_data.json file that you have produced
  4. use openMVG_main_ComputeStructureFromKnownPoses on the produced files (matches of scene job#2)
  5. if you want to perform only a BA, I could add a new binary for that.
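Step 2 can be scripted instead of done by hand. A minimal sketch (hedged: it assumes the top-level sfm_data.json keys are named "intrinsics" and "extrinsics", and that the view/intrinsic/pose ids line up between the two jobs — verify against the files your own pipeline writes):

```python
import json

def transplant_calibration(src_path, dst_path, out_path):
    """Copy intrinsics & extrinsics from a refined sfm_data.json (job#1)
    into a freshly initialized one (job#2)."""
    with open(src_path) as f:
        src = json.load(f)
    with open(dst_path) as f:
        dst = json.load(f)
    # Overwrite job#2's (empty) calibration with job#1's refined values;
    # views stay those of job#2.
    dst["intrinsics"] = src["intrinsics"]
    dst["extrinsics"] = src["extrinsics"]
    with open(out_path, "w") as f:
        json.dump(dst, f, indent=2)
```

Steps 3 and 4 then run on the merged file.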

Don't hesitate to continue the discussion or share two dataset on openmvg-team[AT]googlegroups.com

I'll try that tomorrow; it sounds plausible. I'll see if I can provide sample data in the future.
From the other thread #262 I learned that, independent of how accurate the camera extrinsics are, the values always need to be readjusted in a refinement step. Do you have any experience with that? I think what I want at the end is a final BA step. For that, a binary would be great (Ubuntu Linux 64-bit).

@aph3xtwin We can try to add a binary that loads camera poses & some matches.
All pairwise matches are linked into tracks (tie points), we make a first (blind) estimation and then run a BA.

It seems that's what you need.

Hi guys,

I'm taking my first steps with openMVG.

I'd like to use openMVG_main_ComputeStructureFromKnownPoses and I think I've already understood how it works. However, I'm stuck on the fact that I don't know how to add the camera poses (x,y,z or rotation/translation matrices) to a .json file like sfm_data.json.

Is there any binary that loads that information into a .json file?
Sorry for the newbie question :) I'd appreciate it if someone could help me.

Thanks

@FAC94 As indicated by the Save function in https://github.com/openMVG/openMVG/blob/f0889f0a0024a211a7e5ed71794ad1926e545767/src/openMVG/geometry/pose3.hpp

a pose in the .json is just a rotation and a center (note that the center is stored instead of the translation).
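If your data instead comes as a rotation R and translation t (with x_cam = R·x_world + t), the stored center follows from R·C + t = 0, i.e. C = -Rᵀ·t. A small sketch in plain NumPy, nothing OpenMVG-specific:

```python
import numpy as np

def translation_to_center(R, t):
    # x_cam = R @ x_world + t  =>  the camera center C satisfies R @ C + t = 0
    return -R.T @ t

def center_to_translation(R, C):
    return -R @ C
```

This conversion is the usual source of confusion when filling the extrinsics by hand: storing t where the center is expected shifts every camera.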

@autosquid I was not finding that file; it helped a lot. Now I'm able to run openMVG_main_ComputeStructureFromKnownPoses properly :)

Thanks for your help

@FAC94 Happy to see that it works as you expected. Is it ok to close the issue?
@autosquid Perhaps we have to enhance the doc somehow in order to help people on this subject.

@pmoulon Yes, for me you can close this issue.

Hope you don't mind, but I attached here the sfm_data.json file that I used, so that people who have the same problem have an example of how it should be constructed :)

sfm_data2.json.zip

The simplest way to import custom poses & intrinsics is to use OpenMVG as a third-party library in your own project: create an SfM_Data and fill it with your view, intrinsic & pose information.

  • Views describe the used images and link them to intrinsic & pose ids.
  • Intrinsics describe the internal orientation.
  • Poses describe the external orientation.

PS: the last two can be shared between views or not.
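For orientation, a single pose entry in the "extrinsics" array of sfm_data.json typically looks like the dict below (hypothetical, schema-wise: the "key"/"value" wrapping and the "rotation"/"center" field names are taken from typical sfm_data.json files; check them against a file written by your own pipeline before relying on them):

```python
# One hypothetical extrinsics entry: a 3x3 rotation matrix and a camera
# *center* (not a translation vector), keyed by the pose id the view references.
pose_entry = {
    "key": 0,
    "value": {
        "rotation": [
            [1.0, 0.0, 0.0],
            [0.0, 1.0, 0.0],
            [0.0, 0.0, 1.0],
        ],
        "center": [0.0, 0.0, 0.0],
    },
}
```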

Hello again,
I paused my project for some time, and now I've returned to making some reconstructions with openMVG using openMVG_main_ComputeStructureFromKnownPoses. However, I'm not getting such good results this time :(

I have filtered UAV information about its position and orientation, and I tried to generate the extrinsics structure of sfm_data.json using a simple Matlab script that looks like this:

qw = IMU_data(i,8);  
qx = IMU_data(i,5);
qy = IMU_data(i,6);
qz = IMU_data(i,7);
cx = IMU_data(i,2); % UAV position
cy = IMU_data(i,3);
cz = IMU_data(i,4);

mat(1,1,i) = qw^2 + qx^2 - qy^2 - qz^2;
mat(1,2,i) = 2*(qx*qy - qz*qw);
mat(1,3,i) = 2*(qx*qz + qy*qw);
mat(2,1,i) = 2*(qx*qy + qz*qw);
mat(2,2,i) = qw^2 - qx^2 + qy^2 - qz^2;
mat(2,3,i) = 2*(qy*qz - qx*qw);
mat(3,1,i) = 2*(qx*qz - qy*qw);
mat(3,2,i) = 2*(qy*qz + qx*qw);
mat(3,3,i) = qw^2 - qx^2 - qy^2 + qz^2;

% mat is the UAV orientation matrix (built from the quaternion above), relative to the world frame
% R is the rotation matrix that relates the camera and UAV frames

mat(:,:,i) = R*mat(:,:,i);

c(1,i) = cx;
c(2,i) = cy;
c(3,i) = cz;

c(:,i) = R*c(:,i);

% the variables mat and c are then exported to the sfm_data.json file
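The quaternion-to-rotation part of the script can be cross-checked with a small Python equivalent (assuming the (qw, qx, qy, qz) ordering and a normalized quaternion, as in the Matlab code above):

```python
import numpy as np

def quat_to_rot(qw, qx, qy, qz):
    """Rotation matrix from a unit quaternion (same formula as the Matlab script)."""
    return np.array([
        [qw**2 + qx**2 - qy**2 - qz**2, 2*(qx*qy - qz*qw),             2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),             qw**2 - qx**2 + qy**2 - qz**2, 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),             2*(qy*qz + qx*qw),             qw**2 - qx**2 - qy**2 + qz**2],
    ])
```

A quick way to catch convention errors (e.g. an IMU log that stores (qx, qy, qz, qw) instead) is to verify that the result satisfies R @ R.T ≈ I and det(R) ≈ +1, and that the identity quaternion gives the identity matrix.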

What am I doing wrong here? Can anyone help me?
Thanks in advance :)

First, I suggest you visually check your camera positions and orientations (e.g. the viewing direction R*(0,0,1)).

You can try to display it with Matlab or use this tool to visualize them: https://github.com/openMVG/openMVG/issues/626#issuecomment-247781955

@pmoulon I didn't know this visualizer, but it seems really great :D
Thanks for your answer. I will perform some tests and get back to you about what the problem was.

Do you know whether your camera positions are accurate or not?
If not, perhaps just a BA on the initial data & matches would do the job... I can release a binary to do that.

I changed this:

mat(:,:,i) = R*mat(:,:,i);

c(1,i) = cx;
c(2,i) = cy;
c(3,i) = cz;

c(:,i) = R*c(:,i);

To this:

c(1,i) = cx;
c(2,i) = cy;
c(3,i) = cz;

c(:,i) = -mat(:,:,i)'*c(:,i);
mat(:,:,i) = R*mat(:,:,i);

And it appears to work better, although the results are not as good as when I run openMVG_main_IncrementalSfM.

The camera positions may not be that accurate, and the camera resolution is not the best either. This is an old log; I'll try to make a new one this week with better positioning and camera resolution.

If you can release that binary, it would be great :)
Thanks

