openMVG: Incremental SfM v2

Created on 25 Feb 2018 · 34 comments · Source: openMVG/openMVG

Implement a new Incremental SfM pipeline (SequentialSfMReconstructionEngine2) with the following features:

  • The reconstruction initialization is delegated to an abstract interface, SfMSceneInitializer.
  • The triangulation stage considers the entire scene tracks.
  • The resection stage is based on 2D-3D matching confidence.
  • The reconstruction can start from existing camera poses.

This new engine is easier to read and customize; it was designed after reading [1], [2] & [3].

SequentialSfMReconstructionEngine2 is:

  • fast:

    • Since it localizes images as soon as it can, fewer Bundle Adjustment steps are observed than in SequentialSfMReconstructionEngine.

  • flexible:

    • The engine can extend a partial reconstruction: you can call it on the result of any other SfM engine. For example, you can run GlobalSfM (to obtain the poses of the camera triplets) and then run SequentialSfMReconstructionEngine2 to localize the remaining images (see the sketch after this list).

    • You can now initialize the reconstruction with an n-view reconstruction (Stellar [2]), providing a very stable seed for the reconstruction.
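
As an illustration, here is a minimal sketch of that combined workflow, in the spirit of the SfM_SequentialPipeline.py script discussed later in this thread. The binary names and the -S EXISTING_POSE value come from this thread; the paths, the output file name, and the exact matches file that GlobalSfM expects are assumptions to verify against your own setup:

```python
import os
import subprocess

OPENMVG_SFM_BIN = "/usr/local/bin"            # placeholder: openMVG binaries directory
matches_dir = "output/matches"                # placeholder: sfm_data.json + features + matches
reconstruction_dir = "output/reconstruction"  # placeholder: output directory

# 1. GlobalSfM recovers a first set of poses (note: it may expect a differently
#    named matches file than the incremental pipeline, as discussed below).
subprocess.check_call([
    os.path.join(OPENMVG_SFM_BIN, "openMVG_main_GlobalSfM"),
    "-i", os.path.join(matches_dir, "sfm_data.json"),
    "-m", matches_dir,
    "-o", reconstruction_dir])

# 2. IncrementalSfM2 takes the partial reconstruction as input and localizes the
#    remaining images, keeping the existing poses as the seed (EXISTING_POSE).
subprocess.check_call([
    os.path.join(OPENMVG_SFM_BIN, "openMVG_main_IncrementalSfM2"),
    "-i", os.path.join(reconstruction_dir, "sfm_data.bin"),  # assumed GlobalSfM output name
    "-m", matches_dir,
    "-o", reconstruction_dir,
    "-S", "EXISTING_POSE"])
```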

For the moment, three SfMSceneInitializer implementations are available (a command-line sketch follows this list):

  • SfMSceneInitializer

    • Keeps the existing poses -> extends a previous reconstruction.

  • SfMSceneInitializerMaxPair:

    • Initializes a 2-view reconstruction (the relative pose with the most inliers).

  • SfMSceneInitializerStellar

    • Initializes a stellar reconstruction (an n-view reconstruction whose edges all connect to a single central pose; e.g. here a 5-pose stellar configuration defined by the 4 relative pairs {{0,1}, {0,2}, {0,6}, {0,10}}; see [2]).
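
On the command line, the initializer is selected through the -S option of openMVG_main_IncrementalSfM2; the values MAX_PAIR, STELLAR and EXISTING_POSE all appear later in this thread. A hedged sketch with placeholder paths:

```python
import os
import subprocess

OPENMVG_SFM_BIN = "/usr/local/bin"   # placeholder: openMVG binaries directory
matches_dir = "output/matches"       # placeholder: sfm_data.json + features + matches
reconstruction_dir = "output/reconstruction"

# Mapping assumed from the descriptions above:
#   EXISTING_POSE -> SfMSceneInitializer        (keep / extend existing poses)
#   MAX_PAIR      -> SfMSceneInitializerMaxPair (best 2-view seed)
#   STELLAR       -> SfMSceneInitializerStellar (n-view stellar seed)
initializer = "STELLAR"

subprocess.check_call([
    os.path.join(OPENMVG_SFM_BIN, "openMVG_main_IncrementalSfM2"),
    "-i", os.path.join(matches_dir, "sfm_data.json"),
    "-m", matches_dir,
    "-o", reconstruction_dir,
    "-S", initializer])
```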

Looking forward to the community feedback on this new engine.

[1] Batched Incremental Structure-from-Motion. Hainan Cui, Shuhan Shen, Xiang Gao, Zhanyi Hu. 3DV 2017.
[2] Global Structure-from-Motion by Similarity Averaging. Zhaopeng Cui, Ping Tan. ICCV 2015.
[3] Structure-from-Motion Revisited. Johannes Lutz Schönberger, Jan-Michael Frahm. CVPR 2016.

Here is an example of a stellar reconstruction computed from 27 relative pose pairs and 223 relative scales; the stellar estimation localizes 28 images to start the SfM process.
[screenshot: stellar reconstruction seed]

Labels: enhancement


All 34 comments

Dear @pmoulon, that looks great. Does it support spherical cameras?

That's really great. I saw the results of paper [1]. I can't wait to test it on a large dataset.

This is great!

@happyfrank

Does it support spherical cameras?

Yes, you can still mix various intrinsics.

@ManishSahu53

Note that the current implementation will not be as fast as [1]: the track reduction trick is not implemented, and any help on this is welcome. Even so, SequentialSfMReconstructionEngine2 will be faster than SequentialSfMReconstructionEngine on any large dataset.

I was testing this new incremental technique on a large dataset of around 3601 images (24 MP each) and intend to compare it with the Global SfM results. I got great results with Global SfM (see https://github.com/openMVG/openMVG/issues/1232, Project 3). But I am a bit confused about the options of the new SfM pipeline. Why are there two parameters for specifying match files? I specified values for both parameters; I might be giving wrong parameters. Out of 3601 poses, only 2 poses were recovered.

pRecons = subprocess.Popen( [os.path.join(OPENMVG_SFM_BIN, "openMVG_main_IncrementalSfM2"), "-i", matches_dir+"/sfm_data.json", "-m", matches_dir, "-o", reconstruction_dir,"-P","-S","MAX_PAIR","-M",matches_dir+ "matches.f.bin"] )

Max_Pair.txt

@ManishSahu53
Your command line is right. In your case, the pair that was chosen does not seem to be a good candidate to start the reconstruction.
I would advise you to use -S STELLAR.
The -M option specifies which matches file to use (e.g. matches.f.bin, matches.e.bin, matches.h.bin), so you can run the pipeline with a matches file other than the default expected one, matches.f.bin.
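
For instance, a self-contained sketch of running the pipeline on essential-matrix matches instead of the default (placeholder paths; flags as used elsewhere in this thread):

```python
import os
import subprocess

OPENMVG_SFM_BIN = "/usr/local/bin"   # placeholder
matches_dir = "output/matches"       # placeholder
reconstruction_dir = "output/reconstruction"

subprocess.check_call([
    os.path.join(OPENMVG_SFM_BIN, "openMVG_main_IncrementalSfM2"),
    "-i", os.path.join(matches_dir, "sfm_data.json"),
    "-m", matches_dir,
    "-o", reconstruction_dir,
    "-S", "STELLAR",
    "-M", os.path.join(matches_dir, "matches.e.bin")])  # non-default matches file
```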

Using the command below, but it gets stuck at the relative pose stage.
I could generate an excellent SfM model with global SfM.
pRecons = subprocess.Popen( [os.path.join(OPENMVG_SFM_BIN, "openMVG_main_IncrementalSfM2"), "-i", matches_dir+"/sfm_data.json", "-m", matches_dir, "-o", reconstruction_dir,"-P","-S","STELLAR","-M",matches_dir+ "matches.f.bin"] )

Stellar.txt

It should not be stuck; it should progress, perhaps slowly. I will soon work on improving the stellar choice, which will give it a faster start.

I am sorry, perhaps I should have used a different word there. The program exited after that; it was not stuck.

Strange, the method to compute the relative pose is the same as the one used for the GlobalSfM engine. It should not quit like that.

I tried running the old incremental method. It ran fine without any problem (though I stopped it after a few minutes), so I think the matches file is OK here.

I captured my screen while running the new incremental method.
incremental.mp4.zip

@ManishSahu53 Can you try again with the last update I made?

  • Now the choice of the initial stellar seed is very fast: we compute only the required relative poses linked to the largest plausible stellar configuration.

@pmoulon I still can't run it. I have tried running on two datasets, one large and one small.

  1. Jbl - 3601 images
  2. Okhla - 408 images
    The program exits abruptly like before. Can you please try running with my second dataset, as it is small and I can upload the data easily?
    Link to the Okhla dataset: https://drive.google.com/file/d/0BxdowQswHG4GQTd0Y3gteEtNOGs/view?usp=sharing

jbl_Stellar.txt
Okhla_stellar.txt

@ManishSahu53
Default settings for Okhla:

Bundle Adjustment statistics (approximated RMSE):
 #views: 407
 #poses: 407
 #intrinsics: 1
 #tracks: 523940
 #residuals: 3361440
 Initial RMSE: 0.429745
 Final RMSE: 0.426167
 Time (s): 22.9042
-------------------------------
-- Structure from Motion (statistics):
-- #Camera calibrated: 407 from 407 input images.
-- #Tracks, #3D points: 523940
-- #Poses loop used: 21
-------------------------------

[screenshot: Okhla reconstruction]

I will try later on with your special match settings.

How come the initial RMSE is so low? It's almost at the minimum already. Did you run it with the new incremental_v2?

I've seen on some datasets that fewer views are estimated than with the original incremental pipeline, sometimes even fewer than with global. Should I expect the same number of views to be estimated as with the original incremental pipeline?

@ManishSahu53 Yes, I ran openMVG_main_IncrementalSfM2.
I used the script SfM_SequentialPipeline.py, just changing openMVG_main_IncrementalSfM to openMVG_main_IncrementalSfM2 inside it.

@mitjap You should expect approximately the same number of registered views.
Can you tell me how many images are missing (is it like 2 or 3, or more like 10%, 20%) compared to openMVG_main_IncrementalSfM?

Approximately 30% of the images were missing. The dataset is 100 images in total and it was not an easy one :) I can share it if that would help you.
Number of reconstructed images:

  • global: 44 - 112.5 seconds
  • incremental (old): 100 - 588.3 seconds
  • incremental (new): 100 - 495.4 seconds
  • global + incremental (new): 73 (total) - 262.6 seconds

Feel free to share it with me. It seems like an interesting dataset; perhaps something is wrong with the intrinsics.

I ran the data again and got different results. I've updated the post above with the new results and added the duration for each algorithm. I will run it again to see if anything changes.

@mitjap Here are my results:

  • global: 44 - 57 seconds
  • incremental (old): 100 - 324 seconds
  • incremental (new): 100 - 265 seconds
  • global + incremental (new): 100 (total) - 240 seconds (#Poses loop used: 16)

    • Only one camera seems a bit off the UAV trajectory

Using the fix for the bug you found: https://github.com/openMVG/openMVG/commit/917cad88da005ced0fecc76266e01d37619fdc2f.

@pmoulon No matter what I do, it always stops at the same position as before. Can you email me the executable of incremental_v2? I am building the develop_incremental_v2 branch; I suppose this is the correct branch that has v2.
[EDIT] I tried to run the test command of sfm2 and got the same error as above.
I can't understand what is wrong with my executable.
[screenshot: error output]

Thanks
Email - [email protected]
openMVG_main_IncrementalSfM2.zip

Dear @pmoulon, about this feature: "The reconstruction can start from existing camera poses."
Can I use SLAM poses (not as accurate as SfM) as the initial positions?

@happyfrank Yes you can do it.

That's brilliant. @pmoulon Thank you very much for providing this feature. ^v^

Hi @pmoulon,

I have been trying this feature, and now I am testing it on a 1,367-image dataset. It looks like it is stuck in a never-ending cycle; see below. It appears to be stuck on images <959> and <1336>, as it starts the same process again and again, and it has been like that for over 15 hours. Any idea what it could be? I ran the feature matching using the fundamental matrix, renamed the matches file to make it compatible with openMVG global SfM, ran the global SfM, renamed the matches file back, and used the output of the global SfM as input for incremental v2. Any comments would be appreciated.

-- Robust Resection of camera index: <959> image: DSC00960.JPG
-- Threshold: 1919.29
-- Resection status: OK
-- Nb points used for Resection: 141
-- Nb points validated by robust estimation: 134
-- % points validated: 95.0355
-------------------------------
ViewId: 1344; #number of 2D-3D matches: 265; 2.16168 % of the view track coverage.
ViewId: 1345; #number of 2D-3D matches: 217; 1.67477 % of the view track coverage.
ViewId: 1360; #number of 2D-3D matches: 47; 1.02218 % of the view track coverage.
  nfa=-216.262 inliers=100/606 precisionNormalized=6.85124e-05 precision=127.282 (iter=34 ,sample=99,39,97,46,53,65,)
ViewId: 1342; #number of 2D-3D matches: 336; 7.28376 % of the view track coverage.
ViewId: 1223; #number of 2D-3D matches: 454; 10.9927 % of the view track coverage.

Bundle Adjustment statistics (approximated RMSE):
 #views: 1
 #poses: 1
 #intrinsics: 1
 #tracks: 134
 #residuals: 268
 Initial RMSE: 92.0102
 Final RMSE: 82.4286
 Time (s): 0.00835186

  nfa=-224.892 inliers=262/606 precisionNormalized=0.00742565 precision=1325.11 (iter=68 ,sample=17,10,15,20,40,43,)
  nfa=-295.832 inliers=276/606 precisionNormalized=0.00486457 precision=1072.52 (iter=69 ,sample=252,42,138,176,0,162,)
ViewId: 393; #number of 2D-3D matches: 36; 0.0748814 % of the view track coverage.
ViewId: 1361; #number of 2D-3D matches: 104; 2.22985 % of the view track coverage.
  nfa=-421.238 inliers=287/606 precisionNormalized=0.00203727 precision=694.077 (iter=124 ,sample=215,133,258,130,275,244,)
  nfa=-446.403 inliers=287/606 precisionNormalized=0.00165763 precision=626.077 (iter=258 ,sample=175,214,210,200,79,131,)

-------------------------------
-- Robust Resection 
-- Resection status: 1
-- #Points used for Resection: 606
-- #Points validated by robust Resection: 287
-- Threshold: 626.077
-------------------------------

-------------------------------
-- Robust Resection of camera index: <1336> image: DSC01337.JPG
-- Threshold: 626.077
-- Resection status: OK
-- Nb points used for Resection: 606
-- Nb points validated by robust estimation: 287
-- % points validated: 47.3597
-------------------------------

Bundle Adjustment statistics (approximated RMSE):
 #views: 1367
 #poses: 1241
 #intrinsics: 1
 #tracks: 2240802
 #residuals: 15786452
 Initial RMSE: 0.495982
 Final RMSE: 0.451537
 Time (s): 375.626

ViewId: 0; #number of 2D-3D matches: 0; 0 % of the view track coverage.
ViewId: 1; #number of 2D-3D matches: 0; 0 % of the view track coverage.

ViewId: 14; #number of 2D-3D matches: 2; 0.0394633 % of the view track coverage.
ViewId: 12; #number of 2D-3D matches: 0; 0 % of the view track coverage.
ViewId: 25; #number of 2D-3D matches: 23; 8.04196 % of the view track coverage.
ViewId: 3; #number of 2D-3D matches: 0; 0 % of the view track coverage.
ViewId: 15; #number of 2D-3D matches: 3; 0.0651183 % of the view track coverage.
ViewId: 13; #number of 2D-3D matches: 0; 0 % of the view track coverage.
ViewId: 5; #number of 2D-3D matches: 5; 0.0162559 % of the view track coverage.
ViewId: 27; #number of 2D-3D matches: 97; 5.87879 % of the view track coverage.
ViewId: 29; #number of 2D-3D matches: 173; 11.3072 % of the view track coverage.
ViewId: 28; #number of 2D-3D matches: 72; 3.9757 % of the view track coverage.
ViewId: 41; #number of 2D-3D matches: 122; 3.35903 % of the view track coverage.
ViewId: 42; #number of 2D-3D matches: 38; 1.2394 % of the view track coverage.
ViewId: 40; #number of 2D-3D matches: 296; 7.80385 % of the view track coverage.
ViewId: 11; #number of 2D-3D matches: 32; 0.159236 % of the view track coverage.
ViewId: 43; #number of 2D-3D matches: 45; 1.29051 % of the view track coverage.
ViewId: 47; #number of 2D-3D matches: 53; 2.65133 % of the view track coverage.
ViewId: 44; #number of 2D-3D matches: 49; 1.35659 % of the view track coverage.
ViewId: 45; #number of 2D-3D matches: 56; 1.96767 % of the view track coverage.
ViewId: 48; #number of 2D-3D matches: 49; 2.605 % of the view track coverage.
ViewId: 46; #number of 2D-3D matches: 63; 2.45041 % of the view track coverage.
ViewId: 49; #number of 2D-3D matches: 55; 2.80183 % of the view track coverage.
ViewId: 50; #number of 2D-3D matches: 51; 2.5235 % of the view track coverage.
ViewId: 53; #number of 2D-3D matches: 47; 2.1679 % of the view track coverage.
ViewId: 52; #number of 2D-3D matches: 57; 2.3021 % of the view track coverage.
ViewId: 51; #number of 2D-3D matches: 61; 2.45374 % of the view track coverage.
ViewId: 55; #number of 2D-3D matches: 108; 5.86001 % of the view track coverage.
ViewId: 56; #number of 2D-3D matches: 114; 6.52547 % of the view track coverage.
ViewId: 54; #number of 2D-3D matches: 121; 5.71024 % of the view track coverage.
ViewId: 57; #number of 2D-3D matches: 25; 1.65673 % of the view track coverage.
ViewId: 59; #number of 2D-3D matches: 16; 0.971463 % of the view track coverage.
ViewId: 58; #number of 2D-3D matches: 26; 1.51869 % of the view track coverage.
ViewId: 61; #number of 2D-3D matches: 101; 4.46903 % of the view track coverage.
ViewId: 60; #number of 2D-3D matches: 84; 4.11765 % of the view track coverage.
ViewId: 62; #number of 2D-3D matches: 109; 3.93076 % of the view track coverage.
ViewId: 63; #number of 2D-3D matches: 103; 3.87947 % of the view track coverage.
ViewId: 64; #number of 2D-3D matches: 51; 2.07233 % of the view track coverage.
ViewId: 65; #number of 2D-3D matches: 21; 1.14068 % of the view track coverage.
ViewId: 66; #number of 2D-3D matches: 14; 0.761283 % of the view track coverage.
ViewId: 67; #number of 2D-3D matches: 30; 1.7301 % of the view track coverage.
ViewId: 69; #number of 2D-3D matches: 51; 2.92935 % of the view track coverage.
ViewId: 68; #number of 2D-3D matches: 41; 2.2319 % of the view track coverage.
ViewId: 70; #number of 2D-3D matches: 93; 4.40341 % of the view track coverage.
ViewId: 71; #number of 2D-3D matches: 97; 3.65348 % of the view track coverage.
ViewId: 4; #number of 2D-3D matches: 0; 0 % of the view track coverage.
ViewId: 73; #number of 2D-3D matches: 81; 3.79669 % of the view track coverage.
ViewId: 74; #number of 2D-3D matches: 62; 3.71049 % of the view track coverage.
ViewId: 84; #number of 2D-3D matches: 225; 8.92503 % of the view track coverage.
ViewId: 87; #number of 2D-3D matches: 60; 3.39175 % of the view track coverage.
ViewId: 86; #number of 2D-3D matches: 74; 3.52885 % of the view track coverage.
ViewId: 85; #number of 2D-3D matches: 89; 3.30732 % of the view track coverage.
ViewId: 88; #number of 2D-3D matches: 69; 5.45024 % of the view track coverage.
ViewId: 96; #number of 2D-3D matches: 70; 5.76132 % of the view track coverage.
ViewId: 97; #number of 2D-3D matches: 71; 4.51941 % of the view track coverage.
ViewId: 94; #number of 2D-3D matches: 200; 16.1031 % of the view track coverage.
ViewId: 105; #number of 2D-3D matches: 25; 1.86567 % of the view track coverage.
ViewId: 106; #number of 2D-3D matches: 20; 1.657 % of the view track coverage.
ViewId: 98; #number of 2D-3D matches: 354; 11.9919 % of the view track coverage.
ViewId: 108; #number of 2D-3D matches: 25; 1.33976 % of the view track coverage.
ViewId: 107; #number of 2D-3D matches: 21; 1.15639 % of the view track coverage.
ViewId: 104; #number of 2D-3D matches: 27; 0.487365 % of the view track coverage.
ViewId: 109; #number of 2D-3D matches: 46; 3.66826 % of the view track coverage.
ViewId: 72; #number of 2D-3D matches: 62; 2.43807 % of the view track coverage.
ViewId: 110; #number of 2D-3D matches: 100; 8.00641 % of the view track coverage.
ViewId: 112; #number of 2D-3D matches: 304; 13.0249 % of the view track coverage.
ViewId: 117; #number of 2D-3D matches: 279; 10.7432 % of the view track coverage.
ViewId: 95; #number of 2D-3D matches: 145; 12.2673 % of the view track coverage.
ViewId: 134; #number of 2D-3D matches: 248; 11.5295 % of the view track coverage.
ViewId: 135; #number of 2D-3D matches: 119; 4.87505 % of the view track coverage.
ViewId: 111; #number of 2D-3D matches: 82; 1.57814 % of the view track coverage.
ViewId: 136; #number of 2D-3D matches: 109; 4.19715 % of the view track coverage.
ViewId: 138; #number of 2D-3D matches: 81; 3.8756 % of the view track coverage.
ViewId: 137; #number of 2D-3D matches: 105; 5.0578 % of the view track coverage.
ViewId: 139; #number of 2D-3D matches: 243; 7.48844 % of the view track coverage.
ViewId: 140; #number of 2D-3D matches: 598; 19.7425 % of the view track coverage.
ViewId: 301; #number of 2D-3D matches: 711; 18.9752 % of the view track coverage.
ViewId: 302; #number of 2D-3D matches: 620; 16.6086 % of the view track coverage.
ViewId: 403; #number of 2D-3D matches: 0; 0 % of the view track coverage.
ViewId: 669; #number of 2D-3D matches: 0; 0 % of the view track coverage.
ViewId: 303; #number of 2D-3D matches: 552; 14.3377 % of the view track coverage.
ViewId: 298; #number of 2D-3D matches: 849; 12.4141 % of the view track coverage.
ViewId: 782; #number of 2D-3D matches: 378; 14.1097 % of the view track coverage.
ViewId: 390; #number of 2D-3D matches: 252; 2.63103 % of the view track coverage.
ViewId: 783; #number of 2D-3D matches: 402; 11.5418 % of the view track coverage.
ViewId: 784; #number of 2D-3D matches: 402; 12.083 % of the view track coverage.
ViewId: 785; #number of 2D-3D matches: 619; 19.0755 % of the view track coverage.
ViewId: 795; #number of 2D-3D matches: 506; 16.2284 % of the view track coverage.
ViewId: 791; #number of 2D-3D matches: 820; 19.5704 % of the view track coverage.
ViewId: 823; #number of 2D-3D matches: 285; 11.3591 % of the view track coverage.
ViewId: 825; #number of 2D-3D matches: 121; 9.40171 % of the view track coverage.
ViewId: 824; #number of 2D-3D matches: 80; 2.70728 % of the view track coverage.
ViewId: 796; #number of 2D-3D matches: 667; 17.9157 % of the view track coverage.
ViewId: 959; #number of 2D-3D matches: 141; 23.0769 % of the view track coverage.
  nfa=-60.9451 inliers=87/141 precisionNormalized=0.0134249 precision=1781.71 (iter=0 ,sample=114,19,127,118,21,136,)
  nfa=-63.9208 inliers=66/141 precisionNormalized=0.00383308 precision=952.045 (iter=1 ,sample=129,58,107,68,66,116,)
ViewId: 1074; #number of 2D-3D matches: 410; 17.9746 % of the view track coverage.
  nfa=-69.1743 inliers=115/141 precisionNormalized=0.0318575 precision=2744.66 (iter=9 ,sample=24,101,115,60,133,12,)
ViewId: 826; #number of 2D-3D matches: 86; 6.86353 % of the view track coverage.
  nfa=-99.8098 inliers=112/141 precisionNormalized=0.0147675 precision=1868.69 (iter=15 ,sample=81,112,3,68,137,105,)
ViewId: 1075; #number of 2D-3D matches: 398; 17.7125 % of the view track coverage.
ViewId: 1076; #number of 2D-3D matches: 376; 14.7047 % of the view track coverage.
ViewId: 1078; #number of 2D-3D matches: 462; 17.6741 % of the view track coverage.
ViewId: 1077; #number of 2D-3D matches: 427; 16.3664 % of the view track coverage.
ViewId: 1190; #number of 2D-3D matches: 416; 15.5689 % of the view track coverage.
ViewId: 1192; #number of 2D-3D matches: 266; 11.7439 % of the view track coverage.
ViewId: 392; #number of 2D-3D matches: 74; 0.20927 % of the view track coverage.
ViewId: 1189; #number of 2D-3D matches: 455; 17.8852 % of the view track coverage.
ViewId: 1191; #number of 2D-3D matches: 230; 10.2633 % of the view track coverage.
  nfa=-103.723 inliers=119/141 precisionNormalized=0.0180372 precision=2065.23 (iter=74 ,sample=130,78,70,16,6,79,)
  nfa=-103.77 inliers=123/141 precisionNormalized=0.02109 precision=2233.17 (iter=74 ,sample=130,78,70,16,6,79,)
ViewId: 1193; #number of 2D-3D matches: 384; 15.1779 % of the view track coverage.
  nfa=-108.223 inliers=104/141 precisionNormalized=0.00860746 precision=1426.66 (iter=78 ,sample=82,32,3,128,22,60,)
  nfa=-128.209 inliers=126/141 precisionNormalized=0.014826 precision=1872.38 (iter=79 ,sample=5,134,76,140,127,100,)
ViewId: 1216; #number of 2D-3D matches: 284; 12.4671 % of the view track coverage.
ViewId: 391; #number of 2D-3D matches: 99; 0.491632 % of the view track coverage.
  nfa=-136.818 inliers=131/141 precisionNormalized=0.0156521 precision=1923.84 (iter=129 ,sample=100,126,74,114,34,86,)
  nfa=-142.084 inliers=132/141 precisionNormalized=0.0148556 precision=1874.25 (iter=129 ,sample=100,126,74,114,34,86,)
ViewId: 1194; #number of 2D-3D matches: 382; 17.5956 % of the view track coverage.
ViewId: 1217; #number of 2D-3D matches: 354; 12.2576 % of the view track coverage.
ViewId: 1218; #number of 2D-3D matches: 241; 12.6709 % of the view track coverage.
ViewId: 1215; #number of 2D-3D matches: 169; 15.5331 % of the view track coverage.
ViewId: 393; #number of 2D-3D matches: 36; 0.0748814 % of the view track coverage.
  nfa=-144.46 inliers=134/141 precisionNormalized=0.015578 precision=1919.29 (iter=198 ,sample=42,31,9,85,119,4,)
ViewId: 1221; #number of 2D-3D matches: 260; 11.9376 % of the view track coverage.
ViewId: 1224; #number of 2D-3D matches: 397; 13.5356 % of the view track coverage.
ViewId: 1223; #number of 2D-3D matches: 454; 10.9927 % of the view track coverage.
ViewId: 1222; #number of 2D-3D matches: 398; 10.0759 % of the view track coverage.
ViewId: 1220; #number of 2D-3D matches: 203; 15.0929 % of the view track coverage.
ViewId: 1336; #number of 2D-3D matches: 606; 35.3559 % of the view track coverage.
ViewId: 1342; #number of 2D-3D matches: 336; 7.28376 % of the view track coverage.
  nfa=-40.0019 inliers=154/606 precisionNormalized=0.014009 precision=1820.06 (iter=0 ,sample=493,82,549,506,80,587,)
  nfa=-59.7974 inliers=100/606 precisionNormalized=0.00316977 precision=865.759 (iter=1 ,sample=11,32,59,42,38,241,)
  nfa=-79.2308 inliers=78/606 precisionNormalized=0.00072752 precision=414.768 (iter=1 ,sample=11,32,59,42,38,241,)
  nfa=-154.363 inliers=98/606 precisionNormalized=0.000278882 precision=256.799 (iter=2 ,sample=4,33,14,54,21,36,)
ViewId: 1341; #number of 2D-3D matches: 457; 16.0746 % of the view track coverage.
ViewId: 1343; #number of 2D-3D matches: 272; 3.67319 % of the view track coverage.
  nfa=-177.283 inliers=100/606 precisionNormalized=0.000178201 precision=205.276 (iter=8 ,sample=16,38,97,98,9,173,)
ViewId: 1360; #number of 2D-3D matches: 47; 1.02218 % of the view track coverage.
  nfa=-208.528 inliers=99/606 precisionNormalized=7.71726e-05 precision=135.087 (iter=11 ,sample=85,3,18,60,57,10,)

-------------------------------
-- Robust Resection 
-- Resection status: 1
-- #Points used for Resection: 141
-- #Points validated by robust Resection: 134
-- Threshold: 1919.29
-------------------------------

-------------------------------
-- Robust Resection of camera index: <959> image: DSC00960.JPG
-- Threshold: 1919.29
-- Resection status: OK
-- Nb points used for Resection: 141
-- Nb points validated by robust estimation: 134
-- % points validated: 95.0355
-------------------------------

Bundle Adjustment statistics (approximated RMSE):
 #views: 1
 #poses: 1
 #intrinsics: 1
 #tracks: 134
 #residuals: 268
 Initial RMSE: 92.0102
 Final RMSE: 82.4286
 Time (s): 0.00833905

  nfa=-216.265 inliers=100/606 precisionNormalized=6.8507e-05 precision=127.277 (iter=34 ,sample=99,39,97,46,53,65,)
ViewId: 1361; #number of 2D-3D matches: 104; 2.22985 % of the view track coverage.
ViewId: 1344; #number of 2D-3D matches: 265; 2.16168 % of the view track coverage.
ViewId: 1345; #number of 2D-3D matches: 217; 1.67477 % of the view track coverage.
  nfa=-224.809 inliers=262/606 precisionNormalized=0.00743121 precision=1325.6 (iter=68 ,sample=17,10,15,20,40,43,)
  nfa=-295.832 inliers=276/606 precisionNormalized=0.00486457 precision=1072.52 (iter=69 ,sample=252,42,138,176,0,162,)
ViewId: 2; #number of 2D-3D matches: 0; 0 % of the view track coverage.
  nfa=-421.232 inliers=287/606 precisionNormalized=0.00203737 precision=694.094 (iter=124 ,sample=215,133,258,130,275,244,)
  nfa=-446.397 inliers=287/606 precisionNormalized=0.00165771 precision=626.092 (iter=258 ,sample=175,214,210,200,79,131,)

-------------------------------
-- Robust Resection 
-- Resection status: 1
-- #Points used for Resection: 606
-- #Points validated by robust Resection: 287
-- Threshold: 626.092
-------------------------------

-------------------------------
-- Robust Resection of camera index: <1336> image: DSC01337.JPG
-- Threshold: 626.092
-- Resection status: OK
-- Nb points used for Resection: 606
-- Nb points validated by robust estimation: 287
-- % points validated: 47.3597
-------------------------------

@pmoulon, the images referenced in the console output from my previous comment look to be completely outside the set (probably no matching images). Maybe it is an infinite loop trying to find their locations?

I tried with a tiny dataset that has 2 black images and everything ran fine.

In your case the image poses are correctly found, but it seems that the pose is not set, so the pose estimation continues again and again.

Any chance you could share the files to reproduce the issue (sfm_data.json, the *.feat files and the matches.f.bin file)?

Thank you @pmoulon. Unfortunately, I can't share them :( What else can I do to help spot/fix the problem? I tried without those 2 images and everything worked well.

Please note that I did not ask for any images or descriptor files, so the only thing I would be able to do with them is build a point cloud...
Can you share the requested data privately?

I wonder if it's a threading issue.

If you want to debug on your own time, I would advise you to check whether you hit this line with the two problematic images: https://github.com/openMVG/openMVG/blob/develop_incremental_v2/src/openMVG/sfm/pipelines/sequential/sequential_SfM2.cpp#L479

Also, can you tell me whether the images share the same intrinsic id or have an undefined intrinsic group? To find out, just tell me the intrinsic id of the problematic images in the sfm_data.json file.

@pogilon I was finally able to reproduce your infinite loop bug. I pushed a fix for it here: https://github.com/openMVG/openMVG/commit/3c8f7debe9a4781f08643f9421b23c1638d5e332

The infinite loop was due to the SfM trying to add some cameras and then removing them because they were unstable, so the process kept cycling (adding, removing, ...). This should be fixed now.

I see that this has been out for a while now, but I have just recently been trying to switch over to the new incremental v2 and I'm running into some very odd behavior. I am essentially running the pipeline in exactly the same manner as before, but with v2 instead of v1. On all datasets I've tested so far, v2 fails to reconstruct any poses in the sfm_data.bin file while v1 succeeds.

Here is some example output from V2:

```
Bundle Adjustment statistics (approximated RMSE):
 #views: 138
 #poses: 73
 #intrinsics: 1
 #tracks: 9
 #residuals: 317
 Initial RMSE: 1.55604
 Final RMSE: 0.00275799
 Time (s): 0.191987

Usable motion priors: 1
Pose prior statistics (user units):
 - Starting median fitting error: 1.04556
 - Final fitting error:
    min: 0
    mean: 0.00275348
    median: 0
    max: 0.0445948

Bundle Adjustment statistics (approximated RMSE):
 #views: 138
 #poses: 0
 #intrinsics: 1
 #tracks: 0
 #residuals: 0
 Initial RMSE: -nan
 Final RMSE: -nan
 Time (s): 5.669e-06

Usable motion priors: 0
```

This specific example passes with 105/138 on v1 incremental (83/138 for global). I have tried running this with EXISTING_POSE, STELLAR, and MAX_PAIR; all result in #poses: 0. I was very excited to see that an existing-pose version was available, but either there is a bug on Ubuntu 16.04 or I am doing something wrong. Any help at all would be greatly appreciated.

Hi @pmoulon, is SfM v2 only suited for large-scale datasets? I tried SfM v1 and SfM v2 on my own dataset of 60 images, and found that v1 reconstructs rather well while v2 produces an inferior result in less time. I used the revised script _SfM_SequentialPipeline.py_ provided in openMVG; the only difference is changing openMVG_main_IncrementalSfM to openMVG_main_IncrementalSfM2.
I used the released v1.5 Windows binary.

Below are screenshots of my results; the first is produced by v1 and the second by v2.

[screenshot: v1 result]

[screenshot: v2 result]

The green dotted line shows the camera positions; the trajectory produced by v1 is correct, while v2 seems to localize some images inaccurately.

Even stranger, when I change the SIFT extraction option to HIGH, the v2 result becomes even worse; see below.

[screenshot: v2 result with SIFT set to HIGH]

So, where did I go wrong?
