Hi, I have a dataset of 6 views taken at different positions.
I ran IncrementalSfM and it showed no error, but only 4 camera positions were estimated. In the camera_poses.json file there are extrinsics for 4 views (2 are missing). Is this normal?
When I ran GlobalSfM, I got the error: Rotation Averaging failure. I tried the options "-r 1" and "-r 2"; both had the same problem.
Any suggestions would be appreciated!
Please try the ANN_L2 matcher.
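For reference, the matcher is selected with the --nearest_matching_method option of openMVG_main_ComputeMatches; a rough sketch of the call, with <project> standing in for your own dataset directory, would be:
openMVG_main_ComputeMatches --input_file <project>/matches/sfm_data.json --out_dir <project>/matches/ --nearest_matching_method ANNL2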
@whuaegeanse thanks. I tried adding "-n ANNL2" when running openMVG_main_ComputeMatches, but I still got the same error.
Please show more information, e.g. the log, and share your data.
@whuaegeanse Thanks a lot! Below is the log from when I ran the commands (I don't know how to share the data here; please let me know if I have to share the data as well):
You called :
openMVG_main_SfMInit_ImageListing
--imageDirectory /home/turtlebot/openMVG_Build/software/SfM/Test_data8asp_nolight/images/
--sensorWidthDatabase /home/turtlebot/openMVG/src/openMVG/exif/sensor_width_database/sensor_width_camera_database.txt
--outputDirectory /home/turtlebot/openMVG_Build/software/SfM/Test_data8asp_nolight/matches
--focal 1400
--intrinsics
--camera_model 3
--group_camera_model 0
SfMInit_ImageListing report:
listed #File(s): 6
usable #File(s) listed in sfm_data: 6
usable #Intrinsic(s) listed in sfm_data: 6
You called :
openMVG_main_ComputeFeatures
--input_file /home/turtlebot/openMVG_Build/software/SfM/Test_data8asp_nolight/matches/sfm_data.json
--outdir /home/turtlebot/openMVG_Build/software/SfM/Test_data8asp_nolight/matches/
--describerMethod SIFT
--upright 0
--describerPreset HIGH
--force 0
--numThreads 0
Task done in (s): 0
You called :
openMVG_main_ComputeMatches
--input_file /home/turtlebot/openMVG_Build/software/SfM/Test_data8asp_nolight/matches/sfm_data.json
--out_dir /home/turtlebot/openMVG_Build/software/SfM/Test_data8asp_nolight/matches/
Optional parameters:
--force 0
--ratio 0.8
--geometric_model e
--video_mode_matching -1
--pair_list
--nearest_matching_method ANNL2
--guided_matching 0
--cache_size unlimited
PUTATIVE MATCHES -
PREVIOUS RESULTS LOADED; #pair: 15
Task done in (s): 0
Export Adjacency Matrix of the pairwise's geometric matches
Open Source implementation of the paper:
"Global Fusion of Relative Motions for Robust, Accurate and Scalable Structure from Motion."
CleanGraph_KeepLargestBiEdge_Nodes():: => connected Component: 6
Connected component of size: 1
Connected component of size: 1
Connected component of size: 1
Connected component of size: 1
Connected component of size: 1
Connected component of size: 1
In the Global SfM step there are 6 connected components and each component has only 1 image. In this case Global SfM cannot be used, because Global SfM needs 3 or more images in a connected component.
Please try the ULTRA preset in the feature extraction step and the ANN_L2 matcher in the feature matching step.
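As a sketch, with <project> standing in for your dataset directory (and --force 1 so cached results are recomputed), the two steps would look roughly like:
openMVG_main_ComputeFeatures --input_file <project>/matches/sfm_data.json --outdir <project>/matches/ --describerMethod SIFT --describerPreset ULTRA --force 1
openMVG_main_ComputeMatches --input_file <project>/matches/sfm_data.json --out_dir <project>/matches/ --nearest_matching_method ANNL2 --force 1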
@whuaegeanse I tried that but still got the error. Do you mean that 6 images are not enough to do Global SfM? Here is the log:
You called :
openMVG_main_SfMInit_ImageListing
--imageDirectory /home/turtlebot/openMVG_Build/software/SfM/Test_data8asp_nolight/images/
--sensorWidthDatabase /home/turtlebot/openMVG/src/openMVG/exif/sensor_width_database/sensor_width_camera_database.txt
--outputDirectory /home/turtlebot/openMVG_Build/software/SfM/Test_data8asp_nolight/matches
--focal 1400
--intrinsics
--camera_model 3
--group_camera_model 0
SfMInit_ImageListing report:
listed #File(s): 6
usable #File(s) listed in sfm_data: 6
usable #Intrinsic(s) listed in sfm_data: 6
You called :
openMVG_main_ComputeFeatures
--input_file /home/turtlebot/openMVG_Build/software/SfM/Test_data8asp_nolight/matches/sfm_data.json
--outdir /home/turtlebot/openMVG_Build/software/SfM/Test_data8asp_nolight/matches/
--describerMethod SIFT
--upright 0
--describerPreset ULTRA
--force 0
--numThreads 0
Task done in (s): 0
You called :
openMVG_main_ComputeMatches
--input_file /home/turtlebot/openMVG_Build/software/SfM/Test_data8asp_nolight/matches/sfm_data.json
--out_dir /home/turtlebot/openMVG_Build/software/SfM/Test_data8asp_nolight/matches/
Optional parameters:
--force 0
--ratio 0.8
--geometric_model e
--video_mode_matching -1
--pair_list
--nearest_matching_method ANNL2
--guided_matching 0
--cache_size unlimited
PUTATIVE MATCHES -
PREVIOUS RESULTS LOADED; #pair: 15
Task done in (s): 0
Export Adjacency Matrix of the pairwise's geometric matches
Open Source implementation of the paper:
"Global Fusion of Relative Motions for Robust, Accurate and Scalable Structure from Motion."
CleanGraph_KeepLargestBiEdge_Nodes():: => connected Component: 6
Connected component of size: 1
Connected component of size: 1
Connected component of size: 1
Connected component of size: 1
Connected component of size: 1
Connected component of size: 1
It seems that in your case no matches survive the geometric matching step, so perhaps there are not many feature matches even at the putative matching step.
Can you try the sequential pipeline with the HIGH or ULTRA preset? (See the sketch below.)
Feel free to share your image dataset and we can take a look at it.
Tip: for photogrammetry there are some important points to keep in mind.
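For reference, a rough sketch of the sequential reconstruction step (with <project> as a placeholder for your dataset directory; the feature extraction and matching steps stay as above):
openMVG_main_IncrementalSfM -i <project>/matches/sfm_data.json -m <project>/matches/ -o <project>/reconstruction_sequential/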
@pmoulon Yes, I used ULTRA in the run above. Could you tell me how to share the image data? Thanks a lot!
You can make an archive (zip) of your images and drag and drop it into your GitHub message, or you can send your images to the private mailing list openmvg-team[AT]googlegroups.com.
@pmoulon Hi, I've sent the images to the Google Groups email, since files larger than 10 MB are not allowed here. There are only 6 images, taken by one ZED camera. The image files do not have EXIF information. The pixel size is 0.002 mm × 0.002 mm and the focal length in pixels is 1400, so I used "-f 1400" when running ImageListing. Thank you very much!
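(For reference, those two numbers are consistent: focal_pixels = focal_mm / pixel_size_mm, so the physical focal length implied here is 1400 × 0.002 mm = 2.8 mm.)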
@squashking Before you redo the pipeline, please remove the matches directory, or use --force 1 in the feature extraction step.
@whuaegeanse I just tried that and got the same error.
Please share your data with me: [email protected]
Thanks @squashking, I will have a look at your dataset.
@whuaegeanse I have sent you the dataset. Thank you very much!
Thanks @squashking.
Since your baseline is large, it is better to use the sequential pipeline for this dataset.
Here are the results with the default settings (-f 1400 and the default SequentialSfM):
4 cameras registered out of 6
Here are the results with (-f 1400, -m AKAZE_FLOAT, -p HIGH, -n ANNL2) and the sequential SfM:
6 cameras registered out of 6
colorized.ply.zip
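For anyone wanting to reproduce this, the command sequence for that second run would look roughly like the following (<project> is a placeholder path; flags as listed above):
openMVG_main_SfMInit_ImageListing -i <project>/images/ -d sensor_width_camera_database.txt -o <project>/matches/ -f 1400
openMVG_main_ComputeFeatures -i <project>/matches/sfm_data.json -o <project>/matches/ -m AKAZE_FLOAT -p HIGH --force 1
openMVG_main_ComputeMatches -i <project>/matches/sfm_data.json -o <project>/matches/ -n ANNL2 --force 1
openMVG_main_IncrementalSfM -i <project>/matches/sfm_data.json -m <project>/matches/ -o <project>/reconstruction_sequential/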
So, in conclusion, if you want to use the Global SfM module you need a smaller baseline between views as you move. Note that here we don't have dense RGB-D data for registering a continuous scan; we rely only on sparse feature correspondences.
@pmoulon Thank you so much! But I've got a newbie question: what is the baseline? Do you mean the distance between neighbouring views? And what is the sequential SfM pipeline? Do you mean IncrementalSfM? Thanks again!
The baseline is the distance between two cameras.
SequentialSfM and IncrementalSfM are the same thing (both names describe doing the reconstruction incrementally/sequentially).
@pmoulon May I know how you got the colorized point cloud? Did you use some dense reconstruction tool? Thanks.
@squashking openMVG contains a tool for that: "openMVG_main_ComputeSfM_DataColor".
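A minimal sketch of its usage (the file names here are placeholders):
openMVG_main_ComputeSfM_DataColor -i <reconstruction>/sfm_data.bin -o <reconstruction>/colorized.ply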
Closing the issue, since we now know that the Global SfM engine is failing due to a lack of feature matches between some images (the Global SfM engine uses triplets of images).