Hello,
On my laptop there is an NVIDIA GPU, and I was wondering which parts of the OpenMVG code use it?
I saw a CUDA compile flag somewhere in your code (maybe in the feature matching part), but I would prefer your clear answer over my assumption.
Thank you
Actually, there is no GPU support at the moment. The GPU option you see is for FLANN, and it is not used.
GPU support will come to OpenMVG gradually, as you can see in this recent PR:
That looks very promising, except for the red words at the end: "All checks have failed".
Do you have an idea of their release date?
No idea about the release date (but you can already use their fork to have some fun).
And you can help change the red flag to a green one ;-) so that it can be merged into OpenMVG.
Oh yes, with great pleasure. What should I do?
You can test the forks on your computer and help bring them up to date so they can be merged into the OpenMVG develop branch:
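For example, here is a minimal sketch of how such a fork could be built and tested locally (the fork URL and branch name are placeholders, not the actual PR; the build steps follow OpenMVG's standard CMake instructions):

```sh
# Clone the contributor's fork (placeholder URL and branch; use the ones from the PR)
git clone --recursive https://github.com/<contributor>/openMVG.git --branch <gpu_feature_branch>

# Standard OpenMVG out-of-source build: the top-level CMakeLists.txt lives under src/
mkdir openMVG_Build && cd openMVG_Build
cmake -DCMAKE_BUILD_TYPE=Release ../openMVG/src/
make -j4

# Run the unit tests to reproduce the CI failures locally
make test
```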
@alvaro562003 Any feedback?
@saihv @pmoulon
Recently I wanted to test the GPU feature extraction function in our project; sometimes it gives an error like:
Cannot create the designed Image_describer: CSIFT.
My GPU Info:
Not sure about the reason (it seems initialization failed); I have only succeeded in running the GPU extraction once!
Thanks,
Wei
Sorry, it was a config issue on my side; now it works! Sorry for the inconvenience!
@hiakru Don't hesitate to share any feedback ;-)
New reconstruction experiment based on CudaSift features:
Measure set: 1.5k images
Baseline: the previous reconstruction result based on SIFT features; almost all images were resectioned into the model, with mean fitting error ~1.0 m.
New experiment: only the feature extraction part was changed, from SIFT features to CudaSift features.
The result: #views: 1543, #poses: 575,
which means only ~37% of the images were resectioned into the model, with mean fitting error 2.24197 m.
Thanks,
Wei
Did you try various datasets?
Not yet, but it's a valuable metric to measure, not only feature speed!
I will continue testing and analysis.
Since the GPU is faster, you can try using -p HIGH to see if the problem comes from a different parameter configuration that leads to different features.
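For reference, a sketch of how the preset is passed to the feature extraction step (the file paths are placeholders for your own project):

```sh
# Extract features with a denser describer preset (NORMAL, HIGH, or ULTRA);
# HIGH/ULTRA detect more features, at the cost of extraction and matching time.
openMVG_main_ComputeFeatures -i sfm_data.json -o matches_dir -m SIFT -p HIGH
```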
Yes, I have already set -p ULTRA; the reconstruction has been running for >14 h and is still going. Will update when done!
Changed the feature extraction to the new CudaSift features; the results are below:
DescriberPreset | Match Ratio | Resection / Total Images | Fitting Error min (m) | mean (m) | median (m) | max (m)
-- | -- | -- | -- | -- | -- | --
ULTRA | 0.8 | 1543/1543 | 0.209692 | 72.9179 | 1.34442 | 1479.01
ULTRA | 0.7 | 1542/1543 | 0.168679 | 1.54705 | 1.17811 | 7.08971
ULTRA | 0.6 | 940/1543 | 0.205481 | 1.11791 | 0.957963 | 3.50832
ULTRA | 0.5 | 369/1543 | 0.339853 | 1.46702 | 1.06751 | 4.64446
HIGH | 0.8 | 1525/1543 | 0.165375 | 2.67423 | 1.43884 | 14.2605
The second experiment (ULTRA, match ratio 0.7) shows results on par with the original model that used SIFT features.
PS: all the test results are based on our measure set, FYI.
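For context, the match ratio in the table is the nearest-neighbor distance ratio applied at the matching step; a sketch of how it would be set (file paths are placeholders):

```sh
# Match features with a given nearest-neighbor distance ratio (-r);
# a lower ratio is stricter and keeps fewer, more reliable matches.
openMVG_main_ComputeMatches -i sfm_data.json -o matches_dir -r 0.7
```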
Thanks,
Wei
Does your table include the CPU baseline?
No.
As mentioned in my previous reply:
Measure set: 1.5k images
Baseline: reconstruction result based on SIFT features, which used CPU extraction;
almost all images were resectioned into the model, with mean fitting error ~1.0 m.