openMVG: Question: Which part of openMVG uses the GPU?

Created on 20 Apr 2017 · 17 comments · Source: openMVG/openMVG

Hello,
My laptop has an NVIDIA GPU, and I was wondering which parts of the OpenMVG code use it.

I saw a CUDA compile flag somewhere in your code (maybe in the feature matching part), but I would prefer your clear answer to my assumption.

Thank you

Label: question

All 17 comments

Actually there is no GPU support. The GPU option you saw is for FLANN, and it is not used.

GPU support will come to OpenMVG gradually, as you can see from some recent PRs:

That looks very promising, except for the red words at the end: "All checks have failed".
Do you have an idea of their release date?

No idea about the release date (but you can already use their fork to have some fun).

And you can help to change the red flag to a green flag ;-) in order to have this merged in OpenMVG.

Oh yes, with great pleasure. What should I do?

You can test the forks on your computer and help bring them up to date so they can be merged into the OpenMVG develop branch:

  • to add a CUDA SIFT implementation #837
  • to add various GPU stuff #845

@alvaro562003 Any feedback?

@saihv @pmoulon
Recently I wanted to test the GPU feature extraction function in our project; sometimes it gives an error like:
Cannot create the designed Image_describer: CSIFT.

My GPU info:
[screenshot of GPU information]

Not sure about the reason (initialization seems to fail); GPU extraction only ran successfully once!

Thanks,
Wei

Sorry, it was a configuration issue on my side; it works now! Sorry for the inconvenience!

@hiakru Don't hesitate to share any feedback ;-)

New reconstruction experiment based on the cudasift features:
Measure set: 1.5k images

Baseline: the previous SIFT-feature-based reconstruction resectioned almost all images into the model, with a mean fitting error of ~1.0 m.

New experiment: only the feature extraction part was changed, from
[screenshot: original SIFT feature configuration]
to
[screenshot: CUDA SIFT feature configuration]

The result: #views: 1543, #poses: 575,
which means only ~37% of the images were resectioned into the model, with a mean fitting error of 2.24197 m.
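The resection fraction above can be checked with a quick calculation (plain Python, numbers taken from the reported result):

```python
# Reported reconstruction result: 1543 views, 575 registered poses.
views = 1543
poses = 575

# Fraction of images successfully resectioned into the model.
resection_ratio = poses / views
print(f"{resection_ratio:.1%}")  # 37.3%
```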

Thanks,
Wei

Did you try various datasets?

Not yet, but it's a valuable metric to measure, not just feature extraction speed!
I will continue testing and analysing.

Since the GPU is faster, you can try using -p HIGH to see whether the problem comes from a different parameter configuration that leads to different features.

Yes, I have already set -p ULTRA; the reconstruction has been running for >14 h and is still going. I will update when it is done!
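For reference, a minimal sketch of how the extraction step could be re-run with a different preset. The binary name and the -i/-o/-m/-p flags follow the stock openMVG_main_ComputeFeatures tool, the CSIFT describer name is taken from the error message earlier in this thread, and the input/output paths are hypothetical placeholders; adjust everything to your own build and scene:

```python
import subprocess

# Hypothetical paths; replace with your own sfm_data.json and output directory.
cmd = [
    "openMVG_main_ComputeFeatures",
    "-i", "sfm_data.json",   # input SfM scene file
    "-o", "matches_dir",     # output directory for features/descriptors
    "-m", "CSIFT",           # CUDA SIFT describer name assumed from the fork
    "-p", "ULTRA",           # describer preset: NORMAL / HIGH / ULTRA
]
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually run the binary
```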

Changing the feature extraction to the new CUDA SIFT features gives the results below:

DescriberPreset | Match ratio | Resectioned/Total images | Min fitting error (m) | Mean (m) | Median (m) | Max (m)
-- | -- | -- | -- | -- | -- | --
ULTRA | 0.8 | 1543/1543 | 0.209692 | 72.9179 | 1.34442 | 1479.01
ULTRA | 0.7 | 1542/1543 | 0.168679 | 1.54705 | 1.17811 | 7.08971
ULTRA | 0.6 | 940/1543 | 0.205481 | 1.11791 | 0.957963 | 3.50832
ULTRA | 0.5 | 369/1543 | 0.339853 | 1.46702 | 1.06751 | 4.64446
HIGH | 0.8 | 1525/1543 | 0.165375 | 2.67423 | 1.43884 | 14.2605
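To make the trade-off in the table easier to read, a small script (data copied from the rows above) computes the resection percentage per configuration:

```python
# (preset, match ratio, resectioned, total, mean fitting error in metres)
rows = [
    ("ULTRA", 0.8, 1543, 1543, 72.9179),
    ("ULTRA", 0.7, 1542, 1543, 1.54705),
    ("ULTRA", 0.6,  940, 1543, 1.11791),
    ("ULTRA", 0.5,  369, 1543, 1.46702),
    ("HIGH",  0.8, 1525, 1543, 2.67423),
]

for preset, ratio, resec, total, mean_err in rows:
    pct = 100.0 * resec / total
    print(f"{preset:5s} ratio={ratio}: {pct:5.1f}% resectioned, mean error {mean_err} m")
# The ULTRA / 0.7 row keeps ~99.9% of the images with a mean error closest
# to the ~1.0 m CPU-SIFT baseline reported earlier in the thread.
```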

The second experiment (ULTRA, match ratio 0.7) is on par with the original model built from SIFT features.

PS: all test results are based on our measure set, FYI.

Thanks,
Wei

Does your table include the CPU baseline?

No.

As mentioned in a previous reply:
Measure set: 1.5k images
Baseline: the SIFT-feature-based reconstruction using CPU extraction;
almost all images resectioned into the model, with a mean fitting error of ~1.0 m.

