OpenMVG: Adding images to an existing point cloud.

Created on 22 Apr 2017  ·  7 comments  ·  Source: openMVG/openMVG

I'm trying to use openMVG to process an image set and create a point cloud. However, the goal is to generate the point cloud by feeding images to the pipeline one by one, since one of the future targets is to handle the images in near real time.

  • Can openMVG handle images like this? Does it need to regenerate the whole point cloud, or can it just increment the existing one with the new data?
  • Is it possible to indicate the neighbouring images for match determination?
  • Do we need to recompute all the features and matches for the already processed images when adding a new one?
  • Can we limit the number of features to compute?

Thank you

question


All 7 comments

Hi @GSlzr,

So your question is about:
1- building a point cloud by reconstructing a scene from some images
2- localizing images and extending this scene in real time

OpenMVG provides tools to perform those steps, but for the moment the localization tools do not allow extending the scene, and they do not run in real time.

Can openMVG handle images like this? Does it need to regenerate the whole point cloud, or can it just increment the existing one with the new data?

Is it possible to indicate the neighbouring images for the match determination?

  • Yes, you can indicate in a file the image pairs you want to compare for SfM:
    https://github.com/openMVG/openMVG/blob/master/src/software/SfM/main_ComputeMatches.cpp#L114
    The file must be a list of pairs of integers corresponding to the ViewIds used in sfm_data.json.
  • For the localization part this is not possible for the moment; the nearest-neighbour search is based on a KD-tree and is quite fast. If you want something different, feel free to write your own implementation by adding a new one behind the abstract interface.
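The pair-list file described above is plain text, one pair of ViewIds per line. As a minimal sketch (the ViewIds and file name below are made up for illustration; check your openMVG version for the exact command-line flag that consumes the file), it can be generated like this:

```cpp
// Sketch: write a pair-list file for main_ComputeMatches.
// Each line holds two ViewIds (as used in sfm_data.json) whose
// images should be compared during matching.
#include <fstream>
#include <string>
#include <utility>
#include <vector>

void WritePairList(const std::string & path,
                   const std::vector<std::pair<int, int>> & pairs) {
  std::ofstream out(path);
  for (const auto & p : pairs)
    out << p.first << ' ' << p.second << '\n';  // format: "i j" per line
}
```

For example, `WritePairList("pair_list.txt", {{0, 1}, {1, 2}})` would restrict matching to views 0–1 and 1–2.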

Do we need to recompute all the features and matches for the already processed images when adding a new one?

  • No, in localization mode you compute features only for the new images.

Can we limit the number of features to compute?

  • No, but you can easily add your own image_describer implementation to the existing pipeline and use it.
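One way such a custom image_describer could cap the feature count (a generic sketch, not openMVG's actual API; the `Feature` type here is a simplified stand-in) is to keep only the N strongest detections by response:

```cpp
// Sketch: post-filter a detected feature set down to the max_count
// strongest responses, as a custom image_describer could do.
#include <algorithm>
#include <cstddef>
#include <vector>

struct Feature { float x, y, scale, response; };  // simplified stand-in type

std::vector<Feature> KeepStrongest(std::vector<Feature> feats,
                                   std::size_t max_count) {
  if (feats.size() <= max_count) return feats;
  // Move the max_count highest-response features to the front, then truncate.
  std::partial_sort(feats.begin(), feats.begin() + max_count, feats.end(),
                    [](const Feature & a, const Feature & b) {
                      return a.response > b.response;
                    });
  feats.resize(max_count);
  return feats;
}
```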

By the way, it seems your problem is closer to a SLAM (Simultaneous Localization And Mapping) application (if you have a very dense, video-like image set).

Looking forward to continuing the discussion to better understand your target.

Hi @pmoulon,

Thank you for your answer.

Adding the new image points to an existing point cloud is relevant, since the final target involves integrating openMVG with ROS: a module subscribes to a ROS topic carrying an image and a pose, continuously computes features and descriptors, and generates the point cloud.

The idea of using openMVG came about due to the flexibility of the library.

Do you know of anyone that tried to use ROS with openMVG?

I will try to work on the library to have a tool to update the point cloud.

I still have one question: can we use pose information to determine which images to match? That is, use the position to determine the closest images and only try to match those.
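That last idea can be sketched as a simple radius filter over the known camera positions (the `Vec3` type, function name, and threshold are illustrative, not an openMVG API); the resulting pairs could then feed the pair-list mechanism mentioned earlier:

```cpp
// Sketch: select candidate match pairs from known camera positions,
// keeping only view pairs whose cameras are closer than a radius.
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

struct Vec3 { double x, y, z; };  // simplified position type

std::vector<std::pair<int, int>> PairsWithinRadius(
    const std::vector<Vec3> & positions, double radius) {
  std::vector<std::pair<int, int>> pairs;
  for (std::size_t i = 0; i < positions.size(); ++i) {
    for (std::size_t j = i + 1; j < positions.size(); ++j) {
      const double dx = positions[i].x - positions[j].x;
      const double dy = positions[i].y - positions[j].y;
      const double dz = positions[i].z - positions[j].z;
      if (std::sqrt(dx * dx + dy * dy + dz * dz) <= radius)
        pairs.emplace_back(static_cast<int>(i), static_cast<int>(j));
    }
  }
  return pairs;
}
```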

Just another question: I cannot run the localization example (http://openmvg.readthedocs.io/en/latest/software/localization/localization/) after running the available tutorial.

You must run it with a dataset in which the image was not present at the start.
So, in order to try it:

  • put aside one image of the dataset,
  • run SfM
  • run Localization with the image you put aside.

ROS + OpenMVG:

@GSlzr Any feedback?

@pmoulon Sorry about the late reply, but we were trying to get something working. We have a more or less working method to append to an existing .ply file. It still needs some improvement.

Regarding the ROS/openMVG integration, it is getting started; thank you for the pointer to the link, it should work. As soon as we have something more, I'll let you know.

@pmoulon Just to give some status on the work. Currently we have both ROS and openMVG more or less working together; we are now working on the handling of the pose information and trying to see how we can include the orientation information in the openMVG pipeline. Do you happen to have any idea on how to do this?

We managed to append points to an existing cloud. However, it is not really clean yet. We'll put some more work into this to see how it can be done in a more generic way.

Sorry I forgot to answer.

It depends on what you know (I'm not sure whether you mean only the rotation, the camera translation, or both):

If you know the rotation, you can use the solver that estimates the camera translation:

If you know the translation of the camera, you can constrain it with a view position prior (ViewPositionPrior) and let the bundle adjustment (BA) move the camera close to your desired position.
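A position prior acts as an extra residual in the bundle adjustment cost: the optimizer is pulled toward the prior position while still minimizing reprojection error. A minimal sketch of such a residual (the function name and weighting scheme are illustrative, not openMVG's exact implementation):

```cpp
// Sketch: a per-camera position-prior residual for bundle adjustment.
// The BA minimizes this alongside reprojection error, pulling the
// estimated camera centre toward the prior (e.g. GPS/ROS pose) position.
#include <array>

std::array<double, 3> PositionPriorResidual(
    const std::array<double, 3> & estimated_centre,
    const std::array<double, 3> & prior_centre,
    double weight) {  // larger weight = stronger trust in the prior
  return {weight * (estimated_centre[0] - prior_centre[0]),
          weight * (estimated_centre[1] - prior_centre[1]),
          weight * (estimated_centre[2] - prior_centre[2])};
}
```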

Any plans to contribute something back, or thoughts on how we could work together on some extensions?
