localize new images into an existing reconstruction #175

Closed
Dorothy-2016 opened this issue May 12, 2017 · 9 comments

@Dorothy-2016

Hi @paulinus, I want to localize new images into an existing reconstruction, so I tried to modify incremental_reconstruction to work that way: I detect features only for the new image, match it against the images that were already processed, and then run incremental_reconstruction. But for the same data, doing it this way usually fails to reconstruct correctly, and some images cannot even be added to the reconstruction. If I don't make these changes, I do get the camera location, and the location is right. Can you tell me why? Why can the new image not be added to the existing reconstruction? I have been confused by this question for a long time. Looking forward to your reply, thanks!

@paulinus
Member

Hi @Dorothy-2016,

the incremental_reconstruction pipeline uses the tracks graph to understand the correspondence between features from different images. If you add a new image and compute matches for it, you will need to update the tracks graph accordingly.

This will require extending an existing track when a new feature point matches a point already on it, and creating a new track when a new feature point matches an old feature point that does not belong to any track yet.
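
Roughly, that bookkeeping looks like the sketch below (illustrative names only, not the actual tracks-graph code; here track_of maps an (image, feature_id) pair to a track id):

# Illustrative sketch, not OpenSfM's tracks API.
# track_of: dict mapping (image, feature_id) -> track_id for the existing reconstruction.
# matches: dict mapping each old image to a list of (new_feature_id, old_feature_id) pairs.
def add_image_to_tracks(track_of, new_image, matches, next_track_id):
    for old_image, pairs in matches.items():
        for new_fid, old_fid in pairs:
            old_key = (old_image, old_fid)
            new_key = (new_image, new_fid)
            if old_key in track_of:
                # The old feature already belongs to a track: extend that track.
                track_of.setdefault(new_key, track_of[old_key])
            elif new_key in track_of:
                # The new feature was already assigned via another match: extend its track.
                track_of[old_key] = track_of[new_key]
            else:
                # Neither feature is on a track yet: start a new one.
                track_of[old_key] = next_track_id
                track_of[new_key] = next_track_id
                next_track_id += 1
    # A complete version would also merge tracks when the new image links two of them.
    return next_track_id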

Could that be the part you are missing, or you are already doing that?

@Dorothy-2016
Author

Thank you for your reply, @paulinus! I understand your advice. Does the create_tracks step do this job? I think that if I give the whole matching result, including the new frame, to this step, I can get the whole tracks graph including the new frame. Or do I understand it wrong?
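
For example, with the standard per-step commands (assuming the usual dataset layout, as in data/berlin above), re-running the steps after adding the new image should make create_tracks see the combined matches:

bin/opensfm detect_features data/berlin
bin/opensfm match_features data/berlin
bin/opensfm create_tracks data/berlin
bin/opensfm reconstruct data/berlin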

@Dorothy-2016
Author

I also have another question. Even though the whole SfM pipeline is faster than other methods I have tried, I need to improve the speed further so it can run on a mobile phone. I found that the detection and matching parts cost more time than the other parts, so I changed the feature type from SIFT to ORB and match the current frame only against the one frame before it. The problem is that no matter how many points ORB detects, the reconstruction step cannot get the right results; I found that the pairs returned by compute_image_pairs were always empty. I'm confused, can you give me some advice? Thank you very much!
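
(The neighbor-only matching I mean is just something like this, in plain Python; the function name and pair format are only illustrative, not what compute_image_pairs actually uses:)

# Illustrative sketch: match each frame only against the frame right before it.
def sequential_pairs(image_names):
    ordered = sorted(image_names)
    return [(ordered[i - 1], ordered[i]) for i in range(1, len(ordered))]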

@paulinus
Member

ORB features should work fine for SfM. Do you know if the matches generated are correct? You can plot them using

bin/plot_matches data/berlin

Did you modify features.py to add ORB? It would be great to integrate those changes once it is working.
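
If it helps, a quick standalone sanity check of the ORB matches (plain OpenCV, independent of the OpenSfM pipeline; the image file names below are just placeholders) is to brute-force match the binary descriptors with Hamming distance and draw the result:

import cv2

# ORB descriptors are binary, so match them with Hamming distance.
img1 = cv2.imread('image1.jpg', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('image2.jpg', cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=10000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Draw the 100 best matches for visual inspection.
vis = cv2.drawMatches(img1, kp1, img2, kp2, matches[:100], None)
cv2.imwrite('orb_matches.jpg', vis)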

@Dorothy-2016
Author

Yes, I have modified features.py to add ORB using OpenCV. I also used plot_features to look at the points that have been detected, but the result is not as good as SIFT or SURF. I set the maximum number of ORB feature points to 10000; here is what I get:
[image: figure_1 — ORB features]
[image: figure_2 — ORB features]
However, when I use SIFT, the results look like this:
[image: figure_1_sift — SIFT features]
[image: figure_2_sift — SIFT features]
And as you advised, I tested plot_matches; the results are as follows:
[image: figure_1_match_figure_2 — ORB matches]
[image: figure_1_match_figure_2 sift — SIFT matches]

From this comparison, I'm a little confused about the ORB result. The reported number of ORB points is bigger than for SIFT, but looking at the result pictures, I think SIFT gives many more points than ORB, and with SIFT the features on the floor are also detected.

Here is the code that adds ORB:

def extract_features_orb(image, config):
    # nfeatures is hard-coded here; it could be read from config instead.
    detector = cv2.ORB_create(nfeatures=10000)
    points = detector.detect(image, None)
    points, desc = detector.compute(image, points)
    # Note: root_feature is meant for SIFT-like float descriptors; ORB descriptors are binary.
    if config.get('feature_root', False):
        desc = root_feature(desc, True)
    points = np.array([(i.pt[0], i.pt[1], i.size, i.angle) for i in points])
    return points, desc

Can you help me find where the problem is? Thanks!

@Dorothy-2016
Author

I have uploaded the dataset here https://drive.google.com/drive/folders/0B9KhOFLoD2ytdnowc0JjUUwzNEE

@BrookRoberts

Have just seen this - not sure how much we might be overlapping but yesterday I started looking at making an easy way to extend existing reconstructions.

I was planning on seeing how easy it would be to
a) have OpenSfM default to not recalculating parts of the reconstruction that are already made
b) make it easy both to add new images to a reconstruction and to locate images in a reconstruction without adjusting the placement of the rest (possibly a faster way of just localising new images if you trust the initial reconstruction)

@paulinus Does any of this stuff already exist? (I know e.g. incremental_reconstruction is structured in a way such that most of this work is done) Would you agree that some of this stuff would be useful core features?

@BrookRoberts

#178 allows you to add new images to an existing reconstruction.

@YanNoun
Member

YanNoun commented Mar 15, 2021

Duplicate of https://github.com/mapillary/OpenSfM/pull/178/files, closing.

YanNoun closed this as completed Mar 15, 2021