The first step of a “structure from motion” (SfM) process is detecting the “feature points” of each image. These are distinctive points that the software can detect reliably, locate in the 2D image space, and store in a database for each picture.
Some of these will then become tie points (or matching points) after pair selection and matching between multiple images, and together they will form the sparse point cloud.
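As an illustration of what makes a good feature point, here is a minimal Harris-style corner response in Python with NumPy. This is only a sketch of the general idea, not the detector any particular SfM package actually uses; the image, window size, and the constant `k` are arbitrary choices:

```python
import numpy as np

# Synthetic 32x32 image: a bright square on a dark background.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0

# Image gradients (np.gradient returns the derivative along axis 0, then axis 1).
Iy, Ix = np.gradient(img)

def box3(a):
    """Sum each pixel's 3x3 neighborhood (zero-padded at the borders)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

# Structure tensor entries, summed over a 3x3 window around each pixel.
Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)

# Harris corner response: positive at corners, negative on edges, zero on flat areas.
k = 0.05
R = (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2

print(R[8, 8])    # corner of the square: positive response
print(R[8, 16])   # midpoint of an edge: negative response
print(R[16, 16])  # flat interior: zero response
```

The response is high only where the intensity changes in two directions at once, which is exactly why corner-like, high-contrast points are detected so reliably.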
Structure from motion software finds feature points more easily where there is more contrast, that is, a sharper transition between light and shadow. A snow-covered area, a glossy surface, or a white beach of fine sand are very tricky scenes for finding feature points.
In the photo of this post you can clearly see where the software (in this case LiMapper by Green Valley International) managed to find feature points easily and where it had to “struggle” to extract them (as happened on the beaten part of the dirt road).
The exposure of the photos plays a large part in the identification of feature points. In an overexposed image, the software will find fewer points than in a correctly exposed one.
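You can see why with a quick simulation: boost and clip a synthetic textured image to mimic severe overexposure, then count gradient-based feature candidates. This is only a sketch, assuming a plain gradient-magnitude threshold as a crude stand-in for a real detector; the gain of 5 and the threshold of 0.2 are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic textured surface: random intensities in [0, 1].
img = rng.uniform(size=(64, 64))

# Simulate severe overexposure: boost brightness, then clip to the sensor range.
# Large saturated patches end up stuck at 1.0 and lose all local contrast.
overexposed = np.clip(img * 5.0, 0.0, 1.0)

def count_candidates(image, thresh=0.2):
    """Count pixels whose gradient magnitude exceeds a threshold --
    a crude stand-in for how many feature points a detector could find."""
    gy, gx = np.gradient(image)
    return int(np.count_nonzero(np.hypot(gx, gy) > thresh))

n_normal = count_candidates(img)
n_over = count_candidates(overexposed)
print(n_normal, n_over)  # the overexposed image yields fewer candidates
```

The saturated regions are flat at maximum brightness, so their gradients vanish and fewer candidate points survive, mirroring what happens on snow or bright sand.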
In this image you can see:
- green points: feature points that have become tie points because a corresponding point was found in other photos of the dataset;
- gray points: feature points for which no correspondence was found in the other photographs;
- red points: feature points that were classified as tie points but were then discarded because they did not pass a threshold filter.
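The three categories above can be reproduced with a toy nearest-neighbor matcher. This is not LiMapper's actual algorithm; it is a sketch that uses Lowe's ratio test as the “threshold filter”, with made-up descriptors, offsets, and thresholds chosen so the outcome of each case is easy to follow:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 32                # descriptor length (arbitrary for this toy example)
MATCH_THRESH = 1.0      # maximum distance for a candidate match
RATIO_THRESH = 0.75     # Lowe's ratio test threshold

# Descriptors of feature points detected in image A.
A = rng.uniform(size=(12, DIM))

e0 = np.zeros(DIM); e0[0] = 0.05   # small fixed offsets so the toy
e1 = np.zeros(DIM); e1[1] = 0.06   # distances are easy to reason about

# Image B: features 0-4 of A have one clear counterpart (-> tie points),
# features 5-8 have two near-identical counterparts (-> ambiguous, discarded),
# features 9-11 have no counterpart at all (-> unmatched).
B = np.vstack([A[0:5] + e0,
               A[5:9] + e0,
               A[5:9] + e1])

labels = []
for a in A:
    d = np.sort(np.linalg.norm(B - a, axis=1))
    if d[0] >= MATCH_THRESH:
        labels.append("gray")    # no correspondence found in the other image
    elif d[0] / d[1] < RATIO_THRESH:
        labels.append("green")   # unambiguous match: becomes a tie point
    else:
        labels.append("red")     # matched, but discarded by the ratio filter

print(labels)
```

The red case is the interesting one: the feature does find a counterpart, but a second counterpart is almost as close, so the match is too ambiguous to trust and the filter throws it away.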