# How the nodes work (reconstructed, personal project)

License: CC BY-SA
## CameraInit

Reads the image metadata and looks up the sensor width in the sensor database.
Calculates viewpoints.sfm from the image metadata and the intrinsics.
The intrinsics K matrix is serialized row-major as "f;0;ppx;0;f;ppy;0;0;1", i.e.

    K = [ f  0  ppx ]
        [ 0  f  ppy ]
        [ 0  0  1   ]

Creates cameraInit.sfm and calculates the distortionParams.
Although cameraInit.sfm reuses data from viewpoints.sfm, those parts are slightly modified.
Both .sfm files are JSON files.
FoV https://www.scantips.com/lights/fieldofviewmath.html
https://www.pointsinfocus.com/tools/depth-of-field-and-equivalent-lens-calculator/#{%22c%22:[{%22f%22:13,%22av%22:%228%22,%22fl%22:50,%22d%22:3048,%22cm%22:%220%22}],%22m%22:0}
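
As background for the links above, a quick sketch of the FoV math (horizontal FoV from sensor width and focal length; the 36 mm / 50 mm numbers are just an illustration):

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal field of view in degrees from sensor width and focal length."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

print(horizontal_fov_deg(36.0, 50.0))  # full-frame sensor, 50 mm lens -> ~39.6 deg
```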
cameraInit.sfm / viewpoints.sfm file structure
(viewpoints.sfm is the file generated when importing images into Meshroom; cameraInit.sfm is the output that is used by the next node):
----
version: [Array]
views: [Array] _the different views (images) with unique ids and metadata_
intrinsics: [Array] _the camera intrinsics; views (images) with the same intrinsics share an id, while a different zoom level or camera gets a new id. Includes principalPoint and distortionParams (see the sketch below)_
----
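
A minimal sketch of reading the intrinsics back out and assembling K. This assumes the 2020-era field names (e.g. pxFocalLength, principalPoint, intrinsicId), which may differ between AliceVision versions; numeric values are stored as strings:

```python
import json
import numpy as np

def load_intrinsics(sfm_path):
    """Build a 3x3 K matrix per intrinsic id from cameraInit.sfm."""
    with open(sfm_path) as f:
        sfm = json.load(f)
    k_matrices = {}
    for intr in sfm["intrinsics"]:
        f_px = float(intr["pxFocalLength"])  # focal length in pixels (assumed field name)
        ppx, ppy = (float(v) for v in intr["principalPoint"])
        k_matrices[intr["intrinsicId"]] = np.array([
            [f_px, 0.0,  ppx],
            [0.0,  f_px, ppy],
            [0.0,  0.0,  1.0],
        ])
    return k_matrices
```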
## Feature Extraction (SIFT)
The .desc files are binary; the .feat files are ASCII text.
view_id.sift.feat -> table with the extracted *features*
view_id.sift.desc -> descriptors
Reference: http://www.vlfeat.org/overview/sift.html
https://dsp.stackexchange.com/questions/24346/difference-between-feature-detector-and-descriptor
---
The image origin (top-left corner) has coordinate (0,0).
The lower-right corner is defined by the image dimensions;
for a 5000x2000 landscape image this is (5000,2000):
    0-------------------------5000
    .       x
    .
    .            x
    .
    .   x
    .                 x
    2000
---
view_id.sift.feat matrix (one row per feature, no column titles):

    x        y        scale    orientation
    2711.52  1571.74  308.335  4.75616

(to plot this, negate y (multiply by -1), since the image y axis points down)
---
scale: the feature size (drawn as a square/circle)
orientation: the angle of the feature's orientation line, in radians
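
A minimal sketch for plotting such a .feat file (assuming matplotlib and the four-column layout above; the file name is just the example one):

```python
import matplotlib.pyplot as plt
import numpy as np

# columns: x, y, scale, orientation (radians)
x, y, scale, orientation = np.loadtxt("view_id.sift.feat").T

plt.scatter(x, -y, s=scale)  # negate y because the image y axis points down
for xi, yi, si, oi in zip(x, y, scale, orientation):
    # orientation drawn as a line of length `scale` from the feature origin
    plt.plot([xi, xi + si * np.cos(oi)], [-yi, -yi - si * np.sin(oi)])
plt.gca().set_aspect("equal")
plt.show()
```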
------
## ImageMatching
Determines which image pairs to match (using the vocabulary tree approach):

    197018718 907017304 1638077662
    907017304 1638077662

_Each line lists a view id followed by the view ids it will be matched against. Schematically:_

    W X Y Z
    X Y Z
    Y Z

W will be matched with X, Y and Z, then X with Y and Z, and so on.
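
A minimal sketch of expanding this pair list into explicit pairs (the file name imageMatches.txt is an assumption; only the whitespace-separated layout above is relied on):

```python
def expand_image_pairs(path):
    """Turn each 'ref id id ...' line into explicit (ref, other) pairs."""
    pairs = []
    with open(path) as f:
        for line in f:
            ids = line.split()
            pairs.extend((ids[0], other) for other in ids[1:])
    return pairs

# For the example above this yields:
# [('197018718', '907017304'), ('197018718', '1638077662'),
#  ('907017304', '1638077662')]
print(expand_image_pairs("imageMatches.txt"))
```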
-------
## FeatureMatching
0.matches.txt matches all features of the image pairs from ImageMatching:

    197018718 907017304      # viewid1 viewid2
    1                        # number of describer types in this pair (here only sift)
    sift 2632                # describer type and number of detected matches
    44 38                    # feature index in image 1, feature index in image 2
    183 122
    ...
    907017304 1638077662     # viewid2 viewid3
    1
    sift 2707
    90 74
    110 134
    ...
    197018718 1638077662     # viewid1 viewid3
    1
    sift 1929
    129 74
    ...
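
A minimal parser sketch for this layout (assuming the structure annotated above holds for every pair block):

```python
def read_matches(path):
    """Parse 0.matches.txt into {(viewid1, viewid2): [(featidx1, featidx2), ...]}."""
    matches = {}
    with open(path) as f:
        while True:
            header = f.readline()
            if not header.strip():
                break  # end of file
            view_a, view_b = header.split()
            n_types = int(f.readline())  # number of describer types
            pair = []
            for _ in range(n_types):
                desc_type, n_matches = f.readline().split()
                for _ in range(int(n_matches)):
                    i, j = f.readline().split()
                    pair.append((int(i), int(j)))
            matches[(view_a, view_b)] = pair
    return matches
```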
## StructureFromMotion

Calculates the camera poses:
"poses": [
{
"poseId": "797046670",
"pose": {
"transform": {
"rotation": [
"0.99328929576636837",
"-0.10823948227899582",
"0.040750329960956289",
"0.11564708144936042",
"0.92507429929971252",
"-0.36175031904255572",
"0.0014585843125640811",
"0.36403537637312233",
"0.93138397950613383"
],
"center": [
"-0.16712305009175787",
"1.6837678457953795",
"0.56603363841980026"
]
},
"locked": "1"
}
},
]
}
rotation holds the 3x3 camera rotation matrix as 9 string values, and center is the camera position. Meshroom's UI converts the rotation matrix to a quaternion:
https://github.com/alicevision/meshroom/blob/bc1eb83d92048e6f888c4762c7ffcaab50395da6/meshroom/ui/reconstruction.py#L293
https://math.stackexchange.com/questions/893984/conversion-of-rotation-matrix-to-quaternion
https://www.euclideanspace.com/maths/geometry/rotations/conversions/matrixToQuaternion/
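
A minimal sketch of the standard matrix-to-quaternion conversion described in the links above (assuming the 9 rotation values form a row-major 3x3 matrix; transpose first if the convention differs):

```python
import math

def rotation_matrix_to_quaternion(m):
    """Convert a 3x3 rotation matrix (indexable as m[row][col]) to (w, x, y, z)."""
    trace = m[0][0] + m[1][1] + m[2][2]
    if trace > 0:
        s = 0.5 / math.sqrt(trace + 1.0)
        w = 0.25 / s
        x = (m[2][1] - m[1][2]) * s
        y = (m[0][2] - m[2][0]) * s
        z = (m[1][0] - m[0][1]) * s
    elif m[0][0] > m[1][1] and m[0][0] > m[2][2]:
        s = 2.0 * math.sqrt(1.0 + m[0][0] - m[1][1] - m[2][2])
        w = (m[2][1] - m[1][2]) / s
        x = 0.25 * s
        y = (m[0][1] + m[1][0]) / s
        z = (m[0][2] + m[2][0]) / s
    elif m[1][1] > m[2][2]:
        s = 2.0 * math.sqrt(1.0 + m[1][1] - m[0][0] - m[2][2])
        w = (m[0][2] - m[2][0]) / s
        x = (m[0][1] + m[1][0]) / s
        y = 0.25 * s
        z = (m[1][2] + m[2][1]) / s
    else:
        s = 2.0 * math.sqrt(1.0 + m[2][2] - m[0][0] - m[1][1])
        w = (m[1][0] - m[0][1]) / s
        x = (m[0][2] + m[2][0]) / s
        y = (m[1][2] + m[2][1]) / s
        z = 0.25 * s
    return (w, x, y, z)
```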
_cameraInit.sfm is augmented by the StructureFromMotion node and saved as cameras.sfm:_
version: [Array]
featuresFolder: ["node-internal-folder-path"]
matchesFolder: ["node-internal-folder-path"]
views: [Array] _the different views (images) with unique ids and metadata_
intrinsics: [Array] _the camera intrinsics; views (images) with the same intrinsics share an id, while a different zoom level or camera gets a new id. Includes principalPoint and distortionParams_
poses: [Array] _the camera poses_
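
A minimal sketch of reading the poses back out of cameras.sfm, reusing rotation_matrix_to_quaternion from the sketch above (values are string-encoded, as in the excerpt shown earlier):

```python
import json
import numpy as np

with open("cameras.sfm") as f:
    sfm = json.load(f)

for pose in sfm["poses"]:
    t = pose["pose"]["transform"]
    # assuming row-major order; transpose R if the convention differs
    R = np.array([float(v) for v in t["rotation"]]).reshape(3, 3)
    center = [float(v) for v in t["center"]]
    print(pose["poseId"], center, rotation_matrix_to_quaternion(R))
```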
-----
## MeshFiltering

Uses https://github.com/bldeng/MeshSDFilter
---
Masking idea for generic background
https://docs.opencv.org/2.4/modules/features2d/doc/common_interfaces_of_descriptor_extractors.html
Descriptors are generated from features, so it should be possible to filter descriptors using masks.
We do not want to generate all the masks manually, so we could reuse the results from FeatureMatching:
when we select features to be masked in one image, the matching features in the other images can be masked as well.
The corresponding descriptors need to be updated; then we can run SfM only on the relevant area.
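
A rough sketch of the mask-filtering step for a .feat file (hypothetical helper, assuming a binary mask image where white means keep; the paired binary .desc file would need its entries removed at the same indices, which is left out here):

```python
import numpy as np
from PIL import Image

def mask_feat_file(feat_path, mask_path, out_path):
    """Keep only features whose (x, y) falls inside the white area of a mask."""
    mask = np.array(Image.open(mask_path).convert("L")) > 0  # True = keep
    kept = []
    with open(feat_path) as f:
        for line in f:
            x, y = (float(v) for v in line.split()[:2])
            if mask[int(y), int(x)]:  # mask is indexed as [row, column]
                kept.append(line)
    with open(out_path, "w") as f:
        f.writelines(kept)
    # NOTE: the corresponding .desc entries (same indices) must be dropped
    # as well, otherwise the feat/desc files go out of sync.
```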
GUI ideas:
- double-click on FeatureMatching to load the feature-masking GUI
- select features and highlight the matching features in the other images
- a button for brushes to select include/exclude areas
- a new node for re-computation of describers
- the re-computation node could have a button or icon to mark it as "user interaction required"
---

@skinkie commented Jan 6, 2020:

I think we are still missing the option to remove invalid matches.

@natowi (author) commented Jan 6, 2020:

@skinkie Yes, but the option for removing invalid matches goes hand in hand with modifying and creating features.
I am still reconstructing the formats used and how the nodes work, as there is no documentation apart from the code. The openMVG documentation is still sometimes helpful, but a lot has changed in AliceVision.

Some ideas:

- maybe the features viewer could be used as a base; a new feature editor could enable feature removal and addition. I think this would also require updating the matches.
- connected editing (applied to all images in parallel / transferred) vs. unconnected editing (modifying a feature on one image is not transferred to the others)
- the position relative to other detected features could be used
- a side-by-side 2D image-matching preview would be useful

@skinkie commented Jan 6, 2020:

> the option for removing invalid matches goes hand in hand with modifying and creating features.

I consider these things two separate scenarios. Modifying, deleting, and creating feature describers are the basis for creating matches between images. As we have seen, repetitive patterns will create false positives. So we are looking at multiple solutions: allow us to remove the invalid image pairs, mask the repetitive pattern, or create/select a higher-quality feature that should be present, similar to using SIFT + AKAZE, but then SIFT + manual.

> I am still reconstructing the formats used and how the nodes work as there is no documentation apart from the code.

I value your contribution greatly. I wonder if you are part of the project team (as in: any of the companies taking part in the Horizon 2020 project) or doing this because you have a great interest in this open-source product.

I like the connected-editing approach, but in a different paradigm. Maybe we could even make this work in a way that lets a user "walk" through all the images, with the photos in the project forming a graph.

@natowi (author) commented Jan 7, 2020:

> I wonder if you are part of the project team (as in: any of the companies taking part in the Horizon 2020 project) or doing this because you have a great interest in this open-source product.

I am the latter, a Meshroom community contributor, helping where I can and learning new things while doing so :)
