Yesterday’s problem is solved (hopefully). Also, a nice overview of coordinate representations.
To better understand the “implicit correspondence measurement model” in Bleser et al., Arnoud provided me with an article by Hol et al.
Later it turned out that I was also confused about the equation itself. I will reproduce the equation here, with all dimensions as superscripts:
\[\begin{bmatrix}I_2^{2 \times 2} & - \mathbf{m}_{n,t}^{2 \times 1} \end{bmatrix}^{2 \times 3} Q_{cs}^{3 \times 3}(Q_{sw,t}^{3 \times 3}(\mathbf{m}_{w,t}^{3 \times 1} - \mathbf{s}_{w,t}^{3 \times 1}) - \mathbf{c}_s^{3 \times 1})\]
At first, I read the square brackets as “normal” brackets, assuming they were only there for readability. In fact they build a matrix out of the identity matrix \(I_2\) and a feature detected in the image, \(\mathbf{m}_{n,t}\). It cannot be a subtraction, because that would be an operation between a \(2 \times 2\) matrix and a \(2 \times 1\) vector.
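To convince myself of the dimensions, a minimal numpy sketch (my own illustration; the identity rotations and the specific numbers are made-up placeholders, not values from the papers): the bracketed term is built as a \(2 \times 3\) block matrix, and the whole expression indeed evaluates to a 2D residual.

```python
import numpy as np

# Made-up placeholder values, only to check dimensions (not from the papers).
I2 = np.eye(2)                          # 2x2 identity
m_n = np.array([0.1, -0.2])             # 2x1 feature detected in the image
Q_cs = np.eye(3)                        # 3x3 sensor-to-camera rotation (placeholder)
Q_sw = np.eye(3)                        # 3x3 world-to-sensor rotation (placeholder)
m_w = np.array([1.0, 2.0, 5.0])         # 3x1 world point
s_w = np.array([0.0, 0.0, 0.0])         # 3x1 sensor position in the world frame
c_s = np.array([0.05, 0.0, 0.0])        # 3x1 camera offset in the sensor frame

# The square brackets build a 2x3 block matrix [I_2  -m_n], not a subtraction.
B = np.hstack([I2, -m_n.reshape(2, 1)])

residual = B @ (Q_cs @ (Q_sw @ (m_w - s_w) - c_s))
print(B.shape, residual.shape)          # (2, 3) (2,) -> a 2D residual, matching 0_2
```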
Bleser hints at this in her Ph.D. thesis by giving \(\mathbf{0}\) the subscript “\(_2\)”, indicating that it is a 2D vector. She notes in Section 3.4.1.1, p. 59:
Note that this expression is obtained from the homogeneous collinearity constraint (2.7b) by inserting the hand-eye transformation (2.12).
For completeness, here is a reproduction of (2.7b):
\[\mathbf{0}_2 = d_h(\mathbf{m}_n, \mathbf{m}_w, \mathbf{s}) = z_c \begin{bmatrix}x_n \\ y_n \end{bmatrix} - \begin{bmatrix}x_c \\ y_c \end{bmatrix} = \begin{bmatrix}-\mathbf{I}_2 & \mathbf{m}_n \end{bmatrix} \mathcal{T}(\mathbf{m}_w, \mathbf{s})\]
with:
- \(\mathbf{s}\) denotes an appropriate parametrisation of the camera pose (cf. Section 2.1.1.4).
- On p. 14: \(\mathcal{T}(\mathbf{m}_w)\) denotes the mapping from the world to the camera frame given in (2.4a) and (2.4b).
Equation (2.12) refers to simple rotations and translations.
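To make the connection between (2.7b) and the measurement model explicit (my own expansion, not a quotation from the thesis): reading the first equation of this post as the world-to-camera mapping with the hand-eye transformation inserted, i.e. \(\mathcal{T}(\mathbf{m}_w, \mathbf{s}) = Q_{cs}(Q_{sw,t}(\mathbf{m}_{w,t} - \mathbf{s}_{w,t}) - \mathbf{c}_s) = \begin{bmatrix}x_c & y_c & z_c\end{bmatrix}^T\), (2.7b) becomes

\[\begin{bmatrix}-\mathbf{I}_2 & \mathbf{m}_{n,t}\end{bmatrix} Q_{cs}\left(Q_{sw,t}(\mathbf{m}_{w,t} - \mathbf{s}_{w,t}) - \mathbf{c}_s\right) = z_c \begin{bmatrix}x_n \\ y_n\end{bmatrix} - \begin{bmatrix}x_c \\ y_c\end{bmatrix} = \mathbf{0}_2,\]

which is exactly the expression from the top of this post, up to an overall sign of the bracketed \(2 \times 3\) matrix.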
References
- Jeroen Hol, Per Slycke, Thomas Schoen, and Fredrik Gustafsson. 2D-3D model correspondence for camera pose estimation using sensor fusion. In InerVis workshop at the IEEE International Conference on Robotics and Automation, 2005. http://users.isy.liu.se/en/rt/fredrik/reports/05ICRAhol.pdf
- Gabriele Bleser and Didier Stricker. Advanced tracking through efficient image processing and visual-inertial sensor fusion. In Virtual Reality Conference, pages 137–144. IEEE, 2008. doi:10.1109/VR.2008.4480765
- Gabriele Bleser. Towards visual-inertial SLAM for mobile augmented reality. PhD thesis, Universität Kaiserslautern, 2009. https://www.xsens.com/images/stories/PDF/BleserPhD2009.pdf