Perspective-n-Point[1] is the problem of estimating the pose of a calibrated camera given a set of n 3D points in the world and their corresponding 2D projections in the image. The camera pose consists of 6 degrees of freedom (DOF), made up of the rotation (roll, pitch, and yaw) and the 3D translation of the camera with respect to the world. The problem originates from camera calibration and has many applications in computer vision and other areas, including 3D pose estimation, robotics, and augmented reality.[2] A commonly used solution exists for n = 3, called P3P, and many solutions are available for the general case of n ≥ 3. A solution for n = 2 exists if feature orientations are available at the two points.[3] Implementations of these solutions are also available in open-source software.
Problem Specification
Definition
Given a set of n 3D points in a world reference frame and their corresponding 2D image projections as well as the calibrated intrinsic camera parameters, determine the 6 DOF pose of the camera in the form of its rotation and translation with respect to the world. This follows the perspective projection model for cameras:
- $s \, \mathbf{p}_c = K \, [\, R \mid T \,] \, \mathbf{p}_w$
where $\mathbf{p}_w = \begin{bmatrix} x & y & z & 1 \end{bmatrix}^T$ is the homogeneous world point, $\mathbf{p}_c = \begin{bmatrix} u & v & 1 \end{bmatrix}^T$ is the corresponding homogeneous image point, $K$ is the matrix of intrinsic camera parameters (where $f_x$ and $f_y$ are the scaled focal lengths, $\gamma$ is the skew parameter, which is sometimes assumed to be 0, and $(u_0, v_0)$ is the principal point), $s$ is a scale factor for the image point, and $R$ and $T$ are the desired 3D rotation and 3D translation of the camera (extrinsic parameters) that are being calculated. This leads to the following equation for the model:
- $s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & \gamma & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$
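To make the projection model concrete, the following is a minimal sketch in Python with NumPy that evaluates the equation above for a single point; all numerical values (intrinsics, pose, and the world point) are made up for illustration and are not drawn from any particular dataset.

```python
import numpy as np

# Hypothetical intrinsic matrix K (focal lengths f_x, f_y, principal point (u_0, v_0), zero skew)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical extrinsic parameters: rotation R (identity here) and translation T
R = np.eye(3)
T = np.array([[0.1], [0.0], [5.0]])

# Homogeneous world point p_w = [x, y, z, 1]^T
p_w = np.array([[0.5], [0.2], [1.0], [1.0]])

# Perspective projection: s * p_c = K [R | T] p_w
Rt = np.hstack((R, T))    # 3x4 extrinsic matrix [R | T]
sp_c = K @ Rt @ p_w       # un-normalized image point (scale s still included)
s = sp_c[2, 0]            # the scale factor is the third component
p_c = sp_c / s            # homogeneous image point [u, v, 1]^T

print("pixel coordinates (u, v):", p_c[0, 0], p_c[1, 0])
```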
Assumptions and Data Characteristics
There are a few preliminary aspects of the problem that are common to all solutions of PnP. The assumption made in most solutions is that the camera is already calibrated, so its intrinsic parameters, such as the focal length, principal point, and skew parameter, are already known. Some methods, such as UPnP[4] or the Direct Linear Transform (DLT) applied to the projection model, are exceptions to this assumption, as they estimate these intrinsic parameters as well as the extrinsic parameters that make up the camera pose which the original PnP problem is trying to find.
For each solution to PnP, the chosen point correspondences cannot be collinear. In addition, PnP can have multiple solutions, and choosing a particular solution would require post-processing of the solution set. RANSAC is also commonly used with a PnP method to make the solution robust to outliers in the set of point correspondences. P3P methods assume that the data is noise-free, whereas most PnP methods assume Gaussian noise on the inlier set.
Methods
The following sections describe three common methods for solving the PnP problem that are readily available in open-source software, as well as how RANSAC can be used to deal with outliers in the data set.
P3P
When n = 3, the PnP problem is in its minimal form of P3P and can be solved with three point correspondences. However, with just three point correspondences, P3P yields up to four real, geometrically feasible solutions. For low noise levels a fourth correspondence can be used to remove ambiguity. The setup for the problem is as follows.
Let P be the center of projection for the camera, and let A, B, and C be 3D world points with corresponding image points u, v, and w. Let $X = |PA|$, $Y = |PB|$, $Z = |PC|$, $\alpha = \angle BPC$, $\beta = \angle APC$, $\gamma = \angle APB$, $p = 2\cos\alpha$, $q = 2\cos\beta$, $r = 2\cos\gamma$, $a' = |BC|$, $b' = |AC|$, $c' = |AB|$. This forms triangles PBC, PAC, and PAB, from which we obtain a sufficient equation system for P3P:
- $Y^2 + Z^2 - YZp - a'^2 = 0$
- $Z^2 + X^2 - XZq - b'^2 = 0$
- $X^2 + Y^2 - XYr - c'^2 = 0$
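As an illustration of where the coefficients of this system come from, the cosines of the angles at P can be obtained from the calibrated image observations by back-projecting the pixels into unit bearing vectors. The sketch below (Python/NumPy, with hypothetical pixel values) shows only this setup step and leaves the solution of the resulting polynomial system to a dedicated P3P solver.

```python
import numpy as np

# Hypothetical intrinsics and observed pixel locations of A, B, C (homogeneous form)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
u = np.array([300.0, 220.0, 1.0])
v = np.array([350.0, 250.0, 1.0])
w = np.array([330.0, 270.0, 1.0])

K_inv = np.linalg.inv(K)

def bearing(pix):
    """Unit vector from the projection centre P towards an image point."""
    ray = K_inv @ pix
    return ray / np.linalg.norm(ray)

bu, bv, bw = bearing(u), bearing(v), bearing(w)

# p = 2 cos(alpha), q = 2 cos(beta), r = 2 cos(gamma) from the inter-ray angles
p = 2.0 * np.dot(bv, bw)   # angle at P between the rays towards B and C
q = 2.0 * np.dot(bu, bw)   # angle at P between the rays towards A and C
r = 2.0 * np.dot(bu, bv)   # angle at P between the rays towards A and B

# The squared side lengths a'^2 = |BC|^2, b'^2 = |AC|^2, c'^2 = |AB|^2
# come from the known 3D world points and complete the equation system.
```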
Solving the P3P system results in up to four geometrically feasible real solutions for R and T. The oldest published solution dates to 1841.[5] A recent algorithm for solving the problem, as well as a solution classification for it, is given in the 2003 IEEE Transactions on Pattern Analysis and Machine Intelligence paper by Gao, et al.[6] An open-source implementation of Gao's P3P solver can be found in OpenCV's calib3d module in the solvePnP function.[7] Several faster and more accurate versions have been published since, including Lambda Twist P3P,[8] which achieved state-of-the-art performance in 2018 with a 50-fold increase in speed and a 400-fold decrease in numerical failures. Lambda Twist is available as open source in OpenMVG and at https://github.com/midjji/pnp.
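A hedged usage sketch of OpenCV's solvePnP with the P3P flag is shown below. OpenCV's P3P mode expects exactly four correspondences, with the fourth used to disambiguate among the candidate solutions; here the image points are generated by projecting the world points with a known, purely illustrative pose.

```python
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Exactly four correspondences: three define the P3P problem, the fourth disambiguates.
object_points = np.array([[0.0, 0.0, 0.0],
                          [1.0, 0.0, 0.0],
                          [0.0, 1.0, 0.0],
                          [1.0, 1.0, 0.3]], dtype=np.float64)
rvec_true = np.array([[0.1], [-0.2], [0.05]])
tvec_true = np.array([[0.3], [-0.1], [6.0]])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, None)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None,
                              flags=cv2.SOLVEPNP_P3P)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
```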
EPnP
Efficient PnP (EPnP) is a method developed by Lepetit, et al. in their 2008 International Journal of Computer Vision paper[9] that solves the general problem of PnP for n ≥ 4. This method is based on the notion that each of the n points (which are called reference points) can be expressed as a weighted sum of four virtual control points. Thus, the coordinates of these control points become the unknowns of the problem. It is from these control points that the final pose of the camera is solved for.
As an overview of the process, first note that each of the n reference points in the world frame, $\mathbf{p}_i^w$, and their corresponding image points, $\mathbf{p}_i^c$, are weighted sums of the four control points, $\mathbf{c}_j^w$ and $\mathbf{c}_j^c$ respectively, and the weights are normalized per reference point as shown below. All points are expressed in homogeneous form.
- $\mathbf{p}_i^w = \sum_{j=1}^{4} \alpha_{ij} \, \mathbf{c}_j^w$
- $\mathbf{p}_i^c = \sum_{j=1}^{4} \alpha_{ij} \, \mathbf{c}_j^c$
- $\sum_{j=1}^{4} \alpha_{ij} = 1, \quad \forall i$
From this, the derivation of the image reference points becomes
- $\forall i: \quad s_i \, \mathbf{p}_i^c = K \sum_{j=1}^{4} \alpha_{ij} \, \mathbf{c}_j^c$
where $\mathbf{p}_i^c$ is the image reference point with pixel coordinates $\mathbf{p}_i^c = \begin{bmatrix} u_i & v_i & 1 \end{bmatrix}^T$. The homogeneous image control point has the form $\mathbf{c}_j^c = \begin{bmatrix} x_j^c & y_j^c & z_j^c \end{bmatrix}^T$. Rearranging the image reference point equation yields the following two linear equations for each reference point:
- $\sum_{j=1}^{4} \alpha_{ij} f_x x_j^c + \alpha_{ij} (u_0 - u_i) z_j^c = 0$
- $\sum_{j=1}^{4} \alpha_{ij} f_y y_j^c + \alpha_{ij} (v_0 - v_i) z_j^c = 0$
Using these two equations for each of the n reference points, the system $M\mathbf{x} = \mathbf{0}$ can be formed, where $\mathbf{x} = \begin{bmatrix} \mathbf{c}_1^{c\,T} & \mathbf{c}_2^{c\,T} & \mathbf{c}_3^{c\,T} & \mathbf{c}_4^{c\,T} \end{bmatrix}^T$. The solution for the control points exists in the null space of M and is expressed as
- $\mathbf{x} = \sum_{i=1}^{N} \beta_i \mathbf{v}_i$
where $N$ is the number of null singular values of $M$ and each $\mathbf{v}_i$ is the corresponding right singular vector of $M$; $N$ can range from 1 to 4. After calculating the initial coefficients $\beta_i$, the Gauss–Newton algorithm is used to refine them. The R and T matrices that minimize the reprojection error of the world reference points, $\mathbf{p}_i^w$, and their corresponding actual image points, $\mathbf{p}_i^c$, are then calculated.
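The null-space step can be sketched in a few lines of NumPy, assuming the 2n × 12 matrix M has already been assembled from the linear equations above; the helper name below is hypothetical and not part of any library.

```python
import numpy as np

def smallest_singular_vectors(M, N):
    """Return the N right singular vectors of M associated with its smallest
    singular values; these serve as the basis vectors v_i of the (approximate) null space."""
    _, _, Vt = np.linalg.svd(M)   # rows of Vt are the right singular vectors of M
    return Vt[-N:, :]             # the last rows correspond to the smallest singular values

# Hypothetical usage (M is the 2n x 12 matrix built from the reference points; omitted here):
# v = smallest_singular_vectors(M, 1)[0]
# control_points_cam = v.reshape(4, 3)  # four 3D control points in the camera frame, up to scale
```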
This solution has $O(n)$ complexity and works in the general case of PnP for both planar and non-planar point configurations. Open-source implementations of this method can be found in OpenCV's Camera Calibration and 3D Reconstruction module in the solvePnP function[7] as well as in the code published by Lepetit, et al. at their website, CVLAB at EPFL.[10]
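A brief sketch of selecting EPnP through OpenCV's solvePnP flag follows; the synthetic points are generated by projecting known world points with a known pose, so the recovered pose can be checked against it. All values are illustrative.

```python
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Six synthetic, non-coplanar world points projected with a known (illustrative) pose
object_points = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0.2],
                          [0.5, 0.5, 0.3], [0.2, 0.8, 0.1]], dtype=np.float64)
rvec_true = np.array([[0.1], [-0.2], [0.05]])
tvec_true = np.array([[0.3], [-0.1], [6.0]])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, None)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None,
                              flags=cv2.SOLVEPNP_EPNP)
# When ok is True, rvec/tvec should approximate rvec_true/tvec_true
```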
This method is not robust against outliers and generally compares poorly to RANSAC P3P followed by nonlinear refinement.
SQPnP
SQPnP was described by Terzakis and Lourakis in an ECCV 2020 paper.[11] It is a non-minimal, non-polynomial solver that casts PnP as a non-linear quadratic program. SQPnP identifies regions in the parameter space of 3D rotations (i.e., the 8-sphere) that contain unique minima, with guarantees that at least one of them is the global minimum. Each regional minimum is computed with sequential quadratic programming initialized at nearest orthogonal approximation matrices.
SQPnP has similar or even higher accuracy compared to state-of-the-art polynomial solvers, is globally optimal, and is computationally very efficient, being practically linear in the number of supplied points n. A C++ implementation is available on GitHub, which has also been ported to OpenCV and included in the Camera Calibration and 3D Reconstruction module (solvePnP function).[12]
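Assuming an OpenCV build recent enough to include the SQPnP port (the SOLVEPNP_SQPNP flag, available from roughly version 4.5 onward), its use mirrors the other solvers; a minimal sketch with synthetic data follows.

```python
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Synthetic, non-coplanar world points projected with a known (illustrative) pose
object_points = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0.2],
                          [0.5, 0.5, 0.3], [0.2, 0.8, 0.1]], dtype=np.float64)
image_points, _ = cv2.projectPoints(object_points, np.zeros((3, 1)),
                                    np.array([[0.0], [0.0], [6.0]]), K, None)

# SQPnP is selected with the corresponding flag (requires an OpenCV version that includes it)
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None,
                              flags=cv2.SOLVEPNP_SQPNP)
```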
Using RANSAC
PnP is prone to errors if there are outliers in the set of point correspondences. Thus, RANSAC can be used in conjunction with existing solutions to make the final solution for the camera pose more robust to outliers. An open source implementation of PnP methods with RANSAC can be found in OpenCV's Camera Calibration and 3D Reconstruction module in the solvePnPRansac function.[12]
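A hedged sketch of solvePnPRansac is shown below; in addition to the pose, it returns the indices of the correspondences judged to be inliers, which is the practical benefit when some matches may be wrong. The threshold values and synthetic data are illustrative only.

```python
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Synthetic correspondences with one deliberately corrupted (outlier) match
object_points = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0.2],
                          [0.5, 0.5, 0.3], [0.2, 0.8, 0.1]], dtype=np.float64)
rvec_true = np.array([[0.1], [-0.2], [0.05]])
tvec_true = np.array([[0.3], [-0.1], [6.0]])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, None)
image_points = image_points.reshape(-1, 2)
image_points[3] += np.array([50.0, -40.0])   # corrupt one correspondence

ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_points, image_points, K, None,
                                             reprojectionError=3.0,
                                             iterationsCount=100,
                                             confidence=0.99)
# 'inliers' lists the indices of the correspondences consistent with the estimated pose
```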
References
- ↑ Fischler, M. A.; Bolles, R. C. (1981). "Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography". Communications of the ACM. 24 (6): 381–395. doi:10.1145/358669.358692. S2CID 972888.
- ↑ Apple, ARKIT team (2018). "Understanding ARKit Tracking and Detection". WWDC.
- ↑ Fabbri, Ricardo; Giblin, Peter; Kimia, Benjamin (2012). "Camera Pose Estimation Using First-Order Curve Differential Geometry". Computer Vision – ECCV 2012 (PDF). Lecture Notes in Computer Science. Vol. 7575. pp. 231–244. doi:10.1007/978-3-642-33765-9_17. ISBN 978-3-642-33764-2. S2CID 15402824.
- ↑ Penate-Sanchez, A.; Andrade-Cetto, J.; Moreno-Noguer, F. (2013). "Exhaustive Linearization for Robust Camera Pose and Focal Length Estimation". IEEE Transactions on Pattern Analysis and Machine Intelligence. 35 (10): 2387–2400. doi:10.1109/TPAMI.2013.36. hdl:2117/22931. PMID 23969384. S2CID 9614348.
- ↑ Quan, Long; Lan, Zhong-Dan (1999). "Linear N-Point Camera Pose Determination" (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence.
- ↑ Gao, Xiao-Shan; Hou, Xiao-Rong; Tang, Jianliang; Cheng, Hang-Fei (2003). "Complete Solution Classification for the Perspective-Three-Point Problem". IEEE Transactions on Pattern Analysis and Machine Intelligence. 25 (8): 930–943. doi:10.1109/tpami.2003.1217599. S2CID 15869446.
- 1 2 "Camera Calibration and 3D Reconstruction". OpenCV.
- ↑ Persson, Mikael; Nordberg, Klas (2018). "Lambda Twist: An Accurate Fast Robust Perspective Three Point (P3P) Solver" (PDF). The European Conference on Computer Vision (ECCV).
- ↑ Lepetit, V.; Moreno-Noguer, M.; Fua, P. (2009). "EPnP: An Accurate O(n) Solution to the PnP Problem". International Journal of Computer Vision. 81 (2): 155–166. doi:10.1007/s11263-008-0152-6. hdl:2117/10327. S2CID 207252029.
- ↑ "EPnP: Efficient Perspective-n-Point Camera Pose Estimation". EPFL-CVLAB.
- ↑ Terzakis, George; Lourakis, Manolis (2020). "A Consistently Fast and Globally Optimal Solution to the Perspective-n-Point Problem". Computer Vision – ECCV 2020. Lecture Notes in Computer Science. Vol. 12346. pp. 478–494. doi:10.1007/978-3-030-58452-8_28. ISBN 978-3-030-58451-1. S2CID 226239551.
- 1 2 "Camera Calibration and 3D Reconstruction". OpenCV.