OpenCV: Confusion about solvePnP
I have a small point of confusion regarding the use of the solvePnP function in OpenCV.
I have the intrinsic parameter matrix of the camera, I have identified some keypoints in the image, and I am trying to estimate the extrinsic parameters for the calibration.
The documentation for solvePnP says:
cv2.solvePnP(objectPoints, imagePoints, cameraMatrix,
distCoeffs[, rvec[, tvec[, useExtrinsicGuess[, flags]]]]) → retval, rvec, tvec
I am guessing my imagePoints parameter is the set of keypoints I have detected, and these are specified in pixels as (x1, y1), (x2, y2), (x3, y3).
I am totally confused about the objectPoints. The documentation says:
objectPoints – Array of object points in the object coordinate space,
3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points.
vector<Point3f> can be also passed here.
How can I generate these object points from my image points? And what do they mean by "the object coordinate space" here?
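Here is roughly how I am building my imagePoints, and, if I am reading the documentation right, how objectPoints would have to look (the coordinates below are just placeholders; only the shapes matter):

import numpy as np

# Detected keypoints, in pixels, one row per point -> shape (N, 2)
image_points = np.array([[120.0, 250.0],
                         [310.0, 245.0],
                         [215.0, 400.0],
                         [ 90.0, 380.0]], dtype=np.float64)

# If I understand the docs, objectPoints would be the matching 3D
# coordinates in some "object coordinate space" -> shape (N, 3)
# (the "Nx3 1-channel" layout from the documentation)
object_points = np.array([[ 0.0, 0.0, 0.0],
                          [10.0, 0.0, 0.0],
                          [10.0, 8.0, 0.0],
                          [ 0.0, 8.0, 0.0]], dtype=np.float64)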
opencv camera-calibration perspectivecamera
asked Nov 14 '18 at 10:31
Luca
1 Answer
The cv2.solvePnP() method is generally used for pose estimation; in other words, it can be used to estimate the orientation (and position) of a 3D object from a 2D image. For this you need to tag some keypoints on the 3D model of the object (objectPoints) and also detect those same keypoints in the 2D image (imagePoints).
You can consult this answer to see this technique applied to face pose estimation.
answered Nov 14 '18 at 11:10
ZdaR
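As a rough sketch of how this fits together in Python (the model coordinates, pixel locations, and intrinsics below are invented placeholders, not values from the linked answer):

import cv2
import numpy as np

# Hypothetical keypoints tagged on the 3D model, expressed in the
# object's own coordinate frame (units are whatever the model uses)
object_points = np.array([
    [  0.0,   0.0,   0.0],
    [  0.0, -63.6, -12.5],
    [-43.3,  32.7, -26.0],
    [ 43.3,  32.7, -26.0],
    [-28.9, -28.9, -24.1],
    [ 28.9, -28.9, -24.1],
], dtype=np.float64)

# The same keypoints detected in the 2D image, in pixels
image_points = np.array([
    [359.0, 391.0],
    [399.0, 561.0],
    [337.0, 297.0],
    [513.0, 301.0],
    [345.0, 465.0],
    [453.0, 469.0],
], dtype=np.float64)

# Placeholder intrinsics: replace with your calibrated camera matrix
# and distortion coefficients
camera_matrix = np.array([[1000.0,    0.0, 340.0],
                          [   0.0, 1000.0, 400.0],
                          [   0.0,    0.0,   1.0]], dtype=np.float64)
dist_coeffs = np.zeros((4, 1))

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
# R and tvec transform points from the object frame into the camera frame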
Thanks for your answer. I am still confused about the coordinate system of the 3D object. The keypoints in the image are given in pixels relative to the top-left corner of the image; what about the 3D object? Can I set its coordinate system arbitrarily? For example, can one of the detected keypoints be the origin of the 3D coordinate system and all the other keypoints be defined relative to it?
– Luca
Nov 14 '18 at 11:14
Yes, they can be arbitrary. This system needs at least 6 points to work robustly, so as long as you provide at least 6 points, the 3D points can be arbitrary.
– ZdaR
Nov 14 '18 at 11:24
Thank you! Much appreciated. The units are going to be the units of the 3D model measurement, I guess, and should the intrinsic matrix also be in this unit?
– Luca
Nov 14 '18 at 11:26
I am not sure about the intrinsic matrix, as my camera was already calibrated, so I kind of hardcoded those values. In my linked answer you can check out other links to address your queries.
– ZdaR
Nov 14 '18 at 12:11
AFAIK, if different units are used for the intrinsics and objPoints, there will be some scale factor inside the computed object pose.
– Micka
Nov 14 '18 at 12:29
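Putting those comments together, a small sketch of what choosing an arbitrary object origin might look like (the coordinates are invented, and the units are assumed to be millimetres):

import numpy as np

# Hypothetical keypoint positions measured on the 3D model, in millimetres
model_points = np.array([
    [102.0, 40.0, 15.0],
    [150.0, 40.0, 15.0],
    [102.0, 95.0, 15.0],
    [150.0, 95.0, 15.0],
    [126.0, 67.0,  0.0],
    [126.0, 20.0, 30.0],
], dtype=np.float64)

# Choose the first keypoint as the origin of the object coordinate system;
# all other points are now expressed relative to it.
object_points = model_points - model_points[0]

# The camera matrix stays in pixel units; the tvec returned by solvePnP
# will come out in the same units as object_points (millimetres here).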