Understanding derivatives on an image as well as scipy's convolve signal function
I'm looking to differentiate my image, first along rows and then, separately, along columns.
A discrete derivative is approximated as f[i+1] - f[i], where i is the pixel index and f is the value/intensity at that pixel.
I was taught that this can be done by convolution: for d/dx, convolve with (1, 0, -1) as a row vector, and for d/dy, the same vector in column form.
My question is: when convolving in Python using scipy.signal.convolve2d with mode='same' and those vectors, I get different results from numpy.diff.
Say for a 3x3 matrix:
5 4 3
2 1 1
3 2 5
Convolving with (1,0,-1) gives me
-4 2 4
-1 1 1
-2 -2 2
While numpy's diff gives me:
-3 -3 -2
1 1 4
The questions I have are as follows:
1.) When convolving an NxN image with a kernel such as (1, 0, -1), does the function convolve every row with the vector?
2.) Why are my results different? I understand the shapes differ (numpy's result has one row fewer, since there is no difference defined for row 0), but the overlapping values don't match either.
import numpy as np
from scipy.signal import convolve2d

DX = np.array([[1, 0, -1]])  # horizontal derivative kernel (2D row vector, as convolve2d requires)
DY = DX.T                    # vertical derivative kernel (column vector)

def deriv(im):
    """
    :param im: 2D image
    :return: magnitude of the derivative
    """
    # Horizontal derivative
    dx = convolve2d(im, DX, mode='same')
    print('dx', dx)
    # Vertical derivative
    dy = convolve2d(im, DY, mode='same')
    print('dy', dy)
    # abs() is redundant once the terms are squared
    magnitude = np.sqrt(dx**2 + dy**2)
    return magnitude
The prints are just for me to check their values before the magnitude is calculated.
DX is (1, 0, -1) as a row vector and DY is the same vector in column form; note that convolve2d expects 2D kernels, hence the np.array([[...]]) shapes above.
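For reference, here is a minimal sketch, assuming the 3x3 matrix above, that reproduces the comparison between convolve2d and np.diff:

import numpy as np
from scipy.signal import convolve2d

a = np.array([[5, 4, 3],
              [2, 1, 1],
              [3, 2, 5]])

# convolve2d flips the kernel, so [1, 0, -1] yields f(x+1) - f(x-1)
# at each pixel, with zeros assumed outside the image border.
print(convolve2d(a, [[1, 0, -1]], mode='same'))

# np.diff along axis 0 is the row-wise forward difference f[i+1] - f[i];
# it has one row fewer because no border padding is involved.
print(np.diff(a, axis=0))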
python image-processing convolution derivative
1
Please show what code you used to produce the results. Also consider using a larger matrix so that it is not dominated by edge effects - only the central column can be checked in your results and they seem correct for the first example. And in answer to your question 1, the convolution is performed at every single pixel.
– Mark Setchell
Nov 10 at 18:09
@MarkSetchell I've added them. Thank you for the clarification. Also what do you mean by the central column seeming correct?
– RonaldB
Nov 10 at 18:15
To replicate diff, convolve with [1,-1], not [1,0,-1]. These are two different approximations to the derivative.
– Cris Luengo
Nov 10 at 18:56
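A quick sketch of that suggestion, using the same 3x3 matrix: with a two-tap kernel and mode='valid' (to skip the zero-padded border), the convolution matches np.diff exactly.

import numpy as np
from scipy.signal import convolve2d

a = np.array([[5, 4, 3],
              [2, 1, 1],
              [3, 2, 5]])

d1 = np.diff(a, axis=0)                        # forward difference, shape (2, 3)
# convolve2d flips [1, -1] to [-1, 1], so this also computes f[i+1] - f[i];
# mode='valid' drops the border rows instead of zero-padding them.
d2 = convolve2d(a, [[1], [-1]], mode='valid')  # shape (2, 3)

print(np.array_equal(d1, d2))                  # True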
@CrisLuengo What is the difference between the two if I may ask?
– RonaldB
Nov 10 at 19:00
It is the difference between f(x+1)-f(x-1) and f(x)-f(x-1). Actually, for a correct derivative estimation, the [1,0,-1] kernel should be divided by 2.
– Cris Luengo
Nov 10 at 19:07
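And a sketch of the normalized central difference: the samples f(x+1) and f(x-1) are two pixels apart, so the slope estimate needs the division by 2.

import numpy as np
from scipy.signal import convolve2d

a = np.array([[5, 4, 3],
              [2, 1, 1],
              [3, 2, 5]], dtype=float)

# Central difference (f(x+1) - f(x-1)) / 2: the slope over a two-pixel span.
kernel = np.array([[1, 0, -1]]) / 2
dx = convolve2d(a, kernel, mode='same')

# The interior column agrees with np.gradient, which uses the same
# central-difference formula away from the borders.
print(dx[:, 1], np.gradient(a, axis=1)[:, 1])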