Understanding derivatives on an image as well as scipy's convolve signal function











I'm looking to differentiate my image, first by rows, and then separately, by columns.



A derivative is given as f[i+1]-f[i], where i is the pixel index and f[i] is the value/intensity of that pixel.



I was taught that this can be done by convolution: for d/dx, convolve with (1, 0, -1) as a row vector; for d/dy, the same vector in column form.



My question is: when convolving in Python using scipy.signal.convolve2d with mode='same' and those vectors, I am getting different results than numpy.diff gives.



Say for a 3x3 matrix:

5 4 3
2 1 1
3 2 5


Convolving with (1,0,-1) gives me:

-4 2 4
-1 1 1
-2 -2 2


While numpy's diff gives me:

-3 -3 -2
1 1 4


The questions I have are as follows:



1.) When convolving an NxN image with a kernel such as (1, 0, -1), does the function convolve every row with the vector?



2.) Why are my results different? I understand the shapes differ - numpy's result has one row fewer, which makes sense since diff produces no value for row 0.
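For concreteness, the two computations can be reproduced directly (a minimal sketch; note that convolve2d performs true convolution, i.e. it flips the kernel before sliding it, so the signs may come out opposite to a hand-computed correlation with the same kernel):

```python
import numpy as np
from scipy.signal import convolve2d

A = np.array([[5, 4, 3],
              [2, 1, 1],
              [3, 2, 5]])

# 2D convolution with a 1x3 row kernel: every row is filtered
# independently, and the kernel is flipped first (true convolution),
# so each output pixel is A[i, j+1] - A[i, j-1] with zero padding.
conv = convolve2d(A, np.array([[1, 0, -1]]), mode='same')
print(conv)

# np.diff along axis 0 takes row-to-row differences f[i+1] - f[i]
# and returns one row fewer than the input.
d = np.diff(A, axis=0)
print(d)
```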






import numpy as np
from scipy.signal import convolve2d

DX = np.array([[1, 0, -1]])  # row kernel -> horizontal derivative
DY = DX.T                    # column kernel -> vertical derivative

def deriv(im):
    """
    :param im: 2D image
    :return: magnitude of the derivative
    """
    # Horizontal derivative
    dx = convolve2d(im, DX, mode='same')
    print('dx', dx)
    # Vertical derivative
    dy = convolve2d(im, DY, mode='same')
    print('dy', dy)
    magnitude = np.sqrt(dx**2 + dy**2)
    return magnitude





The prints are just for me to check their values before the magnitude is calculated.



DX = (1,0,-1) as a row vector



DY = (1,0,-1) as a column vector
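Putting the pieces together, here is a self-contained version of the function with the undefined names filled in (im for image, 2 for TWO, and the DX/DY kernels described above - a sketch under those assumptions), applied to the 3x3 example:

```python
import numpy as np
from scipy.signal import convolve2d

DX = np.array([[1, 0, -1]])  # row kernel -> horizontal derivative
DY = DX.T                    # column kernel -> vertical derivative

def deriv(im):
    """
    :param im: 2D image
    :return: magnitude of the derivative
    """
    dx = convolve2d(im, DX, mode='same')  # horizontal derivative
    dy = convolve2d(im, DY, mode='same')  # vertical derivative
    return np.sqrt(dx**2 + dy**2)

A = np.array([[5, 4, 3],
              [2, 1, 1],
              [3, 2, 5]], dtype=float)
mag = deriv(A)  # mode='same' keeps the (3, 3) image shape
```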










  • Please show what code you used to produce the results. Also consider using a larger matrix so that it is not dominated by edge effects - only the central column can be checked in your results and they seem correct for the first example. And in answer to your question 1, the convolution is performed at every single pixel. – Mark Setchell, Nov 10 at 18:09
  • @MarkSetchell I've added them. Thank you for the clarification. Also what do you mean by the central column seeming correct? – RonaldB, Nov 10 at 18:15
  • To replicate diff, convolve with [1,-1], not [1,0,-1]. These are two different approximations to the derivative. – Cris Luengo, Nov 10 at 18:56
  • @CrisLuengo What is the difference between the two if I may ask? – RonaldB, Nov 10 at 19:00
  • It is the difference between f(x+1)-f(x-1) and f(x)-f(x-1). Actually, for a correct derivative estimation, the [1,0,-1] kernel should be divided by 2. – Cris Luengo, Nov 10 at 19:07
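Following the comments, the two approximations can be checked side by side (a sketch: a [1, -1] column kernel reproduces np.diff along axis 0, while [1, 0, -1]/2 is the central-difference estimate):

```python
import numpy as np
from scipy.signal import convolve2d

A = np.array([[5, 4, 3],
              [2, 1, 1],
              [3, 2, 5]])

# Backward difference f(x) - f(x-1): the 2x1 kernel [[1], [-1]]
# (flipped internally by true convolution) matches np.diff exactly;
# mode='valid' drops the border row, giving the same (2, 3) shape.
backward = convolve2d(A, np.array([[1], [-1]]), mode='valid')

# Central difference (f(x+1) - f(x-1)) / 2: a different, smoother
# approximation; only interior rows are free of padding effects.
central = convolve2d(A, np.array([[1], [0], [-1]]) / 2, mode='same')
```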

















python image-processing convolution derivative






asked Nov 10 at 17:43, edited Nov 10 at 18:14
– RonaldB