TensorFlow cost function based on binary XOR operation

I am trying to create a cost function that is based on the XOR operation of x and y.



x and y are tf.float32 tensors and both have the same shape.



x --> ... --> y auto-encoder network



I have tried doing the following:



x_value = tf.cast(x, tf.int32)                        # quantize the float inputs
y_value = tf.cast(y, tf.int32)
xor_bitwise_result = tf.bitwise.bitwise_xor(x_value, y_value)  # element-wise XOR
cost_value = tf.cast(xor_bitwise_result, tf.float32)  # back to float for the mean
cost = tf.reduce_mean(cost_value)


but I am getting an error:



ValueError: No gradients provided for any variable, check your graph for ops that do not support gradient.



If I try the following, I get the same error as well:



cost = tf.sqrt(tf.reduce_mean(tf.square(x_value-y_value)))


It might be that the way I am casting is not right, or maybe my understanding of the process is lacking, so any pointers would be greatly appreciated.

python tensorflow bitwise-operators bitwise-xor

asked Nov 13 '18 at 4:12 by Kour

  • The gradients for XOR are not defined. That is, it is basically impossible to tell whether changing a value in your parameters will make the cost bigger or smaller, so the optimizer does not know how to update the variables. If you know how to compute (or estimate) the gradient for that loss function, you could implement it with tf.custom_gradient, but I don't think it's possible in this case.

    – jdehesa
    Nov 13 '18 at 10:57
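To make the tf.custom_gradient suggestion concrete, here is a minimal sketch of a straight-through style estimator (my own illustration, not something prescribed in this thread): the forward pass computes the hard bitwise XOR, while the backward pass uses the gradient of the smooth surrogate x + y - 2xy, which coincides with XOR on {0, 1} inputs.

import tensorflow as tf

@tf.custom_gradient
def xor_with_surrogate_grad(x, y):
    # Forward: hard integer XOR, cast back to float.
    hard = tf.cast(tf.bitwise.bitwise_xor(tf.cast(x, tf.int32),
                                          tf.cast(y, tf.int32)),
                   tf.float32)

    def grad(dy):
        # Backward: gradients of the surrogate x + y - 2xy,
        # i.e. d/dx = 1 - 2y and d/dy = 1 - 2x.
        return dy * (1.0 - 2.0 * y), dy * (1.0 - 2.0 * x)

    return hard, grad

# With x and y as the float32 tensors from the question:
cost = tf.reduce_mean(xor_with_surrogate_grad(x, y))

Whether this estimator actually trains well for a given network is an empirical question; it only guarantees that some gradient flows back to the variables.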

  • Thank you for the comment. I was thinking I would get a numeric value that would be treated the same as if I did (y-x). I also tried the case where I compared the cast values. For example, this one failed: cost = tf.sqrt(tf.reduce_mean(tf.square(x_value-y_value))) even though this one works fine: cost = tf.sqrt(tf.reduce_mean(tf.square(x-y)))

    – Kour
    Nov 13 '18 at 15:08
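A minimal reproduction of that contrast (my sketch, written in TF 1.x graph style to match the question's era): the cast to int32 has no registered gradient, so the chain from the loss back to the variable is cut and tf.gradients reports None.

import tensorflow as tf

x = tf.Variable([0.3, 0.8])
y = tf.constant([0.0, 1.0])

# Float path: a gradient tensor is returned.
float_cost = tf.sqrt(tf.reduce_mean(tf.square(x - y)))
print(tf.gradients(float_cost, x))   # [<tf.Tensor ...>]

# Integer path: the float->int cast breaks the gradient chain.
x_int = tf.cast(x, tf.int32)
y_int = tf.cast(y, tf.int32)
diff = tf.cast(x_int - y_int, tf.float32)   # cast back so tf.sqrt accepts it
int_cost = tf.sqrt(tf.reduce_mean(tf.square(diff)))
print(tf.gradients(int_cost, x))     # [None]

(Note that tf.sqrt is only defined for float types, which is why the integer difference is cast back to float32 before the mean here.)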

  • Yes, you're right, gradients do not propagate through integer tensors. There is a discussion about it in TensorFlow issue #20524 (apparently there was a time when they did, but the behavior was removed for consistency). So yes, any loss that you define on the basis of integers is not going to work (unless you provide your own gradient definition, as I said earlier).

    – jdehesa
    Nov 13 '18 at 17:32
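Given that constraint, one way to stay entirely in float land (my illustration, not proposed in the thread) is to use a smooth surrogate instead of the bitwise op: for values kept in [0, 1], x + y - 2xy equals XOR exactly on binary inputs and is differentiable everywhere, so no casting is needed at all.

import tensorflow as tf

def soft_xor_loss(x, y):
    # 0 where x == y on {0, 1} inputs, 1 where they differ;
    # smooth in between, so gradients are defined everywhere.
    xor_like = x + y - 2.0 * x * y
    return tf.reduce_mean(xor_like)

This assumes the network's outputs are squashed into [0, 1] (for example with a sigmoid activation); outside that range the surrogate no longer behaves like XOR.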