I want to use TensorFlow on CPU for everything except backpropagation
I recently built my first TensorFlow model (converted from hand-coded Python). I'm using tensorflow-gpu, but I only want to use the GPU for backprop during training; for everything else I want to use the CPU. I've seen this article showing how to force CPU use on a system that defaults to the GPU, but with that approach you have to specify every single operation where you want to force CPU use. I'd like to do the opposite: default to the CPU, and specify the GPU only for the backprop I do during training. Is there a way to do that?
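For reference, the per-op pinning that article describes looks like the sketch below. It's written against the `tf.compat.v1` API so it also runs under TensorFlow 2.x; the ops themselves are placeholders, not part of the original post:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Pin individual ops to the CPU with tf.device; every op created
# inside the block is placed on that device.
with tf.device('/cpu:0'):
    a = tf.constant([[1.0, 2.0]])
    b = tf.constant([[3.0], [4.0]])
    c = tf.matmul(a, b)  # runs on the CPU even if a GPU is present

with tf.compat.v1.Session() as sess:
    print(sess.run(c))  # [[11.]]
```

The drawback the question points out is exactly this: each block of ops must be wrapped explicitly, one `tf.device` context at a time.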



Update



It looks like things are just going to run slower under TensorFlow because of how my model and scenario are currently built. I tried a different environment that uses regular (non-GPU) TensorFlow, and it still runs significantly slower than hand-coded Python. The reason, I suspect, is that this is a reinforcement learning model that plays checkers (see below) and makes a single forward-prop "prediction" at a time as it plays against a computer opponent. That made sense when I designed the architecture, but it's not very efficient to do predictions one at a time, and less so with whatever per-call overhead TensorFlow adds.



So now I'm thinking I'll need to change the game-playing architecture to play, say, a thousand games simultaneously and run a thousand forward-prop moves in one batch. But changing the architecture now is going to be tricky at best.
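The batched redesign can be sketched independently of TensorFlow. Assuming a hypothetical `forward(batch)` standing in for the network's forward pass (the real model would replace it), the idea is to collect one encoded board per in-progress game and score them all in a single call:

```python
import numpy as np

def forward(batch):
    # Stand-in for the network's forward pass (hypothetical; the real
    # model replaces this); returns one row of move scores per board.
    W = np.full((32, 4), 0.5)
    return batch @ W

n_games = 1000
# One encoded board position per in-progress game.
boards = np.ones((n_games, 32), dtype=np.float32)

# One batched call scores a candidate move for every game at once,
# instead of 1000 separate single-row forward props.
scores = forward(boards)
print(scores.shape)  # (1000, 4)
```

A single large matrix multiply amortizes the per-call overhead that makes one-position-at-a-time prediction so slow.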
  • 1




    Is there a reason you want to do this? If you want forward-passes to be CPU only for a user (non-developer), then just enable CPU when the user is using it.
    – Mateen Ulhaq
    Nov 10 at 2:29








  • 1




    Here's the reason: the model is used for reinforcement learning. Specifically, it plays checkers, so when playing a game (against another computer player), forward prop is used to make single moves (i.e., single predictions). For every move, the game itself (built with Python and NumPy) provides a NumPy input array, which then has to be copied to the GPU for forward prop and copied back to NumPy for the game. Those per-move copies are very expensive and in fact make actual game play far slower than hand-coded Python on the CPU.
    – tobogranyte
    Nov 10 at 12:26
python tensorflow
asked Nov 10 at 2:19, edited Nov 10 at 18:14
– tobogranyte
1 Answer
TensorFlow lets you control device placement with the tf.device context manager.



So, for example, to run some code on the CPU:



with tf.device('/cpu:0'):
    # every op created in this block is placed on the CPU
    <your code goes here>


Use tf.device('/gpu:0') similarly to force GPU usage.



Instead of always running your forward pass on the CPU, though, you're better off making two graphs: a forward-only, CPU-only graph used when rolling out the policy, and a GPU-only forward-and-backward graph used when training.
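A minimal sketch of the two-graph idea, written against the `tf.compat.v1` API; the layer sizes, loss, and scope names here are placeholder assumptions, not part of the answer:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

def build_forward(x, reuse=None):
    # Both graphs share weights through the same variable scope.
    with tf.compat.v1.variable_scope("policy", reuse=reuse):
        w1 = tf.compat.v1.get_variable("w1", [32, 64])
        b1 = tf.compat.v1.get_variable("b1", [64], initializer=tf.zeros_initializer())
        h = tf.nn.relu(x @ w1 + b1)
        w2 = tf.compat.v1.get_variable("w2", [64, 4])
        b2 = tf.compat.v1.get_variable("b2", [4], initializer=tf.zeros_initializer())
        return h @ w2 + b2

x = tf.compat.v1.placeholder(tf.float32, [None, 32])
y = tf.compat.v1.placeholder(tf.float32, [None, 4])

# Forward-only graph pinned to the CPU for cheap single-move rollouts.
with tf.device('/cpu:0'):
    rollout_logits = build_forward(x)

# Forward-and-backward graph on the GPU for batched training;
# reuse=True makes it read the same variables as the rollout graph.
with tf.device('/gpu:0'):
    train_logits = build_forward(x, reuse=True)
    loss = tf.reduce_mean(tf.math.squared_difference(train_logits, y))
    train_op = tf.compat.v1.train.AdamOptimizer(1e-3).minimize(loss)
```

During play you run only `rollout_logits` (no GPU copies); during training you feed batches through `train_op` on the GPU.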
answered 2 days ago
– Alexandre Passos