How many total virtual cores are required to process 100 GB of data in Spark?











For example, if I choose 16 vcores per node with 10 worker nodes, then (16 - 1) vcores (one core reserved for the Hadoop daemons) * 10 worker nodes = 150 vcores in total.
Is it safe to say that 150 vcores are required to process 100 GB of data?
Or is there any calculation I should consider before choosing the number of vcores?
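To make the arithmetic concrete, here is a minimal Python sketch of my reasoning (the 128 MB partition size and the single reserved core per node are my own assumptions, not fixed rules):

    # Rough sizing sketch: how many vcores are usable, and how many
    # tasks 100 GB of input would produce at 128 MB per partition.
    nodes = 10
    vcores_per_node = 16
    reserved_per_node = 1                      # assumption: 1 core for Hadoop/OS daemons

    usable_vcores = (vcores_per_node - reserved_per_node) * nodes
    print("usable vcores:", usable_vcores)     # (16 - 1) * 10 = 150

    data_gb = 100
    partition_mb = 128                         # assumption: default-ish split size
    partitions = (data_gb * 1024) // partition_mb
    print("input partitions (tasks):", partitions)   # 100 * 1024 / 128 = 800

    # 800 tasks on 150 cores run in roughly 800 / 150 ~ 5-6 waves per stage.
    print("task waves per stage: %.1f" % (partitions / usable_vcores))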










apache-spark

asked Nov 11 at 0:13 by Ram (11)
1 Answer
Number of cores in each node:
A rule of thumb is roughly one core per task. If the tasks are not very heavy, you can allocate about 0.75 core per task.
So if the machine has 16 cores, you can run at most about 16 + (0.25 of 16) = 20 tasks; the extra 0.25 of 16 slots come from the assumption that each task only needs 0.75 of a core (the exact figure would be 16 / 0.75 ≈ 21).

This link may help you further:
https://data-flair.training/forums/topic/hadoop-cluster-hardware-planning-and-provisioning/
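As a rough sketch of that thumb rule applied to the numbers in the question (the 0.75-core-per-task figure is the assumption stated above, not a Spark default), in Python:

    # 0.75 core per light task, per the rule of thumb above.
    cores_per_node = 16
    core_per_task = 0.75

    tasks_per_node = int(cores_per_node / core_per_task)
    print("tasks per node:", tasks_per_node)              # 16 / 0.75 = 21, i.e. roughly 20

    # Across the 10 worker nodes from the question, keeping 1 core per node for daemons:
    nodes = 10
    usable_cores = (cores_per_node - 1) * nodes            # 150
    concurrent_tasks = int(usable_cores / core_per_task)   # about 200 tasks at once
    print("cluster-wide concurrent tasks:", concurrent_tasks)

So the 150 vcores are not a hard requirement for 100 GB of data; they mainly determine how many tasks can run at the same time, and therefore how long the job takes.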






answered Nov 11 at 10:50 by swapnil shashank (625)
                     
