Is it possible to calculate how many polygons can be rendered in Unity 3D using GPU information alone?












Actually, I am planning to buy a new laptop for game development, and I was wondering whether it is possible to work out which graphics card can render heavy geometry without testing on the actual hardware.

Is it possible to calculate a tentative number of polygons a graphics card can render in Unity3D?










Tags: unity3d, 3d, gpu






asked Nov 15 '18 at 18:48 by Robin Gupta

























1 Answer






































The short answer is that GPU render time depends heavily on how many vertices and pixels are being processed, the number of cycles needed to run the shaders you use, the GPU's memory bandwidth, the time it takes to receive commands and data from the CPU, and the raw processing power of the GPU itself, along with any limitations on how its shader cores can be used. It is too broad a topic to answer fully in one question here.



But if you can already render your game on graphics card A and want an extremely rough estimate of the render time on graphics card B, here is one option:



Take the render time on graphics card A, multiply it by A's FLOPS rating, then divide by the FLOPS rating of B. The result is an extremely rough estimate of the render time on graphics card B. The estimate only gets less accurate the more the two cards differ in scale and architecture.
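For a back-of-the-envelope version of that calculation, here is a minimal sketch in C# (Unity's scripting language). The FLOPS figures and the 8 ms frame time below are hypothetical placeholders; substitute the vendors' published numbers and your own profiling data:

    using System;

    static class RenderTimeEstimate
    {
        // Extremely rough model: assumes render time scales inversely with
        // peak FLOPS, ignoring bandwidth, drivers, and architecture.
        static double EstimateRenderTimeMs(double measuredMsOnA,
                                           double teraflopsA,
                                           double teraflopsB)
        {
            return measuredMsOnA * teraflopsA / teraflopsB;
        }

        static void Main()
        {
            // Hypothetical: 8 ms/frame measured on a 4 TFLOPS card,
            // estimated for a 10 TFLOPS card.
            double estimate = EstimateRenderTimeMs(8.0, 4.0, 10.0);
            Console.WriteLine($"Estimated frame time on B: {estimate:F1} ms"); // ~3.2 ms
        }
    }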



The most accurate way to determine the max/min/average frame render time is still to experiment on the actual hardware in question.
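If you do get access to the hardware, a small script can collect those statistics inside Unity itself. This is a minimal sketch (the class name and sampling window are arbitrary choices); attach it to any GameObject in the scene you want to profile:

    using UnityEngine;

    public class FrameTimeProbe : MonoBehaviour
    {
        const int SampleWindow = 500; // frames per report

        float min = float.MaxValue;
        float max;
        float total;
        int count;

        void Update()
        {
            float dt = Time.unscaledDeltaTime; // last frame's duration, in seconds
            min = Mathf.Min(min, dt);
            max = Mathf.Max(max, dt);
            total += dt;
            count++;

            if (count >= SampleWindow)
            {
                Debug.Log($"Over {count} frames: min {min * 1000f:F2} ms, " +
                          $"max {max * 1000f:F2} ms, avg {total / count * 1000f:F2} ms");
                min = float.MaxValue;
                max = 0f;
                total = 0f;
                count = 0;
            }
        }
    }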






answered Nov 15 '18 at 19:51 by Ruzihm
























• A major factor at play here is shader complexity; vertex count is not that much of a limit these days. – zambari, Nov 16 '18 at 8:13










