write values into csv during script execution


























I have a simple script that reads values from one csv, runs some internal function on them that takes 2-3 seconds each time, and then writes the results into another csv file.



Here is what it looks like, minus the internal function I referenced.



import csv
import time

pause = 3

with open('input.csv', mode='r', newline='') as input_file, \
     open('output.csv', mode='w', newline='') as output_file:
    input_reader = csv.DictReader(input_file)
    output_writer = csv.writer(output_file, delimiter=',', quotechar='"',
                               quoting=csv.QUOTE_MINIMAL)
    count = 1
    for row in input_reader:
        row['new_value'] = "result from function that takes time"
        output_writer.writerow(row.values())
        print('Processed row: ' + str(count))
        count = count + 1
        time.sleep(pause)


The problem I face is that the output.csv file remains blank until everything is finished executing.



I'd like to access and make use of the file elsewhere whilst this long script runs.



Is there a way I can avoid this delay, so that the values appear in output.csv as each row is processed?



Edit: here is a dummy CSV file for the script above:



value
43t34t34t
4r245r243
2q352q352
gergmergre
435q345q35









  • Have you thought of creating a string object which you append (write to) the new rows, which you then later write to the file?
    – Modelmat
    Nov 12 at 6:57












  • Putting an output_file.flush() after the output_writer.writerow() call might do the trick.
    – martineau
    Nov 12 at 7:59
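The flush() suggestion from the comment above can be sketched as follows: after each writerow(), calling flush() pushes Python's internal buffer to the operating system, so another process reading output.csv can see the row without waiting for the script to finish (the values here are taken from the dummy CSV in the question):

```python
import csv

# Sketch of the flush() suggestion: flushing after each writerow()
# pushes Python's buffer to the OS, so another process reading
# output.csv sees each row before the script finishes.
with open('output.csv', mode='w', newline='') as output_file:
    writer = csv.writer(output_file)
    for value in ['43t34t34t', '4r245r243', '2q352q352']:
        writer.writerow([value, 'result'])
        output_file.flush()  # make this row visible on disk now
```

Note that flush() hands the data to the OS; it does not force it to physical storage (that would need os.fsync), but it is enough for other processes on the same machine to read the rows.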


















python






edited Nov 12 at 7:55









martineau











asked Nov 12 at 6:54









Jack Robson



























1 Answer
































I think you want to look at the buffering option of open() - this is what controls how often Python flushes to a file.



Specifically, setting open('name', 'wb', buffering=0) reduces buffering to the minimum (unbuffered, binary mode only), but maybe you want to set it to something else that makes sense for your case.




buffering is an optional integer used to set the buffering policy.
Pass 0 to switch buffering off (only allowed in binary mode), 1 to
select line buffering (only usable in text mode), and an integer > 1
to indicate the size in bytes of a fixed-size chunk buffer. When no
buffering argument is given, the default buffering policy works as
follows:




  • Binary files are buffered in fixed-size chunks; the size of the buffer is chosen using a heuristic trying to determine the underlying
    device’s “block size” and falling back on io.DEFAULT_BUFFER_SIZE. On
    many systems, the buffer will typically be 4096 or 8192 bytes long.

  • “Interactive” text files (files for which isatty() returns True) use line buffering. Other text files use the policy described above
    for binary files.




See also How often does python flush to a file?
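For the text-mode CSV case in the question, a minimal sketch of this approach uses line buffering (buffering=1): every CSV row ends with a newline, so each row is flushed as soon as it is written, with no explicit flush() calls (file name and values are illustrative, taken from the question):

```python
import csv

# Minimal sketch: buffering=1 requests line buffering in text mode,
# so the buffer is flushed whenever a newline is written - i.e. once
# per CSV row - without any explicit flush() calls.
with open('output.csv', mode='w', newline='', buffering=1) as output_file:
    writer = csv.writer(output_file)
    for value in ['gergmergre', '435q345q35']:
        writer.writerow([value, 'result'])  # row is flushed right away
```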






        edited Nov 12 at 7:38

























        answered Nov 12 at 7:01









        kabanus
