Apache Spark: loading CSV files from nested (inner) folders























import findspark
findspark.init(r'C:\spark')
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

a = []
i = 1880
while i < 2018:
    a.append(str(i) + '/' + str(i) + 'verr.csv')
    i = i + 1

dataset1 = spark.read.format('csv').option('header', 'true').load('C://venq/uyh/' + a)


I run the code and I get this error:
dataset1 = spark.read.format('csv').option('header','true').load('C://venq/uyh/'+ a)
TypeError: can only concatenate str (not "list") to str



i have a "C:venquyh18801880verr.csv" format a loop.
I have csv files in nested folders. I want to read them all with spark. however I get the following error. How can I solve this? thanks










      python python-3.x apache-spark hadoop






asked Nov 10 at 19:17 by tim software (254); edited Nov 10 at 19:22
























1 Answer
The variable 'a' is a list of file paths.



          dataset1 = spark.read.format('csv').option('header','true').load('C://venq/uyh/'+ a)



Here you are trying to concatenate the string 'C://venq/uyh/' with 'a', which is a list, and that is what raises the error. Try:



          root = r"C://venq/uyh/"

          while i<2018:
          a.append(root + str(i)+'/'+ str(i)+'verr.csv')
          i = i+1


and then pass 'a' to load directly:



          dataset1 = spark.read.format('csv').option('header','true').load(a)
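This works because DataFrameReader.load accepts either a single path string or a list of paths, so passing the list reads every matched file into one DataFrame. A minimal sketch of the same idea, using a list comprehension instead of the while loop and assuming the folder layout from the question:

# Build the same list of year paths with a comprehension (assumed layout: <year>/<year>verr.csv).
root = 'C://venq/uyh/'
paths = [root + str(year) + '/' + str(year) + 'verr.csv' for year in range(1880, 2018)]
dataset1 = spark.read.format('csv').option('header', 'true').load(paths)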






answered Nov 10 at 19:54 by justanotherguy (799); accepted
• it worked, thank you very much – tim software, Nov 10 at 20:13










• please consider accepting the answer if it worked for you. thanks! – justanotherguy, Nov 10 at 20:15
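As a side note, Spark's path handling also accepts Hadoop-style glob patterns, so if every year folder follows the same naming scheme, a single wildcard pattern should work without building the list at all. A minimal sketch, assuming the layout described in the question:

# Sketch: one glob pattern instead of an explicit list of paths.
# Assumes folders named 1880..2017, each holding a file like 1880verr.csv.
dataset1 = (
    spark.read
         .format('csv')
         .option('header', 'true')
         .load('C://venq/uyh/*/*verr.csv')
)

On Spark 3.0 or later (newer than this question), there is also a recursiveFileLookup reader option that walks every sub-folder under a root path, which would likewise avoid enumerating the year folders by hand.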










