Print all categories in pyspark dataframe column












I have a large dataframe where one column, called location, contains just a small number of cities, for example: ["New York", "London", "Paris", "Berlin", ...].

I want to print all the distinct values in that column, so that I can tell whether, for example, the values for one of the cities are missing. How can I do this, given that the .describe('location') method is not helping?










Tags: python, pyspark, pyspark-sql






asked Nov 14 '18 at 10:37 by Qubix
























3 Answers






































With this you can print the distinct values in the column location:



from pyspark.sql import functions as F

# select only the location column and drop duplicate values (lazy; see the note below)
df.select(F.col('location')).distinct()





answered Nov 14 '18 at 14:12 by Manrique
























• Sorry, by Distinct I meant just a list with all possible values, so not [London, Berlin, Berlin, Berlin, Paris], but just [London, Berlin, Paris]. I think mine does the same.

            – Qubix
            Nov 14 '18 at 14:14











• Yours does also, but you are aggregating data and performing an operation you didn't really need. With my code you get the result you want more efficiently :)

            – Manrique
            Nov 14 '18 at 14:18











• Did it help? @Qubix

            – Manrique
            Nov 15 '18 at 21:56
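Note: distinct() on its own is lazy and does not print anything; an action such as show() or collect() is needed. Below is a minimal, self-contained sketch of both options (the SparkSession setup and the sample data are illustrative assumptions, not part of the original answer); the collect() variant also yields the plain Python list of unique cities that the comment above asks for.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Illustrative session and sample data standing in for the large dataframe from the question
spark = SparkSession.builder.master("local[*]").appName("distinct-locations").getOrCreate()
df = spark.createDataFrame(
    [("New York",), ("London",), ("Berlin",), ("London",), ("Paris",)],
    ["location"],
)

# Build the distinct-values DataFrame (lazy, nothing runs yet)
distinct_locations = df.select(F.col("location")).distinct()

# Trigger an action to actually print the values to the console
distinct_locations.show(truncate=False)

# Or pull them back to the driver as a plain Python list, e.g. ['New York', 'London', 'Berlin', 'Paris']
locations = [row["location"] for row in distinct_locations.collect()]
print(locations)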

































The describe method is for basic predefined statistics like count, mean, std, min, max, etc. However, to find the distinct values of any column you can use the distinct() method.



          Hope this helps.



          Regards,



          Neeraj






answered Nov 19 '18 at 14:10 by neeraj bhadani
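For illustration, a short sketch of the difference (reusing the df with a location column from the question; the names are placeholders): describe() reports only summary statistics, while distinct() returns the unique values themselves.

# describe() returns predefined summary statistics; for a string column such as
# location only count/min/max carry meaning, so it never lists the cities
df.describe("location").show()

# distinct() instead returns each unique value of the column
df.select("location").distinct().show()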













































            I found it:



# show the number of rows for each distinct value of the location column
df.groupBy("location").count().show()





answered Nov 14 '18 at 10:43 by Qubix
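As a small variation on the same idea (assuming the same df as above), ordering the grouped counts ascending puts rare or unexpectedly missing cities at the top of the output:

from pyspark.sql import functions as F

# Count the rows per city and sort so the least frequent cities appear first
df.groupBy("location").count().orderBy(F.col("count").asc()).show()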






















