Why are there no nodes under /controller znode while Kafka cluster is running?

Output from the ZooKeeper client:



[zk: s1:2181(CONNECTED) 1] ls /brokers/ids
[1, 2, 3]
[zk: s1:2181(CONNECTED) 1] ls /controller



There are 3 Kafka brokers in the cluster, so why can't I find the controller under the /controller znode as well?



Another question: how can I know which master election Kafka uses, ZooKeeper's own master election or one implemented by Kafka itself? Is Kafka's master election used to elect a master broker? And what is leader election: is it used to elect the leader of a partition?
apache-kafka
edited Nov 29 '18 at 13:53 by Jacek Laskowski
asked Nov 16 '18 at 5:46 by zluo

1 Answer

You're asking very general questions about how Kafka is designed. For those, there's no better place to look than the official Kafka documentation:

https://kafka.apache.org/documentation.html#impl_zookeeper

The official controller explanation, from the docs (my highlights):

    It is also important to optimize the leadership election process as that is the critical window of unavailability. A naive implementation of leader election would end up running an election per partition for all partitions a node hosted when that node failed. Instead, we elect one of the brokers as the "controller". This controller detects failures at the broker level and is responsible for changing the leader of all affected partitions in a failed broker. The result is that we are able to batch together many of the required leadership change notifications, which makes the election process far cheaper and faster for a large number of partitions. If the controller fails, one of the surviving brokers will become the new controller.

https://kafka.apache.org/documentation.html#design_replicamanagment

So the controller is the broker responsible for monitoring the leaders of all topic partitions. If one or more partition leaders becomes unavailable, the controller runs leader elections and appoints a new leader for each affected partition, so that clients (both consumers and producers) can keep producing and consuming.
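As for which election mechanism is which: Kafka implements controller election itself, but on top of a ZooKeeper primitive. Whichever broker first creates the ephemeral /controller znode becomes the controller; when that broker dies, the ephemeral znode vanishes and the surviving brokers race to recreate it. A toy sketch of that race (FakeZooKeeper is purely illustrative, not a real client; a real broker would catch ZooKeeper's NodeExistsError instead):

```python
# Toy in-memory stand-in for ZooKeeper's "create ephemeral znode" primitive,
# illustrating controller election: the first broker to create /controller
# wins; the others lose the race and merely watch the znode.
class FakeZooKeeper:
    def __init__(self):
        self.znodes = {}

    def create_ephemeral(self, path, data):
        if path in self.znodes:
            raise FileExistsError(path)  # NodeExistsError in a real client
        self.znodes[path] = data


def try_become_controller(zk, broker_id):
    try:
        zk.create_ephemeral("/controller", {"brokerid": broker_id})
        return True   # this broker is now the controller
    except FileExistsError:
        return False  # another broker already holds the znode


zk = FakeZooKeeper()
results = {b: try_become_controller(zk, b) for b in (1, 2, 3)}
print(results)  # broker 1 wins the race; brokers 2 and 3 lose
```

Partition leader election is a separate, higher-level process: it is the controller that runs it, picking a new leader replica for each affected partition when a broker fails.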



The reason you can't see anything under /controller is that you're treating it as a "directory", when it is really a znode that holds data itself. Issue a get /controller command to see its contents. You should see something like this:

[zk: s1:2181(CONNECTED) 1] get /controller
{"version":1,"brokerid":100,"timestamp":"1506197069724"}
cZxid = 0xf9
ctime = Sat Sep 23 22:04:29 CEST 2017
mZxid = 0xf9
mtime = Sat Sep 23 22:04:29 CEST 2017
pZxid = 0xf9
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x15eaa3a4fdd000d
dataLength = 56
numChildren = 0
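If you need the controller's broker id programmatically (for monitoring, say), note that the znode payload is plain JSON. A minimal sketch of parsing it (the function name is mine; actually fetching the bytes from ZooKeeper with a client library is left out):

```python
import json


def controller_broker_id(payload: bytes) -> int:
    """Extract the controller's broker id from the /controller znode data."""
    return json.loads(payload)["brokerid"]


# The JSON line from the `get /controller` output above
sample = b'{"version":1,"brokerid":100,"timestamp":"1506197069724"}'
print(controller_broker_id(sample))  # -> 100
```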

answered Nov 16 '18 at 6:08 by mjuarez