Confluent Kafka Connect distributed mode JDBC connector



























We have successfully ingested data from MySQL into Kafka using the JDBC standalone connector, but we are now facing an issue using the same connector in distributed mode (as the Kafka Connect service).



The command used for the standalone connector, which works fine:



/usr/bin/connect-standalone /etc/kafka/connect-standalone.properties /etc/kafka-connect-jdbc/source-quickstart-mysql.properties
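
For reference, the source properties file contains roughly the following (a sketch reconstructed from the connector config quoted in the comments below; the actual file may differ):

name=linuxemp-connector
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:mysql://X.X.X.X:3306/linux_db?user=groot&password=pwd
table.whitelist=emp
mode=timestamp
incrementing.column.name=empid
topic.prefix=mysqlconnector-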


We have now stopped the standalone connector and started the Kafka Connect service in distributed mode:



systemctl status confluent-kafka-connect
● confluent-kafka-connect.service - Apache Kafka Connect - distributed
Loaded: loaded (/usr/lib/systemd/system/confluent-kafka-connect.service; disabled; vendor preset: disabled)
Active: active (running) since Wed 2018-11-14 22:52:49 CET; 41min ago
Docs: http://docs.confluent.io/
Main PID: 130178 (java)
CGroup: /system.slice/confluent-kafka-connect.service
└─130178 java -Xms256M -Xmx2G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -Djava.a...


Two nodes are currently running the Connect service with the same connect-distributed.properties file:



bootstrap.servers=node1IP:9092,node2IP:9092
group.id=connect-cluster
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.topic=connect-offsets
offset.storage.replication.factor=1
config.storage.topic=connect-configs
config.storage.replication.factor=1
status.storage.topic=connect-status
status.storage.replication.factor=1
offset.flush.interval.ms=10000
plugin.path=/usr/share/java


The Connect service is up and running, but it doesn't load the connectors defined under /etc/kafka/connect-standalone.properties.



What should be done to the service so that whenever you run systemctl start confluent-kafka-connect, it starts the service along with the connectors defined under /etc/kafka-connect-*/, just like when you run a standalone connector manually, providing paths to the properties files?



Might be a silly question, but please help!



Thanks!!










Tags: apache-kafka apache-kafka-connect confluent






asked Nov 14 '18 at 22:45 by Tony · edited Nov 14 '18 at 23:35 by cricket_007
























2 Answers







































    "it runs the service and starts the defined connectors under /etc/kafka-connect-*/"

That's not how distributed mode works... It doesn't know which property files you want to load, and it doesn't scan those folders.[1]

With standalone mode, the N+1 property files that you give are loaded immediately, yes, but for connect-distributed you must create connectors with HTTP POST calls to the Connect REST API.
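
For example, a connector is created by POSTing its configuration as JSON to any worker's REST port (a minimal sketch; the host, connector name, and connection settings below are placeholders, not values from this setup):

    curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors \
      -d '{
            "name": "jdbc-source-mysql",
            "config": {
              "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
              "tasks.max": "1",
              "connection.url": "jdbc:mysql://dbhost:3306/mydb?user=dbuser&password=dbpass",
              "table.whitelist": "mytable",
              "mode": "incrementing",
              "incrementing.column.name": "id",
              "topic.prefix": "mysql-"
            }
          }'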





Confluent Control Center or Landoop's Connect UI can provide a nice management web portal for these operations.



By the way, if you have more than one broker, I'd suggest increasing the replication factor on those Connect topics in the connect-distributed.properties file.
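
For example, with three or more brokers the storage-topic settings in connect-distributed.properties could be raised like this (a sketch; match the factor to your broker count):

    offset.storage.replication.factor=3
    config.storage.replication.factor=3
    status.storage.replication.factor=3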



[1] It might be a nice feature if it did, but then you would have to ensure connectors are never deleted/stopped in distributed mode, and you would just end up in an inconsistent state between what's running and the files on the filesystem.






– answered Nov 14 '18 at 23:39 by cricket_007
























• I am invoking the connector using: curl -X POST -H "Accept:application/json" -H "Content-Type:application/json" 10.30.72.218:8083/connectors/ -d '{"name": "linuxemp-connector", "config": { "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector", "tasks.max": "1", "connection.url": "jdbc:mysql://X.X.X.X:3306/linux_db?user=groot&password=pwd","table.whitelist": "emp","mode": "timestamp","incrementing.column.name":"empid","topic.prefix": "mysqlconnector-" } }'. I already have the connector drivers in the plugin path, and they are kept on all Connect nodes. – Tony, Nov 15 '18 at 11:22













• But I am still getting this error – {"error_code":400,"message":"Connector configuration is invalid and contains the following 2 error(s):\nInvalid value java.sql.SQLException: No suitable driver found for – Tony, Nov 15 '18 at 11:27











          • "No suitable driver found" would mean the JAR isnt being loaded from the plugin path (more specifically the connect-jdbc directory). If standalone mode worked, and you're running distributed mode on the same machine, then I'm not sure why that'd happen

            – cricket_007
            Nov 15 '18 at 15:15













• @Tony I think you should make sure that you've added the MySQL JDBC driver library to the ${confluent.install.dir}/share/java/kafka-connect-jdbc/ directory. – marius_neo, Nov 16 '18 at 11:42











• @marius Feel free to answer stackoverflow.com/questions/53321726/… – cricket_007, Nov 16 '18 at 14:32

































I can describe what I did to start the JDBC connector in distributed mode:



On my local machine, I am using the Confluent CLI utility to boot up the services faster.



          ./confluent start


Afterwards, I stopped Kafka Connect:



          ./confluent stop connect


and then I proceeded to manually start the customized connect-distributed workers on two different ports (18083 and 28083):



          ➜  bin ./connect-distributed ../etc/kafka/connect-distributed-worker1.properties

          ➜ bin ./connect-distributed ../etc/kafka/connect-distributed-worker2.properties


NOTE: Set the plugin.path setting to the full (not relative) path, e.g. plugin.path=/full/path/to/confluent-5.0.0/share/java



Then I can easily add a new connector:



          curl -s -X POST -H "Content-Type: application/json" --data @/full/path/to/confluent-5.0.0/etc/kafka-connect-jdbc/source-quickstart-sqlite.json http://localhost:18083/connectors
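
For reference, the JSON file passed via --data holds the connector name and config in the standard REST payload shape, roughly like this (a sketch modeled on the JDBC quickstart; the values are illustrative, not the actual file contents):

    {
      "name": "jdbc-source-sqlite",
      "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "tasks.max": "1",
        "connection.url": "jdbc:sqlite:test.db",
        "mode": "incrementing",
        "incrementing.column.name": "id",
        "topic.prefix": "test-sqlite-jdbc-"
      }
    }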


This should do the trick.



As cricket_007 already pointed out, consider a replication factor of at least 3 for the Connect topics on the Kafka brokers in case you're dealing with data that you don't want to lose in the event of an outage of one of the brokers.






– answered Nov 16 '18 at 11:53 by marius_neo
























• confluent start will run distributed mode... Why did you use two processes, though? – cricket_007, Nov 16 '18 at 14:31










