MySQL crashes often












I have a DigitalOcean droplet created with Laravel Forge, and for the past few days the MySQL server has been crashing; the only way to get it working again is to reboot the server (MySQL makes the whole server unresponsive).



When I run htop to look at the process list, it shows several entries of /usr/sbin/mysqld --daemonize --pid-file=/run/mysqld/mysql.pid (currently 33 of them).



The error log is bigger than 1GB (yes, I know!) and shows this message hundreds of times:




[Warning] InnoDB: Difficult to find free blocks in the buffer pool (21
search iterations)! 21 failed attempts to flush a page! Consider
increasing the buffer pool size. It is also possible that in your Unix
version fsync is very slow, or completely frozen inside the OS kernel.
Then upgrading to a newer version of your operating system may help.
Look at the number of fsyncs in diagnostic info below. Pending flushes
(fsync) log: 0; buffer pool: 0. 167678974 OS file reads, 2271392 OS
file writes, 758043 OS fsyncs. Starting InnoDB Monitor to print
further diagnostics to the standard output.




This droplet has been running for 6 months, but the problem only started last week. The only recent change is that we now send weekly notifications to customers (only the ones that subscribed) to let them know about certain events happening in the current week. This is a fairly intensive process, because we have a few thousand customers, but we use Laravel Queues to work through everything.



Is this a MySQL settings issue?
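This failure pattern (mysqld dying under a heavy batch job while InnoDB complains about flushing) often turns out to be memory pressure rather than a bug in MySQL itself, so one way to narrow it down is to check whether the kernel's OOM killer is involved. A minimal triage sketch; the commands and the error-log path are stock Ubuntu defaults, not taken from this server, so adjust for your setup:

    # rough checks on a standard Ubuntu droplet (paths are assumptions)
    free -h                                              # overall memory use and swap
    dmesg -T | grep -iE 'out of memory|killed process'   # did the OOM killer take mysqld down?
    sudo tail -n 100 /var/log/mysql/error.log            # last InnoDB messages before the crash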










mysql digital-ocean laravel-forge






asked Nov 14 '18 at 21:07









TJ is too short

1 Answer




















Try increasing innodb_buffer_pool_size in my.cnf.



The usual recommendation for a dedicated DB server is around 80% of RAM; if you're already at that level, consider moving to a bigger instance type.






answered Nov 15 '18 at 1:35 – Oliver
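As a rough sketch only (the 8G value below is hypothetical, sized for a 16GB droplet that also runs nginx/PHP and the queue workers; the 80% figure above applies to a machine dedicated to MySQL), the setting goes in the [mysqld] section of my.cnf:

    [mysqld]
    # hypothetical value for a shared 16GB droplet; the MySQL 5.7 default is only 128M
    innodb_buffer_pool_size = 8G

After saving the change, restart MySQL (e.g. sudo service mysql restart on Ubuntu) so the new pool size is allocated at startup.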
























• Thanks very much for your reply @Oliver! I'm on a server with 16GB RAM, so I changed innodb_buffer_pool_size to 13GB (it was at its default value before) and ran a test: a script that removes all records from a table and adds them again, 30k records in total. It seems better and the number of error messages is lower, but they are still there: "InnoDB: page_cleaner: 1000ms intended loop took 4228ms. The settings might not be optimal."

  – TJ is too short, Nov 15 '18 at 8:39











• That warning is expected if you churn through a lot of pages like you seem to do; see e.g. stackoverflow.com/questions/41134785 for more details.

  – Oliver, Nov 15 '18 at 15:10
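If it helps to confirm what the running server actually picked up, and whether flushing is still falling behind, both can be checked from the shell (assuming the mysql client and credentials are available on the droplet):

    # value currently in use by the running server, in bytes
    mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
    # the same 'Pending flushes (fsync)' counters quoted in the error log above
    mysql -u root -p -e "SHOW ENGINE INNODB STATUS\G" | grep -iA2 'pending flushes'

SHOW ENGINE INNODB STATUS is the same InnoDB monitor output the warning in the question refers to, so it shows those counters without waiting for the next crash.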










