Logstash not reading new entries from MySQL

Date: 2023-10-09

This article describes how to deal with Logstash not reading new entries from MySQL; the question and accepted answer below should be a useful reference if you are hitting the same problem.

Problem description


I have Logstash and Elasticsearch installed locally on my Windows 7 machine. I installed logstash-input-jdbc in Logstash.

I have data in a MySQL database which I send to Elasticsearch using Logstash so I can do some report generation.

Here is the Logstash config file that does this:

    input {
      jdbc {
        jdbc_driver_library => "C:/logstash/lib/mysql-connector-java-5.1.37-bin.jar"
        jdbc_driver_class => "com.mysql.jdbc.Driver"
        jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/test"
        jdbc_user => "root"
        jdbc_password => ""
        statement => "SELECT * FROM transport.audit"
        jdbc_paging_enabled => "true"
        jdbc_page_size => "50000"
      }
    }

    output {
      elasticsearch {
        hosts => ["localhost:9200"]
        # Note: MM is the month in the index name pattern; lowercase mm would be minutes.
        index => "transport-audit-%{+YYYY.MM.dd}"
      }
    }

This works and Logstash sends the data to Elasticsearch when I run:

    bin\logstash agent -f logstash\conf\1_input.conf

This is the output from that command:

    io/console not supported; tty will not be manipulated
    Default settings used: Filter workers: 4
    Logstash startup completed
    Logstash shutdown completed
                  

WHY does Logstash shut down?

When I check Elasticsearch, the data is there, and if I run the command again the data is re-indexed (duplicated).

Here is the MySQL data:

What I am trying to do (achieve):

I want Logstash to run and listen for new entries on the audit table and only index that data (when a new audit entry is inserted into the table, Logstash would know and send that entry to Elasticsearch).

Also, why does Logstash stop when I run that command; should it not keep running? I am new to Logstash and Elasticsearch.

Thanks

G

I have also posted the same question in the Elastic forum, and if I get an answer there I will post it here to help others.

Solution

By default, the logstash-input-jdbc plugin will run your SELECT statement once and then quit. You can change this behavior by adding a schedule parameter with a cron expression to your configuration, like this:

    input {
      jdbc {
        jdbc_driver_library => "C:/logstash/lib/mysql-connector-java-5.1.37-bin.jar"
        jdbc_driver_class => "com.mysql.jdbc.Driver"
        jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/test"
        jdbc_user => "root"
        jdbc_password => ""
        statement => "SELECT * FROM transport.audit"
        schedule => "* * * * *"               # <----- add this line
        jdbc_paging_enabled => "true"
        jdbc_page_size => "50000"
      }
    }
                  

The result is that the SELECT statement will now run every minute.
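
Since the full SELECT on transport.audit now runs every minute, every row is re-sent on each run, which is exactly the duplication the question describes. One common mitigation, shown here only as a sketch that goes beyond the original answer, is to derive the Elasticsearch document id from the table's primary key, so repeated runs overwrite documents instead of duplicating them. It assumes the audit table has a unique id column that comes through as a field on each event:

    output {
      elasticsearch {
        hosts => ["localhost:9200"]
        index => "transport-audit-%{+YYYY.MM.dd}"
        # Reuse the row's primary key as the document id so a re-run of the
        # same SELECT updates the existing document instead of indexing a
        # duplicate. The "id" field name is an assumption about the table.
        document_id => "%{id}"
      }
    }

Note that with a date-based index name like the one above, this only deduplicates within a single daily index; rows re-read on a later day would still land in a new index.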

If you had a date field in your MySQL table (but that doesn't seem to be the case), you could also use the predefined sql_last_start parameter so that you don't re-index all records on every run. That parameter can be used in your query like this:

    statement => "SELECT * FROM transport.audit WHERE your_date_field >= :sql_last_start"
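
One more caveat that is not part of the original answer: in later versions of the logstash-input-jdbc plugin the predefined parameter was renamed to :sql_last_value, and the plugin can track a numeric column such as an auto-incrementing primary key instead of a timestamp. A minimal sketch, assuming the audit table has an auto-incrementing id column (the column name is an assumption):

    input {
      jdbc {
        jdbc_driver_library => "C:/logstash/lib/mysql-connector-java-5.1.37-bin.jar"
        jdbc_driver_class => "com.mysql.jdbc.Driver"
        jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/test"
        jdbc_user => "root"
        jdbc_password => ""
        schedule => "* * * * *"
        # Remember the highest id seen so far and only fetch newer rows on
        # each scheduled run. Assumes an auto-incrementing "id" column.
        use_column_value => true
        tracking_column => "id"
        statement => "SELECT * FROM transport.audit WHERE id > :sql_last_value"
      }
    }

The last seen value is persisted to a small metadata file between runs (see the plugin's last_run_metadata_path option), so restarting Logstash does not re-index rows that were already sent.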
                  
