elasticsearch-jdbc-river

Automatically syncing ElasticSearch with SQL

Submitted by 丶灬走出姿态 on 2019-12-07 22:44:55
Question: I've run this query and it worked well:

curl -XPUT 'localhost:9200/_river/my_jdbc_river/_meta' -d '{
  "type" : "jdbc",
  "jdbc" : {
    "url" : "jdbc:mysql://localhost:3306/test",
    "user" : "myaccount",
    "password" : "myaccount",
    "sql" : "select * from orders"
  }
}'

Everything seems to be indexed. However, when I change data in the Orders table, the changes are not reflected in the corresponding documents in ElasticSearch. Is it possible to automatically sync updated/changed data?

Answer 1: You need to add another parameter, schedule, to tell the jdbc-river to pull data periodically. Here is a reference to this.
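
A minimal sketch of what that answer describes, reusing the river definition from the question and adding a scheduling parameter. The exact name differs between plugin versions ("schedule" with a cron expression in later releases, "poll" with an interval in older ones), so treat the value below as an assumption and check it against your plugin's README.

# Assumed: "schedule" takes a Quartz-style cron expression; this one means "every 5 minutes".
curl -XPUT 'localhost:9200/_river/my_jdbc_river/_meta' -d '{
  "type" : "jdbc",
  "jdbc" : {
    "url" : "jdbc:mysql://localhost:3306/test",
    "user" : "myaccount",
    "password" : "myaccount",
    "sql" : "select * from orders",
    "schedule" : "0 0/5 * * * ?"
  }
}'

With this in place the river re-runs the SQL statement on every tick instead of only once when the river is created.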

Preferred method of indexing bulk data into ElasticSearch?

Submitted by 徘徊边缘 on 2019-12-04 19:55:05
Question: I've been looking at ElasticSearch as a solution to get better search and analytics functionality at my company. All of our data is in SQL Server at the moment, and I've successfully installed the JDBC River and gotten some test data into ES. Rivers seem like they may be deprecated in future releases, and the JDBC river is maintained by a third party. Logstash doesn't seem to support indexing from SQL Server yet (I don't know if it's a planned feature). So for my situation, where I want to move data from SQL Server to ElasticSearch, what's the preferred method of indexing data and maintaining
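
Since rivers may go away, one push-style option (my sketch, not taken from the answers on this page) is a small job of your own that reads rows from SQL Server and writes them to Elasticsearch's bulk API. Index name, type, ids, and field values below are invented placeholders.

# Each action line is followed by its document; the bulk payload must end with a newline.
curl -XPOST 'localhost:9200/_bulk' -d '{ "index" : { "_index" : "orders", "_type" : "order", "_id" : "1" } }
{ "customer" : "Acme", "total" : 100.0 }
{ "index" : { "_index" : "orders", "_type" : "order", "_id" : "2" } }
{ "customer" : "Globex", "total" : 250.5 }
'

Reusing the row's primary key as _id makes repeated runs idempotent: a changed row simply overwrites its existing document.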

In JDBC River, how to stop the housekeeping job from deleting records?

Submitted by 泪湿孤枕 on 2019-12-04 05:54:32
Question: When the JDBC River does its polling, the housekeeping job removes a chunk of records. Does anyone know a solution for this? I want to stop the record deletion. For more reference: https://github.com/jprante/elasticsearch-river-jdbc/issues/61

Answer 1: The housekeeping job is stopped if versioning is disabled with versioning: false in the JDBC river parameters, which is the default.

{ "jdbc" : {
    "strategy" : "oneshot",
    "driver" : null,
    "url" : null,
    "user" : null,
    "password" : null,
    "sql" : null,
    "sqlparams" : null,
    "poll" : "1h",
    "rounding" : null,
    "scale" : 0,
    "autocommit" : false,
    "fetchsize" : 10,
    "max_rows" : 0,
    "max
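
A sketch of a concrete river definition with versioning switched off explicitly. The connection values are the same placeholders used elsewhere on this page, and I'm assuming the flag sits inside the "jdbc" block alongside the defaults listed above; plugin versions differ, so verify against the README of the version you run.

# Assumed placement of "versioning"; credentials and the SQL statement are placeholders.
curl -XPUT 'localhost:9200/_river/my_jdbc_river/_meta' -d '{
  "type" : "jdbc",
  "jdbc" : {
    "url" : "jdbc:mysql://localhost:3306/test",
    "user" : "myaccount",
    "password" : "myaccount",
    "sql" : "select * from orders",
    "poll" : "1h",
    "versioning" : false
  }
}'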

Alternatives to Elasticsearch river plugins

Submitted by 試著忘記壹切 on 2019-12-03 03:18:14
Question: I want to synchronize an Elasticsearch index with the contents of an SQL database. The Elasticsearch JDBC river meets all my requirements, but the documentation says the plugin is deprecated. I don't want to use a tool that won't be supported in the coming years. What are the alternatives? In the river's documentation it says: Note, JDBC plugin is not only a river, but also a standalone module. Because the Elasticsearch river API is deprecated, this is an important
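
The excerpt is cut off, but since it is the river API itself that is deprecated, one river-free pattern (my sketch, not part of the quoted documentation) is to push changes from the application side with the ordinary document APIs whenever a row is written. Index name, type, id, and fields below are invented.

# The application indexes (or re-indexes) the document itself right after updating the SQL row.
curl -XPUT 'localhost:9200/myindex/mytype/42' -d '{
  "name" : "example row pushed by the application",
  "updated_at" : "2015-01-01T00:00:00Z"
}'

The same hook can issue a DELETE for removed rows, which is something a pull-based river struggles with.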

Fetching changes from table with ElasticSearch JDBC river

Submitted by 懵懂的女人 on 2019-11-30 15:11:07
Question: I'm configuring the JDBC river for ElasticSearch but I can't find any good config example. I've read all the pages on the elasticsearch-river-jdbc GitHub. I have a SQL query and I need to fetch changes from all table columns every X seconds. How can I tell the JDBC river that some row has changed and should be reindexed? Data is fetched during ES server start and polling is happening, but changes are not fetched from the DB into ES. My configuration:

curl -XPUT 'localhost:9200/_river/itemsi/_meta' -d '{
  "type" : "jdbc",
  "jdbc" : {
    "driver" : "com.mysql.jdbc.Driver",
    "url" : "jdbc:mysql://mydb.com:3306/dbname",
    "user"
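
The configuration above is cut off, but a common pattern with this river (a sketch; credentials, column, and table names are invented, and behaviour should be checked against the plugin's README) is to alias the primary key to _id, so that on the next poll a changed row overwrites its existing document instead of being indexed as a new one, and to set a short polling interval.

# "id as _id" maps the column onto the document id; "poll" : "30s" re-runs the SQL every 30 seconds.
curl -XPUT 'localhost:9200/_river/itemsi/_meta' -d '{
  "type" : "jdbc",
  "jdbc" : {
    "driver" : "com.mysql.jdbc.Driver",
    "url" : "jdbc:mysql://mydb.com:3306/dbname",
    "user" : "myuser",
    "password" : "mypassword",
    "sql" : "select id as _id, name, price, updated_at from items",
    "poll" : "30s"
  }
}'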

ElasticSearch river JDBC MySQL not deleting records

Submitted by 倖福魔咒の on 2019-11-27 17:11:29
Question: I'm using the JDBC plugin for ElasticSearch to keep an index in sync with my MySQL database. It picks up new and changed records, but it does not delete records that have been removed from MySQL; they remain in the index. This is the code I use to create the river:

curl -XPUT 'localhost:9200/_river/account_river/_meta' -d '{
  "type" : "jdbc",
  "jdbc" : {
    "driver" : "com.mysql.jdbc.Driver",
    "url" : "jdbc:mysql://localhost:3306/test",
    "user" : "test_user",
    "password" : "test_pass",
    "sql" : "SELECT `account`.`id` as `
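
The excerpt ends mid-query, but one common workaround (my sketch, not from the question itself) is a soft-delete flag: instead of removing rows from MySQL, mark them deleted so the river still picks the change up and the flag lands in the Elasticsearch document, which your queries can then filter on. The deleted column and the simplified query below are assumptions.

# The deleted flag ends up in each document, so searches filter on deleted = 0
# instead of relying on the river to remove documents.
curl -XPUT 'localhost:9200/_river/account_river/_meta' -d '{
  "type" : "jdbc",
  "jdbc" : {
    "driver" : "com.mysql.jdbc.Driver",
    "url" : "jdbc:mysql://localhost:3306/test",
    "user" : "test_user",
    "password" : "test_pass",
    "sql" : "SELECT `id` as `_id`, `name`, `email`, `deleted` FROM `account`"
  }
}'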