I have to send records from Aurora/MySQL to MSK, and from there to the Elasticsearch service:

Aurora --> Kafka Connect ---> AWS MSK ---> Kafka Connect ---> Elasticsearch

The records in the Aurora table are structured something like this.

I think the records will go to AWS MSK in this format.

So, in order for Elasticsearch to consume them, I need to use a proper schema, so I have to use the Schema Registry.
My questions

Question 1

How should I use the Schema Registry for the above type of message? Is a Schema Registry even required? Do I have to create a JSON structure for this, and if so, where do I keep it? I need more help to understand this.
I have edited it.

It mentions ZooKeeper, but I don't know what kafkastore.topic=_schema is, or how to link this to a custom schema.

When I start it, I even get this error, which I was expecting because I did not do anything about the schema.

I do have the JDBC connector installed, and when I start it I get the error below.
Question 2: Can I create two connectors on one EC2 instance (the JDBC one and the Elasticsearch one)? If yes, do I have to start both in separate CLIs?
Question 3: When I open vim /usr/local/confluent/etc/kafka-connect-jdbc/source-quickstart-sqlite.properties, I only see property values like the ones below.

In the above properties file, where can I mention the schema name and table name?
Based on the answer, I am updating my Kafka Connect JDBC configuration.

--------------- start JDBC connect Elasticsearch -----------------------------

And then

Then I modified the properties below.

Last, I modified

and here I modified the properties below.

When I list the topics, I do not see any topic listed for the table name.

Stack trace for the error message:
> schema registry is required?

No. You can enable schemas in JSON records, and the JDBC source can create them for you based on the table information.
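As a sketch, the worker converter settings that embed a schema in each JSON record look like this (standard Kafka Connect property names; the bootstrap address is a placeholder for your MSK brokers):

```properties
# connect-standalone.properties (worker configuration)
bootstrap.servers=<your-msk-bootstrap-servers>
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Embed the schema in every JSON message so the Elasticsearch sink can use it
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
```

With schemas.enable=true, each record is wrapped as {"schema": {...}, "payload": {...}}, so no separate Schema Registry is needed for this pipeline.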
> I mentioned ZooKeeper, but I don't know what kafkastore.topic=_schema is

If you want to use the Schema Registry, you should be using kafkastore.bootstrap.servers with the Kafka address, not ZooKeeper. So remove kafkastore.connection.url.
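For reference, a minimal schema-registry.properties along those lines (broker addresses are placeholders; `_schemas` is the Registry's default internal topic):

```properties
# schema-registry.properties -- point at the Kafka brokers, not ZooKeeper
listeners=http://0.0.0.0:8081
kafkastore.bootstrap.servers=PLAINTEXT://<msk-broker-1>:9092,PLAINTEXT://<msk-broker-2>:9092
# The internal Kafka topic where the Registry stores registered schemas;
# it is created automatically the first time the Registry starts
kafkastore.topic=_schemas
```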
Please read the docs for explanations of all properties
> I did not do anything about the schema.

That doesn't matter. The schemas topic gets created when the Registry first starts.
> Can I create two connectors on one EC2 instance?

Yes (ignoring available JVM heap space). Again, this is detailed in the Kafka Connect documentation.
In standalone mode, you first pass the Connect worker configuration, then up to N connector property files, all in one command.
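For example, under the Confluent install path from the question, that one command might look like this (the connector file names here are illustrative):

```shell
# Standalone mode: worker config first, then each connector's properties file
/usr/local/confluent/bin/connect-standalone \
  /usr/local/confluent/etc/kafka/connect-standalone.properties \
  jdbc-source.properties \
  elasticsearch-sink.properties
```

Both connectors then run inside the same worker JVM, so one CLI session is enough.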
In distributed mode, you use the Kafka Connect REST API:

https://docs.confluent.io/current/connect/managing/configuring.html
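A hedged sketch of submitting the Elasticsearch sink that way (the endpoint, topic name, and worker port are placeholders; `type.name` applies to older Elasticsearch versions):

```shell
# Distributed mode: POST each connector's config to the worker's REST API
curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
  "name": "elasticsearch-sink",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "connection.url": "https://<your-es-endpoint>",
    "topics": "aurora-<table-name>",
    "type.name": "_doc",
    "key.ignore": "true"
  }
}'
```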
> When I open vim /usr/local/confluent/etc/kafka-connect-jdbc/source-quickstart-sqlite.properties

First of all, that file is for SQLite, not MySQL/Postgres. You don't need to use the quickstart files; they are only there for reference.
Again, all properties are well documented
https://docs.confluent.io/current/connect/kafka-connect-jdbc/index.html#connect-jdbc
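As a sketch of where the table name goes, a MySQL/Aurora source config might look like this (all angle-bracket values are placeholders; the resulting topic name is topic.prefix plus the table name):

```properties
# jdbc-source.properties -- illustrative MySQL/Aurora source connector
name=aurora-jdbc-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:mysql://<aurora-endpoint>:3306/<database>
connection.user=<user>
connection.password=<password>
# Limit polling to specific tables; topic becomes aurora-<table-name>
table.whitelist=<table-name>
# Poll for new rows using an auto-incrementing ID column
mode=incrementing
incrementing.column.name=id
topic.prefix=aurora-
```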
> I do have the JDBC connector installed, and when I start it I get the error below.

Here's more information about how you can debug that:
https://www.confluent.io/blog/kafka-connect-deep-dive-jdbc-source-connector/
As stated before, I would personally suggest using Debezium/CDC where possible:
Debezium Connector for RDS Aurora
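For orientation, a Debezium MySQL source submitted in distributed mode looks roughly like this (property names as in Debezium 1.x; all angle-bracket values are placeholders, and Aurora must have the binlog enabled with binlog_format=ROW):

```json
{
  "name": "aurora-debezium-source",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "<aurora-endpoint>",
    "database.port": "3306",
    "database.user": "<user>",
    "database.password": "<password>",
    "database.server.id": "184054",
    "database.server.name": "aurora",
    "table.include.list": "<database>.<table-name>",
    "database.history.kafka.bootstrap.servers": "<msk-bootstrap-servers>",
    "database.history.kafka.topic": "schema-changes.aurora"
  }
}
```

Unlike the JDBC source's polling, this streams row-level changes from the binlog, so deletes and updates are captured as well.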