For the read side we could be using Elasticsearch to index the data for searching. If Elasticsearch loses data (for example because a restore was required), it is difficult to re-synchronize the command side and the read side. One strategy is to store the consumer offsets in the same datastore as the read-side data, using the Kafka consumer's rebalance hooks to store the offsets not only in Kafka/ZooKeeper but also in the other datastore.
For a regular subscription (KafkaConsumer.subscribe) we can attach a ConsumerRebalanceListener to get notified of partition assignment/revocation for the subscribed topics, and then seek to the last offset stored per topic/partition in the datastore.
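As a sketch of that pattern, the following models the rebalance callbacks with an in-memory map standing in for the external datastore. The `ExternalOffsetStore` interface and the method names are illustrative, not Eventuate or Kafka API; in real code `onPartitionRevoked`/`onPartitionAssigned` would be the `ConsumerRebalanceListener` callbacks passed to `KafkaConsumer.subscribe`, and restoring a position would call `KafkaConsumer.seek`:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical contract: offsets live in the same datastore as the read-side data.
interface ExternalOffsetStore {
    void save(String topic, int partition, long offset);
    Long load(String topic, int partition); // null if never stored
}

// In-memory stand-in for e.g. an Elasticsearch-backed implementation.
class InMemoryOffsetStore implements ExternalOffsetStore {
    private final Map<String, Long> offsets = new HashMap<>();
    public void save(String topic, int partition, long offset) {
        offsets.put(topic + "-" + partition, offset);
    }
    public Long load(String topic, int partition) {
        return offsets.get(topic + "-" + partition);
    }
}

// Mirrors what a ConsumerRebalanceListener would do around KafkaConsumer.seek.
class OffsetRestoringListener {
    private final ExternalOffsetStore store;

    OffsetRestoringListener(ExternalOffsetStore store) { this.store = store; }

    // onPartitionsRevoked: persist the current position before losing the partition.
    void onPartitionRevoked(String topic, int partition, long currentPosition) {
        store.save(topic, partition, currentPosition);
    }

    // onPartitionsAssigned: return the offset to seek to (last stored, or 0 if none).
    long onPartitionAssigned(String topic, int partition) {
        Long stored = store.load(topic, partition);
        return (stored != null) ? stored : 0L;
    }
}

public class RebalanceSketch {
    public static void main(String[] args) {
        ExternalOffsetStore store = new InMemoryOffsetStore();
        OffsetRestoringListener listener = new OffsetRestoringListener(store);

        // Simulate a rebalance: partition revoked at offset 42, then reassigned.
        listener.onPartitionRevoked("orders", 0, 42L);
        long resumedAt = listener.onPartitionAssigned("orders", 0);
        System.out.println(resumedAt); // prints 42
    }
}
```

Because the offset is written to the same datastore as the projected documents, the two can be updated together, which is what keeps the read side consistent after a restore.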
The Eventuate consumer should provide a way to specify, via an implementation or configuration, which ExternalOffsetStorage to use, because the message consumer and the offset storage should write to the same datastore.
Another possibility is to keep a global MessageConsumer @Bean (actually a MessageConsumerKafkaImpl) and define an ElasticsearchOffsetStorage @Bean that gets injected into it and automatically overrides the default offset-commit behavior.
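A minimal sketch of that injection idea, using plain constructors in place of Spring wiring. The names MessageConsumerKafkaImpl and ElasticsearchOffsetStorage come from the proposal above; the `OffsetStorage` interface, the constructor shape, and the map-backed storage are assumptions for illustration and do not match the real Eventuate types:

```java
import java.util.HashMap;
import java.util.Map;

// Assumed storage abstraction; the real Eventuate types may differ.
interface OffsetStorage {
    void commit(String topicPartition, long offset);
}

// Default behavior: commit to Kafka/ZooKeeper (modeled here as a map).
class KafkaOffsetStorage implements OffsetStorage {
    final Map<String, Long> committed = new HashMap<>();
    public void commit(String tp, long offset) { committed.put(tp, offset); }
}

// Elasticsearch-backed storage keeps offsets next to the read-side documents.
class ElasticsearchOffsetStorage implements OffsetStorage {
    final Map<String, Long> committed = new HashMap<>(); // stand-in for an ES index
    public void commit(String tp, long offset) { committed.put(tp, offset); }
}

// Sketch of MessageConsumerKafkaImpl: an injected storage overrides the default.
class MessageConsumerKafkaImpl {
    private final OffsetStorage offsetStorage;

    MessageConsumerKafkaImpl() { this(new KafkaOffsetStorage()); }      // default wiring
    MessageConsumerKafkaImpl(OffsetStorage storage) { this.offsetStorage = storage; }

    void afterMessageProcessed(String topicPartition, long offset) {
        offsetStorage.commit(topicPartition, offset);
    }
}

public class BeanOverrideSketch {
    public static void main(String[] args) {
        // With Spring, the ElasticsearchOffsetStorage @Bean would be injected
        // into the MessageConsumer @Bean in place of the default storage.
        ElasticsearchOffsetStorage es = new ElasticsearchOffsetStorage();
        MessageConsumerKafkaImpl consumer = new MessageConsumerKafkaImpl(es);

        consumer.afterMessageProcessed("orders-0", 10L);
        System.out.println(es.committed.get("orders-0")); // prints 10
    }
}
```

The appeal of this design is that application code never changes: defining (or omitting) the ElasticsearchOffsetStorage bean is the only switch between Kafka-managed and datastore-managed offsets.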
Relevant Javadoc:
https://javadoc.io/doc/org.apache.kafka/kafka-clients/latest/org/apache/kafka/clients/consumer/KafkaConsumer.html#subscribe-java.util.Collection-org.apache.kafka.clients.consumer.ConsumerRebalanceListener-
https://javadoc.io/doc/org.apache.kafka/kafka-clients/latest/org/apache/kafka/clients/consumer/ConsumerRebalanceListener.html
The subscription would be made directly against the custom message consumer.