# Building/Rebuilding CQRS views

## Problem

Let’s imagine that the Eventuate Tram Customers and Orders project was running in production and you then wanted to add the Order History View (#1).
You need to populate the order history view's Customer and Order MongoDB collections with the existing data.
## Assumptions

- Let’s focus on Kafka for now.
## Solution

Note: the goal is to build a framework that makes it easier to implement this behavior for any service/application. In the meantime, let’s just focus on the Eventuate Tram Customers and Orders application.
### Exporting data from a service

A service must expose an API for triggering the export behavior. An export consists of AggregateSnapshot events, each containing a snapshot of an aggregate’s state.
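The snapshot event classes are not defined in this issue; the following is a minimal, illustrative sketch of what a `CustomerSnapshotEvent` might look like. In the real application it would implement Eventuate Tram's `DomainEvent` marker interface; that dependency is omitted here so the sketch is self-contained, and the field names are assumptions.

```java
import java.math.BigDecimal;

// Hypothetical snapshot event -- name and fields are illustrative, not taken
// from the actual Customers and Orders code. It carries the complete state of
// the Customer aggregate at export time.
public class CustomerSnapshotEvent {
  private final long customerId;
  private final String name;
  private final BigDecimal creditLimit;

  public CustomerSnapshotEvent(long customerId, String name, BigDecimal creditLimit) {
    this.customerId = customerId;
    this.name = name;
    this.creditLimit = creditLimit;
  }

  public long getCustomerId() { return customerId; }
  public String getName() { return name; }
  public BigDecimal getCreditLimit() { return creditLimit; }
}
```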
A service (Order Service, Customer Service) exports its data as follows:

1. Lock the table.
   - This ensures that any domain events representing an update to an aggregate will follow the aggregate’s Order/CustomerSnapshot event.
2. Publish StartOfSnapshots events to the Order/Customer channel (topic) to indicate the starting offset.
   - Since the table is locked, this offset is the starting point from which the consumer reads the Snapshot events.
   - Publish one StartOfSnapshots event to each partition in the topic, round robin, with a null message key.
3. Iterate through the Orders/Customers and publish a (Customer/Order)Snapshot event for each one to the Order/Customer channel.
   - The event contains the complete state of the Order/Customer.
   - Perhaps one day use Spring Batch. Unsure whether this is needed.
4. Publish EndOfSnapshots events to the channel to indicate that the export is done.
5. Unlock the table.
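The export steps above can be sketched end to end as follows. This is a minimal, self-contained sketch: the real publisher would be Eventuate Tram's `DomainEventPublisher` and the lock would be taken on the database table, but here both are stubbed (a `BiConsumer` and `Runnable`s) so the control flow is runnable. All class names are illustrative assumptions.

```java
import java.util.*;
import java.util.function.BiConsumer;

// Sketch of the export procedure: lock, StartOfSnapshots per partition,
// one snapshot event per aggregate, EndOfSnapshots, unlock.
public class SnapshotExporter {
  interface SnapshotEvent {}
  static class StartOfSnapshotsEvent implements SnapshotEvent {}
  static class EndOfSnapshotsEvent implements SnapshotEvent {}
  static class CustomerSnapshotEvent implements SnapshotEvent {
    final long customerId;
    CustomerSnapshotEvent(long customerId) { this.customerId = customerId; }
  }

  private final BiConsumer<String, SnapshotEvent> publisher; // (channel, event)
  private final int partitionCount;

  SnapshotExporter(BiConsumer<String, SnapshotEvent> publisher, int partitionCount) {
    this.publisher = publisher;
    this.partitionCount = partitionCount;
  }

  void export(List<Long> customerIds, Runnable lockTable, Runnable unlockTable) {
    lockTable.run();                                          // 1. lock the table
    for (int p = 0; p < partitionCount; p++) {                // 2. one StartOfSnapshots per partition
      publisher.accept("Customer", new StartOfSnapshotsEvent());
    }
    for (long id : customerIds) {                             // 3. one snapshot per aggregate
      publisher.accept("Customer", new CustomerSnapshotEvent(id));
    }
    publisher.accept("Customer", new EndOfSnapshotsEvent());  // 4. signal completion
    unlockTable.run();                                        // 5. unlock the table
  }
}
```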
### Importing data into a CQRS view service

The OrderHistoryService subscribes to the (Customer/Order)Snapshot events and updates the view appropriately.
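A sketch of how the view update might look. In the real application the view lives in MongoDB; here it is stubbed as a `Map` so the upsert semantics are runnable, and the handler/field names are illustrative assumptions. The key point is that applying a snapshot replaces the view entry wholesale, since the event carries the aggregate's complete state.

```java
import java.util.*;

// Stand-in for the OrderHistoryService's view-update logic. The MongoDB
// collection is stubbed as a Map from customer id to serialized state.
public class OrderHistoryViewUpdater {
  private final Map<Long, String> customerView = new HashMap<>();

  // A snapshot contains the aggregate's complete state, so the handler can
  // simply upsert -- no merging with the previous view entry is needed.
  void onCustomerSnapshot(long customerId, String snapshotState) {
    customerView.put(customerId, snapshotState);
  }

  Optional<String> find(long customerId) {
    return Optional.ofNullable(customerView.get(customerId));
  }
}
```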
Once all of the services (Customer and Order) have published their StartOfSnapshots events and the Kafka consumer offsets are known:

1. Change the Kafka consumer offsets for the OrderHistoryService’s subscription to the offsets recorded above.
2. Start the OrderHistoryService.
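Step 1 can be sketched as follows. With the real Kafka consumer this would be a `consumer.seek(new TopicPartition(topic, partition), offset)` call for each partition before polling; here the partition log is stubbed as a simple list so the rewind effect is runnable, and the event names in the test data are illustrative.

```java
import java.util.*;

// Stand-in for seeking a Kafka partition to the recorded StartOfSnapshots
// offset: the consumer skips everything before that offset and reads the
// snapshot events from there.
public class SubscriptionRewind {
  static List<String> consumeFrom(List<String> partitionLog, long startOffset) {
    // Equivalent of consumer.seek(partition, startOffset) followed by polling
    // to the end of the partition.
    return partitionLog.subList((int) startOffset, partitionLog.size());
  }
}
```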
dartartem added a commit to dartartem/eventuate-tram-examples-customers-and-orders that referenced this issue on Jun 4, 2018.