This setup requires two services and one database:
- A core service that handles the normal business logic (depicted as `Service`).
- A service for processing events, which serializes the order of events and publishes them to other consumers (depicted as `Event Service`).
The database has a single table of the form:
```sql
CREATE TABLE `events` (
    `id` bigint unsigned NOT NULL AUTO_INCREMENT,
    `seq` bigint unsigned DEFAULT NULL,
    `data` mediumblob NOT NULL,
    `created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (`id`),
    UNIQUE KEY `idx_events_seq` (`seq`)
);
```
In normal operation, the following steps happen:
- In one transaction, the `Service` performs its normal business logic, using a normal locking mechanism (pessimistic or optimistic) to prevent concurrent writes, then constructs and inserts an event into the database (with the `seq` field left `NULL`). A sketch of this step follows the list.
- After the transaction commits, the `Service` pings the `Event Service` (using HTTP / gRPC calls) to check out the new events.
- The `Event Service` collects all the pings and performs a call to the database to get all unprocessed events, i.e. those whose `seq` is `NULL` (with a limit).
- The `Event Service` takes the last `seq` value stored in memory and assigns a `seq` to each event from the previous step in incremental order, then writes those sequence numbers back to the database (see the sequencing sketch below).
- Other services / threads can listen for new events and use the sequence numbers to determine which events they have already consumed (like offsets in Kafka partitions); see the consumer sketch below.
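
To make the first two steps concrete, here is a minimal Go sketch using the standard `database/sql` package. The business logic is elided, and `pingEventService` is a hypothetical helper standing in for the HTTP / gRPC notification:

```go
package service

import (
	"context"
	"database/sql"
)

// saveWithEvent runs the business write and the event insert in one
// transaction, so the event is persisted if and only if the business
// change commits. The `seq` column is left NULL for the Event Service.
func saveWithEvent(ctx context.Context, db *sql.DB, eventData []byte) error {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op once the transaction has committed

	// ... normal business logic here, protected by pessimistic or
	// optimistic locking against concurrent writes ...

	if _, err := tx.ExecContext(ctx,
		"INSERT INTO events (data) VALUES (?)", eventData); err != nil {
		return err
	}

	if err := tx.Commit(); err != nil {
		return err
	}

	// Only after a successful commit: notify the Event Service so it
	// can pick up the new event (fire-and-forget).
	go pingEventService()
	return nil
}

// pingEventService stands in for the HTTP / gRPC call to the Event Service.
func pingEventService() { /* omitted */ }
```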
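The sequencing step can be sketched like this, again against `database/sql`. The `Sequencer` type and its `sequenceOnce` method are illustrative names; the key idea is that the `Event Service` is the single writer of `seq`, so it can safely keep the last assigned value in memory. Ordering by `id` is an assumption here (auto-increment order approximating insertion order):

```go
package events

import (
	"context"
	"database/sql"
)

// Sequencer assigns increasing `seq` values to unprocessed events.
// It is the only writer of `seq`, so it can keep the last value in
// memory (loaded at startup, e.g. with SELECT MAX(seq)).
type Sequencer struct {
	db      *sql.DB
	lastSeq uint64 // last assigned sequence number
}

// sequenceOnce runs one round: fetch events with seq IS NULL (oldest
// first, bounded by limit) and assign them consecutive sequence numbers.
func (s *Sequencer) sequenceOnce(ctx context.Context, limit int) error {
	rows, err := s.db.QueryContext(ctx,
		"SELECT id FROM events WHERE seq IS NULL ORDER BY id LIMIT ?", limit)
	if err != nil {
		return err
	}
	defer rows.Close()

	var ids []uint64
	for rows.Next() {
		var id uint64
		if err := rows.Scan(&id); err != nil {
			return err
		}
		ids = append(ids, id)
	}
	if err := rows.Err(); err != nil {
		return err
	}

	// Assign seq = lastSeq+1, lastSeq+2, ... in fetch order.
	for _, id := range ids {
		s.lastSeq++
		if _, err := s.db.ExecContext(ctx,
			"UPDATE events SET seq = ? WHERE id = ?", s.lastSeq, id); err != nil {
			return err
		}
	}
	return nil
}
```

A production version would batch the updates into a single statement and commit them in one transaction; per-row updates just keep the sketch short. The `UNIQUE KEY` on `seq` guards against two sequencers accidentally assigning the same number.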
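Finally, a consumer can poll for events past its last known sequence number, much like a Kafka consumer reading a partition from a stored offset. `fetchAfter` and the `Event` struct are hypothetical names for illustration:

```go
package events

import (
	"context"
	"database/sql"
)

// Event is a sequenced event row as seen by a consumer.
type Event struct {
	Seq  uint64
	Data []byte
}

// fetchAfter returns up to limit events with seq greater than lastSeq,
// analogous to reading a Kafka partition from a stored offset.
func fetchAfter(ctx context.Context, db *sql.DB, lastSeq uint64, limit int) ([]Event, error) {
	rows, err := db.QueryContext(ctx,
		"SELECT seq, data FROM events WHERE seq > ? ORDER BY seq LIMIT ?",
		lastSeq, limit)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var result []Event
	for rows.Next() {
		var e Event
		if err := rows.Scan(&e.Seq, &e.Data); err != nil {
			return nil, err
		}
		result = append(result, e)
	}
	return result, rows.Err()
}
```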