If you've been following the previous chapters, you already know that Aquameta is designed on our first principle of datafication: rethinking each layer in the stack as relational data. Here in the events module, we start to see the fruits of our labor and the benefits of systemic consistency.
In a "traditional" web stack in 2016, different kinds of event systems exist throughout the stack's various layers: Git has git hooks for commit events, the file system has inotify for file change events, application frameworks offer mechanisms like Django signals, and a message queue like Celery might handle general-purpose message passing, among others.
Our goal with the `event` module is to do all of the above with one system. Because of Aquameta's first principle of datafication, the idea is that any change that can possibly happen in Aquameta is some kind of data change. We'll use `event` in the future to keep the DOM in sync with the database, handle pub/sub communication of data change events, and build more advanced coding patterns in the spirit of "live coding".
To understand what a data change event is, let's start with a simple data set and make some changes to it:
**person**

| id | name | score |
|---|---|---|
| 1 | Joe Smith | 15 |
| 2 | Don Jones | 12 |
| 3 | Sandy Hill | 16 |
| 4 | Nancy Makowsky | 9 |
Now imagine running the following SQL to change the data:
```sql
insert into person (name, score) values ('Don Pablo', 14);
update person set name='Sandy Jones', score=score+3 where id=3;
delete from person where id=4;
```
After the changes:
**person table - after change**

| id | name | score |
|---|---|---|
| 1 | Joe Smith | 15 |
| 2 | Don Jones | 12 |
| 3 | Sandy Jones | 19 |
| 5 | Don Pablo | 14 |
Here's what you might call a "relational diff", highlighting the difference between the two tables:
**person table - inclusive diff**

| id | name | score |
|---|---|---|
| 1 | Joe Smith | 15 |
| 2 | Don Jones | 12 |
| 3 | *Sandy Jones* | *19* |
| 4 | ~~Nancy Makowsky~~ | ~~9~~ |
| 5 | *Don Pablo* | *14* |
Aquameta's event model for data changes builds on the observation that we can express the "diff" between any two database states as a collection of operations of precisely three types:
| change type | arguments |
|---|---|
| row_insert | relation_id, row data |
| row_delete | row_id |
| field_update | field_id, new value |
In this frame, we can express the "delta" between these two tables as a set of these operations:
| SQL command | change type | arguments |
|---|---|---|
| `insert into person (name, score) values ('Don Pablo', 14);` | row_insert | public.person, { id: 5, name: Don Pablo, score: 14 } |
| `delete from person where id=4;` | row_delete | public.person.4 |
| `update person set name='Sandy Jones', score=score+3 where id=3;` | field_update | public.person.3.name, Sandy Jones |
| | field_update | public.person.3.score, 19 |
You could imagine a log of changes like the one above running in parallel to the PostgreSQL query log. But rather than logging the commands that were executed, it logs the resulting changes of those commands. These three simple operations (row_insert, row_delete, field_update) encompass all the ways data can change.
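To make the three-operation model concrete, here is a minimal sketch in plain Python (illustrative only, not Aquameta's implementation) that computes the diff between the before and after states of the `person` table above. Tables are modeled as dicts keyed by primary key:

```python
# Compute a "relational diff" as the three operation types:
# row_insert, row_delete, field_update.
# Illustrative sketch only; Aquameta derives these events inside PostgreSQL.

def relational_diff(relation, before, after):
    """Return the list of change operations that transforms `before` into `after`."""
    ops = []
    # Rows present after but not before: row_insert
    for pk, row in after.items():
        if pk not in before:
            ops.append(("row_insert", relation, {"id": pk, **row}))
    # Rows present before but not after: row_delete
    for pk in before:
        if pk not in after:
            ops.append(("row_delete", f"{relation}.{pk}"))
    # Rows present in both: one field_update per changed field
    for pk, row in after.items():
        if pk in before:
            for field, value in row.items():
                if before[pk].get(field) != value:
                    ops.append(("field_update", f"{relation}.{pk}.{field}", value))
    return ops

before = {
    1: {"name": "Joe Smith", "score": 15},
    2: {"name": "Don Jones", "score": 12},
    3: {"name": "Sandy Hill", "score": 16},
    4: {"name": "Nancy Makowsky", "score": 9},
}
after = {
    1: {"name": "Joe Smith", "score": 15},
    2: {"name": "Don Jones", "score": 12},
    3: {"name": "Sandy Jones", "score": 19},
    5: {"name": "Don Pablo", "score": 14},
}

ops = relational_diff("public.person", before, after)
```

Running this produces exactly the four operations in the table above: one row_insert, one row_delete, and two field_updates.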
So that covers data changes, but what about schema changes, what some call "migrations"? Say we were to add an `age` column to the table above:

```sql
alter table public.person add column age integer;
```
In Aquameta, schema changes can be represented as data changes as well, via meta, our writable system catalog. The column could just as well have been created via an insert into the `meta.column` table:

```sql
insert into meta.column (schema_name, relation_name, name, type) values ('public', 'person', 'age', 'integer');
```
So, we can also represent schema changes in our event log:
| SQL command | change type | arguments |
|---|---|---|
| `alter table public.person add column age integer;` | row_insert | meta.column, { schema_name: public, relation_name: person, name: age, type: integer } |
The event module doesn't yet support schema changes, but adding them is certainly possible via PostgreSQL's event triggers mechanism for DDL commands.
Ok, let's take a look at the event system in action.
### Sessions
To identify where to send events, we use a `session`, an abstract entity that represents one user session, say a browser tab or a cookie session. In Aquameta, sessions are the primary key for persistent state; they can persist across PostgreSQL connections and be used by multiple connections and roles at the same time. They serve as a kind of inbox for events, among other things. Users can create sessions and detach from and reattach to them, or the web server can create them on a user's behalf.
Let's create a new session:
```
aquameta=# select session_create();
            session_create
--------------------------------------
 ceb2c0cf-9985-454b-bc79-01706b931a3b
(1 row)
```
Once a session has been created, it can subscribe to data changes at various levels of granularity: an entire table, one specific row, or one specific field. Here's the API:
- `event.subscribe_table( meta.relation_id )`: generates row_insert, row_delete, field_change
- `event.subscribe_row( meta.row_id )`: generates field_change, row_delete
- `event.subscribe_field( meta.field_id )`: generates field_change, row_delete
- `event.subscribe_column( meta.column_id )`: generates field_change, row_delete, row_insert
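The granularity rules above can be summarized as a simple lookup. Here is an illustrative sketch in plain Python (not Aquameta's implementation, which dispatches events inside PostgreSQL):

```python
# Which event types each subscription granularity generates, per the API above.
# Illustrative sketch only; the real dispatch lives inside Aquameta's event module.
EVENTS_BY_SUBSCRIPTION = {
    "table":  {"row_insert", "row_delete", "field_change"},
    "row":    {"field_change", "row_delete"},
    "field":  {"field_change", "row_delete"},
    "column": {"field_change", "row_delete", "row_insert"},
}

def should_deliver(subscription_type, event_type):
    """True if a subscription of this granularity receives this event type."""
    return event_type in EVENTS_BY_SUBSCRIPTION[subscription_type]
```

For example, a table subscription receives row_inserts, but a row subscription does not, since a row that doesn't exist yet can't have been subscribed to.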
```
aquameta=# select subscribe_table(meta.relation_id('widget','machine'));
           subscribe_table
--------------------------------------
 ac944107-7679-4987-919b-9f3f39cfdf70
(1 row)
```
Then events come through via PostgreSQL `NOTIFY` messages:
```
aquameta=# insert into widget.machine values (DEFAULT);
INSERT 0 1
Asynchronous notification "92841351-8c73-4548-a801-e89c626b9ec0" with payload "{"operation" : "insert", "subscription_type" : "table", "row_id" : {"pk_column_id":{"relation_id":{"schema_id":{"name":"widget"},"name":"machine"},"name":"id"},"pk_value":"70e63984-1b70-4324-b5f1-6b6efca09169"}, "payload" : {"id":"70e63984-1b70-4324-b5f1-6b6efca09169"}}" received from server process with PID 22834.
aquameta=#
```
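The notification payload is plain JSON, so any client can decode it with a standard JSON library. A minimal sketch in Python, using the payload string from the session above (a real client would receive this string over a `LISTEN`-ing PostgreSQL connection rather than hard-coding it):

```python
import json

# Payload copied verbatim from the NOTIFY message above.
raw = (
    '{"operation" : "insert", "subscription_type" : "table", '
    '"row_id" : {"pk_column_id":{"relation_id":{"schema_id":{"name":"widget"},'
    '"name":"machine"},"name":"id"},'
    '"pk_value":"70e63984-1b70-4324-b5f1-6b6efca09169"}, '
    '"payload" : {"id":"70e63984-1b70-4324-b5f1-6b6efca09169"}}'
)

event = json.loads(raw)

operation = event["operation"]            # the change type, here an insert
pk_value = event["row_id"]["pk_value"]    # primary key of the affected row
schema = event["row_id"]["pk_column_id"]["relation_id"]["schema_id"]["name"]
relation = event["row_id"]["pk_column_id"]["relation_id"]["name"]
```

The nested `row_id` structure mirrors meta's identifier types: a row is identified by its primary-key column (which is identified by its relation, which is identified by its schema) plus the primary-key value.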
Together, these pieces form a dead simple data change event system. It is highly general-purpose because it sits immediately atop our first principle of datafication: everything we build further up the Aquameta stack can share a consistent, uniform event model.