Create initial DB schema #99

Merged 1 commit on Aug 1, 2024
18 changes: 18 additions & 0 deletions README.md
@@ -78,3 +78,21 @@ The `xmtpd` node build provides two options for monitoring your node.
```

To learn how to visualize node data in Grafana, see [Prometheus Histograms with Grafana Heatmaps](https://towardsdatascience.com/prometheus-histograms-with-grafana-heatmaps-d556c28612c7) and [How to visualize Prometheus histograms in Grafana](https://grafana.com/blog/2020/06/23/how-to-visualize-prometheus-histograms-in-grafana/).

## Modifying the protobuf schema

Submit and land a PR to https://github.com/xmtp/proto. Then run:

```sh
dev/generate
```

## Modifying the database schema

Create a new migration by running:

```sh
dev/gen-migration
```

If you are unfamiliar with migrations, you may follow [this guide](https://github.com/golang-migrate/migrate/blob/master/MIGRATIONS.md). The database is PostgreSQL and the driver is pgx.
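The generated migrations follow golang-migrate's `NNNNN_description.up.sql` / `.down.sql` naming, as in the files in this PR. As a rough illustration of how the numeric prefixes order migration versions (this is not golang-migrate's own code, and `00002_add-index.up.sql` is a made-up filename):

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// migrationVersion extracts the numeric version prefix from a migration
// filename such as "00001_init-schema.up.sql".
func migrationVersion(name string) (int, error) {
	prefix, _, ok := strings.Cut(name, "_")
	if !ok {
		return 0, fmt.Errorf("no version prefix in %q", name)
	}
	return strconv.Atoi(prefix)
}

func main() {
	// "00002_add-index.up.sql" is a hypothetical second migration.
	files := []string{"00002_add-index.up.sql", "00001_init-schema.up.sql"}
	sort.Slice(files, func(i, j int) bool {
		vi, _ := migrationVersion(files[i])
		vj, _ := migrationVersion(files[j])
		return vi < vj
	})
	fmt.Println(files) // [00001_init-schema.up.sql 00002_add-index.up.sql]
}
```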
4 changes: 4 additions & 0 deletions pkg/migrations/00001_init-schema.down.sql
@@ -0,0 +1,4 @@
DROP TABLE node_info;
DROP TABLE all_envelopes;
DROP TABLE staged_originated_envelopes;
DROP TABLE address_log;
51 changes: 49 additions & 2 deletions pkg/migrations/00001_init-schema.up.sql
@@ -1,3 +1,50 @@
SELECT
1;
-- Ensures that if the command-line node configuration mutates,
-- the existing data in the DB is invalid
CREATE TABLE node_info(
node_id INTEGER NOT NULL,
public_key BYTEA NOT NULL,

singleton_id SMALLINT PRIMARY KEY DEFAULT 1,
CONSTRAINT is_singleton CHECK (singleton_id = 1)
);

CREATE TABLE all_envelopes(
-- used to construct gateway_sid
id BIGSERIAL PRIMARY KEY,
originator_sid BIGINT NOT NULL,
topic BYTEA NOT NULL,
originator_envelope BYTEA NOT NULL
);
-- Client queries
CREATE INDEX idx_all_envelopes_topic ON all_envelopes(topic);
-- Node queries
CREATE UNIQUE INDEX idx_all_envelopes_originator_sid ON all_envelopes(originator_sid);
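The `id` column above is commented as the basis for a `gateway_sid`, and `originator_sid` is a single BIGINT. One plausible encoding for such combined sequence IDs, shown purely as an assumption (the actual xmtpd encoding may differ), packs a node ID into the high bits of a 64-bit value:

```go
package main

import "fmt"

// Hypothetical sketch only: pack a 16-bit node ID and a 48-bit local
// sequence into one uint64 SID. This encoding is an assumption for
// illustration, not necessarily xmtpd's actual scheme.
const localBits = 48

const localMask = uint64(1)<<localBits - 1

func makeSID(nodeID uint16, localID uint64) uint64 {
	return uint64(nodeID)<<localBits | localID&localMask
}

func splitSID(sid uint64) (uint16, uint64) {
	return uint16(sid >> localBits), sid & localMask
}

func main() {
	sid := makeSID(7, 42)
	node, local := splitSID(sid)
	fmt.Println(node, local) // 7 42
}
```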


-- Process for originating envelopes:
-- 1. Perform any necessary validation
-- 2. Insert into staged_originated_envelopes
-- 3. Singleton background task will continuously query (or subscribe to)
--    staged_originated_envelopes, and for each envelope in order of ID:
--    3.1. Construct and sign the OriginatorEnvelope proto
--    3.2. Atomically insert into all_envelopes and delete from
--         staged_originated_envelopes, ignoring unique index violations
--         on originator_sid
-- This preserves total ordering, while avoiding gaps in sequence IDs.
CREATE TABLE staged_originated_envelopes(
-- used to construct originator_sid
id BIGSERIAL PRIMARY KEY,
originator_ns TIMESTAMP NOT NULL DEFAULT now(),
payer_envelope BYTEA NOT NULL
);

-- A cached view for looking up the inbox_id that an address belongs to.
-- Relies on a total ordering of updates across all inbox_ids, from which this
-- view can be deterministically generated.
CREATE TABLE address_log(
address TEXT NOT NULL,
inbox_id BYTEA NOT NULL,
association_sequence_id BIGINT,
revocation_sequence_id BIGINT,

PRIMARY KEY (address, inbox_id)
);
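The address_log table can be rebuilt deterministically from the totally ordered update stream. A sketch of the lookup side, under the assumption (not stated in the schema itself) that an address resolves to the unrevoked row with the highest association_sequence_id:

```go
package main

import "fmt"

// addressLogRow mirrors the address_log columns; the resolution rule
// below is an assumption for illustration, not xmtpd's actual logic.
type addressLogRow struct {
	address        string
	inboxID        string
	associationSeq int64
	revocationSeq  int64 // 0 = never revoked
}

// resolveInbox returns the inbox_id for addr: the row with the highest
// association sequence id whose association has not been revoked.
func resolveInbox(rows []addressLogRow, addr string) (string, bool) {
	best := int64(-1)
	inbox := ""
	for _, r := range rows {
		if r.address != addr || r.revocationSeq != 0 {
			continue
		}
		if r.associationSeq > best {
			best = r.associationSeq
			inbox = r.inboxID
		}
	}
	return inbox, best >= 0
}

func main() {
	rows := []addressLogRow{
		{"0xabc", "inbox-1", 10, 20}, // associated, later revoked
		{"0xabc", "inbox-2", 30, 0},  // current association
	}
	inbox, ok := resolveInbox(rows, "0xabc")
	fmt.Println(inbox, ok) // inbox-2 true
}
```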
1 change: 0 additions & 1 deletion pkg/server/server.go
@@ -40,7 +40,6 @@ func NewReplicationServer(ctx context.Context, log *zap.Logger, options Options,
if err != nil {
return nil, err
}
// Commenting out the DB stuff until I get the new migrations in
s.writerDb, err = db.NewDB(ctx, options.DB.WriterConnectionString, options.DB.WaitForDB, options.DB.ReadTimeout)
if err != nil {
return nil, err