This is a server that connects to peers on the Lightning Network and calculates compact, rapid-sync gossip data.
It is composed of the following components.
A config file where the Postgres credentials and Lightning peers can be adjusted. Most adjustments can be made by setting environment variables, whose usage is as follows (a loading sketch appears after the table):
Name | Default | Description |
---|---|---|
RAPID_GOSSIP_SYNC_SERVER_DB_HOST | localhost | Domain of the Postgres database |
RAPID_GOSSIP_SYNC_SERVER_DB_PORT | 5432 | Port of the Postgres database |
RAPID_GOSSIP_SYNC_SERVER_DB_USER | alice | Username to access Postgres |
RAPID_GOSSIP_SYNC_SERVER_DB_PASSWORD | None | Password to access Postgres |
RAPID_GOSSIP_SYNC_SERVER_DB_NAME | ln_graph_sync | Name of the database to be used for gossip storage |
RAPID_GOSSIP_SYNC_SERVER_NETWORK | mainnet | Network to operate in. Possible values are mainnet, testnet, signet, regtest |
RAPID_GOSSIP_SYNC_SERVER_SNAPSHOT_INTERVAL | 10800 | The interval in seconds between snapshots |
RAPID_GOSSIP_SYNC_UPLOAD_API_KEY | None | API key for uploading gossip to an authenticated server |
RAPID_GOSSIP_SYNC_UPLOAD_URL | None | URL for uploading gossip to an authenticated server |
DB_CERT | db.crt | Certificate of the Postgres database |
BITCOIN_REST_DOMAIN | 127.0.0.1 | Domain of the bitcoind REST server |
BITCOIN_REST_PORT | 8332 | HTTP port of the bitcoind REST server |
BITCOIN_REST_PATH | /rest/ | Path infix to access the bitcoind REST endpoints |
LN_PEERS | Wallet of Satoshi | Comma-separated list of LN peers to use for retrieving gossip |
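
As a rough illustration, the sketch below shows how a server like this one might read a few of these variables and fall back to the documented defaults. The `Config` struct and the `env_or`/`load_config` helpers are illustrative assumptions, not the server's actual internals.

```rust
use std::env;

// Illustrative configuration struct covering a subset of the variables above.
struct Config {
    db_host: String,
    db_port: u16,
    db_user: String,
    db_name: String,
    snapshot_interval_secs: u64,
}

// Read an environment variable, falling back to the documented default.
fn env_or(key: &str, default: &str) -> String {
    env::var(key).unwrap_or_else(|_| default.to_string())
}

fn load_config() -> Config {
    Config {
        db_host: env_or("RAPID_GOSSIP_SYNC_SERVER_DB_HOST", "localhost"),
        db_port: env_or("RAPID_GOSSIP_SYNC_SERVER_DB_PORT", "5432")
            .parse()
            .expect("invalid port"),
        db_user: env_or("RAPID_GOSSIP_SYNC_SERVER_DB_USER", "alice"),
        db_name: env_or("RAPID_GOSSIP_SYNC_SERVER_DB_NAME", "ln_graph_sync"),
        snapshot_interval_secs: env_or("RAPID_GOSSIP_SYNC_SERVER_SNAPSHOT_INTERVAL", "10800")
            .parse()
            .expect("invalid snapshot interval"),
    }
}

fn main() {
    let config = load_config();
    println!("connecting to {}:{} as {}", config.db_host, config.db_port, config.db_user);
    println!("snapshotting every {} seconds into {}", config.snapshot_interval_secs, config.db_name);
}
```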
The module responsible for initiating the scraping of the network graph from its peers.
The module responsible for persisting all the downloaded graph data to Postgres.
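
As a loose sketch of that persistence step, the snippet below writes a serialized channel announcement into Postgres using `tokio-postgres`. The table name, columns, and connection string are assumptions for illustration, not the server's actual schema.

```rust
use tokio_postgres::NoTls;

#[tokio::main]
async fn main() -> Result<(), tokio_postgres::Error> {
    // Connection parameters are placeholders; in practice they would come from the env vars above.
    let (client, connection) =
        tokio_postgres::connect("host=localhost user=alice dbname=ln_graph_sync", NoTls).await?;

    // The connection object performs the actual I/O; drive it on its own task.
    tokio::spawn(async move {
        if let Err(e) = connection.await {
            eprintln!("connection error: {}", e);
        }
    });

    // Hypothetical schema: one row per channel announcement, keyed by short channel id.
    client
        .execute(
            "CREATE TABLE IF NOT EXISTS channel_announcements (
                short_channel_id bigint PRIMARY KEY,
                announcement_signed bytea NOT NULL,
                seen timestamptz NOT NULL DEFAULT NOW()
            )",
            &[],
        )
        .await?;

    // Placeholder values standing in for a gossip message received from a peer.
    let short_channel_id: i64 = 770_000_000_000_000;
    let serialized_announcement: Vec<u8> = vec![];
    client
        .execute(
            "INSERT INTO channel_announcements (short_channel_id, announcement_signed)
             VALUES ($1, $2) ON CONFLICT (short_channel_id) DO NOTHING",
            &[&short_channel_id, &serialized_announcement],
        )
        .await?;
    Ok(())
}
```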
The snapshotting module is responsible for calculating and storing snapshots. It starts as soon as the first full graph sync completes, and then regenerates the snapshots at a configurable interval (10,800 seconds, i.e. 3 hours, by default).
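
A minimal sketch of that cadence, assuming a hypothetical `generate_snapshots()` function standing in for the actual snapshot calculation and persistence:

```rust
use std::time::Duration;

// Placeholder for the real snapshot calculation and storage logic.
async fn generate_snapshots() {}

#[tokio::main]
async fn main() {
    // Read the interval from the documented env var, defaulting to 10,800 seconds (3 hours).
    let interval_secs: u64 = std::env::var("RAPID_GOSSIP_SYNC_SERVER_SNAPSHOT_INTERVAL")
        .ok()
        .and_then(|v| v.parse().ok())
        .unwrap_or(10_800);

    // Assume the first full graph sync has already completed before this point.
    let mut interval = tokio::time::interval(Duration::from_secs(interval_secs));
    loop {
        interval.tick().await; // the first tick fires immediately
        generate_snapshots().await;
    }
}
```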
The lookup module is responsible for fetching the latest data from the network graph and Postgres, and reconciling it into an actionable delta set that the server can return in a serialized format.
It works by collecting all the channels that are currently in the network graph, and gathering announcements as well as updates for each of them. For the updates specifically, it collects the last update seen prior to the given timestamp, the latest known updates, and, if necessary, all intermediate updates.
Then, any channel that has only had an announcement but never an update is dropped. Additionally, every channel whose first update was seen after the given timestamp is collected alongside its announcement.
Finally, all channel update transitions are evaluated and collected into either a full or an incremental update.
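
The sketch below illustrates that reconciliation logic under simplified, assumed types. The real server operates on its Postgres tables and LDK's network-graph structures, so the names here (`ChannelUpdate`, `ChannelDelta`, `build_delta`) are purely illustrative.

```rust
use std::collections::HashMap;

#[derive(Clone)]
struct ChannelUpdate {
    seen: u32,           // unix timestamp at which this update was first seen
    serialized: Vec<u8>, // the signed channel_update message
}

struct ChannelDelta {
    announcement: Option<Vec<u8>>,           // only included when the channel is new to the client
    reference_update: Option<ChannelUpdate>, // last update seen before the client's sync timestamp
    updates_since_sync: Vec<ChannelUpdate>,  // latest and intermediate updates seen afterwards
}

/// Builds the per-channel delta for a client that last synced at `last_sync_timestamp`.
/// `channels` maps a short channel id to its announcement and updates, sorted by `seen`.
fn build_delta(
    channels: &HashMap<u64, (Vec<u8>, Vec<ChannelUpdate>)>,
    last_sync_timestamp: u32,
) -> HashMap<u64, ChannelDelta> {
    let mut delta = HashMap::new();
    for (scid, (announcement, updates)) in channels {
        // Channels that only ever had an announcement but no update are dropped.
        if updates.is_empty() {
            continue;
        }
        // The last update seen prior to the sync timestamp serves as the reference
        // against which an incremental update can be calculated.
        let reference_update = updates
            .iter()
            .rev()
            .find(|u| u.seen <= last_sync_timestamp)
            .cloned();
        // Everything seen after the sync timestamp: the latest updates and, where
        // needed, the intermediate ones.
        let updates_since_sync: Vec<ChannelUpdate> = updates
            .iter()
            .filter(|u| u.seen > last_sync_timestamp)
            .cloned()
            .collect();
        // Nothing changed for this channel since the client's last sync.
        if updates_since_sync.is_empty() {
            continue;
        }
        // A channel whose first update was seen after the sync timestamp is new to the
        // client, so its announcement is included and it becomes a full update;
        // otherwise only an incremental update is needed.
        let is_new_channel = reference_update.is_none();
        delta.insert(
            *scid,
            ChannelDelta {
                announcement: if is_new_channel { Some(announcement.clone()) } else { None },
                reference_update,
                updates_since_sync,
            },
        );
    }
    delta
}

fn main() {
    // Tiny usage example with one channel and two updates straddling the sync timestamp.
    let mut channels = HashMap::new();
    channels.insert(
        700_000_000_000_000u64,
        (
            vec![0u8; 0], // serialized channel_announcement (placeholder)
            vec![
                ChannelUpdate { seen: 1_700_000_000, serialized: vec![] },
                ChannelUpdate { seen: 1_700_100_000, serialized: vec![] },
            ],
        ),
    );
    let delta = build_delta(&channels, 1_700_050_000);
    println!("channels in delta: {}", delta.len());
}
```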
Licensed under Apache 2.0 or MIT, at your option.