From c758ce1a5b196011a894b0704d98ffcd3d504f4c Mon Sep 17 00:00:00 2001
From: Sam Willis
Date: Tue, 6 Aug 2024 14:23:55 +0100
Subject: [PATCH] Edits

---
docs/benchmarks.md | 24 +++++++++---------
docs/docs/about.md | 42 +++++++++++++++----------------
docs/docs/api.md | 46 +++++++++++++++++-----------------
docs/docs/filesystems.md | 20 ++++++++-------
docs/docs/index.md | 20 +++++++--------
docs/docs/live-queries.md | 30 +++++++++++-----------
docs/docs/multi-tab-worker.md | 33 ++++++++++++------------
docs/docs/orm-support.md | 6 ++---
docs/docs/repl.md | 10 ++++----
docs/examples.md | 10 ++++----
docs/extensions/development.md | 18 ++++++-------
docs/extensions/index.md | 3 ++-
docs/index.md | 6 ++---
13 files changed, 136 insertions(+), 132 deletions(-)

diff --git a/docs/benchmarks.md b/docs/benchmarks.md
index 78b226554..764219fff 100644
--- a/docs/benchmarks.md
+++ b/docs/benchmarks.md
@@ -18,25 +18,25 @@
# Benchmarks

-There are two sets of micro-benchmarks, one testing [round trip time](#round-trip-time-benchmarks) for both PGlite and wa-sqlite, and [another](#sqlite-benchmark-suite) based on the [SQLite speed test](https://sqlite.org/src/file?name=tool/speedtest.tcl&ci=trunk) which was ported for the [wa-sqlite benchmarks](https://rhashimoto.github.io/wa-sqlite/demo/benchmarks.html).
+There are two sets of micro-benchmarks: one testing [round trip time](#round-trip-time-benchmarks) for both PGlite and wa-sqlite, and [another](#sqlite-benchmark-suite) based on the [SQLite speed test](https://sqlite.org/src/file?name=tool/speedtest.tcl&ci=trunk) which was ported for the [wa-sqlite benchmarks](https://rhashimoto.github.io/wa-sqlite/demo/benchmarks.html).

-We also have a set of [native baseline](#native-baseline) results where we have compared native SQLite (via the Node better-sqlite3 package) to full Postgres.
+We also have a set of [native baseline](#native-baseline) results comparing native SQLite (via the Node better-sqlite3 package) to full Postgres.

-Comparing Postgres to SQlite is a little difficult as they are quite different databases, particularly when you then throw in the complexities of WASM. Therefore these benchmarks provide a view of performance only as a starting point to investigate the difference between the two and the improvements we can make going forward.
+Comparing Postgres to SQLite is challenging, as they are quite different databases, particularly when you take into account the complexities of WASM. Therefore, these benchmarks provide a view of performance only as a starting point to investigate the difference between the two, and the improvements we can make going forward.

-The other thing to consider when analysing the speed is the performance of various different VFS implementations providing persistance to both PGlite and wa-sqlite, the the performance of the underlying storage.
+Another consideration when analysing the speed is the performance of the various VFS implementations providing persistence to both PGlite and wa-sqlite.

-The key finding are:
+The key findings are:

-1. wa-sqlite is a little faster than PGlite when run purely in memory. This is be expected as it is a simpler database with fewer features, its designed to go fast. Having said that, PGlite is not slow, its well withing the range you would expect when [comparing native SQLite to Postgres](#native-baseline).
+1. wa-sqlite is faster than PGlite when run purely in memory. This is to be expected as it's a simpler database with fewer features; it's designed to go fast. Having said that, PGlite is not slow; it's well within the range you would expect when [comparing native SQLite to Postgres](#native-baseline).

-2. For single row CRUD inserts and updates, PGlite is faster then wa-sqlite.
This is likely due to PGlite using the Posrgres WAL, whereas wa-sqlite is only using the SQLite rollback journal mode and not its WAL.
+2. For single row CRUD inserts and updates, PGlite is faster than wa-sqlite. This is likely due to PGlite using the Postgres WAL, whereas wa-sqlite is only using the SQLite rollback journal mode and not a WAL.

-3. An fsync or flush to the underlying storage can be quite slow, particularly in the browser with IndexedDB for PGlite, or OPFS for wa-sqlite. Both offer some level of "relaxed durability" that can be used to accelerate these queriers, and is likely suitable for many embedded use cases.
+3. An fsync or flush to the underlying storage can be quite slow, particularly in the browser with IndexedDB for PGlite, or OPFS for wa-sqlite. Both offer some level of "relaxed durability" that can be used to accelerate these queries, and this mode is likely suitable for many embedded use cases.

-We are going to continue to use these micro-benchmarks to feed back into the development of PGlite, and update them and the findings as we move forward.
+We plan to continue to use these micro-benchmarks to feed back into the development of PGlite, and update them and the findings as we move forward.

-These results below were run on a M2 Macbook Air.
+The results below were run on an M2 MacBook Air.

## Round-trip-time benchmarks

@@ -63,7 +63,7 @@ Values are average ms - lower is better.

## SQLite benchmark suite

-The SQLite benchmark suite, converted to web for wa-sqlite, performs a number of large queries to test the performance of the sql engin.
+The SQLite benchmark suite, converted to run on the web for wa-sqlite, performs a number of large queries to test the performance of the SQL engine.

Values are seconds to complete the test - lower is better.
@@ -115,7 +115,7 @@ All tests run with Node, [Better-SQLite3](https://www.npmjs.com/package/better-s

## Run the benchmarks yourself

-We have a hosted version of the benchmarks runner:
+We have a hosted version of the benchmark runners that you can run yourself:

- Benchmark using the SQLite benchmark suite
- Benchmark round-trim-time for CRUD queries

diff --git a/docs/docs/about.md b/docs/docs/about.md
index cec37e0f3..6f319d36f 100644
--- a/docs/docs/about.md
+++ b/docs/docs/about.md
@@ -1,37 +1,37 @@
# What is PGlite

-PGlite is a WASM Postgres build packaged into a TypeScript/JavaScript client library that enables you to run Postgres in the browser, Node.js and Bun, with no need to install any other dependencies. It's under 3mb gzipped, and has support for many [Postgres extensions](../extensions/), including [pgvector](../extensions/#pgvector).
+PGlite is a WASM Postgres build packaged into a TypeScript/JavaScript client library that enables you to run Postgres in the browser, Node.js and Bun, with no need to install any other dependencies. It's under 3mb gzipped, and has support for many [Postgres extensions](../extensions/), including [pgvector](../extensions/#pgvector).

-Unlike previous "Postgres in the browser" projects, PGlite does not use a Linux virtual machine - it is simply Postgres in WASM.
+Getting started with PGlite is simple: just install and import the NPM package, then create your embedded database:

-It's being developed by [ElectricSQL](https://electric-sql.com/) for our use case of embedding into applications, either locally or at the edge, allowing users to sync a subset of their Postgres database.
+```js
+import { PGlite } from "@electric-sql/pglite";

-However, there are many more use cases for PGlite beyond it's use as an embedded application databases:
+const db = new PGlite();
+await db.query("select 'Hello world' as message;");
+// -> { rows: [ { message: "Hello world" } ] }
+```

-- Unit and CI testing
- PGlite is very fast to start and tare down, perfect for unit tests, you can a unique fresh Postgres for each test.
+It can be used as an ephemeral in-memory database, or with persistence either to the file system (Node/Bun) or IndexedDB (Browser).

-- Local development
- You can use PGlite as an alternative to a full local Postgres for local development, masivly simplifyinf your development environmant. +Unlike previous "Postgres in the browser" projects, PGlite does not use a Linux virtual machine - it is simply Postgres in WASM. -- Remote development, or local web containers
- As PGlite is so light weight it can be easily embedded into remote containerised development environments, or in-browser [web containers](https://webcontainers.io). +It's being developed by [ElectricSQL](https://electric-sql.com/) for our use case of embedding into applications, either locally or at the edge, allowing users to sync a subset of their Postgres database. -- On-device or edge AI and RAG
- PGlite has full support for [pgvector](../extensions/#pgvector), enabling a local or edge retrieval augmented generation (RAG) workflow. +However, there are many more use cases for PGlite beyond its use as an embedded application database: -We are very keen to establish PGlite as an open source, and open contribution, project, working to build a community around it to develop its capabilities for all use cases. +- **Unit and CI testing**
+ PGlite is very fast to start and tear down. It's perfect for unit tests - you can have a unique fresh Postgres for each test. -Getting started with PGlite is super easy, just install and import the NPM package, then create a your embded database: +- **Local development**
+ You can use PGlite as an alternative to a full local Postgres for development, simplifying your development environments.

-```js
-import { PGlite } from "@electric-sql/pglite";

+- **Remote development, or local web containers**
+ As PGlite is so lightweight it can be easily embedded into remote containerised development environments, or in-browser [web containers](https://webcontainers.io). -const db = new PGlite(); -await db.query("select 'Hello world' as message;"); -// -> { rows: [ { message: "Hello world" } ] } -``` +- **On-device or edge AI and RAG**
+ PGlite has full support for [pgvector](../extensions/#pgvector), enabling a local or edge retrieval-augmented generation (RAG) workflow.

-It can be used as an ephemeral in-memory database, or with persistence either to the file system (Node/Bun) or indexedDB (Browser).

+We are very keen to establish PGlite as both an open source and open contribution project, working to build a community around it to develop its capabilities for all use cases.

Read more in our [getting started guide](./index.md).

diff --git a/docs/docs/api.md b/docs/docs/api.md
index bed1cb709..22d0d8f14 100644
--- a/docs/docs/api.md
+++ b/docs/docs/api.md
@@ -9,7 +9,7 @@ outline: [2, 3]

`new PGlite(dataDir: string, options: PGliteOptions)`
`new PGlite(options: PGliteOptions)` -A new pglite instance is created using the `new PGlite()` constructor. +A new PGlite instance is created using the `new PGlite()` constructor. This is imported as: @@ -20,15 +20,15 @@ import { PGlite } from "@electric-sql/pglite"; `await PGlite.create(dataDir: string, options: PGliteOptions)`
`await PGlite.create(options: PGliteOptions)`

-There is also an additional `PGlite.create()` static method that returns a Promise resolving to the new PGlite instance. There are a couple of advatanges to using the static method:
+There is also a `PGlite.create()` static method that returns a promise that resolves to the new PGlite instance. There are a couple of advantages to using the static method:

-- The Promise awaits the [`.waitReady`](#waitready) promise ensureing that database has fully initiated.
-- When using TypeScript and extensions the returned PGlite instance will have the extensions namespace on it's type. This is not possible with the standard constructor.
+- This awaits the [`.waitReady`](#waitready) promise, ensuring that the database has fully initialised.
+- When using TypeScript and extensions, the returned PGlite instance will have the extensions namespace on its type. This is not possible with the standard constructor due to limitations in TypeScript.

#### `dataDir`

-Path to the directory to store the Postgres database. You can provide a url scheme for various storage backends:
+Path to the directory for storing the Postgres database. You can provide a URL scheme for various storage backends:

- `file://` or unprefixed
File system storage, available in Node and Bun. @@ -40,23 +40,23 @@ Path to the directory to store the Postgres database. You can provide a url sche #### `options` - `dataDir: string`
- The directory to store the Postgres database when not provided as the first argument. + The directory to store the Postgres database in when not provided as the first argument. - `debug: 1-5`
the Postgres debug level. Logs are sent to the console. - `relaxedDurability: boolean`
- Under relaxed durability mode PGlite will not wait for flushes to storage to complete after each query before returning results. This is particularly useful when using the indexedDB file system. + Under relaxed durability mode, PGlite will not wait for flushes to storage to complete after each query before returning results. This is particularly useful when using the indexedDB file system. - `fs: Filesystem`
The alternative to providing a dataDir with a filesystem prefix is to initiate the Filesystem yourself and provide it here. See [Filesystems](./filesystems.md) - `loadDataDir: Blob | File`
A tarball of a PGlite `datadir` to load when the database starts. This should be a tarball produced from the related [`.dumpDataDir()`](#dumpdatadir) method. - `extensions: Extensions`
An object containing the extensions you wish to load.

#### `options.extensions`

PGlite and Postgres extensions are loaded into a PGLite instance on start, and can include both a WASM build of a Postgres extension and/or a PGlite client plugin.

-The `options.extensions` paramiter is an opbject of `namespace: extension` parings. The namespace if sued to expose the PGlite client plugin included in the extension. An example of this it the [live queries](./live-queries.md) extension.
+The `options.extensions` parameter is an object of `namespace: extension` pairings. The namespace is used to expose the PGlite client plugin included in the extension. An example of this is the [live queries](./live-queries.md) extension.

```ts
import { PGlite } from "@electric-sql/pglite";
import { vector } from "@electric-sql/pglite/vector";

const pg = await PGlite.create({
  extensions: {
-    live, // Live query extension, if a PGlite client plugin
+    live, // Live query extension, a PGlite client plugin
    vector, // Postgres pgvector extension
  },
});
```

@@ -107,7 +107,7 @@ The `query` and `exec` methods take an optional `options` objects with the follo
The returned row object type, either an object of `fieldName: value` mappings or an array of positional values. Defaults to `"object"`.
- `parsers: ParserOptions`
An object of type `{[[pgType: number]: (value: string) => any;]}` mapping Postgres data type id to parser function.
For convenience, the `pglite` package exports a constant for the most common Postgres types:

```ts
import { types } from "@electric-sql/pglite";
@@ -134,7 +134,7 @@ This is useful for applying database migrations, or running multi-statement sql

Uses the *simple query* Postgres wire protocol.

-Returns array of [result objects](#results-objects), one for each statement.
+Returns an array of [result objects](#results-objects), one for each statement.

##### Example

await pg.exec(`
@@ -167,9 +167,9 @@

`.transaction(callback: (tx: Transaction) => Promise)`

-To start an interactive transaction pass a callback to the transaction method. It is passed a `Transaction` object which can be used to perform operations within the transaction.
+To start an interactive transaction, pass a callback to the transaction method. It is passed a `Transaction` object which can be used to perform operations within the transaction.

-The transaction will be committed when the Promise returned from your callback resolves, and automatically rolled back if the Promise is rejected.
+The transaction will be committed when the promise returned from your callback resolves, and automatically rolled back if the promise is rejected.

##### `Transaction` objects

@@ -219,7 +219,7 @@ await pg.query("NOTIFY test, 'Hello, world!'");

`.unlisten(channel: string, callback?: (payload: string) => void): Promise`

-Unsubscribe from the channel. If a callback is provided it removes only that callback from the subscription, when no callback is provided is unsubscribes all callbacks for the channel.
+Unsubscribe from the channel. If a callback is provided, it removes only that callback from the subscription. When no callback is provided, it unsubscribes all callbacks for the channel.
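For example, a minimal notify/listen round trip might look like the following sketch. It assumes the companion `.listen()` method with the same channel and callback signature as `.unlisten()` above; the channel name and payload are illustrative:

```ts
import { PGlite } from "@electric-sql/pglite";

const pg = new PGlite();

// Callback invoked with the payload of each NOTIFY on the channel
const onTest = (payload: string) => {
  console.log("Received:", payload);
};

await pg.listen("test", onTest);
await pg.query("NOTIFY test, 'Hello, world!'");

// Remove just this callback; calling unlisten without a callback
// unsubscribes all callbacks for the channel
await pg.unlisten("test", onTest);
```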
### onNotification @@ -227,7 +227,7 @@ Unsubscribe from the channel. If a callback is provided it removes only that cal Add an event handler for all notifications received from Postgres. -**Note:** This does not subscribe to the notification, you will have to manually subscribe with `LISTEN channel_name`. +**Note:** This does not subscribe to the notification; you will need to manually subscribe with `LISTEN channel_name`. ### offNotification @@ -239,13 +239,13 @@ Remove an event handler for all notifications received from Postgres. `dumpDataDir(): Promise` -Dump the Postgres `datadir` to a gziped tarball. +Dump the Postgres `datadir` to a Gzipped tarball. This can then be used in combination with the [`loadDataDir`](#options) option when starting PGlite to load a dumped database from storage. ::: tip NOTE -The datadir dump may not be compatible with other Postgres versions, it is only designed for importing back into PGlite. +The datadir dump may not be compatible with other Postgres versions; it is only designed for importing back into PGlite. ::: @@ -271,7 +271,7 @@ Promise that resolves when the database is ready to use. ::: tip NOTE -Queries methods will wait for the `waitReady` promise to resolve if called before the database has fully initialised, and so it's not necessary to wait for it explicitly. +Query methods will wait for the `waitReady` promise to resolve if called before the database has fully initialised, and so it is not necessary to wait for it explicitly. ::: @@ -283,7 +283,7 @@ Result objects have the following properties: The rows retuned by the query - `affectedRows?: number`
Count of the rows affected by the query. Note: this is *not* the count of rows returned; it is the number of rows in the database changed by the query.
- `fields: { name: string; dataTypeID: number }[]`
Field name and Postgres data type ID for each field returned.

@@ -300,7 +300,7 @@ The `.query()` method can take a TypeScript type describing the expected shap

::: tip NOTE

-These types are not validated at run time, the result only cast to the provided type
+These types are not validated at run time; the result is only cast to the provided type.

:::

@@ -308,7 +308,7 @@

PGlite has support for importing and exporting via the SQL `COPY TO/FROM` command by using a virtual `/dev/blob` device.

-To import a file pass the `File` or `Blob` in the query options as `blob`, and copy from the `/dev/blob` device.
+To import a file, pass the `File` or `Blob` in the query options as `blob`, and copy from the `/dev/blob` device.

```ts
await pg.query("COPY my_table FROM '/dev/blob';", [], {
  blob: MyBlob
})
```

-To export a table or query to a file you just have to write to the `/dev/blob` device, the file will be retied as `blob` on the query results:
+To export a table or query to a file, you just need to write to the `/dev/blob` device; the file will be returned as `blob` on the query results:

```ts
const ret = await pg.query("COPY my_table TO '/dev/blob';")

diff --git a/docs/docs/filesystems.md b/docs/docs/filesystems.md
index 8a5d85082..d17ede04b 100644
--- a/docs/docs/filesystems.md
+++ b/docs/docs/filesystems.md
@@ -2,9 +2,11 @@

PGlite has a virtual file system layer that allows it to run in environments that don't traditionally have filesystem access.

+PGlite VFSs are under active development, and we plan to extend the range of options in the future, as well as make it easy for users to create their own filesystems.

## In-memory FS

-The in-memory FS is the default when starting PGlite, and it available on all platforms.
All files are kept in memory and there is no persistance, other than calling [`pg.dumpDataDir()`](./api.md#dumpdatadir) and then using the [`loadDataDir`](./api.md#options) option at start.
+The in-memory FS is the default when starting PGlite, and it is available on all platforms. All files are kept in memory and there is no persistence, other than calling [`pg.dumpDataDir()`](./api.md#dumpdatadir) and then using the [`loadDataDir`](./api.md#options) option at start.

To use the in-memory FS you can use one of these methods:

@@ -32,7 +34,7 @@ To use the in-memory FS you can use one of these methods:

## Node FS

-The Node FS uses the Node.js file system API to implement a VFS for PGLite. It is bailable in both Node and Bun.
+The Node FS uses the Node.js file system API to implement a VFS for PGlite. It is available in both Node and Bun.

To use the Node FS you can use one of these methods:

@@ -56,11 +58,11 @@ To use the Node FS you can use one of these methods:

## IndexedDB FS

-The IndexedDB FS persistes the database to IndexedDB in the browser. It's a layer over the in-memory filesystem, loading all files for the database into memory on start, and flushing them to IndexedDB after each query.
+The IndexedDB FS persists the database to IndexedDB in the browser. It's a layer over the in-memory filesystem, loading all files for the database into memory on start, and flushing them to IndexedDB after each query if they have changed.

To use the IndexedDB FS you can use one of these methods:

- Set the `dataDir` with a `idb://` prefix, the FS will use an IndexedDB named with the path provided
+ Set the `dataDir` with an `idb://` prefix; the database will be stored in an IndexedDB named with the path provided
```ts
const pg = new PGlite("idb://my-database")
```
@@ -72,7 +74,7 @@ To use the IndexedDB FS you can use one of these methods:
})
```

-The IndexedDB filesystem works at the file level, storing hole files as blobs in IndexedDB.
Flushing whole files can take a few milliseconds after each query, to aid in building resposive apps we provide a `relaxedDurability` mode that can be [configured when starting](./api.md#options) PGlite. Under this mode the results of a query are returned imediatly, and the flush to IndexedDB is scheduled to happen asynchronous afterwards. Typically this is immediately after the query returns with no delay. +The IndexedDB filesystem works at the file level, storing whole files as blobs in IndexedDB. Flushing whole files can take a few milliseconds after each query. To aid in building responsive apps we provide a `relaxedDurability` mode that can be [configured when starting](./api.md#options) PGlite. Under this mode, the results of a query are returned immediately, and the flush to IndexedDB is scheduled to occur asynchronously afterwards. Typically, this is immediately after the query returns with no delay. ### Platform Support @@ -86,7 +88,7 @@ The OPFS AHP filesystem is built on top of the [Origin Private Filesystem](https To use the OPFS AHP FS you can use one of these methods: -- Set the `dataDir` to a directory with the origins OPFS +- Set the `dataDir` to a directory within the origins OPFS ```ts const pg = new PGlite("opfs-ahp://path/to/datadir/") ``` @@ -104,12 +106,12 @@ To use the OPFS AHP FS you can use one of these methods: |------|-----|--------|--------|---------| | | | ✓ | | ✓ | -Unfortunately Safari appears to have a limit of 252 open sync access handles, this prevents this VFS from working as a standard Postgres install has between 300-800 files. +Unfortunately, Safari appears to have a limit of 252 open sync access handles, this prevents this VFS from working due to a standard Postgres install consisting of over 300 files. ### What is an "access handle pool"? -The Origin Private Filesystem API provides both asynchronous ans synchronous methods, bit the synchronous are limited to read, write and flush. 
You are unable to travers the filesystem or open files synchronously. PGlite is a fully synchronous WASM build of Postgres and unable to call async APIs while handling a query. While it is possible to build an async WASM Postgres using [Asyncify](https://emscripten.org/docs/porting/asyncify.html), it adds significant overhead in both file size and performance. +The Origin Private Filesystem API provides both asynchronous and synchronous methods, but the synchronous methods are limited to read, write and flush. You are unable to traverse the filesystem or open files synchronously. PGlite is a fully synchronous WASM build of Postgres and unable to call async APIs while handling a query. While it is possible to build an async WASM Postgres using [Asyncify](https://emscripten.org/docs/porting/asyncify.html), it adds significant overhead in both file size and performance. -To overcome these limitations and provide a fully synchronous file system to PGlite on top of OPFS, we use something called an "access handle pool". When you first start PGlite we open a pool of OPFS access handles with randomised file names, these are then allocation to files as needed. After each query a poll maintenance job is scheduled that maintains the pool size. When you inspect the OPFS directory where the database is stored you will not see the normal Postgres directory layout, but rather a pool of files and a state file that contains the directory tree mapping along with file metadata. +To overcome these limitations, and to provide a fully synchronous file system to PGlite on top of OPFS, we use something called an "access handle pool". When you first start PGlite we open a pool of OPFS access handles with randomised file names; these are then allocated to files as needed. After each query, a pool maintenance job is scheduled that maintains its size. 
When you inspect the OPFS directory where the database is stored, you will not see the normal Postgres directory layout, but rather a pool of files and a state file containing the directory tree mapping along with file metadata.

The PGlite OPFS AHP FS is inspired by the [wa-sqlite](https://github.com/rhashimoto/wa-sqlite) access handle pool file system by [Roy Hashimoto](https://github.com/rhashimoto).

diff --git a/docs/docs/index.md b/docs/docs/index.md
index 251919dbd..abef8ac92 100644
--- a/docs/docs/index.md
+++ b/docs/docs/index.md
@@ -1,6 +1,6 @@
# Getting started with PGlite

-PGlite can be used in both Node/Bun or the browser, and cen be used with any JavaScript framework.
+PGlite can be used in both Node/Bun and the browser, and with any JavaScript framework.

## Install and start in Node/Bun

@@ -67,9 +67,9 @@ const db = new PGlite("idb://my-pgdata");

## Making a query

-There are two method for querying the database, `.query` and `.exec`, the former support parameters, and the latter multiple statements.
+There are two methods for querying the database, `.query` and `.exec`. The former supports parameters, and the latter, multiple statements.

-First, lets crate a table and insert some test data using the `.exec` method:
+First, let's create a table and insert some test data using the `.exec` method:

```js
await db.exec(`
@@ -86,9 +86,9 @@ await db.exec(`
`)
```

-The `.exec` method is perfect for migrations, or batch inserts with raw SQL.
+The `.exec` method is perfect for migrations and batch inserts with raw SQL.

```js
const ret = await db.query(`
@@ -107,7 +107,7 @@ console.log(ret.rows)

## Using parametrised queries

-When working with user supplied values its always best to use parametrised queries, these are supported on the `.query` method.
+When working with user-supplied values, it's always best to use parametrised queries; these are supported on the `.query` method.

We can use this to update a task:

```js
const ret = await db.query(
@@ -124,15 +124,15 @@

## What next?

-To learn more about [querying](./api.md#query) and [transactions](./api.md#transaction) you can read the main [PGlite API documentation](./api.md).
+- To learn more about [querying](./api.md#query) and [transactions](./api.md#transaction) along with the other methods and options available, you can read the main [PGlite API documentation](./api.md).

- There is also a [live-query extension](./live-queries.md) that enables reactive queries to update a UI when the underlying database changes.

-PGlite has a number of built in [virtual file systems](./filesystems.md) to provided persistance to the database.
+- PGlite has a number of built-in [virtual file systems](./filesystems.md) to provide persistence for your database.

-There are [framework hooks](./framework-hooks.md) to make working with PGlite within React and Vue much easer with less boilerplate.
+- There are [framework hooks](./framework-hooks.md) to make working with PGlite within React and Vue much easier with less boilerplate.

-As PGlite only has single exclusive connection to the database, we provide a [multi-tab worker](./multi-tab-worker.md) to enable sharing a PGlite instance between multiple browser tabs.
+- As PGlite only has a single exclusive connection to the database, we provide a [multi-tab worker](./multi-tab-worker.md) to enable sharing a PGlite instance between multiple browser tabs.

- There is a [REPL component](./repl.md) that can be easily embedded into a web-app to aid in debugging and development, or as part of a database application itself.
diff --git a/docs/docs/live-queries.md b/docs/docs/live-queries.md
index a96559070..654c450ed 100644
--- a/docs/docs/live-queries.md
+++ b/docs/docs/live-queries.md
@@ -1,14 +1,14 @@
# Live Queries

-The "live" extension enables you to subscribe to a query and receve updated results when the underlying tables change.
+The "live" extension enables you to subscribe to a query and receive updated results when the underlying tables change.

-To use the extension it needs adding to the PGlite instance when creating it:
+To use the extension, it needs to be added to the PGlite instance when creating it:

```ts
import { PGlite } from "@electric-sql/pglite";
import { live } from "@electric-sql/pglite/live";

-const pg = new PGlite({
+const pg = await PGlite.create({
  extensions: {
    live,
  },
```

There are three methods on the `live` namespace:
-- `live.query()` for basic live queries. With less machinery in PG it's quicker for small results sets and narrow rows.
-- `live.incrementalQuery()` for incremental queries. It materialises the full result set on each update from only the changes emitted by the `live.changes` api. Perfect for feeding into React and good performance for large result sets and wide rows.
+- `live.query()` for basic live queries. With less machinery in PGlite, it's quicker for small result sets and narrow rows.
+- `live.incrementalQuery()` for incremental queries. It materialises the full result set on each update from only the changes emitted by the `live.changes` api. Perfect for feeding into React, and with good performance for large result sets and wide rows.
- `live.changes()` a lower level API that emits the changes (insert/update/delete) that can then be mapped to mutations in a UI or other datastore.
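As a sketch of the first of these, a basic live query subscription might look like this (the query, table, and callback body are illustrative, and assume the `live` extension was loaded as shown above):

```ts
const ret = await pg.live.query(
  "SELECT id, task, done FROM todo ORDER BY id;",
  [],
  (res) => {
    // Called with the initial results, then again each time
    // a table the query depends on changes
    console.log(res.rows);
  }
);
```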
## live.query @@ -42,19 +42,19 @@ interface LiveQueryReturn { } ``` -- `initialResults` is the initial results set (also sent to the callback -- `unsubscribe` allow you to unsubscribe from the live query -- `refresh` allows you to force a refresh of the query +- `initialResults` is the initial results set (also sent to the callback) +- `unsubscribe` allows you to unsubscribe from the live query +- `refresh` allows you to force a refresh of the query with the updated results sent to the callback -Internally it watches for the tables that the query depends on, and reruns the query whenever they are changed. +Internally it watches the tables that the query depends on, and reruns the query whenever they are changed. ## live.incrementalQuery `live.incrementalQuery()` -Similar to above, but maintains a temporary table inside of Postgres of the previous state. When the tables it depends on change the query is re-run and diffed with the last state. Only the changes from the last version of the query are copied from WASM into JS. +Similar to above, but maintains a temporary table of the previous state inside of Postgres. When the tables it depends on change, the query is re-run and diffed with the last state. Only the changes from the last version of the query are copied from WASM into JS. -It requires an additional `key` argument, the name of a column (often a PK) to key the diff on. +It requires an additional `key` argument - the name of a column (often a primary key) to key the diff on. ```ts const ret = pg.live.incrementalQuery( @@ -71,7 +71,7 @@ The returned value is of the same type as the `query` method above. `live.changes()` -A lower level API which is the backend for the `incrementalQuery`, it emits the change that have happened. It requires a `key` to key the diff on: +A lower-level API which is the backend for the `incrementalQuery`, it emits the changes that have occurred. 
It requires a `key` to key the diff on: ```ts const ret = pg.live.changes( @@ -93,7 +93,7 @@ interface LiveChangesReturn { } ``` -The results passed to the callback are array of `Change` objects: +The results passed to the callback are an array of `Change` objects: ```ts type ChangeInsert = { @@ -119,8 +119,8 @@ type Change = ChangeInsert | ChangeDelete | ChangeUpdate; Each `Change` includes the new values along with: -- `__changed_columns__` the columns names that were changes +- `__changed_columns__` the column names that were changed - `__op__` the operation that is required to update the state (`INSERT`, `UPDATE`, `DELETE`) -- `__after__` the `key` of the row that this row should be after, it will be included in `__changed_columns__` if it has been changed. +- `__after__` the `key` of the row that this row should be positioned after; it will be included in `__changed_columns__` if it has been changed. This allows for very efficient moves within an ordered set of results. This API can be used to implement very efficient in-place DOM updates. diff --git a/docs/docs/multi-tab-worker.md b/docs/docs/multi-tab-worker.md index 708c2038c..c1e596652 100644 --- a/docs/docs/multi-tab-worker.md +++ b/docs/docs/multi-tab-worker.md @@ -1,12 +1,12 @@ # Multi-tab Worker -It's likely that you will want to run PGlite in a Web Worker so that it doesn't block the main thread. Additionally as PGlite is single connection, you may want to proxy multiple browser tabs to a single PGlite instance. +It's likely that you will want to run PGlite in a Web Worker so that it doesn't block the main thread. Additionally, as PGlite supports only a single connection, you may want to proxy multiple browser tabs to a single PGlite instance. -To aid in this we provide a `PGliteWorker` with the same API as the standard PGlite, and a `worker` wrapper that exposes a PGlite instance to other tabs.
+To aid in this, we provide a `PGliteWorker` with the same API as the standard PGlite, and a `worker` wrapper that exposes a PGlite instance to other tabs. ## Using PGliteWorker -First you need to create a js file for your worker instance. You use the `worker` wrapper with an `init` option that returns a PGlite instance to start that database and expose it to all tabs: +First, you need to create a JavaScript file for your worker instance. You use the `worker` wrapper with an `init` option that returns a PGlite instance to start that database and expose it to all tabs: ```js // my-pglite-worker.js @@ -15,12 +15,13 @@ import { worker } from "@electric-sql/pglite/worker"; worker({ async init() { + // Create and return a PGlite instance return new PGlite(); }, }); ``` -Then connect the `PGliteWorker` to your new worker process in you main script: +Then connect the `PGliteWorker` to your new worker process in your main script: ```js import { PGliteWorker } from "@electric-sql/pglite/worker"; @@ -34,9 +35,9 @@ const pg = new PGliteWorker( // `pg` has the same interface as a standard PGlite interface ``` -Internally this starts a worker for each tab, but then runs a leader election to nominate one as the leader. Only the leader then opens the PGlite and handles all queries. When the leader tab is closed, a new leader election is run and a new PGlite instance is started. +Internally, this starts a worker for each tab, but then runs an election to nominate one as the leader. Only the leader then starts PGlite by calling the `init` function, and handles all queries. When the leader tab is closed, a new election is run, and a new PGlite instance is started.
-In addition to having all the standrad methods of the [`PGlite` interface](./api.md), `PGliteWorker` also has the following methods and properties: +In addition to having all the standard methods of the [`PGlite` interface](./api.md), `PGliteWorker` also has the following methods and properties: - `onLeaderChange(callback: () => void)`
This allows you to subscribe to a notification when the leader worker is changed. It returns an unsubscribe function. @@ -47,15 +48,15 @@ In addition to having all the standrad methods of the [`PGlite` interface](./api ## Passing options to a worker -`PGliteWorker` takes an optional second paramiter `options`, this can include any standard [PGlite options](./api.md#options) along with these addtional options: +`PGliteWorker` takes an optional second parameter `options`; this can include any standard [PGlite options](./api.md#options) along with these additional options: - `id: string`
- This is an optional `id` to gide your PGlite worker group. The leader election is run between all `PGliteWorker`s with the same `id`.
- If not provided the url to the worker is concatenated with the `dataDir` option to create an id. + This is an optional `id` to group your PGlite workers. The leader election is run between all `PGliteWorker`s with the same `id`.
+ If not provided, the url to the worker is concatenated with the `dataDir` option to create an id. - `meta: any`
- Any aditional metadata you would like to pass to the worker process `init` function. + Any additional metadata you would like to pass to the worker process `init` function. -The `worker()` wrapper takes a single options argument, with a single `init` property. `init` is a function takes sed any options passed to `PGliteWorker` (excluding extensions) and returns a `PGlite` instance. You can use the options passed to decide how to configure your instance: +The `worker()` wrapper takes a single options argument, with a single `init` property. `init` is a function that takes any options passed to `PGliteWorker`, excluding extensions, and returns a `PGlite` instance. You can use the options passed to decide how to configure your instance: ```js // my-pglite-worker.js @@ -66,7 +67,7 @@ worker({ async init(options) { const meta = options.meta // Do something with additional metadata. - + // or even run your own code in the leader alongside PGlite return new PGlite({ dataDir: options.dataDir }); @@ -91,9 +92,9 @@ const pg = new PGliteWorker( ## Extension support -`PGliteWorker` has support for both Postgres Extensions and PGlite plugins using the normal [extension api](./api.md#optionsextensions). +`PGliteWorker` has support for both Postgres extensions and PGlite plugins using the normal [extension api](./api.md#optionsextensions). -Any extension can be use by the PGlite instance inside the worker: +Any extension can be used by the PGlite instance inside the worker; however, the extensions namespace is not exposed on a connecting `PGliteWorker` on the main thread.
```js // my-pglite-worker.js @@ -112,7 +113,7 @@ worker({ }); ``` -Extensions that only use the PGlite plugin interface, such as live queries, can be used on the main thread with `PGliteWorker` to expose their functionality, this is done by providing a standard options object as a second argument to the `PGliteWorker` constructor: +Extensions that only use the PGlite plugin interface, such as live queries, can be used on the main thread with `PGliteWorker` to expose their functionality; this is done by providing a standard options object as a second argument to the `PGliteWorker` constructor: ```js import { PGliteWorker } from "@electric-sql/pglite/worker"; @@ -147,6 +148,6 @@ const pg = await PGliteWorker.create( } ); -// TypeScript is await for the `pg.live` namespace: +// TypeScript is aware of the `pg.live` namespace: pg.live.query(/* ... */) ``` diff --git a/docs/docs/orm-support.md b/docs/docs/orm-support.md index 19b192760..bd4a1c6f3 100644 --- a/docs/docs/orm-support.md +++ b/docs/docs/orm-support.md @@ -2,13 +2,13 @@ ## Drizzle -[Drizzle](https://orm.drizzle.team) is a TypeScript ORM with support for many datbases include PGlite. Features include: +[Drizzle](https://orm.drizzle.team) is a TypeScript ORM with support for many databases, including PGlite. Features include: -- A declarative realtional query API +- A declarative relational query API - An SQL-like query builder API - Migrations -To use PGlite with Drizzle just wrap you PGlite instance with a `drizzle()` call: +To use PGlite with Drizzle, wrap your PGlite instance with a `drizzle()` call: ```sh npm i drizzle-orm @electric-sql/pglite diff --git a/docs/docs/repl.md b/docs/docs/repl.md index e980d1a03..af862fdb6 100644 --- a/docs/docs/repl.md +++ b/docs/docs/repl.md @@ -14,7 +14,7 @@ const Repl = defineClientComponent(() => { A REPL, or terminal, for use in the browser with PGlite, allowing you to have an interactive session with your WASM Postgres in the page.
-This is the REPL with a full PGlite Postgres embeded in the page: +This is the REPL with a full PGlite Postgres embedded in the page: @@ -52,7 +52,7 @@ function MyComponent() { The props for the `` component are described by this interface: ```ts -// The theme to use, auto is auto switching based on the system +// The theme to use; `auto` switches based on the system theme type ReplTheme = "light" | "dark" | "auto"; interface ReplProps { @@ -68,7 +68,7 @@ The `lightTheme` and `darkTheme` should be instances of a [React CodeMirror](htt ## Web Component -Although the PGlite REPL is built with React, its also available as a web component for easy inclusion in any page or other framework. +Although the PGlite REPL is built with React, it's also available as a web component for easy inclusion in any page or with any other framework. ```html @@ -92,7 +92,7 @@ Although the PGlite REPL is built with React, its also available as a web compon ### With Vue.js -The REPL Web Component can be used with Vue.js, and in fact thats how its embeded above. +The REPL Web Component can be used with Vue.js: ```vue