diff --git a/docs/benchmarks.md b/docs/benchmarks.md
index 78b226554..b03dcd40e 100644
--- a/docs/benchmarks.md
+++ b/docs/benchmarks.md
@@ -20,7 +20,7 @@
There are two sets of micro-benchmarks, one testing [round trip time](#round-trip-time-benchmarks) for both PGlite and wa-sqlite, and [another](#sqlite-benchmark-suite) based on the [SQLite speed test](https://sqlite.org/src/file?name=tool/speedtest.tcl&ci=trunk) which was ported for the [wa-sqlite benchmarks](https://rhashimoto.github.io/wa-sqlite/demo/benchmarks.html).
-We also have a set of [native baseline](#native-baseline) results where we have compared native SQLite (via the Node better-sqlite3 package) to full Postgres.
+We also have a set of [native baseline](#native-baseline) results comparing native SQLite (via the Node better-sqlite3 package) to full Postgres.
Comparing Postgres to SQlite is a little difficult as they are quite different databases, particularly when you then throw in the complexities of WASM. Therefore these benchmarks provide a view of performance only as a starting point to investigate the difference between the two and the improvements we can make going forward.
@@ -28,13 +28,13 @@ The other thing to consider when analysing the speed is the performance of vario
-The key finding are:
+The key findings are:
-1. wa-sqlite is a little faster than PGlite when run purely in memory. This is be expected as it is a simpler database with fewer features, its designed to go fast. Having said that, PGlite is not slow, its well withing the range you would expect when [comparing native SQLite to Postgres](#native-baseline).
+1. wa-sqlite is a little faster than PGlite when run purely in memory. This is to be expected as it's a simpler database with fewer features; it's designed to go fast. Having said that, PGlite is not slow, it's well within the range you would expect when [comparing native SQLite to Postgres](#native-baseline).
-2. For single row CRUD inserts and updates, PGlite is faster then wa-sqlite. This is likely due to PGlite using the Posrgres WAL, whereas wa-sqlite is only using the SQLite rollback journal mode and not its WAL.
+2. For single row CRUD inserts and updates, PGlite is faster than wa-sqlite. This is likely due to PGlite using the Postgres WAL, whereas wa-sqlite is only using the SQLite rollback journal mode and not a WAL.
-3. An fsync or flush to the underlying storage can be quite slow, particularly in the browser with IndexedDB for PGlite, or OPFS for wa-sqlite. Both offer some level of "relaxed durability" that can be used to accelerate these queriers, and is likely suitable for many embedded use cases.
+3. An fsync or flush to the underlying storage can be quite slow, particularly in the browser with IndexedDB for PGlite, or OPFS for wa-sqlite. Both offer some level of "relaxed durability" that can be used to accelerate these queries, and this mode is likely suitable for many embedded use cases.
-We are going to continue to use these micro-benchmarks to feed back into the development of PGlite, and update them and the findings as we move forward.
+We plan to continue to use these micro-benchmarks to feed back into the development of PGlite, and update them and the findings as we move forward.
-These results below were run on a M2 Macbook Air.
+The results below were run on an M2 MacBook Air.
@@ -63,7 +63,7 @@ Values are average ms - lower is better.
## SQLite benchmark suite
-The SQLite benchmark suite, converted to web for wa-sqlite, performs a number of large queries to test the performance of the sql engin.
+The SQLite benchmark suite, converted to web for wa-sqlite, performs a number of large queries to test the performance of the SQL engine.
Values are seconds to complete the test - lower is better.
@@ -115,7 +115,7 @@ All tests run with Node, [Better-SQLite3](https://www.npmjs.com/package/better-s
## Run the benchmarks yourself
-We have a hosted version of the benchmarks runner:
+We have a hosted version of the benchmark runners that you can run yourself:
- Benchmark using the SQLite benchmark suite
-- Benchmark round-trim-time for CRUD queries
+- Benchmark round-trip-time for CRUD queries
diff --git a/docs/docs/about.md b/docs/docs/about.md
index cec37e0f3..eee189744 100644
--- a/docs/docs/about.md
+++ b/docs/docs/about.md
@@ -2,36 +2,36 @@
PGlite is a WASM Postgres build packaged into a TypeScript/JavaScript client library that enables you to run Postgres in the browser, Node.js and Bun, with no need to install any other dependencies. It's under 3mb gzipped, and has support for many [Postgres extensions](../extensions/), including [pgvector](../extensions/#pgvector).
+Getting started with PGlite is super easy: just install and import the NPM package, then create your embedded database:
+
+```js
+import { PGlite } from "@electric-sql/pglite";
+
+const db = new PGlite();
+await db.query("select 'Hello world' as message;");
+// -> { rows: [ { message: "Hello world" } ] }
+```
+
+It can be used as an ephemeral in-memory database, or with persistence either to the file system (Node/Bun) or IndexedDB (Browser).
+
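For example, the persistence mode is selected by the `dataDir` you pass (the file path shown is illustrative):

```js
import { PGlite } from "@electric-sql/pglite";

// Ephemeral in-memory database (the default)
const memDb = new PGlite();

// Persisted to the local file system (Node/Bun)
const fileDb = new PGlite("./path/to/pgdata");

// Persisted to IndexedDB in the browser
const idbDb = new PGlite("idb://my-pgdata");
```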
Unlike previous "Postgres in the browser" projects, PGlite does not use a Linux virtual machine - it is simply Postgres in WASM.
It's being developed by [ElectricSQL](https://electric-sql.com/) for our use case of embedding into applications, either locally or at the edge, allowing users to sync a subset of their Postgres database.
-However, there are many more use cases for PGlite beyond it's use as an embedded application databases:
+However, there are many more use cases for PGlite beyond its use as an embedded application database:
-- Unit and CI testing
- PGlite is very fast to start and tare down, perfect for unit tests, you can a unique fresh Postgres for each test.
+- **Unit and CI testing**
+  PGlite is very fast to start and tear down, making it perfect for unit tests; you can have a unique, fresh Postgres for each test.
-- Local development
- You can use PGlite as an alternative to a full local Postgres for local development, masivly simplifyinf your development environmant.
+- **Local development**
+ You can use PGlite as an alternative to a full local Postgres for local development, massively simplifying your development environments.
-- Remote development, or local web containers
+- **Remote development, or local web containers**
As PGlite is so light weight it can be easily embedded into remote containerised development environments, or in-browser [web containers](https://webcontainers.io).
-- On-device or edge AI and RAG
+- **On-device or edge AI and RAG**
PGlite has full support for [pgvector](../extensions/#pgvector), enabling a local or edge retrieval augmented generation (RAG) workflow.
We are very keen to establish PGlite as an open source, and open contribution, project, working to build a community around it to develop its capabilities for all use cases.
-Getting started with PGlite is super easy, just install and import the NPM package, then create a your embded database:
-
-```js
-import { PGlite } from "@electric-sql/pglite";
-
-const db = new PGlite();
-await db.query("select 'Hello world' as message;");
-// -> { rows: [ { message: "Hello world" } ] }
-```
-
-It can be used as an ephemeral in-memory database, or with persistence either to the file system (Node/Bun) or indexedDB (Browser).
-
Read more in our [getting started guide](./index.md).
diff --git a/docs/docs/api.md b/docs/docs/api.md
index bed1cb709..32ff442e2 100644
--- a/docs/docs/api.md
+++ b/docs/docs/api.md
@@ -9,7 +9,7 @@ outline: [2, 3]
`new PGlite(dataDir: string, options: PGliteOptions)`
`new PGlite(options: PGliteOptions)`
-A new pglite instance is created using the `new PGlite()` constructor.
+A new PGlite instance is created using the `new PGlite()` constructor.
This is imported as:
@@ -20,10 +20,10 @@ import { PGlite } from "@electric-sql/pglite";
`await PGlite.create(dataDir: string, options: PGliteOptions)`
`await PGlite.create(options: PGliteOptions)`
-There is also an additional `PGlite.create()` static method that returns a Promise resolving to the new PGlite instance. There are a couple of advatanges to using the static method:
+There is also a `PGlite.create()` static method that returns a Promise resolving to the new PGlite instance. There are a couple of advantages to using the static method:
-- The Promise awaits the [`.waitReady`](#waitready) promise ensureing that database has fully initiated.
-- When using TypeScript and extensions the returned PGlite instance will have the extensions namespace on it's type. This is not possible with the standard constructor.
+- The promise awaits the [`.waitReady`](#waitready) promise, ensuring that the database has fully initialised.
+- When using TypeScript and extensions, the returned PGlite instance will have the extensions namespace on its type. This is not possible with the standard constructor due to a limitation of TypeScript.
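For example, a minimal sketch using the live extension, as in the live queries guide:

```ts
import { PGlite } from "@electric-sql/pglite";
import { live } from "@electric-sql/pglite/live";

// `create` resolves once the database is fully initialised...
const pg = await PGlite.create({
  extensions: { live },
});

// ...and TypeScript is aware of the extension namespace:
pg.live.query(/* ... */);
```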
#### `dataDir`
@@ -40,7 +40,7 @@ Path to the directory to store the Postgres database. You can provide a url sche
#### `options`
- `dataDir: string`
- The directory to store the Postgres database when not provided as the first argument.
+ The directory to store the Postgres database in when not provided as the first argument.
- `debug: 1-5`
the Postgres debug level. Logs are sent to the console.
- `relaxedDurability: boolean`
@@ -50,13 +50,13 @@ Path to the directory to store the Postgres database. You can provide a url sche
- `loadDataDir: Blob | File`
A tarball of a PGlite `datadir` to load when the database starts. This should be a tarball produced from the related [`.dumpDataDir()`](#dumpdatadir) method.
- `extensions: Extensions`
- An object containing the extensions you with to load.
+ An object containing the extensions you wish to load.
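For example, a sketch of pairing `dumpDataDir` with the `loadDataDir` option to snapshot a database and restore it in a new instance:

```ts
// Take a tarball snapshot of the current database...
const dump = await pg.dumpDataDir();

// ...and later start a new instance from it
const pg2 = await PGlite.create({ loadDataDir: dump });
```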
#### `options.extensions`
PGlite and Postgres extensions are loaded into a PGLite instance on start, and can include both a WASM build of a Postgres extension and/or a PGlite client plugin.
-The `options.extensions` paramiter is an opbject of `namespace: extension` parings. The namespace if sued to expose the PGlite client plugin included in the extension. An example of this it the [live queries](./live-queries.md) extension.
+The `options.extensions` parameter is an object of `namespace: extension` pairings. The namespace is used to expose the PGlite client plugin included in the extension. An example of this is the [live queries](./live-queries.md) extension.
```ts
import { PGlite } from "@electric-sql/pglite";
@@ -65,7 +65,7 @@ import { vector } from "@electric-sql/pglite/vector";
const pg = await PGlite.create({
extensions: {
- live, // Live query extension, if a PGlite client plugin
+    live, // Live query extension, which is a PGlite client plugin
vector, // Postgres pgvector extension
},
});
@@ -271,7 +271,7 @@ Promise that resolves when the database is ready to use.
::: tip NOTE
-Queries methods will wait for the `waitReady` promise to resolve if called before the database has fully initialised, and so it's not necessary to wait for it explicitly.
+Query methods will wait for the `waitReady` promise to resolve if called before the database has fully initialised, and so it's not necessary to wait for it explicitly.
:::
@@ -316,7 +316,7 @@ await pg.query("COPY my_table FROM '/dev/blob';", [], {
})
```
-To export a table or query to a file you just have to write to the `/dev/blob` device, the file will be retied as `blob` on the query results:
+To export a table or query to a file, you just have to write to the `/dev/blob` device; the file will be returned as `blob` on the query results:
```ts
const ret = await pg.query("COPY my_table TO '/dev/blob';")
diff --git a/docs/docs/filesystems.md b/docs/docs/filesystems.md
index 8a5d85082..0ae9cca48 100644
--- a/docs/docs/filesystems.md
+++ b/docs/docs/filesystems.md
@@ -2,6 +2,8 @@
PGlite has a virtual file system layer that allows it to run in environments that don't traditionally have filesystem access.
+PGlite VFSs are under active development; we plan to extend the range of options in the future, as well as make it easy for users to create their own filesystems.
+
## In-memory FS
-The in-memory FS is the default when starting PGlite, and it available on all platforms. All files are kept in memory and there is no persistance, other than calling [`pg.dumpDataDir()`](./api.md#dumpdatadir) and then using the [`loadDataDir`](./api.md#options) option at start.
+The in-memory FS is the default when starting PGlite, and is available on all platforms. All files are kept in memory and there is no persistence, other than calling [`pg.dumpDataDir()`](./api.md#dumpdatadir) and then using the [`loadDataDir`](./api.md#options) option at start.
@@ -32,7 +34,7 @@ To use the in-memory FS you can use one of these methods:
## Node FS
-The Node FS uses the Node.js file system API to implement a VFS for PGLite. It is bailable in both Node and Bun.
+The Node FS uses the Node.js file system API to implement a VFS for PGLite. It is available in both Node and Bun.
To use the Node FS you can use one of these methods:
@@ -56,11 +58,11 @@ To use the Node FS you can use one of these methods:
## IndexedDB FS
-The IndexedDB FS persistes the database to IndexedDB in the browser. It's a layer over the in-memory filesystem, loading all files for the database into memory on start, and flushing them to IndexedDB after each query.
+The IndexedDB FS persists the database to IndexedDB in the browser. It's a layer over the in-memory filesystem, loading all files for the database into memory on start, and flushing them to IndexedDB after each query if they have changed.
To use the IndexedDB FS you can use one of these methods:
-- Set the `dataDir` with a `idb://` prefix, the FS will use an IndexedDB named with the path provided
+- Set the `dataDir` with an `idb://` prefix; the database will be stored in an IndexedDB named with the path provided
```ts
const pg = new PGlite("idb://my-database")
```
@@ -72,7 +74,7 @@ To use the IndexedDB FS you can use one of these methods:
})
```
-The IndexedDB filesystem works at the file level, storing hole files as blobs in IndexedDB. Flushing whole files can take a few milliseconds after each query, to aid in building resposive apps we provide a `relaxedDurability` mode that can be [configured when starting](./api.md#options) PGlite. Under this mode the results of a query are returned imediatly, and the flush to IndexedDB is scheduled to happen asynchronous afterwards. Typically this is immediately after the query returns with no delay.
+The IndexedDB filesystem works at the file level, storing whole files as blobs in IndexedDB. Flushing whole files can take a few milliseconds after each query; to aid in building responsive apps we provide a `relaxedDurability` mode that can be [configured when starting](./api.md#options) PGlite. Under this mode the results of a query are returned immediately, and the flush to IndexedDB is scheduled to happen asynchronously afterwards. Typically this is immediately after the query returns with no delay.
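For example, a sketch of enabling relaxed durability on an IndexedDB-backed database:

```ts
const pg = new PGlite("idb://my-database", {
  relaxedDurability: true,
});
```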
### Platform Support
@@ -86,7 +88,7 @@ The OPFS AHP filesystem is built on top of the [Origin Private Filesystem](https
To use the OPFS AHP FS you can use one of these methods:
-- Set the `dataDir` to a directory with the origins OPFS
+- Set the `dataDir` to a directory within the origin's OPFS
```ts
const pg = new PGlite("opfs-ahp://path/to/datadir/")
```
@@ -104,12 +106,12 @@ To use the OPFS AHP FS you can use one of these methods:
|------|-----|--------|--------|---------|
| | | ✓ | | ✓ |
-Unfortunately Safari appears to have a limit of 252 open sync access handles, this prevents this VFS from working as a standard Postgres install has between 300-800 files.
+Unfortunately Safari appears to have a limit of 252 open sync access handles; this prevents this VFS from working, as a standard Postgres install has between 300 and 800 files.
### What is an "access handle pool"?
-The Origin Private Filesystem API provides both asynchronous ans synchronous methods, bit the synchronous are limited to read, write and flush. You are unable to travers the filesystem or open files synchronously. PGlite is a fully synchronous WASM build of Postgres and unable to call async APIs while handling a query. While it is possible to build an async WASM Postgres using [Asyncify](https://emscripten.org/docs/porting/asyncify.html), it adds significant overhead in both file size and performance.
+The Origin Private Filesystem API provides both asynchronous and synchronous methods, but the synchronous methods are limited to read, write and flush. You are unable to traverse the filesystem or open files synchronously. PGlite is a fully synchronous WASM build of Postgres and unable to call async APIs while handling a query. While it is possible to build an async WASM Postgres using [Asyncify](https://emscripten.org/docs/porting/asyncify.html), it adds significant overhead in both file size and performance.
-To overcome these limitations and provide a fully synchronous file system to PGlite on top of OPFS, we use something called an "access handle pool". When you first start PGlite we open a pool of OPFS access handles with randomised file names, these are then allocation to files as needed. After each query a poll maintenance job is scheduled that maintains the pool size. When you inspect the OPFS directory where the database is stored you will not see the normal Postgres directory layout, but rather a pool of files and a state file that contains the directory tree mapping along with file metadata.
+To overcome these limitations and provide a fully synchronous file system to PGlite on top of OPFS, we use something called an "access handle pool". When you first start PGlite we open a pool of OPFS access handles with randomised file names; these are then allocated to files as needed. After each query a pool maintenance job is scheduled that maintains the pool's size. When you inspect the OPFS directory where the database is stored you will not see the normal Postgres directory layout, but rather a pool of files and a state file that contains the directory tree mapping along with file metadata.
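As a loose illustration of the bookkeeping involved (a hypothetical sketch, not PGlite's actual implementation — the names and pool size are invented):

```js
// Hypothetical sketch: pool files with random names are allocated to
// logical Postgres paths; a maintenance job keeps the free pool topped up.
const POOL_SIZE = 4; // illustrative only; real pools are much larger

const state = {
  free: [],        // unallocated pool file names
  tree: new Map(), // logical path -> pool file name
};

function randomName() {
  return Math.random().toString(36).slice(2, 10);
}

// Scheduled after each query: top the free pool back up to its target size
function maintainPool() {
  while (state.free.length < POOL_SIZE) {
    state.free.push(randomName());
  }
}

// Allocate a pool file to a logical Postgres path on first open
function open(path) {
  if (!state.tree.has(path)) {
    if (state.free.length === 0) maintainPool();
    state.tree.set(path, state.free.pop());
  }
  return state.tree.get(path);
}

maintainPool();
const handle = open("base/1/1234");
console.log(handle === state.tree.get("base/1/1234")); // true
```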
The PGlite OPFS AHP FS is inspired by the [wa-sqlite](https://github.com/rhashimoto/wa-sqlite) access handle pool file system by [Roy Hashimoto](https://github.com/rhashimoto).
diff --git a/docs/docs/index.md b/docs/docs/index.md
index 251919dbd..f5b6634f8 100644
--- a/docs/docs/index.md
+++ b/docs/docs/index.md
@@ -69,7 +69,7 @@ const db = new PGlite("idb://my-pgdata");
-There are two method for querying the database, `.query` and `.exec`, the former support parameters, and the latter multiple statements.
+There are two methods for querying the database, `.query` and `.exec`; the former supports parameters, and the latter multiple statements.
-First, lets crate a table and insert some test data using the `.exec` method:
+First, let's create a table and insert some test data using the `.exec` method:
```js
await db.exec(`
@@ -86,7 +86,7 @@ await db.exec(`
`)
```
-The `.exec` method is perfect for migrations, or batch inserts with raw SQL.
+The `.exec` method is perfect for migrations and batch inserts with raw SQL.
-Now, lets retrieve an item using `.query` method:
+Now, let's retrieve an item using the `.query` method:
@@ -107,7 +107,7 @@ console.log(ret.rows)
## Using parametrised queries
-When working with user supplied values its always best to use parametrised queries, these are supported on the `.query` method.
+When working with user-supplied values it's always best to use parametrised queries; these are supported on the `.query` method.
We can use this to update a task:
@@ -124,15 +124,15 @@ const ret = await db.query(
## What next?
-- To learn more about [querying](./api.md#query) and [transactions](./api.md#transaction) you can read the main [PGlite API documentation](./api.md).
+- To learn more about [querying](./api.md#query) and [transactions](./api.md#transaction), along with the other methods and options available, you can read the main [PGlite API documentation](./api.md).
- There is also a [live-query extension](./live-queries.md) that enables reactive queries to update a UI when the underlying database changes.
-- PGlite has a number of built in [virtual file systems](./filesystems.md) to provided persistance to the database.
+- PGlite has a number of built-in [virtual file systems](./filesystems.md) to provide persistence for your database.
-- There are [framework hooks](./framework-hooks.md) to make working with PGlite within React and Vue much easer with less boilerplate.
+- There are [framework hooks](./framework-hooks.md) to make working with PGlite within React and Vue much easier with less boilerplate.
-- As PGlite only has single exclusive connection to the database, we provide a [multi-tab worker](./multi-tab-worker.md) to enable sharing a PGlite instance between multiple browser tabs.
+- As PGlite only has a single exclusive connection to the database, we provide a [multi-tab worker](./multi-tab-worker.md) to enable sharing a PGlite instance between multiple browser tabs.
- There is a [REPL component](./repl.md) that can be easily embedded into a web-app to aid in debugging and development, or as part of a database application itself.
diff --git a/docs/docs/live-queries.md b/docs/docs/live-queries.md
index a96559070..d67d36f04 100644
--- a/docs/docs/live-queries.md
+++ b/docs/docs/live-queries.md
@@ -1,14 +1,14 @@
# Live Queries
-The "live" extension enables you to subscribe to a query and receve updated results when the underlying tables change.
+The "live" extension enables you to subscribe to a query and receive updated results when the underlying tables change.
-To use the extension it needs adding to the PGlite instance when creating it:
+To use the extension it needs to be added to the PGlite instance when creating it:
```ts
import { PGlite } from "@electric-sql/pglite";
import { live } from "@electric-sql/pglite/live";
-const pg = new PGlite({
+const pg = await PGlite.create({
extensions: {
live,
},
@@ -42,11 +42,11 @@ interface LiveQueryReturn {
}
```
-- `initialResults` is the initial results set (also sent to the callback
-- `unsubscribe` allow you to unsubscribe from the live query
-- `refresh` allows you to force a refresh of the query
+- `initialResults` is the initial results set (also sent to the callback)
+- `unsubscribe` allows you to unsubscribe from the live query
+- `refresh` allows you to force a refresh of the query with the updated results sent to the callback
-Internally it watches for the tables that the query depends on, and reruns the query whenever they are changed.
+Internally it watches the tables that the query depends on, and reruns the query whenever they are changed.
## live.incrementalQuery
@@ -54,7 +54,7 @@ Internally it watches for the tables that the query depends on, and reruns the q
Similar to above, but maintains a temporary table inside of Postgres of the previous state. When the tables it depends on change the query is re-run and diffed with the last state. Only the changes from the last version of the query are copied from WASM into JS.
-It requires an additional `key` argument, the name of a column (often a PK) to key the diff on.
+It requires an additional `key` argument, the name of a column (often a primary key) to key the diff on.
```ts
const ret = pg.live.incrementalQuery(
@@ -119,8 +119,8 @@ type Change = ChangeInsert | ChangeDelete | ChangeUpdate;
Each `Change` includes the new values along with:
-- `__changed_columns__` the columns names that were changes
+- `__changed_columns__` the column names that were changed
- `__op__` the operation that is required to update the state (`INSERT`, `UPDATE`, `DELETE`)
-- `__after__` the `key` of the row that this row should be after, it will be included in `__changed_columns__` if it has been changed.
+- `__after__` the `key` of the row that this row should be positioned after; it will be included in `__changed_columns__` if it has been changed. This allows for very efficient moves within an ordered set of results.
This API can be used to implement very efficient in-place DOM updates.
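As an illustrative sketch of consuming such a change set in JS (a hypothetical helper, assuming the result set is keyed on an `id` column):

```js
// Hypothetical sketch: apply a list of Change objects (keyed on `id`)
// to an array mirroring the query result set.
function applyChanges(rows, changes) {
  const out = [...rows];
  for (const change of changes) {
    const { __op__, __changed_columns__, __after__, ...values } = change;
    const idx = out.findIndex((r) => r.id === values.id);
    if (__op__ === "DELETE") {
      if (idx !== -1) out.splice(idx, 1);
    } else if (__op__ === "UPDATE") {
      out[idx] = { ...out[idx], ...values };
    } else {
      // INSERT: position the new row after the row whose key is `__after__`
      const afterIdx = out.findIndex((r) => r.id === __after__);
      out.splice(afterIdx + 1, 0, values);
    }
  }
  return out;
}

const rows = [{ id: 1, task: "wash" }, { id: 2, task: "dry" }];
const next = applyChanges(rows, [
  { id: 3, task: "fold", __op__: "INSERT", __after__: 1, __changed_columns__: [] },
  { id: 2, task: "iron", __op__: "UPDATE", __after__: null, __changed_columns__: ["task"] },
]);
console.log(next.map((r) => r.task)); // [ 'wash', 'fold', 'iron' ]
```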
diff --git a/docs/docs/multi-tab-worker.md b/docs/docs/multi-tab-worker.md
index 708c2038c..afb63b53c 100644
--- a/docs/docs/multi-tab-worker.md
+++ b/docs/docs/multi-tab-worker.md
@@ -1,6 +1,6 @@
# Multi-tab Worker
-It's likely that you will want to run PGlite in a Web Worker so that it doesn't block the main thread. Additionally as PGlite is single connection, you may want to proxy multiple browser tabs to a single PGlite instance.
+It's likely that you will want to run PGlite in a Web Worker so that it doesn't block the main thread. Additionally, as PGlite supports only a single connection, you may want to proxy multiple browser tabs to a single PGlite instance.
To aid in this we provide a `PGliteWorker` with the same API as the standard PGlite, and a `worker` wrapper that exposes a PGlite instance to other tabs.
@@ -15,6 +15,7 @@ import { worker } from "@electric-sql/pglite/worker";
worker({
async init() {
+ // Create and return a PGlite instance
return new PGlite();
},
});
@@ -34,9 +35,9 @@ const pg = new PGliteWorker(
// `pg` has the same interface as a standard PGlite interface
```
-Internally this starts a worker for each tab, but then runs a leader election to nominate one as the leader. Only the leader then opens the PGlite and handles all queries. When the leader tab is closed, a new leader election is run and a new PGlite instance is started.
+Internally this starts a worker for each tab, but then runs a leader election to nominate one as the leader. Only the leader then starts PGlite, by calling the `init` function, and handles all queries. When the leader tab is closed, a new leader election is run and a new PGlite instance is started.
-In addition to having all the standrad methods of the [`PGlite` interface](./api.md), `PGliteWorker` also has the following methods and properties:
+In addition to having all the standard methods of the [`PGlite` interface](./api.md), `PGliteWorker` also has the following methods and properties:
- `onLeaderChange(callback: () => void)`
This allows you to subscribe to a notification when the leader worker is changed. It returns an unsubscribe function.
@@ -47,15 +48,15 @@ In addition to having all the standrad methods of the [`PGlite` interface](./api
## Passing options to a worker
-`PGliteWorker` takes an optional second paramiter `options`, this can include any standard [PGlite options](./api.md#options) along with these addtional options:
+`PGliteWorker` takes an optional second parameter, `options`; this can include any standard [PGlite options](./api.md#options) along with these additional options:
- `id: string`
- This is an optional `id` to gide your PGlite worker group. The leader election is run between all `PGliteWorker`s with the same `id`.
+ This is an optional `id` to group your PGlite workers. The leader election is run between all `PGliteWorker`s with the same `id`.
If not provided the url to the worker is concatenated with the `dataDir` option to create an id.
- `meta: any`
- Any aditional metadata you would like to pass to the worker process `init` function.
+ Any additional metadata you would like to pass to the worker process `init` function.
-The `worker()` wrapper takes a single options argument, with a single `init` property. `init` is a function takes sed any options passed to `PGliteWorker` (excluding extensions) and returns a `PGlite` instance. You can use the options passed to decide how to configure your instance:
+The `worker()` wrapper takes a single options argument with a single `init` property. `init` is a function that takes any options passed to `PGliteWorker`, excluding extensions, and returns a `PGlite` instance. You can use the options passed to decide how to configure your instance:
```js
// my-pglite-worker.js
@@ -66,7 +67,7 @@ worker({
async init(options) {
const meta = options.meta
// Do something with additional metadata.
-
+  // or even run your own code in the leader alongside PGlite
return new PGlite({
dataDir: options.dataDir
});
@@ -91,9 +92,9 @@ const pg = new PGliteWorker(
## Extension support
-`PGliteWorker` has support for both Postgres Extensions and PGlite plugins using the normal [extension api](./api.md#optionsextensions).
+`PGliteWorker` has support for both Postgres extensions and PGlite plugins using the normal [extension api](./api.md#optionsextensions).
-Any extension can be use by the PGlite instance inside the worker:
+Any extension can be used by the PGlite instance inside the worker; however, the extension's namespace is not exposed on a connecting `PGliteWorker` on the main thread.
```js
// my-pglite-worker.js
@@ -147,6 +148,6 @@ const pg = await PGliteWorker.create(
}
);
-// TypeScript is await for the `pg.live` namespace:
+// TypeScript is aware of the `pg.live` namespace:
pg.live.query(/* ... */)
```
diff --git a/docs/docs/orm-support.md b/docs/docs/orm-support.md
index 19b192760..70c96b28a 100644
--- a/docs/docs/orm-support.md
+++ b/docs/docs/orm-support.md
@@ -2,9 +2,9 @@
## Drizzle
-[Drizzle](https://orm.drizzle.team) is a TypeScript ORM with support for many datbases include PGlite. Features include:
+[Drizzle](https://orm.drizzle.team) is a TypeScript ORM with support for many databases, including PGlite. Features include:
-- A declarative realtional query API
+- A declarative relational query API
- An SQL-like query builder API
- Migrations
diff --git a/docs/docs/repl.md b/docs/docs/repl.md
index e980d1a03..346c52817 100644
--- a/docs/docs/repl.md
+++ b/docs/docs/repl.md
@@ -14,7 +14,7 @@ const Repl = defineClientComponent(() => {
A REPL, or terminal, for use in the browser with PGlite, allowing you to have an interactive session with your WASM Postgres in the page.
-This is the REPL with a full PGlite Postgres embeded in the page:
+This is the REPL with a full PGlite Postgres embedded in the page:
@@ -68,7 +68,7 @@ The `lightTheme` and `darkTheme` should be instances of a [React CodeMirror](htt
## Web Component
-Although the PGlite REPL is built with React, its also available as a web component for easy inclusion in any page or other framework.
+Although the PGlite REPL is built with React, it's also available as a web component for easy inclusion in any page or any other framework.
```html
@@ -92,7 +92,7 @@ Although the PGlite REPL is built with React, its also available as a web compon
### With Vue.js
-The REPL Web Component can be used with Vue.js, and in fact thats how its embeded above.
+The REPL Web Component can be used with Vue.js:
```vue